[jira] [Commented] (HADOOP-18701) Generic Build Improvements

2023-04-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713928#comment-17713928
 ] 

ASF GitHub Bot commented on HADOOP-18701:
-----------------------------------------

hadoop-yetus commented on PR #5567:
URL: https://github.com/apache/hadoop/pull/5567#issuecomment-1514212026

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  docker  |  11m 33s |  |  Docker failed to build run-specific 
yetus/hadoop:tp-10388.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/5567 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5567/2/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Generic Build Improvements
> --------------------------
>
> Key: HADOOP-18701
> URL: https://issues.apache.org/jira/browse/HADOOP-18701
> Project: Hadoop Common
>  Issue Type: Wish
>Reporter: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>
> Some proposed build changes.
>  * Set {{surefire.failIfNoSpecifiedTests}} to false in the POM; otherwise the 
> build fails when the test specified with -Dtest is absent from a given module, 
> which is a problem when running multiple tests across multiple sub-projects 
> from the root of the project (a command-line example follows this quoted 
> description). 
> (https://maven.apache.org/surefire/maven-surefire-plugin/test-mojo.html#failifnospecifiedtests)
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0:test (default-test) on 
> project hadoop-build-tools: No tests matching pattern 
> "TestServiceInterruptHandling" were executed! (Set 
> -Dsurefire.failIfNoSpecifiedTests=false to ignore this error.)
> {noformat}
>  * Disable concurrent builds: folks push multiple commits within 5-10 minutes 
> while a pre-commit run is already in progress, so it is good to discourage this.
>  * Add a threshold on the number of builds per day; this saves resources for 
> genuine PRs against someone pushing multiple commits. (This and the previous 
> item are ideas copied from Hive.)
>  * Leverage GitHub Actions to delegate some of the tasks, gaining a bit of 
> parallel execution and possibly saving time; maybe explore pushing the JDK-11 
> related work to GitHub Actions. (Pre-commit currently runs tests only on JDK 8, 
> not on both JDK 11 and JDK 8.)
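
For illustration, the workaround named in the Surefire error message above can 
also be passed on the command line when running tests from the project root; 
this example invocation assumes TestServiceInterruptHandling exists in only 
some modules:
{code}
mvn test -Dtest=TestServiceInterruptHandling -Dsurefire.failIfNoSpecifiedTests=false
{code}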






[GitHub] [hadoop] hadoop-yetus commented on pull request #5567: HADOOP-18701. Generic Build Improvements.

2023-04-18 Thread via GitHub


hadoop-yetus commented on PR #5567:
URL: https://github.com/apache/hadoop/pull/5567#issuecomment-1514212026

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  docker  |  11m 33s |  |  Docker failed to build run-specific 
yetus/hadoop:tp-10388.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/5567 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5567/2/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on pull request #5565: YARN-11467. RM failover may fail when the nodes.exclude-path file does not exist

2023-04-18 Thread via GitHub


hadoop-yetus commented on PR #5565:
URL: https://github.com/apache/hadoop/pull/5565#issuecomment-1514194057

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m  3s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   1m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   2m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 59s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m  1s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  98m 29s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 197m 26s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5565/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5565 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 60aa71ff8861 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6b14594eb34f249b26a84df1e43c8ad382a697b5 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5565/2/testReport/ |
   | Max. process+thread count | 964 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5565/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Commented] (HADOOP-18705) hadoop-azure: AzureBlobFileSystem should exclude incompatible credential providers when binding DelegationTokenManagers

2023-04-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713901#comment-17713901
 ] 

ASF GitHub Bot commented on HADOOP-18705:
-----------------------------------------

tomicooler commented on PR #5560:
URL: https://github.com/apache/hadoop/pull/5560#issuecomment-1514156973

   @steveloughran Thanks for the review. I just read testing_azure.md; I 
haven't run the integration tests yet.




> hadoop-azure: AzureBlobFileSystem should exclude incompatible credential 
> providers when binding DelegationTokenManagers
> ------------------------------------------------------------
>
> Key: HADOOP-18705
> URL: https://issues.apache.org/jira/browse/HADOOP-18705
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.3.5
>Reporter: Tamas Domok
>Assignee: Tamas Domok
>Priority: Major
>  Labels: pull-request-available
>
> The DelegationTokenManager in AzureBlobFileSystem.initialize() gets the 
> untouched configuration which may contain a credentialProviderPath config 
> with incompatible credential providers (e.g.: jceks stored on abfs). This 
> results in an error:
> {quote}
> Caused by: org.apache.hadoop.fs.PathIOException: 
> `jceks://abfs@a@b.c.d/tmp/a.jceks': Recursive load of credential provider; if 
> loading a JCEKS file, this means that the filesystem connector is trying to 
> load the same file
> {quote}
> {code}
> this.delegationTokenManager = 
> abfsConfiguration.getDelegationTokenManager();
> delegationTokenManager.bind(getUri(), configuration);
> {code}
> The abfsConfiguration excludes the incompatible credential providers already.
> Reproduction steps:
> {code}
> export HADOOP_ROOT_LOGGER=DEBUG,console
> hdfs dfs -rm -r -skipTrash /user/qa/sort_input; hadoop jar 
> hadoop-mapreduce-examples.jar randomwriter 
> "-Dmapreduce.randomwriter.totalbytes=100" 
> "-Dhadoop.security.credential.provider.path=jceks://abfs@a@b.c.d/tmp/a.jceks" 
> /user/qa/sort_input 
> {code}
> Error:
> {code}
> ...
> org.apache.hadoop.fs.azurebfs.security.AbfsDelegationTokenManager.bind(AbfsDelegationTokenManager.java:96)
> at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:224)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3452)
> at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:162)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3557)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3504)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:522)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at 
> org.apache.hadoop.security.alias.KeyStoreProvider.initFileSystem(KeyStoreProvider.java:84)
> at 
> org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.(AbstractJavaKeyStoreProvider.java:85)
> at 
> org.apache.hadoop.security.alias.KeyStoreProvider.(KeyStoreProvider.java:49)
> at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:42)
> at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:35)
> at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:68)
> at 
> org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:91)
> at 
> org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2450)
> at 
> org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:2388)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBClient.getTruststorePassword(AbfsIDBClient.java:104)
> at 
> org.apache.knox.gateway.cloud.idbroker.AbstractIDBClient.initializeAsFullIDBClient(AbstractIDBClient.java:860)
> at 
> org.apache.knox.gateway.cloud.idbroker.AbstractIDBClient.(AbstractIDBClient.java:139)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBClient.(AbfsIDBClient.java:74)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.getClient(AbfsIDBIntegration.java:287)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.serviceStart(AbfsIDBIntegration.java:240)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.fromDelegationTokenManager(AbfsIDBIntegration.java:205)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBDelegationTokenManager.bind(AbfsIDBDelegationTokenManager.java:66)
> at 
> org.apache.hadoop.fs.azurebfs.extensions.ExtensionHelper.bind(ExtensionHelper.java:54)
> at 
> org.apache.hadoop.fs.azurebfs.security.AbfsDelega

[GitHub] [hadoop] tomicooler commented on pull request #5560: HADOOP-18705. hadoop-azure: AzureBlobFileSystem should exclude incomp…

2023-04-18 Thread via GitHub


tomicooler commented on PR #5560:
URL: https://github.com/apache/hadoop/pull/5560#issuecomment-1514156973

   @steveloughran Thanks for the review. I just read testing_azure.md; I 
haven't run the integration tests yet.





[GitHub] [hadoop] tomicooler commented on a diff in pull request #5560: HADOOP-18705. hadoop-azure: AzureBlobFileSystem should exclude incomp…

2023-04-18 Thread via GitHub


tomicooler commented on code in PR #5560:
URL: https://github.com/apache/hadoop/pull/5560#discussion_r1170825511


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java:
##
@@ -196,6 +196,11 @@ public void initialize(URI uri, Configuration 
configuration)
 
 final AbfsConfiguration abfsConfiguration = abfsStore
 .getAbfsConfiguration();
+
+// Ensures that configuration excludes incompatible credential providers

Review Comment:
   Done.
   
   Note: there are two `excludeIncompatibleCredentialProviders` calls, because 
the AbfsConfiguration does it again. Another approach would be to move 
`super.initialize` after the `AbfsConfiguration` is ready; there is a v1 patch 
uploaded to this PR, check that version too. The v2 is simpler and less 
error-prone.
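
   A minimal sketch of the v2 idea above, assuming Hadoop's standard 
ProviderUtils helper; the class and method names here are illustrative, not 
the committed patch:
{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem;
import org.apache.hadoop.security.ProviderUtils;

final class AbfsDtBindingSketch {
  private AbfsDtBindingSketch() {
  }

  // Illustrative helper: return a configuration safe to hand to the
  // DelegationTokenManager. Credential providers that would resolve through
  // the filesystem being initialized (e.g. a JCEKS file stored on ABFS) are
  // excluded, the same way AbfsConfiguration already does internally.
  static Configuration configForDtBinding(Configuration raw) throws IOException {
    return ProviderUtils.excludeIncompatibleCredentialProviders(
        raw, AzureBlobFileSystem.class);
  }
}
{code}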






[jira] [Commented] (HADOOP-18705) hadoop-azure: AzureBlobFileSystem should exclude incompatible credential providers when binding DelegationTokenManagers

2023-04-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713900#comment-17713900
 ] 

ASF GitHub Bot commented on HADOOP-18705:
-----------------------------------------

tomicooler commented on code in PR #5560:
URL: https://github.com/apache/hadoop/pull/5560#discussion_r1170825511


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java:
##
@@ -196,6 +196,11 @@ public void initialize(URI uri, Configuration 
configuration)
 
 final AbfsConfiguration abfsConfiguration = abfsStore
 .getAbfsConfiguration();
+
+// Ensures that configuration excludes incompatible credential providers

Review Comment:
   Done.
   
   Note: there are two `excludeIncompatibleCredentialProviders` calls, because 
the AbfsConfiguration does it again. Another approach would be to move 
`super.initialize` after the `AbfsConfiguration` is ready; there is a v1 patch 
uploaded to this PR, check that version too. The v2 is simpler and less 
error-prone.





> hadoop-azure: AzureBlobFileSystem should exclude incompatible credential 
> providers when binding DelegationTokenManagers
> ------------------------------------------------------------
>
> Key: HADOOP-18705
> URL: https://issues.apache.org/jira/browse/HADOOP-18705
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.3.5
>Reporter: Tamas Domok
>Assignee: Tamas Domok
>Priority: Major
>  Labels: pull-request-available
>
> The DelegationTokenManager in AzureBlobFileSystem.initialize() gets the 
> untouched configuration which may contain a credentialProviderPath config 
> with incompatible credential providers (e.g.: jceks stored on abfs). This 
> results in an error:
> {quote}
> Caused by: org.apache.hadoop.fs.PathIOException: 
> `jceks://abfs@a@b.c.d/tmp/a.jceks': Recursive load of credential provider; if 
> loading a JCEKS file, this means that the filesystem connector is trying to 
> load the same file
> {quote}
> {code}
> this.delegationTokenManager = 
> abfsConfiguration.getDelegationTokenManager();
> delegationTokenManager.bind(getUri(), configuration);
> {code}
> The abfsConfiguration excludes the incompatible credential providers already.
> Reproduction steps:
> {code}
> export HADOOP_ROOT_LOGGER=DEBUG,console
> hdfs dfs -rm -r -skipTrash /user/qa/sort_input; hadoop jar 
> hadoop-mapreduce-examples.jar randomwriter 
> "-Dmapreduce.randomwriter.totalbytes=100" 
> "-Dhadoop.security.credential.provider.path=jceks://abfs@a@b.c.d/tmp/a.jceks" 
> /user/qa/sort_input 
> {code}
> Error:
> {code}
> ...
> org.apache.hadoop.fs.azurebfs.security.AbfsDelegationTokenManager.bind(AbfsDelegationTokenManager.java:96)
> at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:224)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3452)
> at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:162)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3557)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3504)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:522)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at 
> org.apache.hadoop.security.alias.KeyStoreProvider.initFileSystem(KeyStoreProvider.java:84)
> at 
> org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.(AbstractJavaKeyStoreProvider.java:85)
> at 
> org.apache.hadoop.security.alias.KeyStoreProvider.(KeyStoreProvider.java:49)
> at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:42)
> at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:35)
> at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:68)
> at 
> org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:91)
> at 
> org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2450)
> at 
> org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:2388)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBClient.getTruststorePassword(AbfsIDBClient.java:104)
> at 
> org.apache.knox.gateway.cloud.idbroker.AbstractIDBClient.initializeAsFullIDBClient(AbstractIDBClient.java:860)
> at 
> org.apache.knox.gateway.cloud.idbroker.AbstractIDBClient.(AbstractIDBClient.java:139)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBClient.(AbfsIDBClient.java:74)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.getClient(AbfsIDBIntegration.java:287)
>

[jira] [Commented] (HADOOP-18705) hadoop-azure: AzureBlobFileSystem should exclude incompatible credential providers when binding DelegationTokenManagers

2023-04-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713897#comment-17713897
 ] 

ASF GitHub Bot commented on HADOOP-18705:
-----------------------------------------

tomicooler commented on code in PR #5560:
URL: https://github.com/apache/hadoop/pull/5560#discussion_r1170822606


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemConfiguration.java:
##
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.apache.hadoop.security.alias.CredentialProviderFactory;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+
+public class ITestAzureBlobFileSystemConfiguration extends 
AbstractAbfsIntegrationTest {

Review Comment:
   done



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemConfiguration.java:
##
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.apache.hadoop.security.alias.CredentialProviderFactory;
+import org.junit.Test;

Review Comment:
   done





> hadoop-azure: AzureBlobFileSystem should exclude incompatible credential 
> providers when binding DelegationTokenManagers
> ------------------------------------------------------------
>
> Key: HADOOP-18705
> URL: https://issues.apache.org/jira/browse/HADOOP-18705
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.3.5
>Reporter: Tamas Domok
>Assignee: Tamas Domok
>Priority: Major
>  Labels: pull-request-available
>
> The DelegationTokenManager in AzureBlobFileSystem.initialize() gets the 
> untouched configuration which may contain a credentialProviderPath config 
> with incompatible credential providers (e.g.: jceks stored on abfs). This 
> results in an error:
> {quote}
> Caused by: org.apache.hadoop.fs.PathIOException: 
> `jceks://abfs@a@b.c.d/tmp/a.jceks': Recursive load of credential provider; if 
> loading a JCEKS file, this means that the filesystem connector is trying to 
> load the same file
> {quote}
> {code}
> this.delegationTokenManager = 
> abfsConfiguration.getDelegationTokenManager();
> delegationTokenManager.bind(getUri(), configuration);
> {code}
> The abfsConfiguration excludes the incompatible credential providers already.
> Reproduction steps:
> {code}
> export HADOOP_ROOT_LOGGER=DEBUG,console
> hdfs dfs -rm -r -skipTrash /user/qa/sort_input; hadoop jar 
> hadoop-mapreduce-examples.jar randomwriter 
> "-Dmapreduce.randomwriter.totalbytes=100" 
> "-Dhadoop.security.credential.provider.path=jceks://abfs@a@b.c.d/tmp/a.jceks" 
> /user/qa/sort_input 
> {code}
> Error:
> {code}
> ...
> org.apache.hadoop.fs.azurebfs.security.AbfsDelegationTokenManager.bind(AbfsDelegationTokenManager.java:96)
> at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:224)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3452)
> at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:162)
> at org.apache.hadoop.fs.File

[jira] [Commented] (HADOOP-18705) hadoop-azure: AzureBlobFileSystem should exclude incompatible credential providers when binding DelegationTokenManagers

2023-04-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713898#comment-17713898
 ] 

ASF GitHub Bot commented on HADOOP-18705:
-----------------------------------------

tomicooler commented on code in PR #5560:
URL: https://github.com/apache/hadoop/pull/5560#discussion_r1170822736


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemConfiguration.java:
##
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.apache.hadoop.security.alias.CredentialProviderFactory;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+
+public class ITestAzureBlobFileSystemConfiguration extends 
AbstractAbfsIntegrationTest {
+
+  public ITestAzureBlobFileSystemConfiguration() throws Exception {
+  }
+
+  @Test
+  public void testIncompatibleCredentialProviderIsExcluded() throws Exception {
+Configuration rawConfig = getRawConfiguration();
+rawConfig.set(CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH,
+"jceks://abfs@a@b.c.d/tmp/a.jceks,jceks://file/tmp/secret.jceks");
+AzureBlobFileSystem fs = (AzureBlobFileSystem) FileSystem.get(rawConfig);

Review Comment:
   done





> hadoop-azure: AzureBlobFileSystem should exclude incompatible credential 
> providers when binding DelegationTokenManagers
> ------------------------------------------------------------
>
> Key: HADOOP-18705
> URL: https://issues.apache.org/jira/browse/HADOOP-18705
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.3.5
>Reporter: Tamas Domok
>Assignee: Tamas Domok
>Priority: Major
>  Labels: pull-request-available
>
> The DelegationTokenManager in AzureBlobFileSystem.initialize() gets the 
> untouched configuration which may contain a credentialProviderPath config 
> with incompatible credential providers (e.g.: jceks stored on abfs). This 
> results in an error:
> {quote}
> Caused by: org.apache.hadoop.fs.PathIOException: 
> `jceks://abfs@a@b.c.d/tmp/a.jceks': Recursive load of credential provider; if 
> loading a JCEKS file, this means that the filesystem connector is trying to 
> load the same file
> {quote}
> {code}
> this.delegationTokenManager = 
> abfsConfiguration.getDelegationTokenManager();
> delegationTokenManager.bind(getUri(), configuration);
> {code}
> The abfsConfiguration excludes the incompatible credential providers already.
> Reproduction steps:
> {code}
> export HADOOP_ROOT_LOGGER=DEBUG,console
> hdfs dfs -rm -r -skipTrash /user/qa/sort_input; hadoop jar 
> hadoop-mapreduce-examples.jar randomwriter 
> "-Dmapreduce.randomwriter.totalbytes=100" 
> "-Dhadoop.security.credential.provider.path=jceks://abfs@a@b.c.d/tmp/a.jceks" 
> /user/qa/sort_input 
> {code}
> Error:
> {code}
> ...
> org.apache.hadoop.fs.azurebfs.security.AbfsDelegationTokenManager.bind(AbfsDelegationTokenManager.java:96)
> at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:224)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3452)
> at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:162)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3557)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3504)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:522)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at 
> org.apache.hadoop.security.alias.KeyStoreProvider.initFileSystem(KeyStoreProvider.java:84)
> at 
> org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.(AbstractJavaKeyStoreProvider.java:85)
> at 
> org.apache.hadoop.security.alias.KeyStoreProvider.(KeyStoreProvider.java:49)
> at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:42)
> at 
> org.apache.hadoop.security.a

[GitHub] [hadoop] tomicooler commented on a diff in pull request #5560: HADOOP-18705. hadoop-azure: AzureBlobFileSystem should exclude incomp…

2023-04-18 Thread via GitHub


tomicooler commented on code in PR #5560:
URL: https://github.com/apache/hadoop/pull/5560#discussion_r1170822736


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemConfiguration.java:
##
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.apache.hadoop.security.alias.CredentialProviderFactory;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+
+public class ITestAzureBlobFileSystemConfiguration extends 
AbstractAbfsIntegrationTest {
+
+  public ITestAzureBlobFileSystemConfiguration() throws Exception {
+  }
+
+  @Test
+  public void testIncompatibleCredentialProviderIsExcluded() throws Exception {
+Configuration rawConfig = getRawConfiguration();
+rawConfig.set(CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH,
+"jceks://abfs@a@b.c.d/tmp/a.jceks,jceks://file/tmp/secret.jceks");
+AzureBlobFileSystem fs = (AzureBlobFileSystem) FileSystem.get(rawConfig);

Review Comment:
   done






[GitHub] [hadoop] tomicooler commented on a diff in pull request #5560: HADOOP-18705. hadoop-azure: AzureBlobFileSystem should exclude incomp…

2023-04-18 Thread via GitHub


tomicooler commented on code in PR #5560:
URL: https://github.com/apache/hadoop/pull/5560#discussion_r1170822606


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemConfiguration.java:
##
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.apache.hadoop.security.alias.CredentialProviderFactory;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+
+public class ITestAzureBlobFileSystemConfiguration extends 
AbstractAbfsIntegrationTest {

Review Comment:
   done



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemConfiguration.java:
##
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.apache.hadoop.security.alias.CredentialProviderFactory;
+import org.junit.Test;

Review Comment:
   done






[GitHub] [hadoop] hadoop-yetus commented on pull request #5568: HDFS-16653.Add error message for maxEvictableMmapedSize related Precondition check suite.

2023-04-18 Thread via GitHub


hadoop-yetus commented on PR #5568:
URL: https://github.com/apache/hadoop/pull/5568#issuecomment-1514108239

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 32s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  3s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 46s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m  2s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5568/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-client: The patch generated 2 new + 7 
unchanged - 0 fixed = 9 total (was 7)  |
   | +1 :green_heart: |  mvnsite  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 32s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 40s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 28s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 102m 46s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5568/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5568 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux bced305d0d02 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3b11174f0d935788a21816746305e355ece7c261 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5568/1/testReport/ |
   | Max. process+thread count | 641 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5568/1/console |
   | ve

[GitHub] [hadoop] hadoop-yetus commented on pull request #5551: YARN-11378. [Federation] Support checkForDecommissioningNodes、refreshClusterMaxPriority API's for Federation.

2023-04-18 Thread via GitHub


hadoop-yetus commented on PR #5551:
URL: https://github.com/apache/hadoop/pull/5551#issuecomment-1514095343

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 45s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  17m 20s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m  0s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   8m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 49s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 52s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   5m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m  4s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  cc  |   9m  4s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   9m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  cc  |   8m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   8m 24s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 37s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5551/5/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 8 unchanged - 
1 fixed = 9 total (was 9)  |
   | +1 :green_heart: |  mvnsite  |   2m 39s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   5m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 13s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   5m 42s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 45s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 56s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 165m 51s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5551/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5551 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets cc buflint 
bufcompat |
   | uname | Linux f9af01454fc9 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ace4cbffbfd56eab65fd9f0fce9a403112798bb4 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-U

[GitHub] [hadoop] LiuGuH commented on a diff in pull request #5552: HDFS-16979. RBF: Add dfsrouter port in hdfsauditlog

2023-04-18 Thread via GitHub


LiuGuH commented on code in PR #5552:
URL: https://github.com/apache/hadoop/pull/5552#discussion_r1170763729


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:
##
@@ -453,15 +453,15 @@ private void logAuditEvent(boolean succeeded,
 
   private void appendClientPortToCallerContextIfAbsent() {
 final CallerContext ctx = CallerContext.getCurrent();
-if (isClientPortInfoAbsent(ctx)) {
-  String origContext = ctx == null ? null : ctx.getContext();
-  byte[] origSignature = ctx == null ? null : ctx.getSignature();
-  CallerContext.setCurrent(
-  new CallerContext.Builder(origContext, contextFieldSeparator)
-  .append(CallerContext.CLIENT_PORT_STR, 
String.valueOf(Server.getRemotePort()))
-  .setSignature(origSignature)
-  .build());
-}
+String origContext = ctx == null ? null : ctx.getContext();
+byte[] origSignature = ctx == null ? null : ctx.getSignature();
+String clientPort = isClientPortInfoAbsent(ctx) ? 
CallerContext.CLIENT_PORT_STR :

Review Comment:
   Yes, it is. When the request comes from the DFSRouter, the CallerContext 
already includes the client port; that is added in 
RouterRpcClient.addClientInfoToCallerContext(). In short, a request coming 
from the Router carries the client port in its CallerContext while a direct 
client request does not, so we can use isClientPortInfoAbsent() to distinguish 
the two sources.
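
   A hedged sketch of that distinction (the helper name isRouterRelayed is 
hypothetical; CallerContext and CLIENT_PORT_STR are the existing Hadoop 
identifiers):
{code}
import org.apache.hadoop.ipc.CallerContext;

final class CallerOriginSketch {
  private CallerOriginSketch() {
  }

  // A Router-relayed call already carries "clientPort" in its CallerContext
  // (appended by RouterRpcClient.addClientInfoToCallerContext), while a call
  // coming directly from a client does not.
  static boolean isRouterRelayed(CallerContext ctx) {
    return ctx != null && ctx.getContext() != null
        && ctx.getContext().contains(CallerContext.CLIENT_PORT_STR);
  }
}
{code}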






[GitHub] [hadoop] Likkey opened a new pull request, #5569: HDFS-16697.Add code to check for minimumRedundantVolumes.

2023-04-18 Thread via GitHub


Likkey opened a new pull request, #5569:
URL: https://github.com/apache/hadoop/pull/5569

   ### Description of PR
   
   It was found that “dfs.namenode.resource.checked.volumes.minimum” lacks a 
condition check and an associated exception-handling mechanism, which makes it 
impossible to find the root cause when a misconfiguration occurs.
   This PR adds a mechanism that checks whether minimumRedundantVolumes is 
greater than the number of NameNode storage volumes, to avoid never being able 
to turn off safe mode afterwards.
   
   JIRA: [HDFS-16697](https://issues.apache.org/jira/browse/HDFS-16697)
   
   ### How was this patch tested?
   
   This patch adds a check of the configuration item: it throws an 
IllegalArgumentException with a detailed error message when the value is 
greater than the number of NameNode storage volumes, and prints a warning in 
the log, so the problem can be fixed in time and the misconfiguration does not 
affect subsequent operations of the program. A minimal sketch of such a check 
is shown after this message.
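
A minimal sketch of such a check, assuming Guava's Preconditions as used 
elsewhere in Hadoop; the method and variable names are hypothetical, not the 
PR's exact code:
{code}
import com.google.common.base.Preconditions;

final class NameNodeResourceCheckSketch {
  private NameNodeResourceCheckSketch() {
  }

  // Fail fast if the configured minimum exceeds the number of volumes the
  // NameNode actually checks; such a value would keep the NameNode in safe
  // mode forever.
  static void validateMinimumRedundantVolumes(int minimumRedundantVolumes,
      int checkedVolumeCount) {
    Preconditions.checkArgument(minimumRedundantVolumes <= checkedVolumeCount,
        "dfs.namenode.resource.checked.volumes.minimum (%s) must not exceed"
            + " the number of checked NameNode storage volumes (%s)",
        minimumRedundantVolumes, checkedVolumeCount);
  }
}
{code}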





[GitHub] [hadoop] Likkey opened a new pull request, #5568: HDFS-16653.Add error message for maxEvictableMmapedSize related Precondition check suite.

2023-04-18 Thread via GitHub


Likkey opened a new pull request, #5568:
URL: https://github.com/apache/hadoop/pull/5568

   ### Description of PR
   
   When the configuration item “dfs.client.mmap.cache.size” is set to a 
negative number, all operation options of hadoop/bin/hdfs dfsadmin -safemode 
(enter, leave, get, wait, and forceExit) become invalid: the terminal reports 
that safe mode is null and no exception is thrown.
   
   [HDFS-16653](https://issues.apache.org/jira/browse/HDFS-16653)
   
   ### How was this patch tested?
   
   This patch adds an error message to the Precondition check for 
maxEvictableMmapedSize (i.e. "dfs.client.mmap.cache.size"), giving a clear 
indication when the configuration is abnormal, so the problem can be solved in 
time and the impact on safe-mode-related operations reduced. A minimal sketch 
of the message is shown after this description.
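
A hedged sketch of the improved check; only maxEvictableMmapedSize and the 
property name come from the issue, the surrounding method is illustrative:
{code}
import com.google.common.base.Preconditions;

final class MmapCacheSizeCheckSketch {
  private MmapCacheSizeCheckSketch() {
  }

  // Reject a negative dfs.client.mmap.cache.size with an explicit message
  // instead of a bare precondition failure.
  static void checkMaxEvictableMmapedSize(int maxEvictableMmapedSize) {
    Preconditions.checkArgument(maxEvictableMmapedSize >= 0,
        "dfs.client.mmap.cache.size (maxEvictableMmapedSize) must be"
            + " non-negative, but was %s", maxEvictableMmapedSize);
  }
}
{code}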





[GitHub] [hadoop] Likkey closed pull request #4849: HDFS-16697.Add code to check for minimumRedundantVolumes.

2023-04-18 Thread via GitHub


Likkey closed pull request #4849: HDFS-16697.Add code to check for 
minimumRedundantVolumes.
URL: https://github.com/apache/hadoop/pull/4849





[GitHub] [hadoop] goiri commented on a diff in pull request #5552: HDFS-16979. RBF: Add dfsrouter port in hdfsauditlog

2023-04-18 Thread via GitHub


goiri commented on code in PR #5552:
URL: https://github.com/apache/hadoop/pull/5552#discussion_r1170739964


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:
##
@@ -453,15 +453,15 @@ private void logAuditEvent(boolean succeeded,
 
   private void appendClientPortToCallerContextIfAbsent() {
 final CallerContext ctx = CallerContext.getCurrent();
-if (isClientPortInfoAbsent(ctx)) {
-  String origContext = ctx == null ? null : ctx.getContext();
-  byte[] origSignature = ctx == null ? null : ctx.getSignature();
-  CallerContext.setCurrent(
-  new CallerContext.Builder(origContext, contextFieldSeparator)
-  .append(CallerContext.CLIENT_PORT_STR, 
String.valueOf(Server.getRemotePort()))
-  .setSignature(origSignature)
-  .build());
-}
+String origContext = ctx == null ? null : ctx.getContext();
+byte[] origSignature = ctx == null ? null : ctx.getSignature();
+String clientPort = isClientPortInfoAbsent(ctx) ? 
CallerContext.CLIENT_PORT_STR :

Review Comment:
   Are we even sure that if it is not a client, it is a router?






[GitHub] [hadoop] Likkey closed pull request #4848: HDFS-16653.Add error message for maxEvictableMmapedSize related Precondition check suite.

2023-04-18 Thread via GitHub


Likkey closed pull request #4848: HDFS-16653.Add error message for 
maxEvictableMmapedSize related Precondition check suite.
URL: https://github.com/apache/hadoop/pull/4848





[GitHub] [hadoop] goiri commented on a diff in pull request #5552: HDFS-16979. RBF: Add dfsrouter port in hdfsauditlog

2023-04-18 Thread via GitHub


goiri commented on code in PR #5552:
URL: https://github.com/apache/hadoop/pull/5552#discussion_r1170739316


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:
##
@@ -453,15 +453,15 @@ private void logAuditEvent(boolean succeeded,
 
   private void appendClientPortToCallerContextIfAbsent() {
 final CallerContext ctx = CallerContext.getCurrent();
-if (isClientPortInfoAbsent(ctx)) {
-  String origContext = ctx == null ? null : ctx.getContext();
-  byte[] origSignature = ctx == null ? null : ctx.getSignature();
-  CallerContext.setCurrent(
-  new CallerContext.Builder(origContext, contextFieldSeparator)
-  .append(CallerContext.CLIENT_PORT_STR, 
String.valueOf(Server.getRemotePort()))
-  .setSignature(origSignature)
-  .build());
-}
+String origContext = ctx == null ? null : ctx.getContext();
+byte[] origSignature = ctx == null ? null : ctx.getSignature();
+String clientPort = isClientPortInfoAbsent(ctx) ? 
CallerContext.CLIENT_PORT_STR :

Review Comment:
   It would be nice to have a comment explaining the logic for when this is a 
Router request, etc.






[GitHub] [hadoop] Likkey closed pull request #4847: HDFS-16721.Improve the check code of “dfs.client.socket-timeout”.

2023-04-18 Thread via GitHub


Likkey closed pull request #4847: HDFS-16721.Improve the check code of 
“dfs.client.socket-timeout”.
URL: https://github.com/apache/hadoop/pull/4847


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5519: MAPREDUCE-7435. Manifest Committer OOM on abfs

2023-04-18 Thread via GitHub


hadoop-yetus commented on PR #5519:
URL: https://github.com/apache/hadoop/pull/5519#issuecomment-1514011109

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  48m  2s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 12 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m 21s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  29m  7s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  21m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   4m  6s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m  8s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 48s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | -1 :x: |  spotbugs  |   1m 30s | 
[/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5519/9/artifact/out/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core-warnings.html)
 |  
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
in trunk has 1 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  23m 40s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | -1 :x: |  javac  |  24m 23s | 
[/results-compile-javac-root-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5519/9/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt)
 |  root-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 generated 1 new + 2806 unchanged 
- 0 fixed = 2807 total (was 2806)  |
   | +1 :green_heart: |  compile  |  21m 47s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | -1 :x: |  javac  |  21m 47s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5519/9/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09.txt)
 |  root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 generated 1 new + 2602 
unchanged - 0 fixed = 2603 total (was 2602)  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5519/9/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   4m 21s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5519/9/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 10 new + 33 unchanged - 0 fixed = 43 total (was 
33)  |
   | +1 :green_heart: |  mvnsite  |   4m 10s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | -1 :x: |  spotbugs  |   1m 44s | 
[/new-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5519/9/artifact/out/new-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.html)
 |  
hadoop-mapreduce-project/hadoop-mapreduce-client

[GitHub] [hadoop] hadoop-yetus commented on pull request #5566: Bump jetty-server from 9.4.48.v20220622 to 10.0.14 in /hadoop-project

2023-04-18 Thread via GitHub


hadoop-yetus commented on PR #5566:
URL: https://github.com/apache/hadoop/pull/5566#issuecomment-1513927807

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 24s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  shadedclient  |  62m 15s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | -1 :x: |  shadedclient  |   2m 55s |  |  patch has errors when building 
and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 14s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  69m 44s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5566/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5566 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux f9fa8a600818 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e441dd99cf18a968a624cca6dd9861557734f9dd |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5566/1/testReport/ |
   | Max. process+thread count | 556 (vs. ulimit of 5500) |
   | modules | C: hadoop-project U: hadoop-project |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5566/1/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Commented] (HADOOP-18701) Generic Build Improvements

2023-04-18 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713799#comment-17713799
 ] 

Ayush Saxena commented on HADOOP-18701:
---

Yep, we should add it in hadoop-main.

One point of confusion: you also added *trimStackTrace* as false explicitly here: 
https://github.com/apache/hadoop/pull/5543/files#diff-22b9fbf2d456e024bad08d789df2f55744cef1c8a8c585209b0f0a52a068350dR276

They made it false by default in M6 in that Surefire ticket; we are on M1, which 
is why you had to add it explicitly.

Trying some WIP stuff, yet to validate.
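
For reference, a minimal sketch of how the two properties discussed above might be declared in a root pom.xml. The property names come from the Surefire docs linked below and the PR diff above, but the exact placement (hadoop-main vs per-plugin configuration) is still WIP:

{noformat}
<properties>
  <!-- don't fail the build when -Dtest matches nothing in a module -->
  <surefire.failIfNoSpecifiedTests>false</surefire.failIfNoSpecifiedTests>
  <!-- keep full stack traces; only becomes the default in 3.0.0-M6 -->
  <trimStackTrace>false</trimStackTrace>
</properties>
{noformat}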

> Generic Build Improvements
> --
>
> Key: HADOOP-18701
> URL: https://issues.apache.org/jira/browse/HADOOP-18701
> Project: Hadoop Common
>  Issue Type: Wish
>Reporter: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>
> Some proposed build changes.
>  * Add  {{surefire.failIfNoSpecifiedTests}} as false in POM, else it fails if 
> the test specified in -Dtest isn't there in that module, creates problem when 
> you plan to run multiple tests across multiple sub-projects from the root of 
> the project. 
> (https://maven.apache.org/surefire/maven-surefire-plugin/test-mojo.html#failifnospecifiedtests)
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0:test (default-test) on 
> project hadoop-build-tools: No tests matching pattern 
> "TestServiceInterruptHandling" were executed! (Set 
> -Dsurefire.failIfNoSpecifiedTests=false to ignore this error.)
> {noformat}
>  * Disable Concurrent builds: Folks push multiple commits in 5-10 mins while 
> pre-commit is running already, so good to discourage this.
>  * Add threshold to number of builds per day, saves resources for genuine 
> PR's against someone pushing multiple commits. (This & the above one: Copied 
> idea from Hive)
>  * Leverage Github Actions to delegate some of the tasks to them, so a bit of 
> parallel execution and might save time, may be explore pushing JDK-11 related 
> stuff to Github Actions (We don't run tests as of now for both JDK-11 & 8, 
> tests are for 8 only in precommit)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18701) Generic Build Improvements

2023-04-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18701:

Labels: pull-request-available  (was: )

> Generic Build Improvements
> --
>
> Key: HADOOP-18701
> URL: https://issues.apache.org/jira/browse/HADOOP-18701
> Project: Hadoop Common
>  Issue Type: Wish
>Reporter: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>
> Some proposed build changes.
>  * Add  {{surefire.failIfNoSpecifiedTests}} as false in POM, else it fails if 
> the test specified in -Dtest isn't there in that module, creates problem when 
> you plan to run multiple tests across multiple sub-projects from the root of 
> the project. 
> (https://maven.apache.org/surefire/maven-surefire-plugin/test-mojo.html#failifnospecifiedtests)
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0:test (default-test) on 
> project hadoop-build-tools: No tests matching pattern 
> "TestServiceInterruptHandling" were executed! (Set 
> -Dsurefire.failIfNoSpecifiedTests=false to ignore this error.)
> {noformat}
>  * Disable Concurrent builds: Folks push multiple commits in 5-10 mins while 
> pre-commit is running already, so good to discourage this.
>  * Add threshold to number of builds per day, saves resources for genuine 
> PR's against someone pushing multiple commits. (This & the above one: Copied 
> idea from Hive)
>  * Leverage Github Actions to delegate some of the tasks to them, so a bit of 
> parallel execution and might save time, may be explore pushing JDK-11 related 
> stuff to Github Actions (We don't run tests as of now for both JDK-11 & 8, 
> tests are for 8 only in precommit)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18701) Generic Build Improvements

2023-04-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713798#comment-17713798
 ] 

ASF GitHub Bot commented on HADOOP-18701:
-

ayushtkn opened a new pull request, #5567:
URL: https://github.com/apache/hadoop/pull/5567

   ### Description of PR
   
   **WIP:**
   
   Attempting basic improvements
   
   ### How was this patch tested?
   
   **WIP**
   Tried the failIfNoSpecifiedTests prop so far
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Generic Build Improvements
> --
>
> Key: HADOOP-18701
> URL: https://issues.apache.org/jira/browse/HADOOP-18701
> Project: Hadoop Common
>  Issue Type: Wish
>Reporter: Ayush Saxena
>Priority: Major
>
> Some proposed build changes.
>  * Add  {{surefire.failIfNoSpecifiedTests}} as false in POM, else it fails if 
> the test specified in -Dtest isn't there in that module, creates problem when 
> you plan to run multiple tests across multiple sub-projects from the root of 
> the project. 
> (https://maven.apache.org/surefire/maven-surefire-plugin/test-mojo.html#failifnospecifiedtests)
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0:test (default-test) on 
> project hadoop-build-tools: No tests matching pattern 
> "TestServiceInterruptHandling" were executed! (Set 
> -Dsurefire.failIfNoSpecifiedTests=false to ignore this error.)
> {noformat}
>  * Disable Concurrent builds: Folks push multiple commits in 5-10 mins while 
> pre-commit is running already, so good to discourage this.
>  * Add threshold to number of builds per day, saves resources for genuine 
> PR's against someone pushing multiple commits. (This & the above one: Copied 
> idea from Hive)
>  * Leverage Github Actions to delegate some of the tasks to them, so a bit of 
> parallel execution and might save time, may be explore pushing JDK-11 related 
> stuff to Github Actions (We don't run tests as of now for both JDK-11 & 8, 
> tests are for 8 only in precommit)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn opened a new pull request, #5567: HADOOP-18701. Generic Build Improvements.

2023-04-18 Thread via GitHub


ayushtkn opened a new pull request, #5567:
URL: https://github.com/apache/hadoop/pull/5567

   ### Description of PR
   
   **WIP:**
   
   Attempting basic improvements
   
   ### How was this patch tested?
   
   **WIP**
   Tried the failIfNoSpecifiedTests prop so far
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #4963: YARN-11326. [Federation] Add RM FederationStateStoreService Metrics.

2023-04-18 Thread via GitHub


slfan1989 commented on PR #4963:
URL: https://github.com/apache/hadoop/pull/4963#issuecomment-1513893947

   @goiri Thank you very much for helping to review the code!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #5537: YARN-11438. [Federation] ZookeeperFederationStateStore Support Version.

2023-04-18 Thread via GitHub


slfan1989 commented on PR #5537:
URL: https://github.com/apache/hadoop/pull/5537#issuecomment-1513894068

   @goiri Thank you very much for helping to review the code!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dependabot[bot] opened a new pull request, #5566: Bump jetty-server from 9.4.48.v20220622 to 10.0.14 in /hadoop-project

2023-04-18 Thread via GitHub


dependabot[bot] opened a new pull request, #5566:
URL: https://github.com/apache/hadoop/pull/5566

   Bumps [jetty-server](https://github.com/eclipse/jetty.project) from 
9.4.48.v20220622 to 10.0.14.
   
   Release notes
   Sourced from jetty-server's releases: https://github.com/eclipse/jetty.project/releases
   
   10.0.14
   Special Thanks to the following Eclipse Jetty community members
   
   - @pzygielo (Piotrek Żygieło)
   - @jluehe (jluehe)
   - @dzoech (Dominik Zöchbauer)
   
   Changelog
   
   - #9344 - Cleanup Multipart handling for CVE-2023-26048
   - #9343 - URI Host Mismatch with optional Compliance modes
   - #9339 - Cleanup Cookie Cutter handling for CVE-2023-26049
   - #9337 - LowResourceMonitor.getReasons should include detailed reason instead of hard-coded message (@jluehe)
   - #9334 - Better support for Cookie RFC 2965 compliance
   - #9285 - ContextHandler sends redirect on BaseResponse instead of Wrapped Response object from Handler chain
   - #9283 - Configurable Unsafe Host Header Behaviors
   - #9188 - Log as info exceptions from server after sending stop with StopMojo.
   - #9183 - ConnectHandler may close the connection instead of sending 200 OK
   - #9128 - Do not execute any phase for maven plugin :start (@pzygielo)
   - #9119 - Wrong value of javax.servlet.forward.context_path attribute
   - #9092 - Use ASM Bom
   - #9059 - IteratingCallback not serializing close() and failed()
   - #9055 - PathMappings optimizations
   - #7650 - QueuedThreadPool: Stopped without executing or closing null (@dzoech)
   
   Dependencies
   
   - #9242 - Bump infinispan-bom to 11.0.17.Final
   - #9359 - Bump maven.version to 3.9.0
   - #9102 - Bump org.apache.aries.spifly.dynamic.bundle to 1.3.6
   - #9098 - Bump org.eclipse.osgi to 3.18.200
   - #9106 - Bump org.eclipse.osgi.services to 3.11.100
   - #9097 - Bump protostream to 4.6.0.Final
   - #9367 - Bump tycho-p2-repository-plugin to 3.0.2
   
   10.0.13
   Special Thanks to the following Eclipse Jetty community members
   
   - @janvojt (Jan Vojt)
   - @joschi (Jochen Schalanda)
   - @leonchen83 (Baoyi Chen)
   - @cowwoc (Gili Tzabari)
   - @Vlatombe (Vincent Latombe)
   
   Changelog
   
   - #9006 - WebSocket Message InputStream read() returns signed byte
   - #8913 - Review Jetty XML syntax to allow calling JDK methods
   
   ... (truncated)
   
   Commits
   
   - 976721d Updating to version 10.0.14
   - b707516 Fix osgi dependencies for update to org.eclipse.osgi.services.
   - 4d14641 Fix #9334 Cookie Compliance (#9402)
   - f01d538 Merge pull request #9380 from eclipse/dep
[jira] [Commented] (HADOOP-18701) Generic Build Improvements

2023-04-18 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713765#comment-17713765
 ] 

Steve Loughran commented on HADOOP-18701:
-

Not sure that surefire JIRA is the one, as our build is still at 3.0.0-M1 and I 
saw it on trunk. Maybe some other aspect of the build has changed, now that 
homebrew has put me on to maven 3.3.9.

Anyway, we should pre-emptively fix our build so a surefire upgrade won't lose 
the output we depend on.

> Generic Build Improvements
> --
>
> Key: HADOOP-18701
> URL: https://issues.apache.org/jira/browse/HADOOP-18701
> Project: Hadoop Common
>  Issue Type: Wish
>Reporter: Ayush Saxena
>Priority: Major
>
> Some proposed build changes.
>  * Add  {{surefire.failIfNoSpecifiedTests}} as false in POM, else it fails if 
> the test specified in -Dtest isn't there in that module, creates problem when 
> you plan to run multiple tests across multiple sub-projects from the root of 
> the project. 
> (https://maven.apache.org/surefire/maven-surefire-plugin/test-mojo.html#failifnospecifiedtests)
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0:test (default-test) on 
> project hadoop-build-tools: No tests matching pattern 
> "TestServiceInterruptHandling" were executed! (Set 
> -Dsurefire.failIfNoSpecifiedTests=false to ignore this error.)
> {noformat}
>  * Disable Concurrent builds: Folks push multiple commits in 5-10 mins while 
> pre-commit is running already, so good to discourage this.
>  * Add threshold to number of builds per day, saves resources for genuine 
> PR's against someone pushing multiple commits. (This & the above one: Copied 
> idea from Hive)
>  * Leverage Github Actions to delegate some of the tasks to them, so a bit of 
> parallel execution and might save time, may be explore pushing JDK-11 related 
> stuff to Github Actions (We don't run tests as of now for both JDK-11 & 8, 
> tests are for 8 only in precommit)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18704) Support a "permissive" mode for secure clusters to allow "simple" auth clients

2023-04-18 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18704:

Component/s: security

> Support a "permissive" mode for secure clusters to allow "simple" auth clients
> --
>
> Key: HADOOP-18704
> URL: https://issues.apache.org/jira/browse/HADOOP-18704
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, security
>Affects Versions: 3.4.0, 2.10.3, 3.2.5, 3.3.6
>Reporter: Ravi Kishore Valeti
>Priority: Minor
>
> Similar to HBASE-14700, we would like to add support for the Secure Server to 
> fall back to simple auth for non-secure clients.
> Secure Hadoop to support a permissive mode to allow mixed secure and insecure 
> clients. This allows clients to be incrementally migrated over to a secure 
> configuration. To enable clients to continue to connect using SIMPLE 
> authentication when the cluster is configured for security, set 
> "hadoop.ipc.server.fallback-to-simple-auth-allowed" equal to "true" in 
> hdfs-site.xml. NOTE: This setting should ONLY be used as a temporary measure 
> while converting clients over to secure authentication. It MUST BE DISABLED 
> for secure operation.
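
For illustration, the setting described above would look like the following in hdfs-site.xml; note that this server-side key is the proposal of this JIRA (mirroring HBASE-14700), not an existing Hadoop configuration:

{noformat}
<!-- Temporary migration aid only; MUST be disabled for secure operation. -->
<property>
  <name>hadoop.ipc.server.fallback-to-simple-auth-allowed</name>
  <value>true</value>
</property>
{noformat}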



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18704) Support a "permissive" mode for secure clusters to allow "simple" auth clients

2023-04-18 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18704:

Issue Type: New Feature  (was: Improvement)

> Support a "permissive" mode for secure clusters to allow "simple" auth clients
> --
>
> Key: HADOOP-18704
> URL: https://issues.apache.org/jira/browse/HADOOP-18704
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc, security
>Affects Versions: 3.4.0, 2.10.3, 3.2.5, 3.3.6
>Reporter: Ravi Kishore Valeti
>Priority: Minor
>
> Similar to HBASE-14700, we would like to add support for the Secure Server to 
> fall back to simple auth for non-secure clients.
> Secure Hadoop to support a permissive mode to allow mixed secure and insecure 
> clients. This allows clients to be incrementally migrated over to a secure 
> configuration. To enable clients to continue to connect using SIMPLE 
> authentication when the cluster is configured for security, set 
> "hadoop.ipc.server.fallback-to-simple-auth-allowed" equal to "true" in 
> hdfs-site.xml. NOTE: This setting should ONLY be used as a temporary measure 
> while converting clients over to secure authentication. It MUST BE DISABLED 
> for secure operation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18704) Support a "permissive" mode for secure clusters to allow "simple" auth clients

2023-04-18 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18704:

Priority: Major  (was: Minor)

> Support a "permissive" mode for secure clusters to allow "simple" auth clients
> --
>
> Key: HADOOP-18704
> URL: https://issues.apache.org/jira/browse/HADOOP-18704
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc, security
>Affects Versions: 3.4.0, 2.10.3, 3.2.5, 3.3.6
>Reporter: Ravi Kishore Valeti
>Priority: Major
>
> Similar to HBASE-14700, we would like to add support for the Secure Server to 
> fall back to simple auth for non-secure clients.
> Secure Hadoop to support a permissive mode to allow mixed secure and insecure 
> clients. This allows clients to be incrementally migrated over to a secure 
> configuration. To enable clients to continue to connect using SIMPLE 
> authentication when the cluster is configured for security, set 
> "hadoop.ipc.server.fallback-to-simple-auth-allowed" equal to "true" in 
> hdfs-site.xml. NOTE: This setting should ONLY be used as a temporary measure 
> while converting clients over to secure authentication. It MUST BE DISABLED 
> for secure operation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18704) Support a "permissive" mode for secure clusters to allow "simple" auth clients

2023-04-18 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713757#comment-17713757
 ] 

Steve Loughran commented on HADOOP-18704:
-

This is going to be up to the YARN/HDFS team to worry about; *I will not review*.

What I would suggest, however, is that the list of users allowed to 
authenticate with simple auth be part of the config, so you can restrict the 
exposure of this *very dangerous* feature.
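
For illustration, such a restriction might look like the sketch below; the first key is the proposal of this JIRA, and the allowlist key is purely hypothetical (nothing like it exists in Hadoop today):

{noformat}
<property>
  <name>hadoop.ipc.server.fallback-to-simple-auth-allowed</name>
  <value>true</value>
</property>
<property>
  <!-- hypothetical allowlist limiting which users may skip kerberos -->
  <name>hadoop.ipc.server.fallback-to-simple-auth-allowed.users</name>
  <value>legacy-etl,reporting-svc</value>
</property>
{noformat}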

> Support a "permissive" mode for secure clusters to allow "simple" auth clients
> --
>
> Key: HADOOP-18704
> URL: https://issues.apache.org/jira/browse/HADOOP-18704
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.4.0, 2.10.3, 3.2.5, 3.3.6
>Reporter: Ravi Kishore Valeti
>Priority: Minor
>
> Similar to HBASE-14700, would like to add support for Secure Server to 
> fallback to simple auth for non-secure clients.
> Secure Hadoop to support a permissive mode to allow mixed secure and insecure 
> clients. This allows clients to be incrementally migrated over to a secure 
> configuration. To enable clients to continue to connect using SIMPLE 
> authentication when the cluster is configured for security, set 
> "hadoop.ipc.server.fallback-to-simple-auth-allowed" equal to "true" in 
> hdfs-site.xml. NOTE: This setting should ONLY be used as a temporary measure 
> while converting clients over to secure authentication. It MUST BE DISABLED 
> for secure operation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18149) The FSDownload verifyAndCopy method doesn't support S3

2023-04-18 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713756#comment-17713756
 ] 

Steve Loughran commented on HADOOP-18149:
-

why closing as invalid?

> The FSDownload verifyAndCopy method doesn't support S3
> --
>
> Key: HADOOP-18149
> URL: https://issues.apache.org/jira/browse/HADOOP-18149
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chris Bevard
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The modification time comparison in FSDownload's verifyAndCopy method fails 
> for S3, which prohibits distributed cache files from being loaded from S3. 
> This change allows S3 to be supported via a config change that replaces the 
> IOException with a warning log entry.
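
A rough sketch of the pattern the description suggests; the config key and the variable names are illustrative placeholders, not the actual patch:

{code:java}
// Hypothetical: tolerate stores (e.g. S3) whose reported modification time
// does not match the timestamp captured when the resource was registered.
if (sStat.getModificationTime() != resource.getTimestamp()) {
  if (conf.getBoolean("yarn.localizer.verify-timestamps", true)) { // hypothetical key
    throw new IOException("Resource " + sCopy + " changed on src filesystem"
        + " - expected: " + resource.getTimestamp()
        + ", was: " + sStat.getModificationTime());
  }
  LOG.warn("Resource {} changed on src filesystem; continuing because"
      + " timestamp verification is disabled", sCopy);
}
{code}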



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18706) The temporary files for disk-block buffer aren't unique enough to recover partial uploads. 

2023-04-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713754#comment-17713754
 ] 

ASF GitHub Bot commented on HADOOP-18706:
-

hadoop-yetus commented on PR #5563:
URL: https://github.com/apache/hadoop/pull/5563#issuecomment-1513746171

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 23s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 17s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/4/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 4 new + 2 unchanged - 0 fixed 
= 6 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 34s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 103m 50s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5563 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 050c1edeb490 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6b43d44c17303d5a202dedaedd53bfad8e0719e4 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/4/testReport/ |
   | Max. process+thread count | 528 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/4/console |

[GitHub] [hadoop] hadoop-yetus commented on pull request #5563: HADOOP-18706: Improve S3ABlockOutputStream recovery

2023-04-18 Thread via GitHub


hadoop-yetus commented on PR #5563:
URL: https://github.com/apache/hadoop/pull/5563#issuecomment-1513746171

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 23s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 17s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/4/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 4 new + 2 unchanged - 0 fixed 
= 6 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 34s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 103m 50s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5563 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 050c1edeb490 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6b43d44c17303d5a202dedaedd53bfad8e0719e4 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/4/testReport/ |
   | Max. process+thread count | 528 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.

[GitHub] [hadoop] hadoop-yetus commented on pull request #5554: HDFS-16978. RBF: Admin command to support bulk add of mount points

2023-04-18 Thread via GitHub


hadoop-yetus commented on PR #5554:
URL: https://github.com/apache/hadoop/pull/5554#issuecomment-1513708832

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  16m 38s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 57s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 34s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 50s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  21m  9s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  cc  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  cc  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 35s | 
[/results-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5554/8/artifact/out/results-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt)
 |  
hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 generated 20 new + 79 
unchanged - 20 fixed = 99 total (was 99)  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  |  22m 52s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5554/8/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 135m 37s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterRPCMultipleDestinationMountTableResolver
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5554/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5554 |
   | Optional Tests | dupname asflicense codespell detsecrets xmllint compile 
javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle cc 
buflint bufcompat |
   | uname | Linux 

[jira] [Commented] (HADOOP-18706) The temporary files for disk-block buffer aren't unique enough to recover partial uploads. 

2023-04-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713742#comment-17713742
 ] 

ASF GitHub Bot commented on HADOOP-18706:
-

cbevard1 commented on PR #5563:
URL: https://github.com/apache/hadoop/pull/5563#issuecomment-1513699708

   @steveloughran thanks for your feedback. I've added the span ID to the file 
name as you suggested for better debugging.
   
   > If you really want upload to be recoverable then you need to be able to 
combine blocks on the hard disk with the in-progress multipart upload such that 
you can build finish the upload, build the list of etags and then POST the 
complete operation.
   
   With the part number and key derived from the local file name, I've been 
using calls to `list-multipart-uploads`/`list-parts` to get the uploadID/ETags 
and complete partial uploads. For single-part files I call putObject with the 
key, and for multipart uploads I use the upload ID and part number returned by 
`list-multipart-uploads`/`list-parts` to submit the local file as the final 
part. The key could exceed an OS's file name char limit though, so I think 
including the span ID is a very good idea.
   
   I know it's not a typical use case to recover a partial upload rather than 
retry the entire file, but it's very helpful when using S3A as the underlying 
file system in Accumulo. 
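   
   For anyone following along, a sketch of that recovery flow with the AWS SDK 
v1 calls named above (assumes an `AmazonS3 s3` client and 
`com.amazonaws.services.s3.model.*` imports; error handling omitted):
   
   ```java
   // Find the in-progress upload for the key recovered from the file name.
   MultipartUploadListing uploads = s3.listMultipartUploads(
       new ListMultipartUploadsRequest(bucket).withPrefix(key));
   String uploadId = uploads.getMultipartUploads().get(0).getUploadId();
   
   // Collect the ETags of the parts that made it to S3 before the crash.
   List<PartETag> etags = new ArrayList<>();
   for (PartSummary p : s3.listParts(
       new ListPartsRequest(bucket, key, uploadId)).getParts()) {
     etags.add(new PartETag(p.getPartNumber(), p.getETag()));
   }
   
   // Upload the locally buffered block as the final part, then complete.
   UploadPartResult last = s3.uploadPart(new UploadPartRequest()
       .withBucketName(bucket).withKey(key).withUploadId(uploadId)
       .withPartNumber(etags.size() + 1).withFile(localBlockFile));
   etags.add(last.getPartETag());
   s3.completeMultipartUpload(
       new CompleteMultipartUploadRequest(bucket, key, uploadId, etags));
   ```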




> The temporary files for disk-block buffer aren't unique enough to recover 
> partial uploads. 
> ---
>
> Key: HADOOP-18706
> URL: https://issues.apache.org/jira/browse/HADOOP-18706
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Chris Bevard
>Priority: Minor
>  Labels: pull-request-available
>
> If an application crashes during an S3ABlockOutputStream upload, it's 
> possible to complete the upload if fast.upload.buffer is set to disk by 
> uploading the s3ablock file with putObject as the final part of the multipart 
> upload. If the application has multiple uploads running in parallel though 
> and they're on the same part number when the application fails, then there is 
> no way to determine which file belongs to which object, and recovery of 
> either upload is impossible.
> If the temporary file name for disk buffering included the s3 key, then every 
> partial upload would be recoverable.
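
For illustration, the naming idea in the description boils down to something like the sketch below (the names are illustrative, not the actual S3A code):

{code:java}
// Encode the object key (and part number) into the buffered block's
// temp file name so a crashed upload can be matched back to its object.
String safeKey = key.replaceAll("[^A-Za-z0-9._-]", "_");
File tmp = File.createTempFile(
    String.format("s3ablock-%04d-%s-", partNumber, safeKey), ".tmp", bufferDir);
{code}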



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] cbevard1 commented on pull request #5563: HADOOP-18706: Improve S3ABlockOutputStream recovery

2023-04-18 Thread via GitHub


cbevard1 commented on PR #5563:
URL: https://github.com/apache/hadoop/pull/5563#issuecomment-1513699708

   @steveloughran thanks for your feedback. I've added the span ID to the file 
name as you suggested for better debugging.
   
   > If you really want upload to be recoverable then you need to be able to 
combine blocks on the hard disk with the in-progress multipart upload such that 
you can build finish the upload, build the list of etags and then POST the 
complete operation.
   
   With the part number and key derived from the local file name, I've been 
using calls to `list-multipart-uploads`/`list-parts` to get the uploadID/ETags 
and complete partial uploads. For single-part files I call putObject with the 
key, and for multipart uploads I use the upload ID and part number returned by 
`list-multipart-uploads`/`list-parts` to submit the local file as the final 
part. The key could exceed an OS's file name char limit though, so I think 
including the span ID is a very good idea.
   
   I know it's not a typical use case to recover a partial upload rather than 
retry the entire file, but it's very helpful when using S3A as the underlying 
file system in Accumulo. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18691) Add a CallerContext getter on the Schedulable interface

2023-04-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713710#comment-17713710
 ] 

ASF GitHub Bot commented on HADOOP-18691:
-

smengcl commented on PR #5540:
URL: https://github.com/apache/hadoop/pull/5540#issuecomment-1513603521

   Thanks @xBis7. Latest changes LGTM. Looks like all comments from 
@steveloughran are addressed.
   
   I will merge after a few days or when Steve approves this.




> Add a CallerContext getter on the Schedulable interface
> ---
>
> Key: HADOOP-18691
> URL: https://issues.apache.org/jira/browse/HADOOP-18691
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Christos Bisias
>Priority: Major
>  Labels: pull-request-available
>
> We would like to add a default *CallerContext* getter 
> on the *Schedulable* interface
> {code:java}
> default public CallerContext getCallerContext() {
>   return null;
> } {code}
> and then override it on the *ipc/Server.Call* class
> {code:java}
> @Override
> public CallerContext getCallerContext() {
>   return this.callerContext;
> } {code}
> to expose the already existing *callerContext* field.
>  
> This change will help us access the *CallerContext* on 
> an Apache Ozone *IdentityProvider* implementation.
> On the Ozone side, the *FairCallQueue* doesn't work with the 
> Ozone S3G, because all users are masked under a special S3G user and there is 
> no impersonation. Therefore, the FCQ sees only one user and becomes 
> ineffective. We can use the *CallerContext* field to 
> store the current user and access it on the Ozone 
> *IdentityProvider*.
>  
> This is a presentation with the proposed approach.
> [https://docs.google.com/presentation/d/1iChpCz_qf-LXiPyvotpOGiZ31yEUyxAdU4RhWMKo0c0/edit#slide=id.p]
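
As an illustration of the last point, a minimal sketch of an identity provider
built on the proposed getter; the class name and the convention of storing the
real user as the context string are assumptions, not Ozone's actual
implementation:

```java
import org.apache.hadoop.ipc.CallerContext;
import org.apache.hadoop.ipc.IdentityProvider;
import org.apache.hadoop.ipc.Schedulable;
import org.apache.hadoop.security.UserGroupInformation;

public class CallerContextIdentityProvider implements IdentityProvider {
  @Override
  public String makeIdentity(Schedulable obj) {
    // Proposed getter: returns null unless the Schedulable overrides it.
    CallerContext ctx = obj.getCallerContext();
    if (ctx != null && ctx.isContextValid()) {
      return ctx.getContext(); // e.g. the real S3 user stored by the gateway
    }
    // Fall back to the RPC user (the masked S3G user in the Ozone case).
    UserGroupInformation ugi = obj.getUserGroupInformation();
    return ugi == null ? null : ugi.getShortUserName();
  }
}
```

With this in place the FairCallQueue would schedule per real user rather than
per masked gateway user.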



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] smengcl commented on pull request #5540: HADOOP-18691. Add a CallerContext getter on the Schedulable interface

2023-04-18 Thread via GitHub


smengcl commented on PR #5540:
URL: https://github.com/apache/hadoop/pull/5540#issuecomment-1513603521

   Thanks @xBis7. Latest changes LGTM. Looks like all comments from 
@steveloughran are addressed.
   
   I will merge after a few days or when Steve approves this.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18706) The temporary files for disk-block buffer aren't unique enough to recover partial uploads. 

2023-04-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713708#comment-17713708
 ] 

ASF GitHub Bot commented on HADOOP-18706:
-

hadoop-yetus commented on PR #5563:
URL: https://github.com/apache/hadoop/pull/5563#issuecomment-1513592514

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/3/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 2 new + 2 unchanged - 0 fixed 
= 4 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 21s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  |   2m 22s | 
[/patch-unit-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/3/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt)
 |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 108m  4s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.s3a.TestS3ABlockOutputStream |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5563 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 7026cd53cc3a 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 23120945595441a326785b0aad3111d1f6ec4c90 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5563: HADOOP-18706: Improve S3ABlockOutputStream recovery

2023-04-18 Thread via GitHub


hadoop-yetus commented on PR #5563:
URL: https://github.com/apache/hadoop/pull/5563#issuecomment-1513592514

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/3/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 2 new + 2 unchanged - 0 fixed 
= 4 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 21s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  |   2m 22s | 
[/patch-unit-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/3/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt)
 |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 108m  4s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.s3a.TestS3ABlockOutputStream |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5563 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 7026cd53cc3a 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 23120945595441a326785b0aad3111d1f6ec4c90 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/3/testReport/ |
   | Max. process+thread count | 578 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache

[GitHub] [hadoop] goiri merged pull request #5556: HDFS-16982 Use the right Quantiles Array for Inverse Quantiles snapshot

2023-04-18 Thread via GitHub


goiri merged PR #5556:
URL: https://github.com/apache/hadoop/pull/5556


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5565: YARN-11467. RM failover may fail when the nodes.exclude-path file does not exist

2023-04-18 Thread via GitHub


hadoop-yetus commented on PR #5565:
URL: https://github.com/apache/hadoop/pull/5565#issuecomment-1513564362

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  17m 35s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 14s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 58s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 54s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  5s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   2m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 12s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 38s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5565/1/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 4 new + 19 unchanged - 0 fixed = 23 total (was 19)  |
   | +1 :green_heart: |  mvnsite  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 56s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  |  99m 41s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5565/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 215m 34s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueConfigurationAutoRefreshPolicy
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5565/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5565 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 8027b2b23bfd 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a3e54dc344a5e517f64bd937263d1e1146d1d766 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-

[GitHub] [hadoop] rdingankar commented on pull request #5556: HDFS-16982 Use the right Quantiles Array for Inverse Quantiles snapshot

2023-04-18 Thread via GitHub


rdingankar commented on PR #5556:
URL: https://github.com/apache/hadoop/pull/5556#issuecomment-1513527413

   Thanks for the review @goiri.
   Can you also help in merging the change? Thanks!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18425) [ABFS]: RenameFilePath Source File Not Found (404) error in retry loop

2023-04-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713679#comment-17713679
 ] 

ASF GitHub Bot commented on HADOOP-18425:
-

steveloughran closed pull request #5485: HADOOP-18425. ABFS rename resilience 
through etags
URL: https://github.com/apache/hadoop/pull/5485




> [ABFS]: RenameFilePath Source File Not Found (404) error in retry loop
> --
>
> Key: HADOOP-18425
> URL: https://issues.apache.org/jira/browse/HADOOP-18425
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Sree Bhattacharyya
>Assignee: Sree Bhattacharyya
>Priority: Minor
>  Labels: pull-request-available
>
> RenameFilePath on its first try receives a Request timed out error with code 
> 500. On retrying the same operation, a Source file not found (404) error is 
> received. 
> Possible mitigation: Check whether etags remain the same before and after the 
> retry and accordingly send an Operation Successful result, instead of source 
> file not found. 
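
A minimal sketch of the mitigation described above; `getEtag`, `exists`, and
`renameOnce` are hypothetical helpers, not the actual AbfsClient API:

```java
import java.io.IOException;

/** Illustrative etag-checked rename retry, under assumed helper methods. */
public abstract class EtagCheckedRename {
  abstract String getEtag(String path) throws IOException;
  abstract boolean exists(String path) throws IOException;
  abstract boolean renameOnce(String src, String dst) throws IOException;

  boolean renameWithRecovery(String src, String dst) throws IOException {
    String srcEtag = getEtag(src);   // remember the file's identity first
    try {
      return renameOnce(src, dst);   // may time out after the server committed
    } catch (IOException e) {
      // A 404 on retry may mean the first attempt already succeeded:
      // the same etag at the destination proves it was our rename.
      if (!exists(src) && srcEtag.equals(getEtag(dst))) {
        return true;
      }
      throw e;
    }
  }
}
```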



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran closed pull request #5485: HADOOP-18425. ABFS rename resilience through etags

2023-04-18 Thread via GitHub


steveloughran closed pull request #5485: HADOOP-18425. ABFS rename resilience 
through etags
URL: https://github.com/apache/hadoop/pull/5485


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18695) S3A: reject multipart copy requests when disabled

2023-04-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713678#comment-17713678
 ] 

ASF GitHub Bot commented on HADOOP-18695:
-

steveloughran commented on PR #5548:
URL: https://github.com/apache/hadoop/pull/5548#issuecomment-1513517379

   I think this is ready to go in. More reviews please!




> S3A: reject multipart copy requests when disabled
> -
>
> Key: HADOOP-18695
> URL: https://issues.apache.org/jira/browse/HADOOP-18695
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>
> follow-on to HADOOP-18637 and support for huge file uploads with stores which 
> don't support MPU.
> * prevent use of API against any s3 store when disabled, using logging 
> auditor to reject it
> * tests to verify rename of huge files still works (by setting large part 
> size)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #5548: HADOOP-18695. S3A: reject multipart copy requests when disabled

2023-04-18 Thread via GitHub


steveloughran commented on PR #5548:
URL: https://github.com/apache/hadoop/pull/5548#issuecomment-1513517379

   I think this is ready to go in. More reviews please!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri merged pull request #4963: YARN-11326. [Federation] Add RM FederationStateStoreService Metrics.

2023-04-18 Thread via GitHub


goiri merged PR #4963:
URL: https://github.com/apache/hadoop/pull/4963


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on pull request #5551: YARN-11378. [Federation] Support checkForDecommissioningNodes、refreshClusterMaxPriority API's for Federation.

2023-04-18 Thread via GitHub


goiri commented on PR #5551:
URL: https://github.com/apache/hadoop/pull/5551#issuecomment-1513443941

   Let's fix the checkstyle too.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a diff in pull request #5554: HDFS-16978. RBF: Admin command to support bulk add of mount points

2023-04-18 Thread via GitHub


goiri commented on code in PR #5554:
URL: https://github.com/apache/hadoop/pull/5554#discussion_r1170271873


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java:
##
@@ -462,6 +481,136 @@ public int run(String[] argv) throws Exception {
 return exitCode;
   }
 
+  /**
+   * Add all mount point entries provided in the request.
+   *
+   * @param parameters Parameters for the mount points.
+   * @param i Current index on the parameters array.
+   * @return True if adding all mount points was successful, False otherwise.
+   * @throws IOException If the RPC call to add the mount points fails.
+   */
+  private boolean addAllMount(String[] parameters, int i) throws IOException {
+List<AddMountAttributes> addMountAttributesList = new ArrayList<>();
+while (i < parameters.length) {
+  AddMountAttributes addMountAttributes = getAddMountAttributes(parameters, i, true);
+  if (addMountAttributes == null) {
+return false;
+  }
+  i = addMountAttributes.getParamIndex();
+  addMountAttributesList.add(addMountAttributes);
+}
+List<MountTable> addEntries = getMountTablesFromAddAllAttributes(addMountAttributesList);
+AddMountTableEntriesRequest request =
+AddMountTableEntriesRequest.newInstance(addEntries);
+MountTableManager mountTable = client.getMountTableManager();
+AddMountTableEntriesResponse addResponse =
+mountTable.addMountTableEntries(request);
+boolean added = addResponse.getStatus();
+if (!added) {
+  System.err.println("Cannot add some or all mount points");
+}
+return added;
+  }
+
+  /**
+   * From the given params, form and retrieve AddMountAttributes object. This 
object is meant
+   * to be used while adding single or multiple mount points with their own 
specific attributes.
+   *
+   * @param parameters Parameters for the mount point.
+   * @param i Current index on the parameters array.
+   * @param isMultipleAdd True if multiple mount points are to be added, False 
if single mount
+   * point is to be added.
+   * @return AddMountAttributes object.
+   */
+  private AddMountAttributes getAddMountAttributes(String[] parameters, int i,

Review Comment:
   static?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18399) S3A Prefetch - SingleFilePerBlockCache to use LocalDirAllocator

2023-04-18 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18399:

Fix Version/s: 3.4.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> S3A Prefetch - SingleFilePerBlockCache to use LocalDirAllocator
> ---
>
> Key: HADOOP-18399
> URL: https://issues.apache.org/jira/browse/HADOOP-18399
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> prefetching stream's SingleFilePerBlockCache uses Files.tempFile() to 
> allocate a temp file.
> it should be using LocalDirAllocator to allocate space from a list of dirs, 
> taking a config key to use. for s3a we will use the Constants.BUFFER_DIR 
> option, which on yarn deployments is fixed under the env.LOCAL_DIR path, so 
> automatically cleaned up on container exit
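
For readers unfamiliar with the allocator, a minimal sketch of the pattern
described above; everything except LocalDirAllocator's own API is illustrative:

```java
import java.io.File;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.LocalDirAllocator;

public final class PrefetchBufferDemo {
  // S3A's buffer directory key (Constants.BUFFER_DIR); under YARN it
  // resolves inside the container's local dirs, so files are removed
  // automatically on container exit.
  private static final String BUFFER_DIR = "fs.s3a.buffer.dir";

  public static File newCacheBlock(Configuration conf, long blockSize)
      throws IOException {
    // Unlike Files.createTempFile(), this round-robins across the
    // configured directories and checks that enough space is free.
    LocalDirAllocator allocator = new LocalDirAllocator(BUFFER_DIR);
    return allocator.createTmpFileForWrite("s3a-prefetch-", blockSize, conf);
  }
}
```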



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18399) S3A Prefetch - SingleFilePerBlockCache to use LocalDirAllocator

2023-04-18 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18399:

Summary: S3A Prefetch - SingleFilePerBlockCache to use LocalDirAllocator  
(was: SingleFilePerBlockCache to use LocalDirAllocator for file allocation)

> S3A Prefetch - SingleFilePerBlockCache to use LocalDirAllocator
> ---
>
> Key: HADOOP-18399
> URL: https://issues.apache.org/jira/browse/HADOOP-18399
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> prefetching stream's SingleFilePerBlockCache uses Files.tempFile() to 
> allocate a temp file.
> it should be using LocalDirAllocator to allocate space from a list of dirs, 
> taking a config key to use. for s3a we will use the Constants.BUFFER_DIR 
> option, which on yarn deployments is fixed under the env.LOCAL_DIR path, so 
> automatically cleaned up on container exit



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri merged pull request #5537: YARN-11438. [Federation] ZookeeperFederationStateStore Support Version.

2023-04-18 Thread via GitHub


goiri merged PR #5537:
URL: https://github.com/apache/hadoop/pull/5537


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18399) SingleFilePerBlockCache to use LocalDirAllocator for file allocation

2023-04-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713632#comment-17713632
 ] 

ASF GitHub Bot commented on HADOOP-18399:
-

steveloughran merged PR #5054:
URL: https://github.com/apache/hadoop/pull/5054




> SingleFilePerBlockCache to use LocalDirAllocator for file allocation
> 
>
> Key: HADOOP-18399
> URL: https://issues.apache.org/jira/browse/HADOOP-18399
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> prefetching stream's SingleFilePerBlockCache uses Files.tempFile() to 
> allocate a temp file.
> it should be using LocalDirAllocator to allocate space from a list of dirs, 
> taking a config key to use. for s3a we will use the Constants.BUFFER_DIR 
> option, which on yarn deployments is fixed under the env.LOCAL_DIR path, so 
> automatically cleaned up on container exit



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #5054: HADOOP-18399. S3A Prefetch - SingleFilePerBlockCache to use LocalDirAllocator

2023-04-18 Thread via GitHub


steveloughran merged PR #5054:
URL: https://github.com/apache/hadoop/pull/5054


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18671) Add recoverLease(), setSafeMode(), isFileClosed() APIs to FileSystem

2023-04-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713631#comment-17713631
 ] 

ASF GitHub Bot commented on HADOOP-18671:
-

steveloughran commented on PR #5553:
URL: https://github.com/apache/hadoop/pull/5553#issuecomment-1513370122

   there must still be a CommonPathCapabilities name, as we will also need to 
consider having
   
   * filterfs implement and pass through to the wrapped fs
   * viewfs to pass down to resolved fs.
   
   a cast and a hasPathCapability() is the safe way, as for any of the wrappers 
it will let you know if the method works all the way through.
   
   oh, and if we do the viewfs/filterfs (which I don't think we need...yet), 
then we will need tests that they pass through to hdfs.
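
To make the probe-then-cast pattern concrete, a minimal sketch; the capability
string is a placeholder, since the final CommonPathCapabilities constant was
still being settled on the PR, and LeaseRecoverable is the interface the PR
proposes:

```java
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LeaseRecoverable;   // interface proposed in this PR
import org.apache.hadoop.fs.Path;

public final class LeaseRecoveryProbe {
  // Hypothetical capability name; not yet a CommonPathCapabilities constant.
  private static final String LEASE_RECOVERABLE = "fs.capability.lease.recoverable";

  /** Probe, then cast: safe even through viewfs/filterfs wrapper chains. */
  public static boolean tryRecoverLease(FileSystem fs, Path file) throws IOException {
    if (fs.hasPathCapability(file, LEASE_RECOVERABLE)
        && fs instanceof LeaseRecoverable) {
      return ((LeaseRecoverable) fs).recoverLease(file);
    }
    return false; // wrapper chain or store doesn't support lease recovery
  }
}
```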
   




> Add recoverLease(), setSafeMode(), isFileClosed() APIs to FileSystem
> 
>
> Key: HADOOP-18671
> URL: https://issues.apache.org/jira/browse/HADOOP-18671
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>
> We are in the midst of enabling HBase and Solr to run on Ozone.
> An obstacle is that HBase relies heavily on HDFS APIs and semantics for its 
> Write Ahead Log (WAL) file (similarly, for Solr's transaction log). We 
> propose to push up these HDFS APIs, i.e. recoverLease(), setSafeMode(), 
> isFileClosed() to FileSystem abstraction so that HBase and other applications 
> do not need to take on Ozone dependency at compile time. This work will 
> (hopefully) enable HBase to run on other storage system implementations in 
> the future.
> There are other HDFS features that HBase uses, including hedged read and 
> favored nodes. Those are FS-specific optimizations and are not critical to 
> enable HBase on Ozone.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #5553: HADOOP-18671 Add recoverLease(), setSafeMode(), isFileClosed() APIs to FileSystem

2023-04-18 Thread via GitHub


steveloughran commented on PR #5553:
URL: https://github.com/apache/hadoop/pull/5553#issuecomment-1513370122

   there must still be a CommonPathCapabilities name, as we will also need to 
consider having
   
   * filterfs implement and pass through to the wrapped fs
   * viewfs to pass down to resolved fs.
   
   a cast and a hasPathCapability() is the safe way, as for any of the wrappers 
it will let you know if the method works all the way through.
   
   oh, and if we do the viewfs/filterfs (which I don't think we need...yet), 
then we will need tests that they pass through to hdfs.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18671) Add recoverLease(), setSafeMode(), isFileClosed() APIs to FileSystem

2023-04-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713628#comment-17713628
 ] 

ASF GitHub Bot commented on HADOOP-18671:
-

steveloughran commented on code in PR #5553:
URL: https://github.com/apache/hadoop/pull/5553#discussion_r1170210411


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFileSystemContract.java:
##
@@ -137,4 +142,30 @@ public void testRenameNonExistentPath() throws Exception {
 () -> super.testRenameNonExistentPath());
 
   }
+
+  @Test
+  public void testFileSystemCapabilities() throws Exception {

Review Comment:
   but if we only have the interface implemented by those filesystems which 
actually do so (and viewfs + filterfs) then s3a, abfs etc don't care.
   
   and viewfs and filterfs do need custom implementations; having default 
implementations hides this fact.





> Add recoverLease(), setSafeMode(), isFileClosed() APIs to FileSystem
> 
>
> Key: HADOOP-18671
> URL: https://issues.apache.org/jira/browse/HADOOP-18671
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>
> We are in the midst of enabling HBase and Solr to run on Ozone.
> An obstacle is that HBase relies heavily on HDFS APIs and semantics for its 
> Write Ahead Log (WAL) file (similarly, for Solr's transaction log). We 
> propose to push up these HDFS APIs, i.e. recoverLease(), setSafeMode(), 
> isFileClosed() to FileSystem abstraction so that HBase and other applications 
> do not need to take on Ozone dependency at compile time. This work will 
> (hopefully) enable HBase to run on other storage system implementations in 
> the future.
> There are other HDFS features that HBase uses, including hedged read and 
> favored nodes. Those are FS-specific optimizations and are not critical to 
> enable HBase on Ozone.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a diff in pull request #5553: HADOOP-18671 Add recoverLease(), setSafeMode(), isFileClosed() APIs to FileSystem

2023-04-18 Thread via GitHub


steveloughran commented on code in PR #5553:
URL: https://github.com/apache/hadoop/pull/5553#discussion_r1170210411


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFileSystemContract.java:
##
@@ -137,4 +142,30 @@ public void testRenameNonExistentPath() throws Exception {
 () -> super.testRenameNonExistentPath());
 
   }
+
+  @Test
+  public void testFileSystemCapabilities() throws Exception {

Review Comment:
   but if we only have the interface implemented by those filesystems which 
actually do so (and viewfs + filterfs) then s3a, abfs etc don't care.
   
   and viewfs and filterfs do need custom implementations; having default 
implementations hides this fact.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18705) hadoop-azure: AzureBlobFileSystem should exclude incompatible credential providers when binding DelegationTokenManagers

2023-04-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713626#comment-17713626
 ] 

ASF GitHub Bot commented on HADOOP-18705:
-

steveloughran commented on code in PR #5560:
URL: https://github.com/apache/hadoop/pull/5560#discussion_r1169748503


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemConfiguration.java:
##
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.apache.hadoop.security.alias.CredentialProviderFactory;
+import org.junit.Test;

Review Comment:
   import structure not what we prefer, which is
   ```
   java
   
   javax
   
   not-org-apache 
   
   org.apache.*
   
   statics
   ```
   



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemConfiguration.java:
##
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.apache.hadoop.security.alias.CredentialProviderFactory;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+
+public class ITestAzureBlobFileSystemConfiguration extends 
AbstractAbfsIntegrationTest {

Review Comment:
   needs a name which explains what the test does, e.g "ITestABFSJceksFiltering"



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemConfiguration.java:
##
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.apache.hadoop.security.alias.CredentialProviderFactory;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+
+public class ITestAzureBlobFileSystemConfiguration extends 
AbstractAbfsIntegrationTest {
+
+  public ITestAzureBlobFileSystemConfiguration() throws Exception {
+  }
+
+  @Test
+  public void testIncompatibleCredentialProviderIsExcluded() throws Exception {
+Configuration rawConfig = getRawConfiguration();
+rawConfig.set(CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH,
+"jceks://abfs@a@b.c.d/tmp/a.jceks,jceks://file/tmp/secret.jceks");
+AzureBlobFileSystem fs = (AzureBlobFileSystem) FileSystem.get(rawConfig);

Review Comment:
   use try-with-resources to ensure that this is closed afterwards



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java:
#

[GitHub] [hadoop] steveloughran commented on a diff in pull request #5560: HADOOP-18705. hadoop-azure: AzureBlobFileSystem should exclude incomp…

2023-04-18 Thread via GitHub


steveloughran commented on code in PR #5560:
URL: https://github.com/apache/hadoop/pull/5560#discussion_r1169748503


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemConfiguration.java:
##
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.apache.hadoop.security.alias.CredentialProviderFactory;
+import org.junit.Test;

Review Comment:
   import structure not what we prefer, which is
   ```
   java
   
   javax
   
   not-org-apache 
   
   org.apache.*
   
   statics
   ```
   



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemConfiguration.java:
##
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.apache.hadoop.security.alias.CredentialProviderFactory;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+
+public class ITestAzureBlobFileSystemConfiguration extends 
AbstractAbfsIntegrationTest {

Review Comment:
   needs a name which explains what the test does, e.g "ITestABFSJceksFiltering"



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemConfiguration.java:
##
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.apache.hadoop.security.alias.CredentialProviderFactory;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+
+public class ITestAzureBlobFileSystemConfiguration extends 
AbstractAbfsIntegrationTest {
+
+  public ITestAzureBlobFileSystemConfiguration() throws Exception {
+  }
+
+  @Test
+  public void testIncompatibleCredentialProviderIsExcluded() throws Exception {
+Configuration rawConfig = getRawConfiguration();
+rawConfig.set(CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH,
+"jceks://abfs@a@b.c.d/tmp/a.jceks,jceks://file/tmp/secret.jceks");
+AzureBlobFileSystem fs = (AzureBlobFileSystem) FileSystem.get(rawConfig);

Review Comment:
   use try-with-resources to ensure that this is closed afterwards
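
   A minimal sketch of the suggested change (the assertion body is illustrative):

```java
// Scope the filesystem in try-with-resources so it is closed even when
// an assertion fails.
try (AzureBlobFileSystem fs =
    (AzureBlobFileSystem) FileSystem.get(rawConfig)) {
  assertNotNull("filesystem", fs);
  // ... remaining assertions against fs ...
}
```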



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java:
##
@@ -196,6 +196,11 @@ public void initialize(URI uri, Configuration 
configuration)
 
 final AbfsConfiguration abfsConfiguration = abfsStore
 .getAbfsConfiguration();
+
+// Ensures that configuration excludes incompatible credential providers


[jira] [Commented] (HADOOP-18706) The temporary files for disk-block buffer aren't unique enough to recover partial uploads. 

2023-04-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713618#comment-17713618
 ] 

ASF GitHub Bot commented on HADOOP-18706:
-

steveloughran commented on code in PR #5563:
URL: https://github.com/apache/hadoop/pull/5563#discussion_r1170131112


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestDataBlocks.java:
##
@@ -51,7 +51,7 @@ public void testByteBufferIO() throws Throwable {
  new S3ADataBlocks.ByteBufferBlockFactory(null)) {
   int limit = 128;
   S3ADataBlocks.ByteBufferBlockFactory.ByteBufferBlock block
-  = factory.create(1, limit, null);
+  = factory.create("object/key", 1, limit, null);

Review Comment:
   add a backslash here too for completeness





> The temporary files for disk-block buffer aren't unique enough to recover 
> partial uploads. 
> ---
>
> Key: HADOOP-18706
> URL: https://issues.apache.org/jira/browse/HADOOP-18706
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Chris Bevard
>Priority: Minor
>  Labels: pull-request-available
>
> If an application crashes during an S3ABlockOutputStream upload, it's 
> possible to complete the upload if fast.upload.buffer is set to disk by 
> uploading the s3ablock file with putObject as the final part of the multipart 
> upload. If the application has multiple uploads running in parallel though 
> and they're on the same part number when the application fails, then there is 
> no way to determine which file belongs to which object, and recovery of 
> either upload is impossible.
> If the temporary file name for disk buffering included the s3 key, then every 
> partial upload would be recoverable.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a diff in pull request #5563: HADOOP-18706: Improve S3ABlockOutputStream recovery

2023-04-18 Thread via GitHub


steveloughran commented on code in PR #5563:
URL: https://github.com/apache/hadoop/pull/5563#discussion_r1170129401


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java:
##
@@ -726,7 +726,7 @@ public void hsync() throws IOException {
   /**
* Shared processing of Syncable operation reporting/downgrade.
*/
-  private void handleSyncableInvocation() {
+  private void handleSyncableInvocation() throws IOException {

Review Comment:
   update javadocs



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a diff in pull request #5563: HADOOP-18706: Improve S3ABlockOutputStream recovery

2023-04-18 Thread via GitHub


steveloughran commented on code in PR #5563:
URL: https://github.com/apache/hadoop/pull/5563#discussion_r1170131112


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestDataBlocks.java:
##
@@ -51,7 +51,7 @@ public void testByteBufferIO() throws Throwable {
  new S3ADataBlocks.ByteBufferBlockFactory(null)) {
   int limit = 128;
   S3ADataBlocks.ByteBufferBlockFactory.ByteBufferBlock block
-  = factory.create(1, limit, null);
+  = factory.create("object/key", 1, limit, null);

Review Comment:
   add a backslash here too for completeness



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18706) The temporary files for disk-block buffer aren't unique enough to recover partial uploads. 

2023-04-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713617#comment-17713617
 ] 

ASF GitHub Bot commented on HADOOP-18706:
-

steveloughran commented on code in PR #5563:
URL: https://github.com/apache/hadoop/pull/5563#discussion_r1170129401


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java:
##
@@ -726,7 +726,7 @@ public void hsync() throws IOException {
   /**
* Shared processing of Syncable operation reporting/downgrade.
*/
-  private void handleSyncableInvocation() {
+  private void handleSyncableInvocation() throws IOException {

Review Comment:
   update javadocs





> The temporary files for disk-block buffer aren't unique enough to recover 
> partial uploads. 
> ---
>
> Key: HADOOP-18706
> URL: https://issues.apache.org/jira/browse/HADOOP-18706
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Chris Bevard
>Priority: Minor
>  Labels: pull-request-available
>
> If an application crashes during an S3ABlockOutputStream upload, it's 
> possible to complete the upload if fast.upload.buffer is set to disk by 
> uploading the s3ablock file with putObject as the final part of the multipart 
> upload. If the application has multiple uploads running in parallel though 
> and they're on the same part number when the application fails, then there is 
> no way to determine which file belongs to which object, and recovery of 
> either upload is impossible.
> If the temporary file name for disk buffering included the s3 key, then every 
> partial upload would be recoverable.
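>
> A minimal sketch of the naming idea (an illustrative helper only; the actual 
> S3ADataBlocks factory API differs):
> {code}
> import java.io.File;
> import java.io.IOException;
> 
> final class BlockTempFiles {
>   // Illustrative only: derive a filesystem-safe temp-file prefix from the
>   // S3 object key so concurrent disk-buffered uploads stay distinguishable
>   // and therefore recoverable after a crash.
>   static File createBlockTempFile(File bufferDir, String s3Key,
>       int blockNumber) throws IOException {
>     String safeKey = s3Key.replaceAll("[^a-zA-Z0-9._-]", "_");
>     String prefix = "s3ablock-" + safeKey + "-" + blockNumber + "-";
>     return File.createTempFile(prefix, ".tmp", bufferDir);
>   }
> }
> {code}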



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18706) The temporary files for disk-block buffer aren't unique enough to recover partial uploads. 

2023-04-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713616#comment-17713616
 ] 

ASF GitHub Bot commented on HADOOP-18706:
-

hadoop-yetus commented on PR #5563:
URL: https://github.com/apache/hadoop/pull/5563#issuecomment-1513283429

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 52s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 40s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 26s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 110m 20s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5563 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 65c65fb67bb3 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2d04346bd70249249f274a48521ee9b44d4715a1 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/2/testReport/ |
   | Max. process+thread count | 611 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> The temporary files

[GitHub] [hadoop] hadoop-yetus commented on pull request #5563: HADOOP-18706: Improve S3ABlockOutputStream recovery

2023-04-18 Thread via GitHub


hadoop-yetus commented on PR #5563:
URL: https://github.com/apache/hadoop/pull/5563#issuecomment-1513283429

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 52s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 40s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 26s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 110m 20s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5563 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 65c65fb67bb3 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2d04346bd70249249f274a48521ee9b44d4715a1 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/2/testReport/ |
   | Max. process+thread count | 611 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact I

[jira] [Commented] (HADOOP-18694) Client.Connection#updateAddress needs to ensure that address is resolved before updating

2023-04-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713614#comment-17713614
 ] 

ASF GitHub Bot commented on HADOOP-18694:
-

hadoop-yetus commented on PR #5542:
URL: https://github.com/apache/hadoop/pull/5542#issuecomment-1513282177

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  17m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 46s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 14s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  21m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 38s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 47s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  27m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  27m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |  22m 55s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 42s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 46s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 12s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 25s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 242m 38s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5542/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5542 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux c172ac28a844 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 9620ed8db83b5f4c5bc73ecf86b6efc3b5f05275 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5542/3/testReport/ |
   | Max. process+thread count | 1375 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5542/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   

[GitHub] [hadoop] hadoop-yetus commented on pull request #5542: HADOOP-18694. Client.Connection#updateAddress needs to ensure that address is resolved before updating

2023-04-18 Thread via GitHub


hadoop-yetus commented on PR #5542:
URL: https://github.com/apache/hadoop/pull/5542#issuecomment-1513282177

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  17m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 46s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 14s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  21m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 38s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 47s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  27m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  27m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |  22m 55s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 42s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 46s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 12s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 25s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 242m 38s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5542/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5542 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux c172ac28a844 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 9620ed8db83b5f4c5bc73ecf86b6efc3b5f05275 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5542/3/testReport/ |
   | Max. process+thread count | 1375 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5542/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about t

[GitHub] [hadoop] cxzl25 opened a new pull request, #5565: YARN-11467. RM failover may fail when the nodes.exclude-path file does not exist

2023-04-18 Thread via GitHub


cxzl25 opened a new pull request, #5565:
URL: https://github.com/apache/hadoop/pull/5565

   ### Description of PR
   When the RM starts, if the file referenced by the 
`yarn.resourcemanager.nodes.include-path` or 
`yarn.resourcemanager.nodes.exclude-path` configuration item does not exist, 
the RM disables this feature.
   
   
https://github.com/apache/hadoop/blob/405ed1dde6ba1e07e45a356a89c1b583e236/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/NodesListManager.java#L552-L570
   
   
   But in an RM failover scenario, the refresh fails because the file does not 
exist.
   
   ```java
   Caused by: org.apache.hadoop.ha.ServiceFailedException: RefreshAll operation 
failed
at 
org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:788)
at 
org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:315)
... 29 more
   Caused by: java.nio.file.NoSuchFileException: 
/tmp/non-existent-path-788aa744-1395-40ca-bdb5-f93bffc92cfb
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at 
sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.newByteChannel(Files.java:407)
at 
java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)
at java.nio.file.Files.newInputStream(Files.java:152)
at 
org.apache.hadoop.util.HostsFileReader.readFileToMap(HostsFileReader.java:126)
at 
org.apache.hadoop.util.HostsFileReader.refreshInternal(HostsFileReader.java:214)
at 
org.apache.hadoop.util.HostsFileReader.refresh(HostsFileReader.java:192)
at 
org.apache.hadoop.yarn.server.resourcemanager.NodesListManager.refreshHostsReader(NodesListManager.java:258)
at 
org.apache.hadoop.yarn.server.resourcemanager.NodesListManager.refreshNodes(NodesListManager.java:232)
at 
org.apache.hadoop.yarn.server.resourcemanager.NodesListManager.refreshNodes(NodesListManager.java:224)
at 
org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshNodes(AdminService.java:490)
at 
org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:778)
   ```
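   
   A sketch of the avoidance idea (hypothetical helper shown below; the actual 
patch may take a different shape): refresh the hosts reader only when the 
configured files exist, so a missing exclude file degrades gracefully instead 
of failing the transition to active, mirroring the disable behaviour already 
applied at startup.
   
   ```java
   import java.io.File;
   import java.io.IOException;
   
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.util.HostsFileReader;
   import org.apache.hadoop.yarn.conf.YarnConfiguration;
   
   final class HostsRefreshGuard {
     // Hypothetical helper: refresh only when the configured include/exclude
     // files exist; otherwise keep the previously loaded lists rather than
     // failing the failover with NoSuchFileException.
     static void refreshIfPresent(HostsFileReader reader, Configuration conf)
         throws IOException {
       String includes = conf.get(YarnConfiguration.RM_NODES_INCLUDE_FILE_PATH,
           YarnConfiguration.DEFAULT_RM_NODES_INCLUDE_FILE_PATH);
       String excludes = conf.get(YarnConfiguration.RM_NODES_EXCLUDE_FILE_PATH,
           YarnConfiguration.DEFAULT_RM_NODES_EXCLUDE_FILE_PATH);
       if ((!includes.isEmpty() && !new File(includes).exists())
           || (!excludes.isEmpty() && !new File(excludes).exists())) {
         return; // keep the previous lists instead of aborting the failover
       }
       reader.refresh(includes, excludes);
     }
   }
   ```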
   
   ### How was this patch tested?
   Added a unit test.
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ashutoshcipher commented on a diff in pull request #5562: YARN-11463. Node Labels root directory creation doesn't have a retry logic

2023-04-18 Thread via GitHub


ashutoshcipher commented on code in PR #5562:
URL: https://github.com/apache/hadoop/pull/5562#discussion_r1169877196


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java:
##
@@ -137,6 +137,16 @@ private static void addDeprecatedKeys() {
   // Resource types configs
   
 
+  public static final String NODE_STORE_ROOT_DIR_NUM_RETRIES =
+  YARN_PREFIX + "nodestore-rootdir.num-retries";
+
+  public static final int NODE_STORE_ROOT_DIR_NUM_DEFAULT_RETRIES = 3;

Review Comment:
   I will change it to 500



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ashutoshcipher commented on a diff in pull request #5562: YARN-11463. Node Labels root directory creation doesn't have a retry logic

2023-04-18 Thread via GitHub


ashutoshcipher commented on code in PR #5562:
URL: https://github.com/apache/hadoop/pull/5562#discussion_r1169876664


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java:
##
@@ -137,6 +137,16 @@ private static void addDeprecatedKeys() {
   // Resource types configs
   
 
+  public static final String NODE_STORE_ROOT_DIR_NUM_RETRIES =
+  YARN_PREFIX + "nodestore-rootdir.num-retries";

Review Comment:
   Makes sense. I will do that



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ashutoshcipher commented on pull request #5562: YARN-11463. Node Labels root directory creation doesn't have a retry logic

2023-04-18 Thread via GitHub


ashutoshcipher commented on PR #5562:
URL: https://github.com/apache/hadoop/pull/5562#issuecomment-1512895864

   > Thanks @ashutoshcipher for the update. Apart from two nits it looks good 
to me. Can you please check the test failure?
   
   Thanks @brumi1024 for the review. I will look at it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ashutoshcipher commented on a diff in pull request #5562: YARN-11463. Node Labels root directory creation doesn't have a retry logic

2023-04-18 Thread via GitHub


ashutoshcipher commented on code in PR #5562:
URL: https://github.com/apache/hadoop/pull/5562#discussion_r1169876169


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/store/AbstractFSNodeStore.java:
##
@@ -65,8 +65,30 @@ protected void initStore(Configuration conf, Path 
fsStorePath,
 this.fsWorkingPath = fsStorePath;
 this.manager = mgr;
 initFileSystem(conf);
-// mkdir of root dir path
-fs.mkdirs(fsWorkingPath);
+// mkdir of root dir path with retry logic
+int maxRetries = 3;

Review Comment:
   Thanks @brumi1024 for your review. I have made this change in the last commit.
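   
   A minimal sketch of the bounded-retry shape under discussion (the names and 
defaults are illustrative; the actual patch reads them from YarnConfiguration):
   
   ```java
   import java.io.IOException;
   
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   
   final class MkdirWithRetry {
     // Illustrative only: retry fs.mkdirs() a bounded number of times with a
     // fixed sleep between attempts, rethrowing the last failure once the
     // retries are exhausted.
     static void mkdirsWithRetry(FileSystem fs, Path dir, int maxRetries,
         long retryIntervalMs) throws IOException {
       IOException lastFailure = null;
       for (int attempt = 0; attempt <= maxRetries; attempt++) {
         try {
           fs.mkdirs(dir);
           return;
         } catch (IOException e) {
           lastFailure = e;
         }
         if (attempt < maxRetries) {
           try {
             Thread.sleep(retryIntervalMs);
           } catch (InterruptedException ie) {
             Thread.currentThread().interrupt();
             throw new IOException("Interrupted while retrying mkdirs", ie);
           }
         }
       }
       throw lastFailure;
     }
   }
   ```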



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] brumi1024 commented on a diff in pull request #5562: YARN-11463. Node Labels root directory creation doesn't have a retry logic

2023-04-18 Thread via GitHub


brumi1024 commented on code in PR #5562:
URL: https://github.com/apache/hadoop/pull/5562#discussion_r1169804356


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java:
##
@@ -137,6 +137,16 @@ private static void addDeprecatedKeys() {
   // Resource types configs
   
 
+  public static final String NODE_STORE_ROOT_DIR_NUM_RETRIES =
+  YARN_PREFIX + "nodestore-rootdir.num-retries";

Review Comment:
   Nit: Since this is an RM config (capacity scheduler feature) I think we 
could use the yarn.resourcemanager prefix.



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java:
##
@@ -137,6 +137,16 @@ private static void addDeprecatedKeys() {
   // Resource types configs
   
 
+  public static final String NODE_STORE_ROOT_DIR_NUM_RETRIES =
+  YARN_PREFIX + "nodestore-rootdir.num-retries";
+
+  public static final int NODE_STORE_ROOT_DIR_NUM_DEFAULT_RETRIES = 3;

Review Comment:
   Nit: To be inline with the other retries I think 500/1000 would be a more 
appropriate default.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5564: HDFS-16985. delete local block file when FileNotFoundException occurred may lead to missing block.

2023-04-18 Thread via GitHub


hadoop-yetus commented on PR #5564:
URL: https://github.com/apache/hadoop/pull/5564#issuecomment-1512750257

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 55s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   3m 47s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 56s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5564/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 26 unchanged - 
0 fixed = 27 total (was 26)  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   3m 29s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 227m 51s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5564/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 347m 38s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestDatanodeReport |
   |   | hadoop.hdfs.server.namenode.ha.TestObserverNode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5564/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5564 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 62d0c87a431f 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 710fdc5f7eb3b069a0b1322b96a68ff1bdde8186 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/had

[jira] [Updated] (HADOOP-18705) hadoop-azure: AzureBlobFileSystem should exclude incompatible credential providers when binding DelegationTokenManagers

2023-04-18 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18705:

Affects Version/s: 3.3.5

> hadoop-azure: AzureBlobFileSystem should exclude incompatible credential 
> providers when binding DelegationTokenManagers
> ---
>
> Key: HADOOP-18705
> URL: https://issues.apache.org/jira/browse/HADOOP-18705
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.3.5
>Reporter: Tamas Domok
>Assignee: Tamas Domok
>Priority: Major
>  Labels: pull-request-available
>
> The DelegationTokenManager in AzureBlobFileSystem.initialize() gets the 
> untouched configuration which may contain a credentialProviderPath config 
> with incompatible credential providers (e.g.: jceks stored on abfs). This 
> results in an error:
> {quote}
> Caused by: org.apache.hadoop.fs.PathIOException: 
> `jceks://abfs@a@b.c.d/tmp/a.jceks': Recursive load of credential provider; if 
> loading a JCEKS file, this means that the filesystem connector is trying to 
> load the same file
> {quote}
> {code}
> this.delegationTokenManager = 
> abfsConfiguration.getDelegationTokenManager();
> delegationTokenManager.bind(getUri(), configuration);
> {code}
> The abfsConfiguration excludes the incompatible credential providers already.
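>
> A sketch of the fix direction (assuming the existing 
> {{ProviderUtils.excludeIncompatibleCredentialProviders()}} helper is the 
> right tool; the final patch may differ):
> {code}
> import java.io.IOException;
> 
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.security.ProviderUtils;
> 
> // Illustrative fragment for AzureBlobFileSystem.initialize(): strip
> // credential providers that would recurse into abfs before handing the
> // configuration to the delegation token manager.
> Configuration dtConf = ProviderUtils.excludeIncompatibleCredentialProviders(
>     configuration, AzureBlobFileSystem.class);
> delegationTokenManager.bind(getUri(), dtConf);
> {code}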
> Reproduction steps:
> {code}
> export HADOOP_ROOT_LOGGER=DEBUG,console
> hdfs dfs -rm -r -skipTrash /user/qa/sort_input; hadoop jar 
> hadoop-mapreduce-examples.jar randomwriter 
> "-Dmapreduce.randomwriter.totalbytes=100" 
> "-Dhadoop.security.credential.provider.path=jceks://abfs@a@b.c.d/tmp/a.jceks" 
> /user/qa/sort_input 
> {code}
> Error:
> {code}
> ...
> org.apache.hadoop.fs.azurebfs.security.AbfsDelegationTokenManager.bind(AbfsDelegationTokenManager.java:96)
> at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:224)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3452)
> at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:162)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3557)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3504)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:522)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at 
> org.apache.hadoop.security.alias.KeyStoreProvider.initFileSystem(KeyStoreProvider.java:84)
> at 
> org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.(AbstractJavaKeyStoreProvider.java:85)
> at 
> org.apache.hadoop.security.alias.KeyStoreProvider.(KeyStoreProvider.java:49)
> at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:42)
> at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:35)
> at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:68)
> at 
> org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:91)
> at 
> org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2450)
> at 
> org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:2388)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBClient.getTruststorePassword(AbfsIDBClient.java:104)
> at 
> org.apache.knox.gateway.cloud.idbroker.AbstractIDBClient.initializeAsFullIDBClient(AbstractIDBClient.java:860)
> at 
> org.apache.knox.gateway.cloud.idbroker.AbstractIDBClient.(AbstractIDBClient.java:139)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBClient.(AbfsIDBClient.java:74)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.getClient(AbfsIDBIntegration.java:287)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.serviceStart(AbfsIDBIntegration.java:240)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.fromDelegationTokenManager(AbfsIDBIntegration.java:205)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBDelegationTokenManager.bind(AbfsIDBDelegationTokenManager.java:66)
> at 
> org.apache.hadoop.fs.azurebfs.extensions.ExtensionHelper.bind(ExtensionHelper.java:54)
> at 
> org.apache.hadoop.fs.azurebfs.security.AbfsDelegationTokenManager.bind(AbfsDelegationTokenManager.java:96)
> at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:224)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3452)
> at org.ap

[jira] [Updated] (HADOOP-18705) hadoop-azure: AzureBlobFileSystem should exclude incompatible credential providers when binding DelegationTokenManagers

2023-04-18 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18705:

Component/s: fs/azure
 (was: tools)

> hadoop-azure: AzureBlobFileSystem should exclude incompatible credential 
> providers when binding DelegationTokenManagers
> ---
>
> Key: HADOOP-18705
> URL: https://issues.apache.org/jira/browse/HADOOP-18705
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Tamas Domok
>Assignee: Tamas Domok
>Priority: Major
>  Labels: pull-request-available
>
> The DelegationTokenManager in AzureBlobFileSystem.initialize() gets the 
> untouched configuration which may contain a credentialProviderPath config 
> with incompatible credential providers (e.g.: jceks stored on abfs). This 
> results in an error:
> {quote}
> Caused by: org.apache.hadoop.fs.PathIOException: 
> `jceks://abfs@a@b.c.d/tmp/a.jceks': Recursive load of credential provider; if 
> loading a JCEKS file, this means that the filesystem connector is trying to 
> load the same file
> {quote}
> {code}
> this.delegationTokenManager = 
> abfsConfiguration.getDelegationTokenManager();
> delegationTokenManager.bind(getUri(), configuration);
> {code}
> The abfsConfiguration excludes the incompatible credential providers already.
> Reproduction steps:
> {code}
> export HADOOP_ROOT_LOGGER=DEBUG,console
> hdfs dfs -rm -r -skipTrash /user/qa/sort_input; hadoop jar 
> hadoop-mapreduce-examples.jar randomwriter 
> "-Dmapreduce.randomwriter.totalbytes=100" 
> "-Dhadoop.security.credential.provider.path=jceks://abfs@a@b.c.d/tmp/a.jceks" 
> /user/qa/sort_input 
> {code}
> Error:
> {code}
> ...
> org.apache.hadoop.fs.azurebfs.security.AbfsDelegationTokenManager.bind(AbfsDelegationTokenManager.java:96)
> at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:224)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3452)
> at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:162)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3557)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3504)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:522)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at 
> org.apache.hadoop.security.alias.KeyStoreProvider.initFileSystem(KeyStoreProvider.java:84)
> at 
> org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.(AbstractJavaKeyStoreProvider.java:85)
> at 
> org.apache.hadoop.security.alias.KeyStoreProvider.(KeyStoreProvider.java:49)
> at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:42)
> at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:35)
> at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:68)
> at 
> org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:91)
> at 
> org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2450)
> at 
> org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:2388)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBClient.getTruststorePassword(AbfsIDBClient.java:104)
> at 
> org.apache.knox.gateway.cloud.idbroker.AbstractIDBClient.initializeAsFullIDBClient(AbstractIDBClient.java:860)
> at 
> org.apache.knox.gateway.cloud.idbroker.AbstractIDBClient.(AbstractIDBClient.java:139)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBClient.(AbfsIDBClient.java:74)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.getClient(AbfsIDBIntegration.java:287)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.serviceStart(AbfsIDBIntegration.java:240)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.fromDelegationTokenManager(AbfsIDBIntegration.java:205)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBDelegationTokenManager.bind(AbfsIDBDelegationTokenManager.java:66)
> at 
> org.apache.hadoop.fs.azurebfs.extensions.ExtensionHelper.bind(ExtensionHelper.java:54)
> at 
> org.apache.hadoop.fs.azurebfs.security.AbfsDelegationTokenManager.bind(AbfsDelegationTokenManager.java:96)
> at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:224)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.ja

[jira] [Commented] (HADOOP-18470) Release hadoop 3.3.5

2023-04-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713496#comment-17713496
 ] 

ASF GitHub Bot commented on HADOOP-18470:
-

steveloughran merged PR #5558:
URL: https://github.com/apache/hadoop/pull/5558




> Release hadoop 3.3.5
> 
>
> Key: HADOOP-18470
> URL: https://issues.apache.org/jira/browse/HADOOP-18470
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.3.5
>Reporter: Mukund Thakur
>Assignee: Steve Loughran
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.3.5
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #5558: HADOOP-18470. 3.3.5 Release wrap-up: jdiff files

2023-04-18 Thread via GitHub


steveloughran merged PR #5558:
URL: https://github.com/apache/hadoop/pull/5558


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Hexiaoqiao commented on pull request #5564: HDFS-16985. delete local block file when FileNotFoundException occurred may lead to missing block.

2023-04-18 Thread via GitHub


Hexiaoqiao commented on PR #5564:
URL: https://github.com/apache/hadoop/pull/5564#issuecomment-1512646630

   Agree that we need to protect data for this case. But the current 
improvement leaves another issue. 
   Considering that the DataNode notifies the NameNode without deleting the 
block file here, the metadata at the NameNode will be inconsistent at the next 
block report round because `ReplicaMap` is not updated, and the replica will 
still be included in the next block report, right? The result is that the 
NameNode believes this replica is healthy, but it has actually been lost.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5363: YARN-11424. [Federation] Router Supports DeregisterSubCluster.

2023-04-18 Thread via GitHub


hadoop-yetus commented on PR #5363:
URL: https://github.com/apache/hadoop/pull/5363#issuecomment-1512630408

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  43m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m 44s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  10m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   8m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 49s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m 52s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   5m 10s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |  11m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  24m 23s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  cc  |   9m 50s |  |  the patch passed  |
   | -1 :x: |  javac  |   9m 50s | 
[/results-compile-javac-hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5363/19/artifact/out/results-compile-javac-hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt)
 |  
hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 generated 1 new + 709 
unchanged - 0 fixed = 710 total (was 709)  |
   | +1 :green_heart: |  compile  |   8m 45s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  cc  |   8m 45s |  |  the patch passed  |
   | -1 :x: |  javac  |   8m 45s | 
[/results-compile-javac-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5363/19/artifact/out/results-compile-javac-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09.txt)
 |  
hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
 with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 generated 3 new 
+ 620 unchanged - 2 fixed = 623 total (was 622)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 40s |  |  
hadoop-yarn-project/hadoop-yarn: The patch generated 0 new + 230 unchanged - 1 
fixed = 230 total (was 231)  |
   | +1 :green_heart: |  mvnsite  |   5m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   4m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |  11m 57s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 16s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m  4s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   5m 21s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 15s |  |  hadoop-yarn-se

[jira] [Commented] (HADOOP-18657) Tune ABFS create() retry logic

2023-04-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713439#comment-17713439
 ] 

ASF GitHub Bot commented on HADOOP-18657:
-

snvijaya commented on code in PR #5462:
URL: https://github.com/apache/hadoop/pull/5462#discussion_r1131060581


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java:
##
@@ -621,37 +622,57 @@ private AbfsRestOperation 
conditionalCreateOverwriteFile(final String relativePa
   isAppendBlob, null, tracingContext);
 
 } catch (AbfsRestOperationException e) {
+  LOG.debug("Failed to create {}", relativePath, e);
   if (e.getStatusCode() == HttpURLConnection.HTTP_CONFLICT) {
 // File pre-exists, fetch eTag
+LOG.debug("Fetching etag of {}", relativePath);
 try {
   op = client.getPathStatus(relativePath, false, tracingContext);
 } catch (AbfsRestOperationException ex) {
+  LOG.debug("Failed to to getPathStatus {}", relativePath, ex);
   if (ex.getStatusCode() == HttpURLConnection.HTTP_NOT_FOUND) {

Review Comment:
   Hi @steveloughran, given Hadoop's single-writer semantics, would it be 
correct to expect that, as part of job parallelization, only one worker process 
should try to create a file? Since this check for FileNotFound comes after an 
attempt to create the file with overwrite=false, which in turn failed with a 
conflict indicating the file was present at that moment, a concurrent operation 
on the file is indeed confirmed.
   
   It's quite possible that if we let this create proceed, some other operation 
such as a delete could kick in later as well. Wouldn't the code below, which 
throws an exception at the first indication of parallel activity, be the right 
thing to do?
   
   
   As the workload pattern is not honoring the single-writer semantics, I feel 
we should retain the logic to throw ConcurrentWriteOperationDetectedException.





> Tune ABFS create() retry logic
> --
>
> Key: HADOOP-18657
> URL: https://issues.apache.org/jira/browse/HADOOP-18657
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> Based on experience trying to debug this happening
> # add debug statements when create() fails
> # generated exception text to reference string shared with tests, path and 
> error code
> # generated exception to include inner exception for full stack trace
> Currently the retry logic is
> # create(overwrite=false)
> # if HTTP_CONFLICT/409 raised; call HEAD
> # use etag in create(path, overwrite=true, etag)
> # special handling of error HTTP_PRECON_FAILED = 412
> There's a race condition here: if the file that exists is deleted between 
> steps 1 and 2, the retry should succeed, but currently a 404 from the HEAD 
> is escalated to a failure.
> proposed changes
> # if HEAD is 404, leave etag == null and continue
> # special handling of 412 also to handle 409
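>
> A sketch of the proposed flow ({{createWithOverwrite()}} and {{getEtag()}} 
> below are hypothetical stand-ins for the store/client calls; only the 
> control flow is the point):
> {code}
> import java.net.HttpURLConnection;
> 
> import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsRestOperationException;
> 
> abstract class CreateRetrySketch {
>   abstract Object createWithOverwrite(String path, boolean overwrite,
>       String etag) throws AbfsRestOperationException;
>   abstract String getEtag(String path) throws AbfsRestOperationException;
> 
>   Object createFile(String path) throws AbfsRestOperationException {
>     try {
>       return createWithOverwrite(path, false, null);   // step 1
>     } catch (AbfsRestOperationException e) {
>       if (e.getStatusCode() != HttpURLConnection.HTTP_CONFLICT) {
>         throw e;
>       }
>       String etag = null;
>       try {
>         etag = getEtag(path);                           // step 2: HEAD
>       } catch (AbfsRestOperationException headEx) {
>         if (headEx.getStatusCode() != HttpURLConnection.HTTP_NOT_FOUND) {
>           throw headEx;
>         }
>         // proposed: the file vanished between steps 1 and 2; continue
>         // with etag == null rather than escalating the 404
>       }
>       return createWithOverwrite(path, true, etag);     // step 3
>     }
>   }
> }
> {code}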



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snvijaya commented on a diff in pull request #5462: HADOOP-18657. Tune ABFS create() retry logic

2023-04-18 Thread via GitHub


snvijaya commented on code in PR #5462:
URL: https://github.com/apache/hadoop/pull/5462#discussion_r1131060581


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java:
##
@@ -621,37 +622,57 @@ private AbfsRestOperation 
conditionalCreateOverwriteFile(final String relativePa
   isAppendBlob, null, tracingContext);
 
 } catch (AbfsRestOperationException e) {
+  LOG.debug("Failed to create {}", relativePath, e);
   if (e.getStatusCode() == HttpURLConnection.HTTP_CONFLICT) {
 // File pre-exists, fetch eTag
+LOG.debug("Fetching etag of {}", relativePath);
 try {
   op = client.getPathStatus(relativePath, false, tracingContext);
 } catch (AbfsRestOperationException ex) {
+  LOG.debug("Failed to to getPathStatus {}", relativePath, ex);
   if (ex.getStatusCode() == HttpURLConnection.HTTP_NOT_FOUND) {

Review Comment:
   Hi @steveloughran, given Hadoop's single-writer semantics, would it be 
correct to expect that, as part of job parallelization, only one worker process 
should try to create a file? Since this check for FileNotFound comes after an 
attempt to create the file with overwrite=false, which in turn failed with a 
conflict indicating the file was present at that moment, a concurrent operation 
on the file is indeed confirmed.
   
   It's quite possible that if we let this create proceed, some other operation 
such as a delete could kick in later as well. Wouldn't the code below, which 
throws an exception at the first indication of parallel activity, be the right 
thing to do?
   
   
   As the workload pattern is not honoring the single-writer semantics, I feel 
we should retain the logic to throw ConcurrentWriteOperationDetectedException.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org