Re: [PR] HDFS-17249. Fix TestDFSUtil.testIsValidName() unit test failure [hadoop]

2023-11-08 Thread via GitHub


LiuGuH commented on PR #6249:
URL: https://github.com/apache/hadoop/pull/6249#issuecomment-1803322401

   > I've started a new run - 
https://ci-hadoop.apache.org/view/Hadoop/job/hadoop-qbt-trunk-java8-win10-x86_64/310/console
   
   When this run finishes, I will switch all the asserts in the test case over to this, @steveloughran. Thanks.
   
   assertValidName(String name) {
     assertFalse("Should have been rejected '" + name + "'", isValidName(name));
   }
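   For reference, a minimal sketch of what a pair of such helpers could look like once the asserts are switched over (class and method names are assumptions, not the final patch):
   
   ```java
   import static org.apache.hadoop.hdfs.DFSUtil.isValidName;
   import static org.junit.Assert.assertFalse;
   import static org.junit.Assert.assertTrue;
   
   // Hypothetical helpers for TestDFSUtil; names and messages are illustrative only.
   public class ValidNameAsserts {
   
     static void assertValidName(String name) {
       // accepted paths: the message names the path so failures are self-describing
       assertTrue("Should have been accepted '" + name + "'", isValidName(name));
     }
   
     static void assertInvalidName(String name) {
       assertFalse("Should have been rejected '" + name + "'", isValidName(name));
     }
   
     public static void main(String[] args) {
       assertValidName("/bar/foo");
       // relative paths are rejected by DFSUtil.isValidName
       assertInvalidName("foo/bar");
     }
   }
   ```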


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-16791. Add getEnclosingRoot() API to filesystem interface and im… [hadoop]

2023-11-08 Thread via GitHub


hadoop-yetus commented on PR #6262:
URL: https://github.com/apache/hadoop/pull/6262#issuecomment-1803254031

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   5m 10s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 11 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m  3s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 46s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |  11m 38s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 47s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   3m 59s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   3m 38s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   6m 33s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  23m 28s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 19s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6262/1/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch failed.  |
   | -1 :x: |  compile  |   3m 13s | 
[/patch-compile-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6262/1/artifact/out/patch-compile-root.txt)
 |  root in the patch failed.  |
   | -1 :x: |  cc  |   3m 13s | 
[/patch-compile-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6262/1/artifact/out/patch-compile-root.txt)
 |  root in the patch failed.  |
   | -1 :x: |  javac  |   3m 13s | 
[/patch-compile-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6262/1/artifact/out/patch-compile-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 32s |  |  the patch passed  |
   | -1 :x: |  mvnsite  |   0m 22s | 
[/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6262/1/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch failed.  |
   | -1 :x: |  javadoc  |   0m 43s | 
[/results-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6262/1/artifact/out/results-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-project_hadoop-hdfs-rbf generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0)  |
   | -1 :x: |  spotbugs  |   0m 21s | 
[/patch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6262/1/artifact/out/patch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  shadedclient  |  23m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  15m 23s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 53s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 179m 50s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6262/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |   0m 32s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6262/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 336m 37s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   |   | 

Re: [PR] HDFS-16791. Add getEnclosingRoot() API to filesystem interface and im… [hadoop]

2023-11-08 Thread via GitHub


mccormickt12 commented on PR #6262:
URL: https://github.com/apache/hadoop/pull/6262#issuecomment-1803015457

   @steveloughran A nearly clean cherry-pick. Just a couple of things that exist 
in trunk but not in 3.3, so the conflicts are all deletes.
   
   ```
   Unmerged paths:
 (use "git add ..." to mark resolution)
   both modified:   hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
   both modified:   hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
   
   Untracked files:
 (use "git add ..." to include in what will be committed)
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/placement/
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] HDFS-16791. Add getEnclosingRoot() API to filesystem interface and im… [hadoop]

2023-11-08 Thread via GitHub


mccormickt12 opened a new pull request, #6262:
URL: https://github.com/apache/hadoop/pull/6262

   …plementations (#6198)
   
   The enclosing root path is a common ancestor that should be used for temp 
and staging dirs as well as within encryption zones and other restricted 
directories.
   
   Contributed by Tom McCormick
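   
   As a rough illustration of how the backported API could be used, a minimal sketch (assuming the method surfaces on `FileSystem` as `getEnclosingRoot(Path)` returning a `Path`, as in the trunk change; the paths below are hypothetical):
   
   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   
   public class EnclosingRootExample {
     public static void main(String[] args) throws Exception {
       Configuration conf = new Configuration();
       Path file = new Path("/user/alice/data/part-0000");
       FileSystem fs = file.getFileSystem(conf);
       // Ask the filesystem for the enclosing root of this path, e.g. the root of
       // an encryption zone or mount point (with no such zones this is typically "/").
       Path root = fs.getEnclosingRoot(file);
       // Placing a staging dir under the enclosing root keeps it inside the same
       // encryption zone / mount as the final destination.
       Path staging = new Path(root, ".staging/job_0001");
       System.out.println("enclosing root: " + root + ", staging dir: " + staging);
     }
   }
   ```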
   
   
   
   ### Description of PR
   Cherry pick of 
https://github.com/apache/hadoop/pull/6198#pullrequestreview-1712503495 
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-15413. add dfs.client.read.striped.datanode.max.attempts to fix read ecfile timeout [hadoop]

2023-11-08 Thread via GitHub


bbeaudreault commented on PR #5829:
URL: https://github.com/apache/hadoop/pull/5829#issuecomment-1802929219

   @ayushtkn @zhangshuyan0 looks like the remaining failing checks are 
unrelated, and the feedback was addressed. Any chance for another look?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18257) Analyzing S3A Audit Logs

2023-11-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17784214#comment-17784214
 ] 

ASF GitHub Bot commented on HADOOP-18257:
-

mukund-thakur commented on code in PR #6000:
URL: https://github.com/apache/hadoop/pull/6000#discussion_r1387231564


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/audit/mapreduce/S3AAuditLogMergerAndParser.java:
##
@@ -83,17 +83,20 @@ public HashMap parseAuditLog(String 
singleAuditLog) {
   return auditLogMap;
 }
 final Matcher matcher = LOG_ENTRY_PATTERN.matcher(singleAuditLog);
-boolean patternMatching = matcher.matches();
-if (patternMatching) {
+boolean patternMatched = matcher.matches();
+if (patternMatched) {
   for (String key : AWS_LOG_REGEXP_GROUPS) {
 try {
   final String value = matcher.group(key);
   auditLogMap.put(key, value);
 } catch (IllegalStateException e) {
+  LOG.debug("Skipping key :{} due to no matching with the audit log "
+  + "pattern :", key);
   LOG.debug(String.valueOf(e));
 }
   }
 }
+LOG.info("MMT audit map: {}", auditLogMap);

Review Comment:
   nit: is MMT needed? :P



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/TestS3AAuditLogMergerAndParser.java:
##
@@ -0,0 +1,273 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.nio.file.Files;
+import java.util.Map;
+
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.AbstractS3ATestBase;
+import org.apache.hadoop.fs.s3a.audit.mapreduce.S3AAuditLogMergerAndParser;
+
+/**
+ * This will implement different tests on S3AAuditLogMergerAndParser class.
+ */
+public class TestS3AAuditLogMergerAndParser extends AbstractS3ATestBase {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(TestS3AAuditLogMergerAndParser.class);
+
+  /**
+   * A real log entry.
+   * This is derived from a real log entry on a test run.
+   * If this needs to be updated, please do it from a real log.
+   * Splitting this up across lines has a tendency to break things, so
+   * be careful making changes.
+   */
+  static final String SAMPLE_LOG_ENTRY =
+  "183c9826b45486e485693808f38e2c4071004bf5dfd4c3ab210f0a21a400"
+  + " bucket-london"
+  + " [13/May/2021:11:26:06 +]"
+  + " 109.157.171.174"
+  + " arn:aws:iam::152813717700:user/dev"
+  + " M7ZB7C4RTKXJKTM9"
+  + " REST.PUT.OBJECT"
+  + " fork-0001/test/testParseBrokenCSVFile"
+  + " \"PUT /fork-0001/test/testParseBrokenCSVFile HTTP/1.1\""
+  + " 200"
+  + " -"
+  + " -"
+  + " 794"
+  + " 55"
+  + " 17"
+  + " \"https://audit.example.org/hadoop/1/op_create/;
+  + "e8ede3c7-8506-4a43-8268-fe8fcbb510a4-0278/"
+  + "?op=op_create"
+  + "=fork-0001/test/testParseBrokenCSVFile"
+  + "=alice"
+  + "=2eac5a04-2153-48db-896a-09bc9a2fd132"
+  + "=e8ede3c7-8506-4a43-8268-fe8fcbb510a4-0278=154"
+  + "=e8ede3c7-8506-4a43-8268-fe8fcbb510a4=156&"
+  + "ts=1620905165700\""
+  + " \"Hadoop 3.4.0-SNAPSHOT, java/1.8.0_282 vendor/AdoptOpenJDK\""
+  + " -"
+  + " TrIqtEYGWAwvu0h1N9WJKyoqM0TyHUaY+ZZBwP2yNf2qQp1Z/0="
+  + " SigV4"
+  + " ECDHE-RSA-AES128-GCM-SHA256"
+  + " AuthHeader"
+  + " bucket-london.s3.eu-west-2.amazonaws.com"
+  + " TLSv1.2" + "\n";
+
+  static final String SAMPLE_LOG_ENTRY_1 =
+  "01234567890123456789"
+  + " bucket-london1"
+  + " [13/May/2021:11:26:06 +]"
+  + " 109.157.171.174"
+  + " arn:aws:iam::152813717700:user/dev"
+  + " M7ZB7C4RTKXJKTM9"
+  + " REST.PUT.OBJECT"
+  + " 

Re: [PR] HADOOP-18257. Merging and Parsing S3A audit logs into Avro format for analysis. [hadoop]

2023-11-08 Thread via GitHub


mukund-thakur commented on code in PR #6000:
URL: https://github.com/apache/hadoop/pull/6000#discussion_r1387231564


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/audit/mapreduce/S3AAuditLogMergerAndParser.java:
##
@@ -83,17 +83,20 @@ public HashMap parseAuditLog(String 
singleAuditLog) {
   return auditLogMap;
 }
 final Matcher matcher = LOG_ENTRY_PATTERN.matcher(singleAuditLog);
-boolean patternMatching = matcher.matches();
-if (patternMatching) {
+boolean patternMatched = matcher.matches();
+if (patternMatched) {
   for (String key : AWS_LOG_REGEXP_GROUPS) {
 try {
   final String value = matcher.group(key);
   auditLogMap.put(key, value);
 } catch (IllegalStateException e) {
+  LOG.debug("Skipping key :{} due to no matching with the audit log "
+  + "pattern :", key);
   LOG.debug(String.valueOf(e));
 }
   }
 }
+LOG.info("MMT audit map: {}", auditLogMap);

Review Comment:
   nit: is MMT needed? :P



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/TestS3AAuditLogMergerAndParser.java:
##
@@ -0,0 +1,273 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.nio.file.Files;
+import java.util.Map;
+
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.AbstractS3ATestBase;
+import org.apache.hadoop.fs.s3a.audit.mapreduce.S3AAuditLogMergerAndParser;
+
+/**
+ * This will implement different tests on S3AAuditLogMergerAndParser class.
+ */
+public class TestS3AAuditLogMergerAndParser extends AbstractS3ATestBase {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(TestS3AAuditLogMergerAndParser.class);
+
+  /**
+   * A real log entry.
+   * This is derived from a real log entry on a test run.
+   * If this needs to be updated, please do it from a real log.
+   * Splitting this up across lines has a tendency to break things, so
+   * be careful making changes.
+   */
+  static final String SAMPLE_LOG_ENTRY =
+  "183c9826b45486e485693808f38e2c4071004bf5dfd4c3ab210f0a21a400"
+  + " bucket-london"
+  + " [13/May/2021:11:26:06 +]"
+  + " 109.157.171.174"
+  + " arn:aws:iam::152813717700:user/dev"
+  + " M7ZB7C4RTKXJKTM9"
+  + " REST.PUT.OBJECT"
+  + " fork-0001/test/testParseBrokenCSVFile"
+  + " \"PUT /fork-0001/test/testParseBrokenCSVFile HTTP/1.1\""
+  + " 200"
+  + " -"
+  + " -"
+  + " 794"
+  + " 55"
+  + " 17"
+  + " \"https://audit.example.org/hadoop/1/op_create/;
+  + "e8ede3c7-8506-4a43-8268-fe8fcbb510a4-0278/"
+  + "?op=op_create"
+  + "=fork-0001/test/testParseBrokenCSVFile"
+  + "=alice"
+  + "=2eac5a04-2153-48db-896a-09bc9a2fd132"
+  + "=e8ede3c7-8506-4a43-8268-fe8fcbb510a4-0278=154"
+  + "=e8ede3c7-8506-4a43-8268-fe8fcbb510a4=156&"
+  + "ts=1620905165700\""
+  + " \"Hadoop 3.4.0-SNAPSHOT, java/1.8.0_282 vendor/AdoptOpenJDK\""
+  + " -"
+  + " TrIqtEYGWAwvu0h1N9WJKyoqM0TyHUaY+ZZBwP2yNf2qQp1Z/0="
+  + " SigV4"
+  + " ECDHE-RSA-AES128-GCM-SHA256"
+  + " AuthHeader"
+  + " bucket-london.s3.eu-west-2.amazonaws.com"
+  + " TLSv1.2" + "\n";
+
+  static final String SAMPLE_LOG_ENTRY_1 =
+  "01234567890123456789"
+  + " bucket-london1"
+  + " [13/May/2021:11:26:06 +]"
+  + " 109.157.171.174"
+  + " arn:aws:iam::152813717700:user/dev"
+  + " M7ZB7C4RTKXJKTM9"
+  + " REST.PUT.OBJECT"
+  + " fork-0001/test/testParseBrokenCSVFile"
+  + " \"PUT /fork-0001/test/testParseBrokenCSVFile HTTP/1.1\""
+  + " 200"
+  + " -"
+  + " -"
+  + " 794"
+  + " 55"
+  + " 17"
+  + " 

[jira] [Commented] (HADOOP-18965) ITestS3AHugeFilesEncryption failure

2023-11-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17784203#comment-17784203
 ] 

ASF GitHub Bot commented on HADOOP-18965:
-

hadoop-yetus commented on PR #6261:
URL: https://github.com/apache/hadoop/pull/6261#issuecomment-1802649000

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 32s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6261/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 47s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 143m 44s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6261/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6261 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 5ebf35449e05 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 09eeb45002f767a2bc268c65b41afb88c4a8b87f |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6261/1/testReport/ |
   | Max. process+thread count | 531 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6261/1/console |
   | versions | 

Re: [PR] HADOOP-18965. ITestS3AHugeFilesEncryption failure [hadoop]

2023-11-08 Thread via GitHub


hadoop-yetus commented on PR #6261:
URL: https://github.com/apache/hadoop/pull/6261#issuecomment-1802649000

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 32s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6261/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 47s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 143m 44s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6261/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6261 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 5ebf35449e05 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 09eeb45002f767a2bc268c65b41afb88c4a8b87f |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6261/1/testReport/ |
   | Max. process+thread count | 531 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6261/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to 

Re: [PR] YARN-11611. Remove json-io to 4.14.1 due to CVE-2023-34610 [hadoop]

2023-11-08 Thread via GitHub


hadoop-yetus commented on PR #6257:
URL: https://github.com/apache/hadoop/pull/6257#issuecomment-1802444258

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 41s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  50m 56s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  shadedclient  |  93m  8s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  shadedclient  |  40m 46s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   4m 27s |  |  
hadoop-yarn-server-applicationhistoryservice in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 146m  0s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6257/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6257 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux c45fe2d91683 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8d42e4b1c8ffcfec53118316e0ea105eac6464e4 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6257/3/testReport/ |
   | Max. process+thread count | 2775 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice
 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6257/3/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this 

Re: [PR] YARN-11483. [Federation] Router AdminCLI Supports Clean Finish Apps. [hadoop]

2023-11-08 Thread via GitHub


hadoop-yetus commented on PR #6251:
URL: https://github.com/apache/hadoop/pull/6251#issuecomment-1802430924

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 15s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  36m 22s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   7m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   7m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   1m 58s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m  7s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   5m  0s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   4m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   9m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  cc  |   7m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   7m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  cc  |   7m  7s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   7m  7s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 52s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   4m 45s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   4m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   4m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |  10m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m  7s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   5m 34s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 56s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  unit  | 103m 15s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  unit  |  28m 16s |  |  hadoop-yarn-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 39s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 56s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 365m 54s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6251/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6251 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets cc buflint 
bufcompat |
   | uname | Linux 384a6b78ad6b 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4721e3feae980f0109cacb2b64145a808fe28607 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 

[jira] [Commented] (HADOOP-18965) ITestS3AHugeFilesEncryption failure

2023-11-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17784168#comment-17784168
 ] 

ASF GitHub Bot commented on HADOOP-18965:
-

steveloughran opened a new pull request, #6261:
URL: https://github.com/apache/hadoop/pull/6261

   
   ### Description of PR
   
   * moves to per-bucket load of encryption algorithm in test
   * prints diagnostics on failure
   
   ### How was this patch tested?
   
   reran failing test suite
   
   
   ### For code changes:
   
   - [x] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [x] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> ITestS3AHugeFilesEncryption failure
> ---
>
> Key: HADOOP-18965
> URL: https://issues.apache.org/jira/browse/HADOOP-18965
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Priority: Major
>
> test failures for me with a test setup of per-bucket encryption of sse-kms.
> suspect (but can't guarantee) HADOOP-18850 may be a factor.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18965) ITestS3AHugeFilesEncryption failure

2023-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18965:

Labels: pull-request-available  (was: )

> ITestS3AHugeFilesEncryption failure
> ---
>
> Key: HADOOP-18965
> URL: https://issues.apache.org/jira/browse/HADOOP-18965
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> test failures for me with a test setup of per-bucket encryption of sse-kms.
> suspect (but can't guarantee) HADOOP-18850 may be a factor.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] HADOOP-18965. ITestS3AHugeFilesEncryption failure [hadoop]

2023-11-08 Thread via GitHub


steveloughran opened a new pull request, #6261:
URL: https://github.com/apache/hadoop/pull/6261

   
   ### Description of PR
   
   * moves to per-bucket load of encryption algorithm in test
   * prints diagnostics on failure
   
   ### How was this patch tested?
   
   reran failing test suite
   
   
   ### For code changes:
   
   - [x] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [x] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18965) ITestS3AHugeFilesEncryption failure

2023-11-08 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17784151#comment-17784151
 ] 

Steve Loughran commented on HADOOP-18965:
-

happens with per-bucket encryption, as the new check for the algorithm picks up the 
base encryption mechanism only
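
A minimal sketch of the base vs. per-bucket resolution involved (the property names below follow the usual fs.s3a.bucket.<bucket>.* override convention but should be treated as assumptions, not the exact keys the test uses):

{code}
import org.apache.hadoop.conf.Configuration;

// Hypothetical helper: the per-bucket override, when set, is what the
// filesystem actually uses, so a check that only reads the base key
// reports the wrong algorithm on buckets configured this way.
public final class PerBucketEncryptionLookup {
  public static String encryptionAlgorithm(Configuration conf, String bucket) {
    String base = conf.getTrimmed("fs.s3a.encryption.algorithm", "");
    String perBucket =
        conf.getTrimmed("fs.s3a.bucket." + bucket + ".encryption.algorithm", "");
    return perBucket.isEmpty() ? base : perBucket;
  }
}
{code}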

> ITestS3AHugeFilesEncryption failure
> ---
>
> Key: HADOOP-18965
> URL: https://issues.apache.org/jira/browse/HADOOP-18965
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Priority: Major
>
> test failures for me with a test setup of per-bucket encryption of sse-kms.
> suspect (but can't guarantee) HADOOP-18850 may be a factor.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18965) ITestS3AHugeFilesEncryption failure

2023-11-08 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17784145#comment-17784145
 ] 

Steve Loughran commented on HADOOP-18965:
-


{code}
[ERROR] Tests run: 10, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 19.062 
s <<< FAILURE! - in org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesEncryption
[ERROR] 
test_090_verifyRenameSourceEncryption(org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesEncryption)
  Time elapsed: 0.547 s  <<< FAILURE!
java.lang.AssertionError: Invalid encryption configured
at 
org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesEncryption.assertEncrypted(ITestS3AHugeFilesEncryption.java:77)
at 
org.apache.hadoop.fs.s3a.scale.AbstractSTestS3AHugeFiles.test_090_verifyRenameSourceEncryption(AbstractSTestS3AHugeFiles.java:640)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at

[ERROR] 
test_110_verifyRenameDestEncryption(org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesEncryption)
  Time elapsed: 0.654 s  <<< FAILURE!
java.lang.AssertionError: Invalid encryption configured
at 
org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesEncryption.assertEncrypted(ITestS3AHugeFilesEncryption.java:77)
at 
org.apache.hadoop.fs.s3a.scale.AbstractSTestS3AHugeFiles.test_110_verifyRenameDestEncryption(AbstractSTestS3AHugeFiles.java:696)
 
{code}


> ITestS3AHugeFilesEncryption failure
> ---
>
> Key: HADOOP-18965
> URL: https://issues.apache.org/jira/browse/HADOOP-18965
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Priority: Major
>
> test failures for me with a test setup of per-bucket encryption of sse-kms.
> suspect (but can't guarantee) HADOOP-18850 may be a factor.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18965) ITestS3AHugeFilesEncryption failure

2023-11-08 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18965:
---

 Summary: ITestS3AHugeFilesEncryption failure
 Key: HADOOP-18965
 URL: https://issues.apache.org/jira/browse/HADOOP-18965
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.4.0
Reporter: Steve Loughran


test failures for me with a test setup of per-bucket encryption of sse-kms.

suspect (but can't guarantee) HADOOP-18850 may be a factor.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18850) Enable dual-layer server-side encryption with AWS KMS keys (DSSE-KMS)

2023-11-08 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17784138#comment-17784138
 ] 

Steve Loughran commented on HADOOP-18850:
-

hey, I think this is breaking my test runs on a bucket set up with SSE-KMS 
encryption.

I will try a run from my IDE, but I think we'll need a follow-up, which must 
include the encryption string in the new assertion error message.

{code}
[ERROR] Tests run: 10, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 19.062 
s <<< FAILURE! - in org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesEncryption
[ERROR] 
test_090_verifyRenameSourceEncryption(org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesEncryption)
  Time elapsed: 0.547 s  <<< FAILURE!
java.lang.AssertionError: Invalid encryption configured
at 
org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesEncryption.assertEncrypted(ITestS3AHugeFilesEncryption.java:77)
at 
org.apache.hadoop.fs.s3a.scale.AbstractSTestS3AHugeFiles.test_090_verifyRenameSourceEncryption(AbstractSTestS3AHugeFiles.java:640)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)

[ERROR] 
test_110_verifyRenameDestEncryption(org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesEncryption)
  Time elapsed: 0.654 s  <<< FAILURE!
java.lang.AssertionError: Invalid encryption configured
at 
org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesEncryption.assertEncrypted(ITestS3AHugeFilesEncryption.java:77)
at 
org.apache.hadoop.fs.s3a.scale.AbstractSTestS3AHugeFiles.test_110_verifyRenameDestEncryption(AbstractSTestS3AHugeFiles.java:696)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)


{code}
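
A minimal sketch of the kind of assertion the follow-up could use, so a failure reports what was actually configured instead of a bare "Invalid encryption configured" (helper name and message text are illustrative only, not the patch itself):

{code}
import static org.junit.Assert.assertTrue;

import java.util.Arrays;
import java.util.List;

// Hypothetical assertion helper: name the observed algorithm in the message.
public final class EncryptionAsserts {
  public static void assertEncrypted(String actualAlgorithm,
      List<String> allowedAlgorithms) {
    assertTrue("Invalid encryption configured: found '" + actualAlgorithm
            + "', expected one of " + allowedAlgorithms,
        allowedAlgorithms.contains(actualAlgorithm));
  }

  public static void main(String[] args) {
    // Example: a mismatch would fail with a message naming both sides.
    assertEncrypted("SSE_KMS", Arrays.asList("SSE_KMS", "DSSE_KMS"));
  }
}
{code}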



> Enable dual-layer server-side encryption with AWS KMS keys (DSSE-KMS)
> -
>
> Key: HADOOP-18850
> URL: https://issues.apache.org/jira/browse/HADOOP-18850
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, security
>Affects Versions: 3.4.0
>Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> Add support for DSSE-KMS
> https://docs.aws.amazon.com/AmazonS3/latest/userguide/specifying-dsse-encryption.html



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To 

Re: [PR] YARN-11610. [Federation] Add WeightedHomePolicyManager. [hadoop]

2023-11-08 Thread via GitHub


hadoop-yetus commented on PR #6256:
URL: https://github.com/apache/hadoop/pull/6256#issuecomment-1802007069

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 27s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 34s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 50s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 14s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6256/2/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: 
The patch generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 41s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  87m 58s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6256/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6256 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 8532abf887c2 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c6561757110796b846714b7b2517c4345dbe7a69 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6256/2/testReport/ |
   | Max. process+thread count | 686 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
   | Console output | 

[jira] [Resolved] (HADOOP-18487) Make protobuf 2.5 an optional runtime dependency.

2023-11-08 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18487.
-
Resolution: Fixed

> Make protobuf 2.5 an optional runtime dependency.
> -
>
> Key: HADOOP-18487
> URL: https://issues.apache.org/jira/browse/HADOOP-18487
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, ipc
>Affects Versions: 3.3.4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> Uses of protobuf 2.5 and RpcEngine have been deprecated since 3.3.0 in 
> HADOOP-17046.
> While still keeping those files around (for a long time...), how about we 
> make the protobuf 2.5.0 export of hadoop-common and hadoop-hdfs *provided* 
> rather than *compile*?
> That way, if apps want it for their own APIs, they have to explicitly ask for 
> it, but at least our own scans don't break.
> I have no idea what will happen to the rest of the stack at this point; it 
> will be "interesting" to see.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18487) Make protobuf 2.5 an optional runtime dependency.

2023-11-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17784073#comment-17784073
 ] 

ASF GitHub Bot commented on HADOOP-18487:
-

steveloughran merged PR #6258:
URL: https://github.com/apache/hadoop/pull/6258




> Make protobuf 2.5 an optional runtime dependency.
> -
>
> Key: HADOOP-18487
> URL: https://issues.apache.org/jira/browse/HADOOP-18487
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, ipc
>Affects Versions: 3.3.4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> Uses of protobuf 2.5 and RpcEngine have been deprecated since 3.3.0 in 
> HADOOP-17046.
> While still keeping those files around (for a long time...), how about we 
> make the protobuf 2.5.0 export of hadoop-common and hadoop-hdfs *provided* 
> rather than *compile*?
> That way, if apps want it for their own APIs, they have to explicitly ask for 
> it, but at least our own scans don't break.
> I have no idea what will happen to the rest of the stack at this point; it 
> will be "interesting" to see.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18487. Protobuf 2.5 removal part 2: stop exporting protobuf-2.5 (#6185) [hadoop]

2023-11-08 Thread via GitHub


steveloughran merged PR #6258:
URL: https://github.com/apache/hadoop/pull/6258


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18487) Make protobuf 2.5 an optional runtime dependency.

2023-11-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17784072#comment-17784072
 ] 

ASF GitHub Bot commented on HADOOP-18487:
-

steveloughran commented on PR #6258:
URL: https://github.com/apache/hadoop/pull/6258#issuecomment-1802001442

   The test failure is due to loss of storage and not, as far as I can see, to this change.




> Make protobuf 2.5 an optional runtime dependency.
> -
>
> Key: HADOOP-18487
> URL: https://issues.apache.org/jira/browse/HADOOP-18487
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, ipc
>Affects Versions: 3.3.4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> Uses of protobuf 2.5 and RpcEngine have been deprecated since 3.3.0 in 
> HADOOP-17046.
> While still keeping those files around (for a long time...), how about we 
> make the protobuf 2.5.0 export of hadoop-common and hadoop-hdfs *provided* 
> rather than *compile*?
> That way, if apps want it for their own APIs, they have to explicitly ask for 
> it, but at least our own scans don't break.
> I have no idea what will happen to the rest of the stack at this point; it 
> will be "interesting" to see.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18487. Protobuf 2.5 removal part 2: stop exporting protobuf-2.5 (#6185) [hadoop]

2023-11-08 Thread via GitHub


steveloughran commented on PR #6258:
URL: https://github.com/apache/hadoop/pull/6258#issuecomment-1802001442

   The test failure is due to loss of storage and not, as far as I can see, to this change.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-16791. Add getEnclosingRoot() API to filesystem interface and implementations [hadoop]

2023-11-08 Thread via GitHub


steveloughran commented on PR #6198:
URL: https://github.com/apache/hadoop/pull/6198#issuecomment-1801996383

   Merged. @mccormickt12, if you can do a PR for branch-3.3 with just this 
cherry-pick, we can merge it there too once Yetus approves. Marking the JIRA as 
fixed for 3.4.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-16791. Add getEnclosingRoot() API to filesystem interface and implementations [hadoop]

2023-11-08 Thread via GitHub


steveloughran merged PR #6198:
URL: https://github.com/apache/hadoop/pull/6198


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-16791 Add getEnclosingRoot API to filesystem interface and all implementations [hadoop]

2023-11-08 Thread via GitHub


steveloughran commented on PR #6198:
URL: https://github.com/apache/hadoop/pull/6198#issuecomment-1801989663

   note, there is a new test failure but it is unrelated and being addressed in 
#6249
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17249. Fix TestDFSUtil.testIsValidName() unit test failure [hadoop]

2023-11-08 Thread via GitHub


steveloughran commented on PR #6249:
URL: https://github.com/apache/hadoop/pull/6249#issuecomment-1801983058

   Having hit this myself, I'm going to say "this PR needs to include the name 
passed in rather than just assert true/false".
   
   Proposed:
   
   ```java
   void assertValidName(String name) {
     assertFalse("Should have been rejected '" + name + "'", isValidName(name));
   }
   ```
   
   ...and all asserts in the test case switched over to this. Think how much 
easier it will be to debug Yetus failures the next time there's a regression.
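   
   For reference, a minimal sketch of what the pair of helpers could look like 
once split into a valid and an invalid variant, assuming JUnit 4's 
org.junit.Assert and DFSUtil.isValidName(); the assertInvalidName name is only 
illustrative and not part of the patch:
   
   ```java
   // Sketch only. Assumes:
   //   import static org.junit.Assert.assertTrue;
   //   import static org.junit.Assert.assertFalse;
   //   import org.apache.hadoop.hdfs.DFSUtil;
   private static void assertValidName(String name) {
     assertTrue("Should have been accepted '" + name + "'", DFSUtil.isValidName(name));
   }
   
   private static void assertInvalidName(String name) {
     assertFalse("Should have been rejected '" + name + "'", DFSUtil.isValidName(name));
   }
   ```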
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11577. Improve FederationInterceptorREST Method Result. [hadoop]

2023-11-08 Thread via GitHub


hadoop-yetus commented on PR #6190:
URL: https://github.com/apache/hadoop/pull/6190#issuecomment-1801975622

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 27s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 12s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 23s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 13s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 23s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 28s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  82m 13s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6190/22/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6190 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 709d8d42f07a 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a37985eda9be27dc6fee808ac942a076e52e2499 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6190/22/testReport/ |
   | Max. process+thread count | 624 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6190/22/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use 

Re: [PR] YARN-11577. Improve FederationInterceptorREST Method Result. [hadoop]

2023-11-08 Thread via GitHub


hadoop-yetus commented on PR #6190:
URL: https://github.com/apache/hadoop/pull/6190#issuecomment-1801945352

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m 40s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  36m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 29s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 134m 52s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6190/21/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6190 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 7b47f6d8e4c0 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b72da1cc28d2f241ad2ec9e6e830aff8a9ac4a74 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6190/21/testReport/ |
   | Max. process+thread count | 557 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6190/21/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use 

[jira] [Updated] (HADOOP-18964) Update plugin for SBOM generation to 2.7.10 #6235

2023-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18964:

Labels: pull-request-available  (was: )

> Update plugin for SBOM generation to 2.7.10 #6235
> -
>
> Key: HADOOP-18964
> URL: https://issues.apache.org/jira/browse/HADOOP-18964
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vinod Anandan
>Priority: Major
>  Labels: pull-request-available
>
> Update the CycloneDX Maven plugin for SBOM generation to 2.7.10



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18964) Update plugin for SBOM generation to 2.7.10 #6235

2023-11-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17784036#comment-17784036
 ] 

ASF GitHub Bot commented on HADOOP-18964:
-

VinodAnandan commented on PR #6235:
URL: https://github.com/apache/hadoop/pull/6235#issuecomment-1801828991

   > have triggered the build again. @VinodAnandan can you create a HADOOP 
ticket & prefix the jira id on this PR
   
   Done.




> Update plugin for SBOM generation to 2.7.10 #6235
> -
>
> Key: HADOOP-18964
> URL: https://issues.apache.org/jira/browse/HADOOP-18964
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vinod Anandan
>Priority: Major
>
> Update the CycloneDX Maven plugin for SBOM generation to 2.7.10



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18964 Update plugin for SBOM generation to 2.7.10 [hadoop]

2023-11-08 Thread via GitHub


VinodAnandan commented on PR #6235:
URL: https://github.com/apache/hadoop/pull/6235#issuecomment-1801828991

   > have triggered the build again. @VinodAnandan can you create a HADOOP 
ticket & prefix the jira id on this PR
   
   Done.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18964) Update plugin for SBOM generation to 2.7.10 #6235

2023-11-08 Thread Vinod Anandan (Jira)
Vinod Anandan created HADOOP-18964:
--

 Summary: Update plugin for SBOM generation to 2.7.10 #6235
 Key: HADOOP-18964
 URL: https://issues.apache.org/jira/browse/HADOOP-18964
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Vinod Anandan


Update the CycloneDX Maven plugin for SBOM generation to 2.7.10



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11612. [Federation] Fix the name of unmanaged app. [hadoop]

2023-11-08 Thread via GitHub


zhengchenyu closed pull request #6260: YARN-11612. [Federation] Fix the name of 
unmanaged app.
URL: https://github.com/apache/hadoop/pull/6260


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11612. [Federation] Fix the name of unmanaged app. [hadoop]

2023-11-08 Thread via GitHub


zhengchenyu commented on code in PR #6260:
URL: https://github.com/apache/hadoop/pull/6260#discussion_r1386534300


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedApplicationManager.java:
##
@@ -431,6 +431,7 @@ private void submitUnmanagedApp(ApplicationId appId) throws 
YarnException, IOExc
 context.setResource(resource);
 context.setAMContainerSpec(amContainer);
 if (applicationSubmissionContext != null) {
+  
context.setApplicationName(applicationSubmissionContext.getApplicationName());

Review Comment:
   Yes, you are right. I just want to search for the application by name, but it 
is indeed a quick way to find the home sub-cluster. I will close this PR for now 
until someone is interested in it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11612. [Federation] Fix the name of unmanaged app. [hadoop]

2023-11-08 Thread via GitHub


zhengchenyu commented on code in PR #6260:
URL: https://github.com/apache/hadoop/pull/6260#discussion_r1386534300


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedApplicationManager.java:
##
@@ -431,6 +431,7 @@ private void submitUnmanagedApp(ApplicationId appId) throws 
YarnException, IOExc
 context.setResource(resource);
 context.setAMContainerSpec(amContainer);
 if (applicationSubmissionContext != null) {
+  
context.setApplicationName(applicationSubmissionContext.getApplicationName());

Review Comment:
   Yes, you are right. I just want to search for the application by name, but it 
is indeed a quick way to find the home sub-cluster. I will close this PR for now 
until someone is interested in it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11612. [Federation] Fix the name of unmanaged app. [hadoop]

2023-11-08 Thread via GitHub


zhengchenyu commented on code in PR #6260:
URL: https://github.com/apache/hadoop/pull/6260#discussion_r1386528806


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java:
##
@@ -1104,7 +1104,7 @@ public Collection getActiveSubClusters()
   public ApplicationSubmissionContext 
getApplicationSubmissionContext(ApplicationId appId) {
 try {
   GetApplicationHomeSubClusterResponse response = 
stateStore.getApplicationHomeSubCluster(
-  GetApplicationHomeSubClusterRequest.newInstance(appId));
+  GetApplicationHomeSubClusterRequest.newInstance(appId, true));
   ApplicationHomeSubCluster appHomeSubCluster = 
response.getApplicationHomeSubCluster();

Review Comment:
   I found that the originalSubmissionContext is null. When 
getApplicationSubmissionContext is called, 
GetApplicationHomeSubClusterRequest::setContainsAppSubmissionContext is never 
invoked, so a null submission context is returned. A sketch of this behaviour 
follows below.
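   
   A minimal sketch of the behaviour described above, based only on the snippet 
in this diff (whether newInstance(appId) leaves containsAppSubmissionContext 
unset by default is an assumption):
   
   ```java
   // Sketch: request without the flag; the state store response then carries
   // no ApplicationSubmissionContext, which is what was observed above.
   GetApplicationHomeSubClusterRequest request =
       GetApplicationHomeSubClusterRequest.newInstance(appId);
   GetApplicationHomeSubClusterResponse response =
       stateStore.getApplicationHomeSubCluster(request);
   // response.getApplicationHomeSubCluster().getApplicationSubmissionContext() == null
   
   // Sketch: explicitly ask for the submission context to be included.
   GetApplicationHomeSubClusterRequest requestWithContext =
       GetApplicationHomeSubClusterRequest.newInstance(appId, true);
   ```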



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11612. [Federation] Fix the name of unmanaged app. [hadoop]

2023-11-08 Thread via GitHub


slfan1989 commented on code in PR #6260:
URL: https://github.com/apache/hadoop/pull/6260#discussion_r1386482442


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedApplicationManager.java:
##
@@ -431,6 +431,7 @@ private void submitUnmanagedApp(ApplicationId appId) throws 
YarnException, IOExc
 context.setResource(resource);
 context.setAMContainerSpec(amContainer);
 if (applicationSubmissionContext != null) {
+  
context.setApplicationName(applicationSubmissionContext.getApplicationName());

Review Comment:
   This is a good idea, but I'm still a little worried. The initial decision 
not to assign an application name was made so we could quickly tell whether the 
task originated from a different sub-cluster. If we were to assign a name, it 
might be more challenging to discern whether the application belongs to another 
sub-cluster. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11612. [Federation] Fix the name of unmanaged app. [hadoop]

2023-11-08 Thread via GitHub


slfan1989 commented on code in PR #6260:
URL: https://github.com/apache/hadoop/pull/6260#discussion_r1386482442


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedApplicationManager.java:
##
@@ -431,6 +431,7 @@ private void submitUnmanagedApp(ApplicationId appId) throws 
YarnException, IOExc
 context.setResource(resource);
 context.setAMContainerSpec(amContainer);
 if (applicationSubmissionContext != null) {
+  
context.setApplicationName(applicationSubmissionContext.getApplicationName());

Review Comment:
   This is a good idea, but I'm still a little worried. The initial decision 
not to assign an application name was made so we could quickly tell whether the 
task originated from a different sub-cluster. If we were to assign a name, it 
might be more challenging to discern whether the task belongs to another sub-cluster. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11612. [Federation] Fix the name of unmanaged app. [hadoop]

2023-11-08 Thread via GitHub


slfan1989 commented on code in PR #6260:
URL: https://github.com/apache/hadoop/pull/6260#discussion_r1386473150


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java:
##
@@ -1104,7 +1104,7 @@ public Collection getActiveSubClusters()
   public ApplicationSubmissionContext 
getApplicationSubmissionContext(ApplicationId appId) {
 try {
   GetApplicationHomeSubClusterResponse response = 
stateStore.getApplicationHomeSubCluster(
-  GetApplicationHomeSubClusterRequest.newInstance(appId));
+  GetApplicationHomeSubClusterRequest.newInstance(appId, true));
   ApplicationHomeSubCluster appHomeSubCluster = 
response.getApplicationHomeSubCluster();

Review Comment:
   Why do we modify this parameter? 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18872) ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations

2023-11-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17783979#comment-17783979
 ] 

ASF GitHub Bot commented on HADOOP-18872:
-

hadoop-yetus commented on PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#issuecomment-1801581621

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 27s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 44s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 15s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6019/4/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 1 new + 4 unchanged - 0 
fixed = 5 total (was 4)  |
   | +1 :green_heart: |  mvnsite  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 17s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 54s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  85m 46s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6019/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6019 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux fbb3fdc4b378 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 213cdaa703a3dc5c205cecc8cdf45672a5dc0cfc |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6019/4/testReport/ |
   | Max. process+thread count | 573 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6019/4/console |
   | 

Re: [PR] HADOOP-18872: [ABFS] [BugFix] Misreporting Retry Count for Sub-sequential and Parallel Operations [hadoop]

2023-11-08 Thread via GitHub


hadoop-yetus commented on PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#issuecomment-1801581621

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 27s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 44s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 15s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6019/4/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 1 new + 4 unchanged - 0 
fixed = 5 total (was 4)  |
   | +1 :green_heart: |  mvnsite  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 17s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 54s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  85m 46s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6019/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6019 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux fbb3fdc4b378 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 213cdaa703a3dc5c205cecc8cdf45672a5dc0cfc |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6019/4/testReport/ |
   | Max. process+thread count | 573 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6019/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To 

Re: [PR] YARN-11612. [Federation] Fix the name of unmanaged app. [hadoop]

2023-11-08 Thread via GitHub


hadoop-yetus commented on PR #6260:
URL: https://github.com/apache/hadoop/pull/6260#issuecomment-1801378410

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 27s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 34s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 42s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 28s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  85m 33s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6260/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6260 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 2109bafbb0cb 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 86e94ef286c29a10703b36a5e02176f216155c05 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6260/3/testReport/ |
   | Max. process+thread count | 697 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6260/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: 

[jira] [Commented] (HADOOP-18872) ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations

2023-11-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17783950#comment-17783950
 ] 

ASF GitHub Bot commented on HADOOP-18872:
-

anujmodi2021 commented on PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#issuecomment-1801376282

   Hi @steveloughran 
   Really sorry for making it difficult for you. There were some merge 
conflicts that I wanted to resolve. 
   I later learned that instead of using a git rebase, I should have used git 
merge.
   
   I will keep this in mind for all my future PRs. I kindly request you to 
please take the effort to review the PR this time.
   Apologies again.




> ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations
> -
>
> Key: HADOOP-18872
> URL: https://issues.apache.org/jira/browse/HADOOP-18872
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.3.6
>Reporter: Anmol Asrani
>Assignee: Anuj Modi
>Priority: Major
>  Labels: Bug, pull-request-available
>
> A bug was identified where the retry count in the client correlation ID was 
> wrongly reported for sub-sequential and parallel operations triggered by a 
> single file system call. This was due to reusing the same tracing context for 
> all such calls.
> We create a new tracing context as soon as the HDFS call comes in, and we keep 
> passing that same TC to all the client calls.
> For instance, when we get a createFile call, we first call metadata 
> operations. If those metadata operations succeeded after a few retries, the 
> tracing context will carry that retry count. When the actual create call is 
> made, the same retry count will be used to construct the headers 
> (clientCorrelationId). Although the create operation never failed, we will 
> still see the retry count from the previous request.
> The fix is to use a new tracing context object for each network call made. 
> All the sub-sequential and parallel operations will share the same primary 
> request ID to correlate them, yet each will keep its own count of retries.
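
A minimal sketch of the fix described above, assuming the TracingContext 
constructor shown elsewhere in this thread plus a copy constructor; the exact 
names are assumptions, not the committed code:

```java
// One tracing context per FileSystem-level operation; every network call it
// triggers gets its own copy, so a retry in one call no longer inflates the
// retry count reported in the headers of the next call, while the shared
// primary request id still correlates them.
TracingContext rootContext = new TracingContext(clientCorrelationId, fileSystemId,
    FSOperationType.CREATE, true /* needs primary request id */,
    TracingHeaderFormat.ALL_ID_FORMAT, listener);

TracingContext metadataCallContext = new TracingContext(rootContext); // own retry count
TracingContext createCallContext = new TracingContext(rootContext);   // own retry count
```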



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18872: [ABFS] [BugFix] Misreporting Retry Count for Sub-sequential and Parallel Operations [hadoop]

2023-11-08 Thread via GitHub


anujmodi2021 commented on PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#issuecomment-1801376282

   Hi @steveloughran 
   Really sorry for making it difficult for you. There were some merge 
conflicts that I wanted to resolve. 
   I later learned that instead of using a git rebase, I should have used git 
merge.
   
   I will keep this in mind for all my future PRs. I kindly request you to 
please take the effort to review the PR this time.
   Apologies again.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18872: [ABFS] [BugFix] Misreporting Retry Count for Sub-sequential and Parallel Operations [hadoop]

2023-11-08 Thread via GitHub


anujmodi2021 commented on code in PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#discussion_r1386236894


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientTestUtil.java:
##
@@ -0,0 +1,147 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.locks.ReentrantLock;
+
+import org.assertj.core.api.Assertions;
+import org.mockito.Mockito;
+import org.mockito.invocation.InvocationOnMock;
+import org.mockito.stubbing.Answer;
+
+import org.apache.hadoop.fs.azurebfs.utils.TracingContext;
+import org.apache.hadoop.util.functional.FunctionRaisingIOE;
+
+import static java.net.HttpURLConnection.HTTP_OK;
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.services.AuthType.OAuth;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.ArgumentMatchers.eq;
+import static org.mockito.ArgumentMatchers.nullable;
+
+/**
+ * Utility class to help defining mock behavior on AbfsClient and 
AbfsRestOperation
+ * objects which are protected inside services package.
+ */
+public final class AbfsClientTestUtil {
+
+  private AbfsClientTestUtil() {
+
+  }
+
+  public static void setMockAbfsRestOperationForListPathOperation(
+  final AbfsClient spiedClient,
+  FunctionRaisingIOE 
functionRaisingIOE)
+  throws Exception {
+ExponentialRetryPolicy retryPolicy = 
Mockito.mock(ExponentialRetryPolicy.class);
+AbfsHttpOperation httpOperation = Mockito.mock(AbfsHttpOperation.class);
+AbfsRestOperation abfsRestOperation = Mockito.spy(new AbfsRestOperation(
+AbfsRestOperationType.ListPaths,
+spiedClient,
+HTTP_METHOD_GET,
+null,
+new ArrayList<>()
+));
+
+Mockito.doReturn(abfsRestOperation).when(spiedClient).getAbfsRestOperation(
+eq(AbfsRestOperationType.ListPaths), any(), any(), any());
+
+addMockBehaviourToAbfsClient(spiedClient, retryPolicy);
+addMockBehaviourToRestOpAndHttpOp(abfsRestOperation, httpOperation);
+
+functionRaisingIOE.apply(httpOperation);
+  }
+
+  public static void addMockBehaviourToRestOpAndHttpOp(final AbfsRestOperation 
abfsRestOperation,
+  final AbfsHttpOperation httpOperation) throws IOException {
+HttpURLConnection httpURLConnection = 
Mockito.mock(HttpURLConnection.class);
+Mockito.doNothing().when(httpURLConnection)
+.setRequestProperty(nullable(String.class), nullable(String.class));
+Mockito.doReturn(httpURLConnection).when(httpOperation).getConnection();
+Mockito.doReturn("").when(abfsRestOperation).getClientLatency();
+
Mockito.doReturn(httpOperation).when(abfsRestOperation).createHttpOperation();
+  }
+
+  public static void addMockBehaviourToAbfsClient(final AbfsClient abfsClient,

Review Comment:
   Added javadocs



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18872) ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations

2023-11-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17783947#comment-17783947
 ] 

ASF GitHub Bot commented on HADOOP-18872:
-

anujmodi2021 commented on code in PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#discussion_r1386236597


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemListStatus.java:
##
@@ -62,34 +80,101 @@ public ITestAzureBlobFileSystemListStatus() throws 
Exception {
   public void testListPath() throws Exception {
 Configuration config = new Configuration(this.getRawConfiguration());
 config.set(AZURE_LIST_MAX_RESULTS, "5000");
-final AzureBlobFileSystem fs = (AzureBlobFileSystem) FileSystem
-.newInstance(getFileSystem().getUri(), config);
-final List> tasks = new ArrayList<>();
-
-ExecutorService es = Executors.newFixedThreadPool(10);
-for (int i = 0; i < TEST_FILES_NUMBER; i++) {
-  final Path fileName = new Path("/test" + i);
-  Callable callable = new Callable() {
-@Override
-public Void call() throws Exception {
-  touch(fileName);
-  return null;
-}
-  };
-
-  tasks.add(es.submit(callable));
-}
-
-for (Future task : tasks) {
-  task.get();
+try (final AzureBlobFileSystem fs = (AzureBlobFileSystem) FileSystem
+.newInstance(getFileSystem().getUri(), config)) {
+  final List<Future<Void>> tasks = new ArrayList<>();
+
+  ExecutorService es = Executors.newFixedThreadPool(10);
+  for (int i = 0; i < TEST_FILES_NUMBER; i++) {
+final Path fileName = new Path("/test" + i);
+Callable<Void> callable = new Callable<Void>() {
+  @Override
+  public Void call() throws Exception {
+touch(fileName);
+return null;
+  }
+};
+
+tasks.add(es.submit(callable));
+  }
+
+  for (Future<Void> task : tasks) {
+task.get();
+  }
+
+  es.shutdownNow();
+  fs.registerListener(
+  new 
TracingHeaderValidator(getConfiguration().getClientCorrelationId(),
+  fs.getFileSystemId(), FSOperationType.LISTSTATUS, true, 
0));
+  FileStatus[] files = fs.listStatus(new Path("/"));
+  assertEquals(TEST_FILES_NUMBER, files.length /* user directory */);
 }
+  }
 
-es.shutdownNow();
-fs.registerListener(
-new TracingHeaderValidator(getConfiguration().getClientCorrelationId(),
-fs.getFileSystemId(), FSOperationType.LISTSTATUS, true, 0));
-FileStatus[] files = fs.listStatus(new Path("/"));
-assertEquals(TEST_FILES_NUMBER, files.length /* user directory */);
+  /**
+   * Test to verify that each paginated call to ListBlobs uses a new tracing 
context.
+   * @throws Exception
+   */
+  @Test
+  public void testListPathTracingContext() throws Exception {
+final AzureBlobFileSystem fs = getFileSystem();
+final AzureBlobFileSystem spiedFs = Mockito.spy(fs);
+final AzureBlobFileSystemStore spiedStore = Mockito.spy(fs.getAbfsStore());
+final AbfsClient spiedClient = Mockito.spy(fs.getAbfsClient());
+final TracingContext spiedTracingContext = Mockito.spy(
+new TracingContext(
+fs.getClientCorrelationId(), fs.getFileSystemId(),
+FSOperationType.LISTSTATUS, true, 
TracingHeaderFormat.ALL_ID_FORMAT, null));
+
+Mockito.doReturn(spiedStore).when(spiedFs).getAbfsStore();
+spiedStore.setClient(spiedClient);
+spiedFs.setWorkingDirectory(new Path("/"));
+
+
AbfsClientTestUtil.setMockAbfsRestOperationForListPathOperation(spiedClient,
+(httpOperation) -> {
+
+  ListResultEntrySchema entry = new ListResultEntrySchema()
+  .withName("a")
+  .withIsDirectory(true);
+  List<ListResultEntrySchema> paths = new ArrayList<>();
+  paths.add(entry);
+  paths.clear();
+  entry = new ListResultEntrySchema()
+  .withName("abc.txt")
+  .withIsDirectory(false);
+  paths.add(entry);
+  ListResultSchema schema1 = new ListResultSchema().withPaths(paths);
+  ListResultSchema schema2 = new ListResultSchema().withPaths(paths);
+
+  when(httpOperation.getListResultSchema()).thenReturn(schema1)
+  .thenReturn(schema2);
+  when(httpOperation.getResponseHeader(
+  HttpHeaderConfigurations.X_MS_CONTINUATION))
+  .thenReturn(TEST_CONTINUATION_TOKEN)
+  .thenReturn(EMPTY_STRING);
+
+  Stubber stubber = Mockito.doThrow(
+  new SocketTimeoutException(CONNECTION_TIMEOUT_JDK_MESSAGE));
+  stubber.doNothing().when(httpOperation).processResponse(
+  nullable(byte[].class), nullable(int.class), 
nullable(int.class));
+
+  
when(httpOperation.getStatusCode()).thenReturn(-1).thenReturn(HTTP_OK);
+  return httpOperation;
+});
+
+

[jira] [Commented] (HADOOP-18872) ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations

2023-11-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17783948#comment-17783948
 ] 

ASF GitHub Bot commented on HADOOP-18872:
-

anujmodi2021 commented on code in PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#discussion_r1386236894


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientTestUtil.java:
##
@@ -0,0 +1,147 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.locks.ReentrantLock;
+
+import org.assertj.core.api.Assertions;
+import org.mockito.Mockito;
+import org.mockito.invocation.InvocationOnMock;
+import org.mockito.stubbing.Answer;
+
+import org.apache.hadoop.fs.azurebfs.utils.TracingContext;
+import org.apache.hadoop.util.functional.FunctionRaisingIOE;
+
+import static java.net.HttpURLConnection.HTTP_OK;
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.services.AuthType.OAuth;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.ArgumentMatchers.eq;
+import static org.mockito.ArgumentMatchers.nullable;
+
+/**
+ * Utility class to help define mock behavior on AbfsClient and AbfsRestOperation
+ * objects, which are protected inside the services package.
+ */
+public final class AbfsClientTestUtil {
+
+  private AbfsClientTestUtil() {
+
+  }
+
+  public static void setMockAbfsRestOperationForListPathOperation(
+  final AbfsClient spiedClient,
+  FunctionRaisingIOE<AbfsHttpOperation, AbfsHttpOperation> functionRaisingIOE)
+  throws Exception {
+ExponentialRetryPolicy retryPolicy = 
Mockito.mock(ExponentialRetryPolicy.class);
+AbfsHttpOperation httpOperation = Mockito.mock(AbfsHttpOperation.class);
+AbfsRestOperation abfsRestOperation = Mockito.spy(new AbfsRestOperation(
+AbfsRestOperationType.ListPaths,
+spiedClient,
+HTTP_METHOD_GET,
+null,
+new ArrayList<>()
+));
+
+Mockito.doReturn(abfsRestOperation).when(spiedClient).getAbfsRestOperation(
+eq(AbfsRestOperationType.ListPaths), any(), any(), any());
+
+addMockBehaviourToAbfsClient(spiedClient, retryPolicy);
+addMockBehaviourToRestOpAndHttpOp(abfsRestOperation, httpOperation);
+
+functionRaisingIOE.apply(httpOperation);
+  }
+
+  public static void addMockBehaviourToRestOpAndHttpOp(final AbfsRestOperation 
abfsRestOperation,
+  final AbfsHttpOperation httpOperation) throws IOException {
+HttpURLConnection httpURLConnection = 
Mockito.mock(HttpURLConnection.class);
+Mockito.doNothing().when(httpURLConnection)
+.setRequestProperty(nullable(String.class), nullable(String.class));
+Mockito.doReturn(httpURLConnection).when(httpOperation).getConnection();
+Mockito.doReturn("").when(abfsRestOperation).getClientLatency();
+
Mockito.doReturn(httpOperation).when(abfsRestOperation).createHttpOperation();
+  }
+
+  public static void addMockBehaviourToAbfsClient(final AbfsClient abfsClient,

Review Comment:
   Added javadocs





> ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations
> -
>
> Key: HADOOP-18872
> URL: https://issues.apache.org/jira/browse/HADOOP-18872
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.3.6
>Reporter: Anmol Asrani
>Assignee: Anuj Modi
>Priority: Major
>  Labels: Bug, pull-request-available
>
> A bug was identified where the retry count in the client correlation id was
> wrongly reported for sub-sequential and parallel operations triggered by a
> single file system call. This was due to reusing the same tracing context for
> all such calls.
> We create a new tracing context as soon as an HDFS call comes in. We keep on 
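
A self-contained sketch of the idea described above (the class below is an
illustrative stand-in, not the actual ABFS TracingContext API):

// One context per FileSystem-level call; every sub-operation (a paginated
// listing page, a parallel sub-request, a retried attempt) gets its own copy,
// so the retry count reported in the request header is per operation instead
// of being carried over between operations.
final class TracingContextSketch {
  private final String correlationId;
  private int retryCount;               // per-operation state

  TracingContextSketch(String correlationId) {
    this.correlationId = correlationId;
  }

  TracingContextSketch(TracingContextSketch other) {   // copy constructor
    this.correlationId = other.correlationId;
    this.retryCount = 0;                               // reset, not inherited
  }

  String header() {
    return correlationId + ":" + retryCount;
  }

  public static void main(String[] args) {
    TracingContextSketch fsCall = new TracingContextSketch("corr-id");
    for (int page = 0; page < 2; page++) {
      TracingContextSketch perOp = new TracingContextSketch(fsCall);
      // ...issue the request with perOp; a retry bumps only perOp.retryCount
      System.out.println(perOp.header());    // prints "corr-id:0" on each page
    }
  }
}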

Re: [PR] HADOOP-18872: [ABFS] [BugFix] Misreporting Retry Count for Sub-sequential and Parallel Operations [hadoop]

2023-11-08 Thread via GitHub


anujmodi2021 commented on code in PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#discussion_r1386236597


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemListStatus.java:
##
@@ -62,34 +80,101 @@ public ITestAzureBlobFileSystemListStatus() throws 
Exception {
   public void testListPath() throws Exception {
 Configuration config = new Configuration(this.getRawConfiguration());
 config.set(AZURE_LIST_MAX_RESULTS, "5000");
-final AzureBlobFileSystem fs = (AzureBlobFileSystem) FileSystem
-.newInstance(getFileSystem().getUri(), config);
-final List<Future<Void>> tasks = new ArrayList<>();
-
-ExecutorService es = Executors.newFixedThreadPool(10);
-for (int i = 0; i < TEST_FILES_NUMBER; i++) {
-  final Path fileName = new Path("/test" + i);
-  Callable<Void> callable = new Callable<Void>() {
-@Override
-public Void call() throws Exception {
-  touch(fileName);
-  return null;
-}
-  };
-
-  tasks.add(es.submit(callable));
-}
-
-for (Future<Void> task : tasks) {
-  task.get();
+try (final AzureBlobFileSystem fs = (AzureBlobFileSystem) FileSystem
+.newInstance(getFileSystem().getUri(), config)) {
+  final List<Future<Void>> tasks = new ArrayList<>();
+
+  ExecutorService es = Executors.newFixedThreadPool(10);
+  for (int i = 0; i < TEST_FILES_NUMBER; i++) {
+final Path fileName = new Path("/test" + i);
+Callable<Void> callable = new Callable<Void>() {
+  @Override
+  public Void call() throws Exception {
+touch(fileName);
+return null;
+  }
+};
+
+tasks.add(es.submit(callable));
+  }
+
+  for (Future<Void> task : tasks) {
+task.get();
+  }
+
+  es.shutdownNow();
+  fs.registerListener(
+  new 
TracingHeaderValidator(getConfiguration().getClientCorrelationId(),
+  fs.getFileSystemId(), FSOperationType.LISTSTATUS, true, 
0));
+  FileStatus[] files = fs.listStatus(new Path("/"));
+  assertEquals(TEST_FILES_NUMBER, files.length /* user directory */);
 }
+  }
 
-es.shutdownNow();
-fs.registerListener(
-new TracingHeaderValidator(getConfiguration().getClientCorrelationId(),
-fs.getFileSystemId(), FSOperationType.LISTSTATUS, true, 0));
-FileStatus[] files = fs.listStatus(new Path("/"));
-assertEquals(TEST_FILES_NUMBER, files.length /* user directory */);
+  /**
+   * Test to verify that each paginated call to ListBlobs uses a new tracing 
context.
+   * @throws Exception
+   */
+  @Test
+  public void testListPathTracingContext() throws Exception {
+final AzureBlobFileSystem fs = getFileSystem();
+final AzureBlobFileSystem spiedFs = Mockito.spy(fs);
+final AzureBlobFileSystemStore spiedStore = Mockito.spy(fs.getAbfsStore());
+final AbfsClient spiedClient = Mockito.spy(fs.getAbfsClient());
+final TracingContext spiedTracingContext = Mockito.spy(
+new TracingContext(
+fs.getClientCorrelationId(), fs.getFileSystemId(),
+FSOperationType.LISTSTATUS, true, 
TracingHeaderFormat.ALL_ID_FORMAT, null));
+
+Mockito.doReturn(spiedStore).when(spiedFs).getAbfsStore();
+spiedStore.setClient(spiedClient);
+spiedFs.setWorkingDirectory(new Path("/"));
+
+
AbfsClientTestUtil.setMockAbfsRestOperationForListPathOperation(spiedClient,
+(httpOperation) -> {
+
+  ListResultEntrySchema entry = new ListResultEntrySchema()
+  .withName("a")
+  .withIsDirectory(true);
+  List<ListResultEntrySchema> paths = new ArrayList<>();
+  paths.add(entry);
+  paths.clear();
+  entry = new ListResultEntrySchema()
+  .withName("abc.txt")
+  .withIsDirectory(false);
+  paths.add(entry);
+  ListResultSchema schema1 = new ListResultSchema().withPaths(paths);
+  ListResultSchema schema2 = new ListResultSchema().withPaths(paths);
+
+  when(httpOperation.getListResultSchema()).thenReturn(schema1)
+  .thenReturn(schema2);
+  when(httpOperation.getResponseHeader(
+  HttpHeaderConfigurations.X_MS_CONTINUATION))
+  .thenReturn(TEST_CONTINUATION_TOKEN)
+  .thenReturn(EMPTY_STRING);
+
+  Stubber stubber = Mockito.doThrow(
+  new SocketTimeoutException(CONNECTION_TIMEOUT_JDK_MESSAGE));
+  stubber.doNothing().when(httpOperation).processResponse(
+  nullable(byte[].class), nullable(int.class), 
nullable(int.class));
+
+  
when(httpOperation.getStatusCode()).thenReturn(-1).thenReturn(HTTP_OK);
+  return httpOperation;
+});
+
+List<FileStatus> fileStatuses = new ArrayList<>();
+spiedStore.listStatus(new Path("/"), "", fileStatuses, true, null, 
spiedTracingContext);
+
+// Assert that 2 paginated ListPath calls were made.

Review Comment:
   Updated the 

[jira] [Commented] (HADOOP-18872) ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations

2023-11-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17783944#comment-17783944
 ] 

ASF GitHub Bot commented on HADOOP-18872:
-

anujmodi2021 commented on code in PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#discussion_r1386228827


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemListStatus.java:
##
@@ -62,34 +80,101 @@ public ITestAzureBlobFileSystemListStatus() throws 
Exception {
   public void testListPath() throws Exception {
 Configuration config = new Configuration(this.getRawConfiguration());
 config.set(AZURE_LIST_MAX_RESULTS, "5000");
-final AzureBlobFileSystem fs = (AzureBlobFileSystem) FileSystem
-.newInstance(getFileSystem().getUri(), config);
-final List<Future<Void>> tasks = new ArrayList<>();
-
-ExecutorService es = Executors.newFixedThreadPool(10);
-for (int i = 0; i < TEST_FILES_NUMBER; i++) {
-  final Path fileName = new Path("/test" + i);
-  Callable<Void> callable = new Callable<Void>() {
-@Override
-public Void call() throws Exception {
-  touch(fileName);
-  return null;
-}
-  };
-
-  tasks.add(es.submit(callable));
-}
-
-for (Future<Void> task : tasks) {
-  task.get();
+try (final AzureBlobFileSystem fs = (AzureBlobFileSystem) FileSystem
+.newInstance(getFileSystem().getUri(), config)) {
+  final List<Future<Void>> tasks = new ArrayList<>();
+
+  ExecutorService es = Executors.newFixedThreadPool(10);
+  for (int i = 0; i < TEST_FILES_NUMBER; i++) {
+final Path fileName = new Path("/test" + i);
+Callable<Void> callable = new Callable<Void>() {
+  @Override
+  public Void call() throws Exception {
+touch(fileName);
+return null;
+  }
+};
+
+tasks.add(es.submit(callable));
+  }
+
+  for (Future<Void> task : tasks) {
+task.get();
+  }
+
+  es.shutdownNow();
+  fs.registerListener(
+  new 
TracingHeaderValidator(getConfiguration().getClientCorrelationId(),
+  fs.getFileSystemId(), FSOperationType.LISTSTATUS, true, 
0));
+  FileStatus[] files = fs.listStatus(new Path("/"));
+  assertEquals(TEST_FILES_NUMBER, files.length /* user directory */);
 }
+  }
 
-es.shutdownNow();
-fs.registerListener(
-new TracingHeaderValidator(getConfiguration().getClientCorrelationId(),
-fs.getFileSystemId(), FSOperationType.LISTSTATUS, true, 0));
-FileStatus[] files = fs.listStatus(new Path("/"));
-assertEquals(TEST_FILES_NUMBER, files.length /* user directory */);
+  /**
+   * Test to verify that each paginated call to ListBlobs uses a new tracing 
context.
+   * @throws Exception
+   */
+  @Test
+  public void testListPathTracingContext() throws Exception {
+final AzureBlobFileSystem fs = getFileSystem();
+final AzureBlobFileSystem spiedFs = Mockito.spy(fs);
+final AzureBlobFileSystemStore spiedStore = Mockito.spy(fs.getAbfsStore());
+final AbfsClient spiedClient = Mockito.spy(fs.getAbfsClient());
+final TracingContext spiedTracingContext = Mockito.spy(
+new TracingContext(
+fs.getClientCorrelationId(), fs.getFileSystemId(),
+FSOperationType.LISTSTATUS, true, 
TracingHeaderFormat.ALL_ID_FORMAT, null));
+
+Mockito.doReturn(spiedStore).when(spiedFs).getAbfsStore();
+spiedStore.setClient(spiedClient);
+spiedFs.setWorkingDirectory(new Path("/"));
+
+
AbfsClientTestUtil.setMockAbfsRestOperationForListPathOperation(spiedClient,
+(httpOperation) -> {
+
+  ListResultEntrySchema entry = new ListResultEntrySchema()
+  .withName("a")
+  .withIsDirectory(true);
+  List<ListResultEntrySchema> paths = new ArrayList<>();
+  paths.add(entry);
+  paths.clear();
+  entry = new ListResultEntrySchema()
+  .withName("abc.txt")
+  .withIsDirectory(false);
+  paths.add(entry);
+  ListResultSchema schema1 = new ListResultSchema().withPaths(paths);
+  ListResultSchema schema2 = new ListResultSchema().withPaths(paths);
+
+  when(httpOperation.getListResultSchema()).thenReturn(schema1)
+  .thenReturn(schema2);
+  when(httpOperation.getResponseHeader(
+  HttpHeaderConfigurations.X_MS_CONTINUATION))
+  .thenReturn(TEST_CONTINUATION_TOKEN)
+  .thenReturn(EMPTY_STRING);
+
+  Stubber stubber = Mockito.doThrow(
+  new SocketTimeoutException(CONNECTION_TIMEOUT_JDK_MESSAGE));
+  stubber.doNothing().when(httpOperation).processResponse(
+  nullable(byte[].class), nullable(int.class), 
nullable(int.class));
+
+  
when(httpOperation.getStatusCode()).thenReturn(-1).thenReturn(HTTP_OK);
+  return httpOperation;
+});
+
+

Re: [PR] HADOOP-18872: [ABFS] [BugFix] Misreporting Retry Count for Sub-sequential and Parallel Operations [hadoop]

2023-11-08 Thread via GitHub


anujmodi2021 commented on code in PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#discussion_r1386228827


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemListStatus.java:
##
@@ -62,34 +80,101 @@ public ITestAzureBlobFileSystemListStatus() throws 
Exception {
   public void testListPath() throws Exception {
 Configuration config = new Configuration(this.getRawConfiguration());
 config.set(AZURE_LIST_MAX_RESULTS, "5000");
-final AzureBlobFileSystem fs = (AzureBlobFileSystem) FileSystem
-.newInstance(getFileSystem().getUri(), config);
-final List<Future<Void>> tasks = new ArrayList<>();
-
-ExecutorService es = Executors.newFixedThreadPool(10);
-for (int i = 0; i < TEST_FILES_NUMBER; i++) {
-  final Path fileName = new Path("/test" + i);
-  Callable<Void> callable = new Callable<Void>() {
-@Override
-public Void call() throws Exception {
-  touch(fileName);
-  return null;
-}
-  };
-
-  tasks.add(es.submit(callable));
-}
-
-for (Future<Void> task : tasks) {
-  task.get();
+try (final AzureBlobFileSystem fs = (AzureBlobFileSystem) FileSystem
+.newInstance(getFileSystem().getUri(), config)) {
+  final List<Future<Void>> tasks = new ArrayList<>();
+
+  ExecutorService es = Executors.newFixedThreadPool(10);
+  for (int i = 0; i < TEST_FILES_NUMBER; i++) {
+final Path fileName = new Path("/test" + i);
+Callable<Void> callable = new Callable<Void>() {
+  @Override
+  public Void call() throws Exception {
+touch(fileName);
+return null;
+  }
+};
+
+tasks.add(es.submit(callable));
+  }
+
+  for (Future<Void> task : tasks) {
+task.get();
+  }
+
+  es.shutdownNow();
+  fs.registerListener(
+  new 
TracingHeaderValidator(getConfiguration().getClientCorrelationId(),
+  fs.getFileSystemId(), FSOperationType.LISTSTATUS, true, 
0));
+  FileStatus[] files = fs.listStatus(new Path("/"));
+  assertEquals(TEST_FILES_NUMBER, files.length /* user directory */);
 }
+  }
 
-es.shutdownNow();
-fs.registerListener(
-new TracingHeaderValidator(getConfiguration().getClientCorrelationId(),
-fs.getFileSystemId(), FSOperationType.LISTSTATUS, true, 0));
-FileStatus[] files = fs.listStatus(new Path("/"));
-assertEquals(TEST_FILES_NUMBER, files.length /* user directory */);
+  /**
+   * Test to verify that each paginated call to ListBlobs uses a new tracing 
context.
+   * @throws Exception
+   */
+  @Test
+  public void testListPathTracingContext() throws Exception {
+final AzureBlobFileSystem fs = getFileSystem();
+final AzureBlobFileSystem spiedFs = Mockito.spy(fs);
+final AzureBlobFileSystemStore spiedStore = Mockito.spy(fs.getAbfsStore());
+final AbfsClient spiedClient = Mockito.spy(fs.getAbfsClient());
+final TracingContext spiedTracingContext = Mockito.spy(
+new TracingContext(
+fs.getClientCorrelationId(), fs.getFileSystemId(),
+FSOperationType.LISTSTATUS, true, 
TracingHeaderFormat.ALL_ID_FORMAT, null));
+
+Mockito.doReturn(spiedStore).when(spiedFs).getAbfsStore();
+spiedStore.setClient(spiedClient);
+spiedFs.setWorkingDirectory(new Path("/"));
+
+
AbfsClientTestUtil.setMockAbfsRestOperationForListPathOperation(spiedClient,
+(httpOperation) -> {
+
+  ListResultEntrySchema entry = new ListResultEntrySchema()
+  .withName("a")
+  .withIsDirectory(true);
+  List<ListResultEntrySchema> paths = new ArrayList<>();
+  paths.add(entry);
+  paths.clear();
+  entry = new ListResultEntrySchema()
+  .withName("abc.txt")
+  .withIsDirectory(false);
+  paths.add(entry);
+  ListResultSchema schema1 = new ListResultSchema().withPaths(paths);
+  ListResultSchema schema2 = new ListResultSchema().withPaths(paths);
+
+  when(httpOperation.getListResultSchema()).thenReturn(schema1)
+  .thenReturn(schema2);
+  when(httpOperation.getResponseHeader(
+  HttpHeaderConfigurations.X_MS_CONTINUATION))
+  .thenReturn(TEST_CONTINUATION_TOKEN)
+  .thenReturn(EMPTY_STRING);
+
+  Stubber stubber = Mockito.doThrow(
+  new SocketTimeoutException(CONNECTION_TIMEOUT_JDK_MESSAGE));
+  stubber.doNothing().when(httpOperation).processResponse(
+  nullable(byte[].class), nullable(int.class), 
nullable(int.class));
+
+  
when(httpOperation.getStatusCode()).thenReturn(-1).thenReturn(HTTP_OK);
+  return httpOperation;
+});
+
+List<FileStatus> fileStatuses = new ArrayList<>();
+spiedStore.listStatus(new Path("/"), "", fileStatuses, true, null, 
spiedTracingContext);
+
+// Assert that 2 paginated ListPath calls were made.

Review Comment:
   2 calls were made 
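   
   For reference, one hedged Mockito sketch of such an assertion (the exact
   listPath parameter list is an assumption for illustration, not quoted from
   the PR):
   
   // Two paginated ListPath calls should have been issued against the spied
   // client, each one carrying a TracingContext.
   Mockito.verify(spiedClient, Mockito.times(2)).listPath(
       Mockito.anyString(), Mockito.anyBoolean(), Mockito.anyInt(),
       Mockito.nullable(String.class), Mockito.any(TracingContext.class));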

Re: [PR] YARN-11483. [Federation] Router AdminCLI Supports Clean Finish Apps. [hadoop]

2023-11-08 Thread via GitHub


hadoop-yetus commented on PR #6251:
URL: https://github.com/apache/hadoop/pull/6251#issuecomment-1801317025

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 48s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  36m  1s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   7m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   7m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   2m  5s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m 48s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   7m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   7m 10s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |  12m 12s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  cc  |   7m 23s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   7m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  cc  |   7m 11s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   7m 11s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 49s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   4m 45s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   4m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   4m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | -1 :x: |  spotbugs  |   1m 57s | 
[/new-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6251/5/artifact/out/new-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.html)
 |  hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  38m 45s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m  7s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   5m 34s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 48s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  unit  | 103m 11s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | -1 :x: |  unit  |  28m  8s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6251/5/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt)
 |  hadoop-yarn-client in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 40s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 375m  9s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
   |  |  Impossible cast from 
org.apache.hadoop.yarn.server.api.protocolrecords.DeleteFederationApplicationRequest
 to 
org.apache.hadoop.yarn.server.api.protocolrecords.impl.pb.DeleteFederationApplicationResponsePBImpl
 in