Re: [PR] HDFS-17129. mis-order of ibr and fbr on datanode [hadoop]

2023-11-13 Thread via GitHub


LiuGuH commented on PR #6244:
URL: https://github.com/apache/hadoop/pull/6244#issuecomment-1809692690

   When BPServiceActor.blockreport() executes, it sends the IBR first. So to 
prevent mis-ordering, we can guard FBR and IBR with a shared lock.
   That makes the block report order look like this:
   ibr, ibr, BPServiceActor.blockreport(ibr, fbr), ibr
   So with this we get the right ordering for block reports.
   
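   A minimal, self-contained sketch of that locking idea (class and method 
names here are illustrative, not the actual BPServiceActor code):
   
   ```java
   // Hypothetical sketch, not the committed patch: one lock guards both
   // report paths, so an FBR can never interleave with an in-flight IBR.
   class BlockReportSender {
     private final Object reportLock = new Object();
   
     // Incremental block report path.
     void sendIbr() {
       synchronized (reportLock) {
         transmit("ibr");
       }
     }
   
     // Full block report path: pending IBRs are flushed first, then the
     // FBR is sent under the same lock, so the NameNode observes them in
     // order.
     void sendFbr() {
       synchronized (reportLock) {
         transmit("ibr");
         transmit("fbr");
       }
     }
   
     private void transmit(String kind) {
       System.out.println("sending " + kind);
     }
   }
   ```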





Re: [PR] Hadoop-18759: [ABFS][Backoff-Optimization] Have a Static retry policy for connection timeout. [hadoop]

2023-11-13 Thread via GitHub


hadoop-yetus commented on PR #5881:
URL: https://github.com/apache/hadoop/pull/5881#issuecomment-1809691512

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   8m 16s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 15 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 44s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m  8s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  20m 21s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 13s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/20/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 2 new + 7 unchanged - 0 
fixed = 9 total (was 7)  |
   | +1 :green_heart: |  mvnsite  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m  7s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 51s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 24s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  91m 25s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/20/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5881 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux ce8382f19cdf 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5b04e773b2fe949e76c5c4c6e5dca56eec1b19f0 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/20/testReport/ |
   | Max. process+thread count | 564 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/20/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

Re: [PR] Hadoop-18759: [ABFS][Backoff-Optimization] Have a Static retry policy for connection timeout. [hadoop]

2023-11-13 Thread via GitHub


anujmodi2021 commented on PR #5881:
URL: https://github.com/apache/hadoop/pull/5881#issuecomment-1809652638

   
    AGGREGATED TEST RESULT 
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 3
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 41
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 3
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 41
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 9
   [INFO] Results:
   [INFO] 
   [ERROR] Tests run: 605, Failures: 0, Errors: 0, Skipped: 276
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 44
   
   AppendBlob-HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 3
   [INFO] Results:
   [INFO] 
   [ERROR] Failures: 
   [ERROR]   
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testUpdateDeepDirectoryStructureToRemote:259->AbstractContractDistCpTest.distCpUpdateDeepDirectoryStructure:334->AbstractContractDistCpTest.assertCounterInRange:294->Assert.assertTrue:42->Assert.fail:89
 Files Copied value 2 above maximum 1
   [INFO] 
   [ERROR] Tests run: 340, Failures: 1, Errors: 0, Skipped: 41
   
   Time taken: 24 mins 6 secs.
   





Re: [PR] HDFS-17129. mis-order of ibr and fbr on datanode [hadoop]

2023-11-13 Thread via GitHub


LiuGuH commented on PR #6244:
URL: https://github.com/apache/hadoop/pull/6244#issuecomment-1809646245

   @Hexiaoqiao, hi sir, do you have time to review it? The test results all 
finished without errors.
   
   
   





Re: [PR] HDFS-17129. mis-order of ibr and fbr on datanode [hadoop]

2023-11-13 Thread via GitHub


hadoop-yetus commented on PR #6244:
URL: https://github.com/apache/hadoop/pull/6244#issuecomment-1809636669

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 28s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m  6s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 56s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   2m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  28m 15s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   2m  0s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 193m 58s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 304m 29s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6244/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6244 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 8314d376cf74 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 272e27d2aed39c669c94853dcea09f2c0e655907 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6244/2/testReport/ |
   | Max. process+thread count | 3578 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6244/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



Re: [PR] HDFS-17223. Add journalnode maintenance node list [hadoop]

2023-11-13 Thread via GitHub


xinglin commented on code in PR #6183:
URL: https://github.com/apache/hadoop/pull/6183#discussion_r1392039679


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/QuorumJournalManager.java:
##
@@ -62,6 +66,7 @@
 import org.apache.hadoop.classification.VisibleForTesting;
 import org.apache.hadoop.thirdparty.com.google.common.base.Joiner;
 import org.apache.hadoop.util.Preconditions;
+

Review Comment:
   remove this.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17223. Add journalnode maintenance node list [hadoop]

2023-11-13 Thread via GitHub


xinglin commented on code in PR #6183:
URL: https://github.com/apache/hadoop/pull/6183#discussion_r1392037472


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java:
##
@@ -1982,4 +1983,28 @@ public static void addTransferRateMetric(final 
DataNodeMetrics metrics, final lo
   LOG.warn("Unexpected value for data transfer bytes={} duration={}", 
read, duration);
 }
   }
+
+  /**
+   * Retrieve InetSocketAddress set by ip port string array.
+   * @param nodesHostPort ip port string array.
+   * @return HostSet of InetSocketAddress.
+   */
+  public static HostSet convertHostSet(String[] nodesHostPort) {
+HostSet retSet = new HostSet();
+for (String hostPort : nodesHostPort) {
+  try {
+URI uri = new URI("dummy", hostPort, null, null, null);
+int port = uri.getPort() == -1 ? 0 : uri.getPort();

Review Comment:
   Is this appropriate here? It seems 0 is a valid port, while -1 indicates 
the port is not set.
   
   > A valid port value is between 0 and 65535. A port number of zero will let 
the system pick up an ephemeral port in a bind operation.
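   
   A small runnable sketch of the parsing in question (class name is made 
up), showing that java.net.URI reports an unset port as -1, while port 0 
means "pick an ephemeral port" at bind time:
   
   ```java
   import java.net.InetSocketAddress;
   import java.net.URI;
   import java.net.URISyntaxException;
   
   public class HostPortParseDemo {
     public static void main(String[] args) throws URISyntaxException {
       for (String hostPort : new String[] {"10.0.0.1:8485", "10.0.0.2"}) {
         // Same trick as the patch: a dummy scheme lets URI split host:port.
         URI uri = new URI("dummy", hostPort, null, null, null);
         int port = uri.getPort(); // -1 when no port was given
         // Mapping -1 to 0 (as the patch does) turns "port not set" into
         // "bind to an ephemeral port", which is the concern raised above.
         InetSocketAddress addr =
             new InetSocketAddress(uri.getHost(), port == -1 ? 0 : port);
         System.out.println(hostPort + " -> " + addr);
       }
     }
   }
   ```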






Re: [PR] HDFS-17223. Add journalnode maintenance node list [hadoop]

2023-11-13 Thread via GitHub


xinglin commented on code in PR #6183:
URL: https://github.com/apache/hadoop/pull/6183#discussion_r1392034064


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java:
##
@@ -1982,4 +1983,28 @@ public static void addTransferRateMetric(final 
DataNodeMetrics metrics, final lo
   LOG.warn("Unexpected value for data transfer bytes={} duration={}", 
read, duration);
 }
   }
+
+  /**
+   * Retrieve InetSocketAddress set by ip port string array.
+   * @param nodesHostPort ip port string array.
+   * @return HostSet of InetSocketAddress.
+   */
+  public static HostSet convertHostSet(String[] nodesHostPort) {

Review Comment:
   nit: convertHostSet -> getHostSet() or convertToHostSet()






Re: [PR] HDFS-17223. Add journalnode maintenance node list [hadoop]

2023-11-13 Thread via GitHub


xinglin commented on code in PR #6183:
URL: https://github.com/apache/hadoop/pull/6183#discussion_r1392032362


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java:
##
@@ -1982,4 +1983,28 @@ public static void addTransferRateMetric(final 
DataNodeMetrics metrics, final lo
   LOG.warn("Unexpected value for data transfer bytes={} duration={}", 
read, duration);
 }
   }
+
+  /**
+   * Retrieve InetSocketAddress set by ip port string array.

Review Comment:
   nit: -> "Construct a HostSet from an array of 'ip:port' strings."






[jira] [Commented] (HADOOP-17912) ABFS: Support for Encryption Context

2023-11-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785748#comment-17785748
 ] 

ASF GitHub Bot commented on HADOOP-17912:
-

hadoop-yetus commented on PR #6221:
URL: https://github.com/apache/hadoop/pull/6221#issuecomment-1809580054

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 40s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 13 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  32m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  33m 14s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6221/10/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 10 new + 7 unchanged - 0 
fixed = 17 total (was 7)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  2s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 128m 58s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6221/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6221 |
   | Optional Tests | dupname asflicense codespell detsecrets compile javac 
javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle |
   | uname | Linux 99ae3f453f70 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / bc0bfb53d2d27a3a4b82515993baa3b88107052d |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6221/10/testReport/ |
   | Max. process+thread count | 554 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6221/10/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

Re: [PR] HADOOP-17912. ABFS: Support for Encryption Context [hadoop]

2023-11-13 Thread via GitHub


hadoop-yetus commented on PR #6221:
URL: https://github.com/apache/hadoop/pull/6221#issuecomment-1809580054

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 40s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 13 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  32m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  33m 14s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6221/10/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 10 new + 7 unchanged - 0 
fixed = 17 total (was 7)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  2s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 128m 58s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6221/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6221 |
   | Optional Tests | dupname asflicense codespell detsecrets compile javac 
javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle |
   | uname | Linux 99ae3f453f70 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / bc0bfb53d2d27a3a4b82515993baa3b88107052d |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6221/10/testReport/ |
   | Max. process+thread count | 554 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6221/10/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

[jira] [Commented] (HADOOP-17912) ABFS: Support for Encryption Context

2023-11-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785740#comment-17785740
 ] 

ASF GitHub Bot commented on HADOOP-17912:
-

saxenapranav commented on PR #6221:
URL: https://github.com/apache/hadoop/pull/6221#issuecomment-1809538038

   > +1 pending the rebase to deal with changed tests.
   
   Thank you so much @steveloughran . I have back-merged trunk. Thanks.




> ABFS: Support for Encryption Context
> 
>
> Key: HADOOP-17912
> URL: https://issues.apache.org/jira/browse/HADOOP-17912
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.1
>Reporter: Sumangala Patki
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Support for customer-provided encryption keys at the file level, superseding 
> the global (account-level) key use in HADOOP-17536.
> The ABFS driver will support an "EncryptionContext" plugin for retrieving 
> encryption information; the implementation is to be provided by the client. 
> The keys/context retrieved will be sent via request headers to the server, 
> which will store the encryption context. Subsequent REST calls to the server 
> that access the file's data or user metadata will first fetch the encryption 
> context through a GetFileProperties call and retrieve the key from the 
> custom provider before sending the request.
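
A hedged sketch of the plugin shape described above; the interface and 
method names are illustrative, not the committed ABFS API:

```java
// Illustrative only: the real provider API in the PR may differ.
public interface EncryptionContextProvider {

  // On file create: return the encryption context that the driver sends
  // as request headers and the server stores alongside the file.
  byte[] createEncryptionContext(String path);

  // On later data/metadata access: given the context fetched back via
  // GetFileProperties, resolve the per-file key to attach to the request.
  byte[] getEncryptionKey(String path, byte[] encryptionContext);
}
```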






Re: [PR] HADOOP-17912. ABFS: Support for Encryption Context [hadoop]

2023-11-13 Thread via GitHub


saxenapranav commented on PR #6221:
URL: https://github.com/apache/hadoop/pull/6221#issuecomment-1809538038

   > +1 pending the rebase to deal with changed tests.
   
   Thank you so much @steveloughran . I have back-merged trunk. Thanks.





[jira] [Commented] (HADOOP-17912) ABFS: Support for Encryption Context

2023-11-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785739#comment-17785739
 ] 

ASF GitHub Bot commented on HADOOP-17912:
-

saxenapranav commented on PR #6221:
URL: https://github.com/apache/hadoop/pull/6221#issuecomment-1809537619

   
    AGGREGATED TEST RESULT 
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 5
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut 
test timed o...
   [ERROR]   
ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendWithInfiniteLeaseEnabled:186->twoWriters:154
 » TestTimedOut
   [INFO]
   [ERROR] Tests run: 550, Failures: 0, Errors: 2, Skipped: 24
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 41
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 5
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testSkipBounds:218->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=34).
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut 
test timed o...
   [INFO]
   [ERROR] Tests run: 558, Failures: 1, Errors: 1, Skipped: 24
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 41
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 11
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testSkipBounds:218->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=87).
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testValidateSeekBounds:269->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=84).
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut 
test timed o...
   [INFO]
   [ERROR] Tests run: 537, Failures: 2, Errors: 1, Skipped: 264
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 44
   
   AppendBlob-HNS-OAuth
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 5
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:336 » TestTimedOut 
test timed o...
   [ERROR]   
ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendWithInfiniteLeaseEnabled:186->twoWriters:154
 » TestTimedOut
   [INFO]
   [ERROR] Tests run: 542, Failures: 0, Errors: 2, Skipped: 24
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 41
   
   Time taken: 48 mins 55 secs.
   azureuser@Hadoop-VM-EAST2:~/hadoop/hadoop-tools/hadoop-azure$ git log
   commit bc0bfb53d2d27a3a4b82515993baa3b88107052d (HEAD -> 
saxenapranav/HADOOP-17912, origin/saxenapranav/HADOOP-17912)
   Merge: 33e6fe0774d 000a39ba2d2
   Author: Pranav Saxena <>
   Date:   Mon Nov 13 19:28:08 2023 -0800
   
   Merge branch 'trunk' into saxenapranav/HADOOP-17912




> ABFS: Support for Encryption Context
> 
>
> Key: HADOOP-17912
> URL: https://issues.apache.org/jira/browse/HADOOP-17912
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.1
>Reporter: Sumangala Patki
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Support for customer-provided encryption keys at the file level, superseding 
> the global (account-level) key use in HADOOP-17536.
> The ABFS driver will support an "EncryptionContext" plugin for retrieving 
> encryption information; the implementation is to be provided by the client. 
> The keys/context retrieved will be sent via request headers to the server, 
> which will store the encryption context. Subsequent REST calls to the server 
> that access the file's data or user metadata will first fetch the encryption 
> context through a GetFileProperties call and retrieve the key from the 
> custom provider before sending the request.






Re: [PR] HADOOP-17912. ABFS: Support for Encryption Context [hadoop]

2023-11-13 Thread via GitHub


saxenapranav commented on PR #6221:
URL: https://github.com/apache/hadoop/pull/6221#issuecomment-1809537619

   
    AGGREGATED TEST RESULT 
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 5
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut 
test timed o...
   [ERROR]   
ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendWithInfiniteLeaseEnabled:186->twoWriters:154
 » TestTimedOut
   [INFO]
   [ERROR] Tests run: 550, Failures: 0, Errors: 2, Skipped: 24
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 41
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 5
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testSkipBounds:218->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=34).
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut 
test timed o...
   [INFO]
   [ERROR] Tests run: 558, Failures: 1, Errors: 1, Skipped: 24
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 41
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 11
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testSkipBounds:218->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=87).
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testValidateSeekBounds:269->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=84).
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut 
test timed o...
   [INFO]
   [ERROR] Tests run: 537, Failures: 2, Errors: 1, Skipped: 264
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 44
   
   AppendBlob-HNS-OAuth
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 5
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:336 » TestTimedOut 
test timed o...
   [ERROR]   
ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendWithInfiniteLeaseEnabled:186->twoWriters:154
 » TestTimedOut
   [INFO]
   [ERROR] Tests run: 542, Failures: 0, Errors: 2, Skipped: 24
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 41
   
   Time taken: 48 mins 55 secs.
   azureuser@Hadoop-VM-EAST2:~/hadoop/hadoop-tools/hadoop-azure$ git log
   commit bc0bfb53d2d27a3a4b82515993baa3b88107052d (HEAD -> 
saxenapranav/HADOOP-17912, origin/saxenapranav/HADOOP-17912)
   Merge: 33e6fe0774d 000a39ba2d2
   Author: Pranav Saxena <>
   Date:   Mon Nov 13 19:28:08 2023 -0800
   
   Merge branch 'trunk' into saxenapranav/HADOOP-17912





[jira] [Commented] (HADOOP-18972) Bug in SaslPropertiesResolver allows mutation of internal state

2023-11-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785732#comment-17785732
 ] 

ASF GitHub Bot commented on HADOOP-18972:
-

hadoop-yetus commented on PR #6272:
URL: https://github.com/apache/hadoop/pull/6272#issuecomment-1809484585

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  16m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 36s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   2m 35s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  40m 12s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  17m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 51s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |  16m 51s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 10s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6272/1/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 6 new + 5 
unchanged - 1 fixed = 11 total (was 6)  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   2m 46s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  40m 21s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 14s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  asflicense  |   0m 59s | 
[/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6272/1/artifact/out/results-asflicense.txt)
 |  The patch generated 1 ASF License warnings.  |
   |  |   | 240m 48s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6272/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6272 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 137bd61f8ca7 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 09426b3cb61a79bfd8950c228797be0cadbb8aa6 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6272/1/testReport/ |
   | Max. process+thread count | 2279 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6272/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

Re: [PR] HADOOP-18972: Copy SASL properties map when returning from SaslPropertiesResolver [hadoop]

2023-11-13 Thread via GitHub


hadoop-yetus commented on PR #6272:
URL: https://github.com/apache/hadoop/pull/6272#issuecomment-1809484585

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  16m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 36s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   2m 35s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  40m 12s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  17m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 51s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |  16m 51s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 10s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6272/1/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 6 new + 5 
unchanged - 1 fixed = 11 total (was 6)  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   2m 46s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  40m 21s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 14s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  asflicense  |   0m 59s | 
[/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6272/1/artifact/out/results-asflicense.txt)
 |  The patch generated 1 ASF License warnings.  |
   |  |   | 240m 48s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6272/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6272 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 137bd61f8ca7 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 09426b3cb61a79bfd8950c228797be0cadbb8aa6 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6272/1/testReport/ |
   | Max. process+thread count | 2279 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6272/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

[jira] [Commented] (HADOOP-18971) ABFS: Enable Footer Read Optimizations with Appropriate Footer Read Buffer Size

2023-11-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785729#comment-17785729
 ] 

ASF GitHub Bot commented on HADOOP-18971:
-

saxenapranav commented on code in PR #6270:
URL: https://github.com/apache/hadoop/pull/6270#discussion_r1391934729


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -190,7 +193,8 @@ private void seekReadAndTest(final FileSystem fs, final 
Path testFilePath,
 try (FSDataInputStream iStream = fs.open(testFilePath)) {
   AbfsInputStream abfsInputStream = (AbfsInputStream) iStream
   .getWrappedStream();
-  long bufferSize = abfsInputStream.getBufferSize();
+  long footerReadBufferSize = abfsInputStream.getFooterReadBufferSize();

Review Comment:
   The default value of this config is 256 KB, but a developer can set any 
other value. Right now the test is tied to 256 KB. What I am proposing is 
that the test set the config itself rather than depend on the dev-given 
config. Plus, I am proposing we run this test for different values of 
footerBufferSize.





> ABFS: Enable Footer Read Optimizations with Appropriate Footer Read Buffer 
> Size
> ---
>
> Key: HADOOP-18971
> URL: https://issues.apache.org/jira/browse/HADOOP-18971
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.6
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> Footer Read Optimization was introduced to Hadoop Azure in 
> https://issues.apache.org/jira/browse/HADOOP-17347
> and was kept disabled by default.
> This PR enables footer reads by default, based on the analysis below.
> In our scale workload analysis, it was found that workloads working with 
> Parquet (or, for that matter, ORC etc.) issue a lot of footer reads. Footer 
> reads here refer to the read operations a workload performs to get the 
> metadata of a Parquet file, which is required to understand where the actual 
> data resides in the file.
> This whole process takes place in 3 steps:
>  # The workload reads the last 8 bytes of the Parquet file to get the offset 
> and size of the metadata, which sits just above these 8 bytes.
>  # Using that offset, the workload reads the metadata to get the exact 
> offset and length of the data it wants to read.
>  # The workload performs the final read operation to get the data it wants.
> The first two steps are metadata reads that can be combined into a single 
> footer read. When the workload reads within the last few bytes of the file 
> (call this the footer size), the driver will intelligently read some extra 
> bytes above the footer size to serve the next read that is going to come.
> Q. What is the footer size of a file?
> A: 16 KB. Any read request for data within the last 16 KB of the file 
> qualifies for a whole footer read. This value is enough to cater to all 
> file types, including Parquet, ORC, etc.
> Q. What buffer size should be used when reading the footer?
> A. Call this the footer read buffer size. Prior to this PR, the footer read 
> buffer size was the same as the read buffer size (default 4 MB). It was 
> found that for most workloads the required footer size was only 256 KB, 
> i.e. for almost all Parquet files the metadata was within the last 256 KB. 
> With this in mind, it does not make sense to read the whole 4 MB buffer 
> length as part of a footer read. Moreover, reading more data than required 
> incurs additional costs in server and network latencies. Based on this and 
> extensive experimentation, a footer read buffer size of 512 KB was observed 
> to be ideal for almost all workloads running on Parquet, ORC, etc.
> The following configuration was introduced to configure the footer read 
> buffer size:
> {*}fs.azure.footer.read.request.size{*}: default 512 KB.
> *Quantitative Stats:* For a workload running on Parquet files, the number of 
> read requests was reduced by 2.3M, down from 20M, i.e. around a 10% 
> reduction in overall TPS.
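
An illustrative sketch of that decision logic (not the ABFS driver code; 
names are made up): any read landing in the last 16 KB is served by one 
buffered read of the last footer-read-buffer-size bytes, so the 8-byte tail 
read and the metadata read hit the same buffer.

```java
class FooterReadSketch {
  static final long FOOTER_SIZE = 16 * 1024;   // last 16 KB qualifies
  long footerReadBufferSize = 512 * 1024;      // fs.azure.footer.read.request.size

  // True when a read at `position` falls within the footer region.
  boolean qualifiesForFooterRead(long position, long fileLength) {
    return fileLength > 0 && fileLength - position <= FOOTER_SIZE;
  }

  // Offset and length of the single remote read issued for the footer.
  long[] footerReadRange(long fileLength) {
    long length = Math.min(footerReadBufferSize, fileLength);
    return new long[] {fileLength - length, length};
  }
}
```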






Re: [PR] HADOOP-18971: [ABFS] Enable Footer Read Optimizations with Appropriate Footer Read Buffer Size [hadoop]

2023-11-13 Thread via GitHub


saxenapranav commented on code in PR #6270:
URL: https://github.com/apache/hadoop/pull/6270#discussion_r1391934729


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -190,7 +193,8 @@ private void seekReadAndTest(final FileSystem fs, final 
Path testFilePath,
 try (FSDataInputStream iStream = fs.open(testFilePath)) {
   AbfsInputStream abfsInputStream = (AbfsInputStream) iStream
   .getWrappedStream();
-  long bufferSize = abfsInputStream.getBufferSize();
+  long footerReadBufferSize = abfsInputStream.getFooterReadBufferSize();

Review Comment:
   The default value of this config is 256 KB, but a developer can set any 
other value. Right now the test is tied to 256 KB. What I am proposing is 
that the test set the config itself rather than depend on the dev-given 
config. Plus, I am proposing we run this test for different values of 
footerBufferSize.
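   
   A hedged sketch of that proposal, as a fragment inside the existing test 
class (getRawConfiguration() and seekReadAndTest(...) stand in for the test 
harness helpers and are assumptions here):
   
   ```java
   // Pin the config inside the test and sweep several footer buffer sizes
   // instead of inheriting whatever the developer configured.
   int[] footerReadBufferSizes = {256 * 1024, 512 * 1024, 1024 * 1024};
   for (int size : footerReadBufferSizes) {
     Configuration conf = new Configuration(getRawConfiguration());
     conf.setInt("fs.azure.footer.read.request.size", size);
     try (FileSystem fs = FileSystem.newInstance(conf)) {
       // Run the existing seek/read assertions against `size` rather
       // than the dev-supplied default, e.g. seekReadAndTest(fs, ...).
     }
   }
   ```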






Re: [PR] YARN-11498 backport jettison exclusions [hadoop]

2023-11-13 Thread via GitHub


hadoop-yetus commented on PR #6063:
URL: https://github.com/apache/hadoop/pull/6063#issuecomment-1809470807

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  10m 46s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 41s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  40m 13s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |  19m 18s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   8m 15s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   6m 25s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  | 127m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  17m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  18m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   8m 13s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   6m 15s |  |  the patch passed  |
   | -1 :x: |  shadedclient  |  54m 56s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 32s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  18m  4s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   4m 55s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  22m 55s |  |  hadoop-yarn-server-nodemanager 
in the patch passed.  |
   | +1 :green_heart: |  unit  |   4m 17s |  |  
hadoop-yarn-server-applicationhistoryservice in the patch passed.  |
   | +1 :green_heart: |  unit  |  95m 18s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 14s |  |  
hadoop-yarn-applications-catalog-webapp in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 56s |  |  hadoop-resourceestimator in the 
patch passed.  |
   | +1 :green_heart: |  unit  |   0m 40s |  |  hadoop-client-minicluster in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 384m 34s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6063/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6063 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux 2d94726c2daf 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 9fd4f7885b7e60f78a5dd9fdce6859cc57217d0b |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~18.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6063/8/testReport/ |
   | Max. process+thread count | 1239 (vs. ulimit of 5500) |
   | modules | C: hadoop-project hadoop-common-project/hadoop-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp
 hadoop-tools/hadoop-resourceestimator 
hadoop-client-modules/hadoop-client-minicluster U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6063/8/console |
   | versions | git=2.17.1 maven=3.6.0 |
  

[jira] [Commented] (HADOOP-18359) Update commons-cli from 1.2 to 1.5.

2023-11-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785703#comment-17785703
 ] 

ASF GitHub Bot commented on HADOOP-18359:
-

hadoop-yetus commented on PR #6248:
URL: https://github.com/apache/hadoop/pull/6248#issuecomment-1809375718

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   7m 14s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  1s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m  0s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  36m  6s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |  19m  4s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   3m  6s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |  27m 40s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   7m 28s |  |  branch-3.3 passed  |
   | +0 :ok: |  spotbugs  |   0m 23s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  69m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 36s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  41m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 23s |  |  the patch passed  |
   | -1 :x: |  javac  |  18m 23s | 
[/results-compile-javac-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6248/2/artifact/out/results-compile-javac-root.txt)
 |  root generated 105 new + 1806 unchanged - 1 fixed = 1911 total (was 1807)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   2m 51s |  |  root: The patch generated 
0 new + 367 unchanged - 26 fixed = 367 total (was 393)  |
   | +1 :green_heart: |  mvnsite  |  22m 58s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   7m 10s |  |  the patch passed  |
   | +0 :ok: |  spotbugs  |   0m 23s |  |  hadoop-project has no data from 
spotbugs  |
   | +1 :green_heart: |  shadedclient  |  69m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 703m 32s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6248/2/artifact/out/patch-unit-root.txt)
 |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 1087m 34s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.yarn.sls.nodemanager.TestNMSimulator |
   |   | hadoop.yarn.sls.appmaster.TestAMSimulator |
   |   | hadoop.yarn.client.api.impl.TestAMRMClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6248/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6248 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint 
shellcheck shelldocs |
   | uname | Linux 1a903ef6956e 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 9ad9e793dd2fc347101c34f20aed3c179131b505 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~18.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6248/2/testReport/ |
   | Max. process+thread count | 3205 (vs. ulimit of 5500) |
   | modules | C: hadoop-project hadoop-common-project/hadoop-common 
hadoop-c


[jira] [Updated] (HADOOP-18972) Bug in SaslPropertiesResolver allows mutation of internal state

2023-11-13 Thread Charles Connell (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Connell updated HADOOP-18972:
-
Description: 
When {{SaslDataTransferServer}} or {{SaslDataTransferClient}} want to get a SASL 
properties map to do a handshake, they call 
{{SaslPropertiesResolver#getServerProperties()}} or 
{{SaslPropertiesResolver#getClientProperties()}}, and they get back a 
{{Map}}. Every call gets the same {{Map}} object back, and then 
the callers sometimes call 
[put()|https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java#L385]
 on it. This means that future users of {{SaslPropertiesResolver}} get back the 
wrong information.

I propose that {{SaslPropertiesResolver}} should pass a copy of its internal 
map, so that users can safely modify it.

I discovered this problem in my company's testing environment as we began to 
enable {{dfs.data.transfer.protection}} on our DataNodes, while our NameNodes 
were using {{IngressPortBasedResolver}} to give out block tokens with different 
QOPs depending on the port used. Our HDFS client applications became unable 
to read or write to HDFS because they could not find a QOP in common with the 
DataNodes during SASL handshake. With multiple threads executing SASL 
handshakes at the same time, the properties map used in 
{{SaslDataTransferServer}} in a DataNode could be clobbered during usage, since 
the same map was used by all threads. Also, future clients that do not have a 
QOP embedded in their block tokens would connect to a server with the wrong 
SASL properties map. I think that one or both of these issues explains the 
problem that I saw. I eliminated this unsafety and saw the problem go away.

  was:
When {{SaslDataTransferServer}} or {{SaslDataTransferClient}} want to get a SASL 
properties map to do a handshake, they call 
{{SaslPropertiesResolver#getServerProperties()}} or 
{{SaslPropertiesResolver#getClientProperties()}}, and they get back a 
{{Map}}. Every call gets the same {{Map}} object back, and then 
the callers sometimes call 
[put()|https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java#L385]
 on it. This means that future users of {{SaslPropertiesResolver}} get back the 
wrong information.

I propose that {{SaslPropertiesResolver}} should pass a copy of its internal 
map, so that users can safely modify it.

I discovered this problem in my company's testing environment as we began to 
enable {{dfs.data.transfer.protection}} on our DataNodes, while our NameNodes 
were using {{IngressPortBasedResolver}} to give out block tokens with different 
QOPs depending on the port used. Our HDFS client applications became unable 
to read or write to HDFS because they could not find a QOP in common with the 
DataNodes during SASL handshake. With multiple threads executing SASL 
handshakes at the same time, the properties map used in 
{{SaslDataTransferServer}} in a DataNode could be clobbered during usage, since 
the same map was used by all threads. Also, future clients that do not have a 
QOP embedded in their block tokens would connect to a server with the wrong 
SASL properties map. I think that one or both of these issues explains the 
problem that I saw. I eliminated this unsafety and saw the problem go away.


> Bug in SaslPropertiesResolver allows mutation of internal state
> ---
>
> Key: HADOOP-18972
> URL: https://issues.apache.org/jira/browse/HADOOP-18972
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Charles Connell
>Priority: Minor
>  Labels: pull-request-available
>
> When {{SaslDataTransferServer}} or {{SaslDataTransferClient}} want to get a 
> SASL properties map to do a handshake, they call 
> {{SaslPropertiesResolver#getServerProperties()}} or 
> {{SaslPropertiesResolver#getClientProperties()}}, and they get back a 
> {{Map}}. Every call gets the same {{Map}} object back, and 
> then the callers sometimes call 
> [put()|https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java#L385]
>  on it. This means that future users of {{SaslPropertiesResolver}} get back 
> the wrong information.
> I propose that {{SaslPropertiesResolver}} should pass a copy of its internal 
> map, so that users can safely modify it.
> I discovered this problem in my company's testing environment as we began to 
> enable {{dfs.data.transfer.protection}} on our DataNodes, while our NameNodes 
> were using {{IngressPortBasedResolver}} to giv

[jira] [Updated] (HADOOP-18967) Allow secure mode to be enabled with no downtime

2023-11-13 Thread Charles Connell (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Connell updated HADOOP-18967:
-
Description: 
My employer (HubSpot) recently completed transitioning all of the Hadoop 
clusters underlying our HBase databases into secure mode. It was important to 
us that we be able to make this change without impacting the functionality of 
our SaaS product. To accomplish this, we added some new settings to our fork of 
Hadoop, and fixed a latent bug (HADOOP-18972). This ticket is my intention to 
contribute these changes back to the mainline code, so others can benefit. A 
patch will be incoming.

The basic theme of the new functionality is the ability to accept incoming 
secure connections without requiring them or making them outgoing. Secure mode 
enablement will then be done in two stages.
 * First, all nodes are given configuration to accept secure connections, and 
are gracefully rolling-restarted to adopt this new functionality. I'll be 
adding the new settings to make this stage possible.
 * Second, all nodes are told to require incoming connections be secure, and to 
make secure outgoing connections, and the settings added in the first stage are 
removed. Nodes are again rolling-restarted to adopt this functionality. The 
settings in this final state will look the same as in any secure Hadoop cluster 
today.

I'll include documentation changes explaining how to do this.
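
For context, a sketch of the stage-two (final) state described above; these 
are the standard secure-mode keys found on any kerberized cluster today, with 
illustrative values, and the new transitional stage-one settings this ticket 
proposes are deliberately not named here:

```java
import org.apache.hadoop.conf.Configuration;

public class FinalStateSettings {
  public static void main(String[] args) {
    // Stage two: require secure incoming connections and make secure
    // outgoing connections, identical to any secure Hadoop cluster.
    Configuration conf = new Configuration();
    conf.set("hadoop.security.authentication", "kerberos");
    conf.set("hadoop.rpc.protection", "privacy");
    conf.set("dfs.data.transfer.protection", "privacy");
  }
}
```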

  was:
My employer (HubSpot) recently completed transitioning all of the Hadoop 
clusters underlying our HBase databases into secure mode. It was important to 
us that we be able to make this change without impacting the functionality of 
our SaaS product. To accomplish this, we added some new settings to our fork of 
Hadoop, and fixed a latent bug. This ticket is my intention to contribute these 
changes back to the mainline code, so others can benefit. A patch will be 
incoming.

The basic theme of the new functionality is the ability to accept incoming 
secure connections without requiring them or making them outgoing. Secure mode 
enablement will then be done in two stages.
 * First, all nodes are given configuration to accept secure connections, and 
are gracefully rolling-restarted to adopt this new functionality. I'll be 
adding the new settings to make this stage possible.
 * Second, all nodes are told to require incoming connections be secure, and to 
make secure outgoing connections, and the settings added in the first stage are 
removed. Nodes are again rolling-restarted to adopt this functionality. The 
settings in this final state will look the same as in any secure Hadoop cluster 
today.

I'll include documentation changes explaining how to do this.


> Allow secure mode to be enabled with no downtime
> 
>
> Key: HADOOP-18967
> URL: https://issues.apache.org/jira/browse/HADOOP-18967
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Charles Connell
>Priority: Minor
>
> My employer (HubSpot) recently completed transitioning all of the Hadoop 
> clusters underlying our HBase databases into secure mode. It was important to 
> us that we be able to make this change without impacting the functionality of 
> our SaaS product. To accomplish this, we added some new settings to our fork 
> of Hadoop, and fixed a latent bug (HADOOP-18972). This ticket is my intention 
> to contribute these changes back to the mainline code, so others can benefit. 
> A patch will be incoming.
> The basic theme of the new functionality is the ability to accept incoming 
> secure connections without requiring them or making them outgoing. Secure 
> mode enablement will then be done in two stages.
>  * First, all nodes are given configuration to accept secure connections, and 
> are gracefully rolling-restarted to adopt this new functionality. I'll be 
> adding the new settings to make this stage possible.
>  * Second, all nodes are told to require incoming connections be secure, and 
> to make secure outgoing connections, and the settings added in the first 
> stage are removed. Nodes are again rolling-restarted to adopt this 
> functionality. The settings in this final state will look the same as in any 
> secure Hadoop cluster today.
> I'll include documentation changes explaining how to do this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18972) Bug in SaslPropertiesResolver allows mutation of internal state

2023-11-13 Thread Charles Connell (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Connell updated HADOOP-18972:
-
Description: 
When {{SaslDataTransferServer}} or {{SaslDataTransferClient}} want to get a SASL 
properties map to do a handshake, they call 
{{SaslPropertiesResolver#getServerProperties()}} or 
{{SaslPropertiesResolver#getClientProperties()}}, and they get back a 
{{Map}}. Every call gets the same {{Map}} object back, and then 
the callers sometimes call 
[put()|https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java#L385]
 on it. This means that future users of {{SaslPropertiesResolver}} get back the 
wrong information.

I propose that {{SaslPropertiesResolver}} should pass a copy of its internal 
map, so that users can safely modify it.

I discovered this problem in my company's testing environment as we began to 
enable {{dfs.data.transfer.protection}} on our DataNodes, while our NameNodes 
were using {{IngressPortBasedResolver}} to give out block tokens with different 
QOPs depending on the port used. Our HDFS client applications became unable 
to read or write to HDFS because they could not find a QOP in common with the 
DataNodes during SASL handshake. With multiple threads executing SASL 
handshakes at the same time, the properties map used in 
{{SaslDataTransferServer}} in a DataNode could be clobbered during usage, since 
the same map was used by all threads. Also, future clients that do not have a 
QOP embedded in their block tokens would connect to a server with the wrong 
SASL properties map. I think that one or both of these issues explains the 
problem that I saw. I eliminated this unsafety and saw the problem go away.

  was:
When {{SaslDataTransferServer}} or {{SaslDataTransferClient}} want to get a SASL 
properties map to do a handshake, they call 
{{SaslPropertiesResolver#getServerProperties()}} or 
{{SaslPropertiesResolver#getClientProperties()}}, and they get back a 
{{Map}}. Every call gets the same {{Map}} object back, and then 
the callers sometimes call 
[put()|https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java#L385]
 on it. This means that future users of {{SaslPropertiesResolver}} get back the 
wrong information.

I propose that {{SaslPropertiesResolver}} should pass a copy of its internal 
map, so that users can safely modify it.

I discovered this problem in my company's testing environment as we began to 
enable {{dfs.data.transfer.protection}} on our DataNodes, while our NameNodes 
were using {{IngressPortBasedResolver}} to give out block tokens with different 
QOPs depending on the port used. The our HDFS client applications became unable 
to read or write to HDFS because they could not find a QOP in common with the 
DataNodes during SASL handshake. With multiple threads executing SASL 
handshakes at the same time, the properties map used in 
{{SaslDataTransferServer}} in a DataNode could be clobbered during usage, since 
the same map was used by all threads. I think this is most likely the direct 
cause of the problem I saw. I eliminated this thread unsafety and saw the 
problem go away.


> Bug in SaslPropertiesResolver allows mutation of internal state
> ---
>
> Key: HADOOP-18972
> URL: https://issues.apache.org/jira/browse/HADOOP-18972
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Charles Connell
>Priority: Minor
>  Labels: pull-request-available
>
> When {{SaslDataTransferServer}} or {{SaslDataTransferClient}} want to get a 
> SASL properties map to do a handshake, they call 
> {{SaslPropertiesResolver#getServerProperties()}} or 
> {{SaslPropertiesResolver#getClientProperties()}}, and they get back a 
> {{Map}}. Every call gets the same {{Map}} object back, and 
> then the callers sometimes call 
> [put()|https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java#L385]
>  on it. This means that future users of {{SaslPropertiesResolver}} get back 
> the wrong information.
> I propose that {{SaslPropertiesResolver}} should pass a copy of its internal 
> map, so that users can safely modify it.
> I discovered this problem in my company's testing environment as we began to 
> enable {{dfs.data.transfer.protection}} on our DataNodes, while our NameNodes 
> were using {{IngressPortBasedResolver}} to give out block tokens with 
> different QOPs depending on the port used. Our HDFS client applications 
> became unable to read or write

[jira] [Updated] (HADOOP-18972) Bug in SaslPropertiesResolver allows mutation of internal state

2023-11-13 Thread Charles Connell (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Connell updated HADOOP-18972:
-
Status: Patch Available  (was: Open)

> Bug in SaslPropertiesResolver allows mutation of internal state
> ---
>
> Key: HADOOP-18972
> URL: https://issues.apache.org/jira/browse/HADOOP-18972
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Charles Connell
>Priority: Minor
>  Labels: pull-request-available
>
> When {{SaslDataTransferServer}} or {{SaslDataTransferClient}} want to get a 
> SASL properties map to do a handshake, they call 
> {{SaslPropertiesResolver#getServerProperties()}} or 
> {{SaslPropertiesResolver#getClientProperties()}}, and they get back a 
> {{Map}}. Every call gets the same {{Map}} object back, and 
> then the callers sometimes call 
> [put()|https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java#L385]
>  on it. This means that future users of {{SaslPropertiesResolver}} get back 
> the wrong information.
> I propose that {{SaslPropertiesResolver}} should pass a copy of its internal 
> map, so that users can safely modify it.
> I discovered this problem in my company's testing environment as we began to 
> enable {{dfs.data.transfer.protection}} on our DataNodes, while our NameNodes 
> were using {{IngressPortBasedResolver}} to give out block tokens with 
> different QOPs depending on the port used. Our HDFS client applications 
> became unable to read or write to HDFS because they could not find a QOP in 
> common with the DataNodes during SASL handshake. With multiple threads 
> executing SASL handshakes at the same time, the properties map used in 
> {{SaslDataTransferServer}} in a DataNode could be clobbered during usage, 
> since the same map was used by all threads. I think this is most likely the 
> direct cause of the problem I saw. I eliminated this thread unsafety and saw 
> the problem go away.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18972) Bug in SaslPropertiesResolver allows mutation of internal state

2023-11-13 Thread Charles Connell (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Connell updated HADOOP-18972:
-
Description: 
When {{SaslDataTransferServer}} or {{SaslDataTransferClient}} want to get a SASL 
properties map to do a handshake, they call 
{{SaslPropertiesResolver#getServerProperties()}} or 
{{SaslPropertiesResolver#getClientProperties()}}, and they get back a 
{{Map}}. Every call gets the same {{Map}} object back, and then 
the callers sometimes call 
[put()|https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java#L385]
 on it. This means that future users of {{SaslPropertiesResolver}} get back the 
wrong information.

I propose that {{SaslPropertiesResolver}} should pass a copy of its internal 
map, so that users can safely modify it.

I discovered this problem in my company's testing environment as we began to 
enable {{dfs.data.transfer.protection}} on our DataNodes, while our NameNodes 
were using {{IngressPortBasedResolver}} to give out block tokens with different 
QOPs depending on the port used. Our HDFS client applications became unable 
to read or write to HDFS because they could not find a QOP in common with the 
DataNodes during SASL handshake. With multiple threads executing SASL 
handshakes at the same time, the properties map used in 
{{SaslDataTransferServer}} in a DataNode could be clobbered during usage, since 
the same map was used by all threads. I think this is most likely the direct 
cause of the problem I saw. I eliminated this thread unsafety and saw the 
problem go away.

  was:
{color:#1d1c1d}When {{SaslDataTransferServer}} or {{SaslDataTransferClient}} 
want to get a SASL properties map to do a handshake, they call 
{{SaslPropertiesResolver#getServerProperties()}} or 
{{SaslPropertiesResolver}}{color}{{{}#getClientProperties(){}}}, and they get 
back a {{{}Map{}}}. Every call gets the same {{Map}} object 
back, and then the callers sometimes call 
[put()|https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java#L385]
 on it. This means that future users of 
{color:#1d1c1d}{{SaslPropertiesResolver}}{color} get back the wrong information.

I propose that {color:#1d1c1d}{{SaslPropertiesResolver}}{color} should pass a 
copy of its internal map, so that users can safely modify it{{{}.{}}}

PR incoming.


> Bug in SaslPropertiesResolver allows mutation of internal state
> ---
>
> Key: HADOOP-18972
> URL: https://issues.apache.org/jira/browse/HADOOP-18972
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Charles Connell
>Priority: Minor
>  Labels: pull-request-available
>
> When {{SaslDataTransferServer}} or {{SaslDataTransferClient}} want to get a 
> SASL properties map to do a handshake, they call 
> {{SaslPropertiesResolver#getServerProperties()}} or 
> {{SaslPropertiesResolver#getClientProperties()}}, and they get back a 
> {{Map}}. Every call gets the same {{Map}} object back, and 
> then the callers sometimes call 
> [put()|https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java#L385]
>  on it. This means that future users of {{SaslPropertiesResolver}} get back 
> the wrong information.
> I propose that {{SaslPropertiesResolver}} should pass a copy of its internal 
> map, so that users can safely modify it.
> I discovered this problem in my company's testing environment as we began to 
> enable {{dfs.data.transfer.protection}} on our DataNodes, while our NameNodes 
> were using {{IngressPortBasedResolver}} to give out block tokens with 
> different QOPs depending on the port used. Our HDFS client applications 
> became unable to read or write to HDFS because they could not find a QOP in 
> common with the DataNodes during SASL handshake. With multiple threads 
> executing SASL handshakes at the same time, the properties map used in 
> {{SaslDataTransferServer}} in a DataNode could be clobbered during usage, 
> since the same map was used by all threads. I think this is most likely the 
> direct cause of the problem I saw. I eliminated this thread unsafety and saw 
> the problem go away.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18972) Bug in SaslPropertiesResolver allows mutation of internal state

2023-11-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785692#comment-17785692
 ] 

ASF GitHub Bot commented on HADOOP-18972:
-

charlesconnell opened a new pull request, #6272:
URL: https://github.com/apache/hadoop/pull/6272

   ### Description of PR
   
   When `SaslDataTransferServer` or `SaslDataTransferClient` want to get a SASL 
properties map to do a handshake, they call 
`SaslPropertiesResolver#getServerProperties()` or 
`SaslPropertiesResolver#getClientProperties()`, and they get back a 
`Map`. Every call gets the same Map object back, and then the 
callers sometimes call 
[put()](https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java#L385)
 on it. This means that future users of `SaslPropertiesResolver` get back the 
wrong information.
   
   In this PR, `SaslPropertiesResolver` gives out copies of its internal map, 
so that users can safely modify them.
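
   For reference, a minimal sketch of the shape of the change (simplified; 
the real method resolves per-address properties, and `properties` stands for 
the resolver's internal map):

   ```java
   import java.net.InetAddress;
   import java.util.HashMap;
   import java.util.Map;

   // Simplified: hand each caller its own copy, so a caller's put() can no
   // longer mutate the resolver's shared internal state.
   public Map<String, String> getServerProperties(InetAddress clientAddress) {
     return new HashMap<>(properties);
   }
   ```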
   
   ### How was this patch tested?
   
   My employer has [effectively the same 
patch](https://github.com/HubSpot/hadoop/commit/6761522efd5f6ac6117ee151c44edb7d97ca0031)
 already committed to our fork of Hadoop 3.3.1. We are running this in 
production, and have found that this patch fixes the problem we were 
experiencing, described in the JIRA ticket.
   
   ### For code changes:
   
   - [x] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Bug in SaslPropertiesResolver allows mutation of internal state
> ---
>
> Key: HADOOP-18972
> URL: https://issues.apache.org/jira/browse/HADOOP-18972
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Charles Connell
>Priority: Minor
>
> {color:#1d1c1d}When {{SaslDataTransferServer}} or {{SaslDataTransferClient}} 
> want to get a SASL properties map to do a handshake, they call 
> {{SaslPropertiesResolver#getServerProperties()}} or 
> {{SaslPropertiesResolver}}{color}{{{}#getClientProperties(){}}}, and they get 
> back a {{{}Map{}}}. Every call gets the same {{Map}} object 
> back, and then the callers sometimes call 
> [put()|https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java#L385]
>  on it. This means that future users of 
> {color:#1d1c1d}{{SaslPropertiesResolver}}{color} get back the wrong 
> information.
> I propose that {color:#1d1c1d}{{SaslPropertiesResolver}}{color} should pass a 
> copy of its internal map, so that users can safely modify it{{{}.{}}}
> PR incoming.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18972) Bug in SaslPropertiesResolver allows mutation of internal state

2023-11-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18972:

Labels: pull-request-available  (was: )

> Bug in SaslPropertiesResolver allows mutation of internal state
> ---
>
> Key: HADOOP-18972
> URL: https://issues.apache.org/jira/browse/HADOOP-18972
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Charles Connell
>Priority: Minor
>  Labels: pull-request-available
>
> {color:#1d1c1d}When {{SaslDataTransferServer}} or {{SaslDataTransferClient}} 
> want to get a SASL properties map to do a handshake, they call 
> {{SaslPropertiesResolver#getServerProperties()}} or 
> {{SaslPropertiesResolver}}{color}{{{}#getClientProperties(){}}}, and they get 
> back a {{{}Map{}}}. Every call gets the same {{Map}} object 
> back, and then the callers sometimes call 
> [put()|https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java#L385]
>  on it. This means that future users of 
> {color:#1d1c1d}{{SaslPropertiesResolver}}{color} get back the wrong 
> information.
> I propose that {color:#1d1c1d}{{SaslPropertiesResolver}}{color} should pass a 
> copy of its internal map, so that users can safely modify it{{{}.{}}}
> PR incoming.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org





[jira] [Commented] (HADOOP-18925) S3A: add option "fs.s3a.copy.from.local.enabled" to enable/disable CopyFromLocalOperation

2023-11-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785685#comment-17785685
 ] 

ASF GitHub Bot commented on HADOOP-18925:
-

hadoop-yetus commented on PR #6259:
URL: https://github.com/apache/hadoop/pull/6259#issuecomment-1809247030

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  11m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  52m 16s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   1m 14s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  41m 15s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  40m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 34s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 160m 14s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6259/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6259 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 5f5b7c0bd335 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 996a7cd1a32e9543ca033b7d2368522a9b01ca98 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~18.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6259/2/testReport/ |
   | Max. process+thread count | 563 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6259/2/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> S3A: add option "fs.s3a.copy.from.local.enabled" to enable/disable 
> CopyFromLocalOperation
> -
>
> Key: HADOOP-18925
> URL: https://issues.apache.org/jira/browse/HADOOP-18925
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.6
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> reported failure of CopyFromLocalOperation.getFinalPath() during job 
> submission with s3a declared as cluster fs.
> add an emergency option to disable this optimised uploader and revert to the 
> superclass implementation
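
For anyone needing that escape hatch, a minimal sketch (the class name and 
paths are illustrative; assumes the option behaves as described above):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CopyFromLocalFallback {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Emergency switch: bypass the optimised CopyFromLocalOperation and
    // fall back to the generic FileSystem copy path.
    conf.setBoolean("fs.s3a.copy.from.local.enabled", false);

    Path src = new Path("file:///tmp/job.jar");
    Path dst = new Path("s3a://bucket/tmp/job.jar");
    FileSystem s3 = dst.getFileSystem(conf);
    s3.copyFromLocalFile(src, dst);
  }
}
```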



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-




[jira] [Updated] (HADOOP-18972) Bug in SaslPropertiesResolver allows mutation of internal state

2023-11-13 Thread Charles Connell (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Connell updated HADOOP-18972:
-
Description: 
{color:#1d1c1d}When {{SaslDataTransferServer}} or {{SaslDataTransferClient}} 
want to get a SASL properties map to do a handshake, they call 
{{SaslPropertiesResolver#getServerProperties()}} or 
{{SaslPropertiesResolver}}{color}{{{}#getClientProperties(){}}}, and they get 
back a {{{}Map{}}}. Every call gets the same {{Map}} object 
back, and then the callers sometimes call 
[put()|[https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java#L385]
 on it. This means that future users of 
{color:#1d1c1d}{{SaslPropertiesResolver}}{color} get back the wrong information.

I propose that {color:#1d1c1d}{{SaslPropertiesResolver}}{color} should pass a 
copy of its internal map, so that users can safely modify it{{{}.{}}}

PR incoming.

  was:
{color:#1d1c1d}When {{SaslDataTransferServer}} or {{SaslDataTransferClient}} 
want to get a SASL properties map to do a handshake, they call 
{{SaslPropertiesResolver#getServerProperties()}} or 
{{SaslPropertiesResolver}}{color}{{{}#getClientProperties(){}}}, and they get 
back a {{{}Map{}}}. Every call gets the same {{Map}} object 
back, and then the callers sometimes call 
[put()|[https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java#L385|#L385]
 on it. This means that future users of 
{color:#1d1c1d}{{SaslPropertiesResolver}}{color} get back the wrong information.

I propose that {color:#1d1c1d}{{SaslPropertiesResolver}}{color} should pass a 
copy of its internal map, so that users can safely modify it{{{}.{}}}

PR incoming.


> Bug in SaslPropertiesResolver allows mutation of internal state
> ---
>
> Key: HADOOP-18972
> URL: https://issues.apache.org/jira/browse/HADOOP-18972
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Charles Connell
>Priority: Minor
>
> {color:#1d1c1d}When {{SaslDataTransferServer}} or {{SaslDataTransferClient}} 
> want to get a SASL properties map to do a handshake, they call 
> {{SaslPropertiesResolver#getServerProperties()}} or 
> {{SaslPropertiesResolver}}{color}{{{}#getClientProperties(){}}}, and they get 
> back a {{{}Map{}}}. Every call gets the same {{Map}} object 
> back, and then the callers sometimes call 
> [put()|[https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java#L385]
>  on it. This means that future users of 
> {color:#1d1c1d}{{SaslPropertiesResolver}}{color} get back the wrong 
> information.
> I propose that {color:#1d1c1d}{{SaslPropertiesResolver}}{color} should pass a 
> copy of its internal map, so that users can safely modify it{{{}.{}}}
> PR incoming.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18972) Bug in SaslPropertiesResolver allows mutation of internal state

2023-11-13 Thread Charles Connell (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Connell updated HADOOP-18972:
-
Description: 
{color:#1d1c1d}When {{SaslDataTransferServer}} or {{SaslDataTransferClient}} 
want to get a SASL properties map to do a handshake, they call 
{{SaslPropertiesResolver#getServerProperties()}} or 
{{SaslPropertiesResolver}}{color}{{{}#getClientProperties(){}}}, and they get 
back a {{{}Map{}}}. Every call gets the same {{Map}} object 
back, and then the callers sometimes call 
[put()|https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java#L385]
 on it. This means that future users of 
{color:#1d1c1d}{{SaslPropertiesResolver}}{color} get back the wrong information.

I propose that {color:#1d1c1d}{{SaslPropertiesResolver}}{color} should pass a 
copy of its internal map, so that users can safety modify them{{{}.{}}}

PR incoming.

  was:
{color:#1d1c1d}When {{SaslDataTransferServer}} or {{SaslDataTranferClient}} 
want to get a SASL properties map to do a handshake, they call 
{{SaslPropertiesResolver#getServerProperties()}} or 
{{SaslPropertiesResolver}}{color}{{{}#getClientProperties(){}}}, and they get 
back a {{{}Map{}}}. Every call gets the same {{Map}} object 
back, and then the callers sometimes call 
[put()|[https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java#L385]
 on it. This means that future users of 
{color:#1d1c1d}{{SaslPropertiesResolver}}{color} get back the wrong information.

I propose that {color:#1d1c1d}{{SaslPropertiesResolver}}{color} should pass a 
copy of its internal map, so that users can safety modify them{{{}.{}}}

PR incoming.


> Bug in SaslPropertiesResolver allows mutation of internal state
> ---
>
> Key: HADOOP-18972
> URL: https://issues.apache.org/jira/browse/HADOOP-18972
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Charles Connell
>Priority: Minor
>
> {color:#1d1c1d}When {{SaslDataTransferServer}} or {{SaslDataTranferClient}} 
> want to get a SASL properties map to do a handshake, they call 
> {{SaslPropertiesResolver#getServerProperties()}} or 
> {{SaslPropertiesResolver}}{color}{{{}#getClientProperties(){}}}, and they get 
> back a {{{}Map{}}}. Every call gets the same {{Map}} object 
> back, and then the callers sometimes call 
> [put()|https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java#L385]
>  on it. This means that future users of 
> {color:#1d1c1d}{{SaslPropertiesResolver}}{color} get back the wrong 
> information.
> I propose that {color:#1d1c1d}{{SaslPropertiesResolver}}{color} should pass a 
> copy of its internal map, so that users can safety modify them{{{}.{}}}
> PR incoming.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18972) Bug in SaslPropertiesResolver allows mutation of internal state

2023-11-13 Thread Charles Connell (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Connell updated HADOOP-18972:
-
Description: 
{color:#1d1c1d}When {{SaslDataTransferServer}} or {{SaslDataTranferClient}} 
want to get a SASL properties map to do a handshake, they call 
{{SaslPropertiesResolver#getServerProperties()}} or 
{{SaslPropertiesResolver}}{color}{{{}#getClientProperties(){}}}, and they get 
back a {{{}Map{}}}. Every call gets the same {{Map}} object 
back, and then the callers sometimes call 
[put()|[https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java#L385|#L385]
 on it. This means that future users of 
{color:#1d1c1d}{{SaslPropertiesResolver}}{color} get back the wrong information.

I propose that {color:#1d1c1d}{{SaslPropertiesResolver}}{color} should pass a 
copy of its internal map, so that users can safety modify them{{{}.{}}}

PR incoming.

  was:
{color:#1d1c1d}When {{SaslDataTransferServer}} or {{SaslDataTranferClient}} 
want to get a SASL properties map to do a handshake, they call 
{{SaslPropertiesResolver#getServerProperties()}} or 
{{SaslPropertiesResolver}}{color}{{{}#getClientProperties(){}}}, and they get 
back a {{{}Map{}}}. Every call gets the same {{Map}} object 
back, and then the callers sometimes [call {{put()}}|#L385] on it. This means 
that future users of {color:#1d1c1d}{{SaslPropertiesResolver}}{color} get back 
the wrong information.

I propose that {color:#1d1c1d}{{SaslPropertiesResolver}}{color} should pass a 
copy of its internal map, so that users can safety modify them{{{}.{}}}

PR incoming.


> Bug in SaslPropertiesResolver allows mutation of internal state
> ---
>
> Key: HADOOP-18972
> URL: https://issues.apache.org/jira/browse/HADOOP-18972
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Charles Connell
>Priority: Minor
>
> {color:#1d1c1d}When {{SaslDataTransferServer}} or {{SaslDataTranferClient}} 
> want to get a SASL properties map to do a handshake, they call 
> {{SaslPropertiesResolver#getServerProperties()}} or 
> {{SaslPropertiesResolver}}{color}{{{}#getClientProperties(){}}}, and they get 
> back a {{{}Map{}}}. Every call gets the same {{Map}} object 
> back, and then the callers sometimes call 
> [put()|[https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java#L385|#L385]
>  on it. This means that future users of 
> {color:#1d1c1d}{{SaslPropertiesResolver}}{color} get back the wrong 
> information.
> I propose that {color:#1d1c1d}{{SaslPropertiesResolver}}{color} should pass a 
> copy of its internal map, so that users can safety modify them{{{}.{}}}
> PR incoming.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18972) Bug in SaslPropertiesResolver allows mutation of internal state

2023-11-13 Thread Charles Connell (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Connell updated HADOOP-18972:
-
Description: 
{color:#1d1c1d}When {{SaslDataTransferServer}} or {{SaslDataTranferClient}} 
want to get a SASL properties map to do a handshake, they call 
{{SaslPropertiesResolver#getServerProperties()}} or 
{{SaslPropertiesResolver}}{color}{{{}#getClientProperties(){}}}, and they get 
back a {{{}Map{}}}. Every call gets the same {{Map}} object 
back, and then the callers sometimes [call {{put()}}|#L385] on it. This means 
that future users of {color:#1d1c1d}{{SaslPropertiesResolver}}{color} get back 
the wrong information.

I propose that {color:#1d1c1d}{{SaslPropertiesResolver}}{color} should pass a 
copy of its internal map, so that users can safety modify them{{{}.{}}}

PR incoming.

  was:
{color:#1d1c1d}When {{SaslDataTransferServer}} or {{SaslDataTranferClient}} 
want to get a SASL properties map to do a handshake, they call 
{{SaslPropertiesResolver#getServerProperties()}} or 
{{SaslPropertiesResolver}}{color}{{{}#getClientProperties(){}}}, and they get 
back a {{{}Map{}}}. Every call gets the same {{Map}} object 
back, and then the callers sometimes [call 
{{{}put(){}}}https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java#L385|#L385]
 on it. This means that future users of 
{color:#1d1c1d}{{SaslPropertiesResolver}}{color} get back the wrong information.

I propose that {color:#1d1c1d}{{SaslPropertiesResolver}}{color} should pass a 
copy of its internal map, so that users can safety modify them{{{}.{}}}

PR incoming.


> Bug in SaslPropertiesResolver allows mutation of internal state
> ---
>
> Key: HADOOP-18972
> URL: https://issues.apache.org/jira/browse/HADOOP-18972
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Charles Connell
>Priority: Minor
>
> {color:#1d1c1d}When {{SaslDataTransferServer}} or {{SaslDataTranferClient}} 
> want to get a SASL properties map to do a handshake, they call 
> {{SaslPropertiesResolver#getServerProperties()}} or 
> {{SaslPropertiesResolver}}{color}{{{}#getClientProperties(){}}}, and they get 
> back a {{{}Map{}}}. Every call gets the same {{Map}} object 
> back, and then the callers sometimes [call {{put()}}|#L385] on it. This means 
> that future users of {color:#1d1c1d}{{SaslPropertiesResolver}}{color} get 
> back the wrong information.
> I propose that {color:#1d1c1d}{{SaslPropertiesResolver}}{color} should pass a 
> copy of its internal map, so that users can safety modify them{{{}.{}}}
> PR incoming.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11498 backport jettison exclusions [hadoop]

2023-11-13 Thread via GitHub


pjfanning commented on PR #6063:
URL: https://github.com/apache/hadoop/pull/6063#issuecomment-1809061513

   @steveloughran I've done a rebase, let's see what happens with this build.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11498 backport jettison exclusions [hadoop]

2023-11-13 Thread via GitHub


steveloughran commented on PR #6063:
URL: https://github.com/apache/hadoop/pull/6063#issuecomment-1809041271

   this has been in  for a while. why not rebase and retry?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18708) AWS SDK V2 - Implement CSE

2023-11-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785666#comment-17785666
 ] 

ASF GitHub Bot commented on HADOOP-18708:
-

steveloughran commented on PR #6164:
URL: https://github.com/apache/hadoop/pull/6164#issuecomment-1809036784

   > This is the first time I'm working with a provided scope. In the code I 
throw `unavailable(uri, ENCRYPTION_CLIENT_CLASSNAME, null, "No encryption 
client available");` if the EncryptionClient class is not present, but how do I 
verify that this works? I'm guessing I have to package it up and then test via 
the CLI... is that correct, or is there another way?
   
   You can add a rule to the enforcer plugin to ban imports except in managed 
places, the way we do for mapreduce.




> AWS SDK V2 - Implement CSE
> --
>
> Key: HADOOP-18708
> URL: https://issues.apache.org/jira/browse/HADOOP-18708
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Ahmar Suhail
>Priority: Major
>  Labels: pull-request-available
>
> S3 Encryption client for SDK V2 is now available, so add client side 
> encryption back in. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18708. Adds in support for client side encryption. [hadoop]

2023-11-13 Thread via GitHub


steveloughran commented on PR #6164:
URL: https://github.com/apache/hadoop/pull/6164#issuecomment-1809036784

   > This is the first time I'm working with a provided scope. In the code I 
throw `unavailable(uri, ENCRYPTION_CLIENT_CLASSNAME, null, "No encryption 
client available");` if the EncryptionClient class is not present, but how do I 
verify that this works? I'm guessing I have to package it up and then test via 
the CLI... is that correct, or is there another way?
   
   You can add a rule to the enforcer plugin to ban imports except in managed 
places, the way we do for mapreduce.
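   
   For illustration, a minimal sketch of the optional-dependency probe being 
discussed (the class name constant is a placeholder, not necessarily what the 
PR uses):
   
   ```java
   public final class EncryptionClientProbe {
     // Placeholder for the provided-scope class loaded reflectively.
     private static final String ENCRYPTION_CLIENT_CLASSNAME =
         "software.amazon.encryption.s3.S3EncryptionClient";
   
     public static boolean available() {
       try {
         Class.forName(ENCRYPTION_CLIENT_CLASSNAME);
         return true;
       } catch (ClassNotFoundException e) {
         return false;
       }
     }
   
     public static void main(String[] args) {
       // Run once with the encryption client jar on the classpath and once
       // without it to exercise both paths.
       System.out.println("encryption client available? " + available());
     }
   }
   ```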


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17377) ABFS: MsiTokenProvider doesn't retry HTTP 429 from the Instance Metadata Service

2023-11-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785663#comment-17785663
 ] 

ASF GitHub Bot commented on HADOOP-17377:
-

steveloughran commented on PR #5273:
URL: https://github.com/apache/hadoop/pull/5273#issuecomment-1809005084

   I'll go with whatever @saxenapranav thinks here...we have seen this 
ourselves and need a fix.
   
   However, that PR to update mockito bounced, so either
   1. another attempt is made to update mockito, including the shaded client
   2. this PR can be done without updating mockito (easier)




> ABFS: MsiTokenProvider doesn't retry HTTP 429 from the Instance Metadata 
> Service
> 
>
> Key: HADOOP-17377
> URL: https://issues.apache.org/jira/browse/HADOOP-17377
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.2.1
>Reporter: Brandon
>Priority: Major
>  Labels: pull-request-available
>
> *Summary*
>  The instance metadata service has its own guidance for error handling and 
> retry which are different from the Blob store. 
> [https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-use-vm-token#error-handling]
> In particular, it responds with HTTP 429 if request rate is too high. Whereas 
> Blob store will respond with HTTP 503. The retry policy used only accounts 
> for the latter as it will retry any status >=500. This can result in job 
> instability when running multiple processes on the same host.
> *Environment*
>  * Spark talking to an ABFS store
>  * Hadoop 3.2.1
>  * Running on an Azure VM with user-assigned identity, ABFS configured to use 
> MsiTokenProvider
>  * 6 executor processes on each VM
> *Example*
>  Here's an example error message and stack trace. It's always the same stack 
> trace. This appears in logs a few hundred to low thousands of times a day. 
> It's luckily skating by since the download operation is wrapped in 3 retries.
> {noformat}
> AADToken: HTTP connection failed for getting token from AzureAD. Http 
> response: 429 null
> Content-Type: application/json; charset=utf-8 Content-Length: 90 Request ID:  
> Proxies: none
> First 1K of Body: {"error":"invalid_request","error_description":"Temporarily 
> throttled, too many requests"}
>   at 
> org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.executeHttpOperation(AbfsRestOperation.java:190)
>   at 
> org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:125)
>   at 
> org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:506)
>   at 
> org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:489)
>   at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.getIsNamespaceEnabled(AzureBlobFileSystemStore.java:208)
>   at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.getFileStatus(AzureBlobFileSystemStore.java:473)
>   at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.getFileStatus(AzureBlobFileSystem.java:437)
>   at org.apache.hadoop.fs.FileSystem.isFile(FileSystem.java:1717)
>   at org.apache.spark.util.Utils$.fetchHcfsFile(Utils.scala:747)
>   at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:724)
>   at org.apache.spark.util.Utils$.fetchFile(Utils.scala:496)
>   at 
> org.apache.spark.executor.Executor.$anonfun$updateDependencies$7(Executor.scala:812)
>   at 
> org.apache.spark.executor.Executor.$anonfun$updateDependencies$7$adapted(Executor.scala:803)
>   at 
> scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:792)
>   at 
> scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:149)
>   at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237)
>   at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230)
>   at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44)
>   at scala.collection.mutable.HashMap.foreach(HashMap.scala:149)
>   at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:791)
>   at 
> org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:803)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:375)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748){noformat}
>  CC [~mackrorysd], [~ste...@apache.org]
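
For illustration, a minimal sketch of a retry predicate that also covers the 
IMDS throttling case (illustrative only, not the actual ABFS retry policy 
class):

{code:java}
import java.net.HttpURLConnection;

public final class ImdsRetryCheck {
  private ImdsRetryCheck() {
  }

  // Treat IMDS throttling (429) and gone (410) responses as retryable,
  // in addition to the >= 500 range the existing policy already retries.
  public static boolean shouldRetry(int status) {
    return status == 429
        || status == HttpURLConnection.HTTP_GONE
        || status >= HttpURLConnection.HTTP_INTERNAL_ERROR;
  }
}
{code}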



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] HADOOP-17377: ABFS: MsiTokenProvider doesn't retry HTTP 429/410 from the Instance Metadata Service [hadoop]

2023-11-13 Thread via GitHub


steveloughran commented on PR #5273:
URL: https://github.com/apache/hadoop/pull/5273#issuecomment-1809005084

   I'll go with whatever @saxenapranav thinks here...we have seen this 
ourselves and need a fix.
   
   However, that PR to update mockito bounced, so either
   1. another attempt is made to update mockito, including the shaded client
   2. this PR can be done without updating mockito (easier)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18965) ITestS3AHugeFilesEncryption failure

2023-11-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785660#comment-17785660
 ] 

ASF GitHub Bot commented on HADOOP-18965:
-

steveloughran commented on PR #6261:
URL: https://github.com/apache/hadoop/pull/6261#issuecomment-1809000183

   updated and reran without and with encryption (SSE-KMS) s3 london
   
   ```
   [INFO] ---
   [INFO] Running org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesEncryption
   [INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
33.88 s - in org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesEncryption
   [INFO] 
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0
   [INFO] 
   ```
   




> ITestS3AHugeFilesEncryption failure
> ---
>
> Key: HADOOP-18965
> URL: https://issues.apache.org/jira/browse/HADOOP-18965
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> test failures for me with a test setup of per-bucket encryption of sse-kms.
> suspect (but can't guarantee) HADOOP-18850 may be a factor.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18965. ITestS3AHugeFilesEncryption failure [hadoop]

2023-11-13 Thread via GitHub


steveloughran commented on PR #6261:
URL: https://github.com/apache/hadoop/pull/6261#issuecomment-1809000183

   updated and reran without and with encryption (SSE-KMS) s3 london
   
   ```
   [INFO] ---
   [INFO] Running org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesEncryption
   [INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
33.88 s - in org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesEncryption
   [INFO] 
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0
   [INFO] 
   ```
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] Hadoop-18759: [ABFS][Backoff-Optimization] Have a Static retry policy for connection timeout. [hadoop]

2023-11-13 Thread via GitHub


steveloughran commented on PR #5881:
URL: https://github.com/apache/hadoop/pull/5881#issuecomment-1808957198

   @anujmodi2021 happy with this but you need to rebase to deal with merge 
problems from your other PR. then when backporting you will need to apply in 
the same order


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18972) Bug in SaslPropertiesResolver allows mutation of internal state

2023-11-13 Thread Charles Connell (Jira)
Charles Connell created HADOOP-18972:


 Summary: Bug in SaslPropertiesResolver allows mutation of internal 
state
 Key: HADOOP-18972
 URL: https://issues.apache.org/jira/browse/HADOOP-18972
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Charles Connell


When {{SaslDataTransferServer}} or {{SaslDataTransferClient}} want to get a 
SASL properties map to do a handshake, they call 
{{SaslPropertiesResolver#getServerProperties()}} or 
{{SaslPropertiesResolver#getClientProperties()}}, and they get back a 
{{Map<String, String>}}. Every call gets the same {{Map}} object back, and 
then the callers sometimes 
[call put()|https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java#L385]
 on it. This means that future users of {{SaslPropertiesResolver}} get back 
the wrong information.

I propose that {{SaslPropertiesResolver}} should pass a copy of its internal 
map, so that users can safely modify it.

PR incoming.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18910) ABFS: Adding Support for MD5 Hash based integrity verification of the request content during transport

2023-11-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785645#comment-17785645
 ] 

ASF GitHub Bot commented on HADOOP-18910:
-

steveloughran commented on code in PR #6069:
URL: https://github.com/apache/hadoop/pull/6069#discussion_r1391613074


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -798,6 +806,11 @@ public AbfsRestOperation append(final String path, final 
byte[] buffer,
   if (!op.hasResult()) {
 throw e;
   }
+

Review Comment:
   on the topic of ex parsing, L797 will blow up with a ClassCastException if 
the exception caught is anything other than an AbfsRestOperationException.
   
   so the type of exception caught can be changed to AbfsRestOperationException



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -879,9 +880,8 @@ private boolean checkUserError(int responseStatusCode) {
* @return boolean whether exception is due to MD5Mismatch or not
*/
   protected boolean isMd5ChecksumError(final AzureBlobFileSystemException e) {
-return ((AbfsRestOperationException) e).getStatusCode()
-== HttpURLConnection.HTTP_BAD_REQUEST
-&& e.getMessage().contains(MD5_ERROR_SERVER_MESSAGE);
+AzureServiceErrorCode storageErrorCode = ((AbfsRestOperationException) 
e).getErrorCode();

Review Comment:
   see my comment above about making the catch on L787 an 
AbfsRestOperationException





> ABFS: Adding Support for MD5 Hash based integrity verification of the request 
> content during transport 
> ---
>
> Key: HADOOP-18910
> URL: https://issues.apache.org/jira/browse/HADOOP-18910
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> Azure Storage Supports Content-MD5 Request Headers in Both Read and Append 
> APIs.
> Read: [Path - Read - REST API (Azure Storage Services) | Microsoft 
> Learn|https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/read]
> Append: [Path - Update - REST API (Azure Storage Services) | Microsoft 
> Learn|https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/update]
> This change makes the client-side changes to support them. In a read 
> request, we send the appropriate header, in response to which the server 
> returns the MD5 hash of the data it sends back. On the client we tally this 
> against the MD5 hash computed from the data received.
> In an append request, we compute the MD5 hash of the data that we are 
> sending to the server and specify it in the appropriate header. On finding 
> that header, the server tallies it against the MD5 hash it computes on the 
> data received.
> This whole checksum validation support is guarded behind a config. The 
> config is disabled by default because with the use of "https" the integrity 
> of the data is preserved anyway. It is introduced as an additional data 
> integrity check, which also has a performance impact.
> Users can decide whether to enable it by setting the following config to 
> *"true"* or *"false"* respectively. *Config: 
> "fs.azure.enable.checksum.validation"*



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18910: [ABFS] Adding Support for MD5 Hash based integrity verification of the request content during transport [hadoop]

2023-11-13 Thread via GitHub


steveloughran commented on code in PR #6069:
URL: https://github.com/apache/hadoop/pull/6069#discussion_r1391613074


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -798,6 +806,11 @@ public AbfsRestOperation append(final String path, final 
byte[] buffer,
   if (!op.hasResult()) {
 throw e;
   }
+

Review Comment:
   on the topic of ex parsing, L797 will blow up with a ClassCastException if 
the exception caught is anything other than an AbfsRestOperationException.
   
   so the type of exception caught can be changed to AbfsRestOperationException



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -879,9 +880,8 @@ private boolean checkUserError(int responseStatusCode) {
* @return boolean whether exception is due to MD5Mismatch or not
*/
   protected boolean isMd5ChecksumError(final AzureBlobFileSystemException e) {
-return ((AbfsRestOperationException) e).getStatusCode()
-== HttpURLConnection.HTTP_BAD_REQUEST
-&& e.getMessage().contains(MD5_ERROR_SERVER_MESSAGE);
+AzureServiceErrorCode storageErrorCode = ((AbfsRestOperationException) 
e).getErrorCode();

Review Comment:
   see my comment above about making the catch on L787 an 
AbfsRestOperationException
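   
   For illustration, a self-contained sketch (hypothetical exception type, not 
the ABFS classes) of why catching the narrower type removes the unchecked 
cast:
   
   ```java
   class RestOperationException extends Exception {
     final int statusCode;
   
     RestOperationException(int statusCode) {
       this.statusCode = statusCode;
     }
   }
   
   public class NarrowCatchDemo {
     static void execute() throws RestOperationException {
       throw new RestOperationException(400);
     }
   
     public static void main(String[] args) {
       try {
         execute();
       } catch (RestOperationException e) {
         // Narrow catch: the status code is available without a cast,
         // so no ClassCastException is possible here.
         System.out.println("status=" + e.statusCode);
       }
     }
   }
   ```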



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11614. [Federation] Add Federation PolicyManager Validation Rules. [hadoop]

2023-11-13 Thread via GitHub


hadoop-yetus commented on PR #6271:
URL: https://github.com/apache/hadoop/pull/6271#issuecomment-1808922515

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  86m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 26s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  35m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   7m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   7m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   1m 56s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   7m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   7m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 53s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6271/1/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn-project/hadoop-yarn: The patch generated 17 new + 1 unchanged - 
0 fixed = 18 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 28s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m  6s |  |  hadoop-yarn-api in the patch 
passed.  |
   | -1 :x: |  unit  |  28m 10s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6271/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt)
 |  hadoop-yarn-client in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 302m 46s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.yarn.client.cli.TestRouterCLI |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6271/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6271 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 5649e734cd16 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a5798bf2a05c05baead0e7b25b67412531a19ded |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk

[jira] [Resolved] (HADOOP-18872) ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations

2023-11-13 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18872.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

> ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations
> -
>
> Key: HADOOP-18872
> URL: https://issues.apache.org/jira/browse/HADOOP-18872
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.3.6
>Reporter: Anmol Asrani
>Assignee: Anuj Modi
>Priority: Major
>  Labels: Bug, pull-request-available
> Fix For: 3.4.0
>
>
> A bug was identified where the retry count in the client correlation id was 
> wrongly reported for sub-sequential and parallel operations triggered by a 
> single file system call. This was due to reusing the same tracing context 
> for all such calls.
> We create a new tracing context as soon as an HDFS call comes in, and we 
> keep passing that same tracing context for all the client calls.
> For instance, when we get a createFile call, we first call metadata 
> operations. If those metadata operations succeeded only after a few retries, 
> the tracing context will carry that retry count. When the actual create call 
> is made, the same retry count will be used to construct the headers 
> (clientCorrelationId). Although the create operation never failed, we will 
> still see the retry count from the previous request.
> The fix is to use a new tracing context object for each network call made. 
> All the sub-sequential and parallel operations will share the same primary 
> request id to correlate them, yet each will track its own retry count.
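
For illustration, a hypothetical sketch of the per-operation context described 
above (not the actual ABFS {{TracingContext}} API):

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

public class OperationTracingContext {
  private final String primaryRequestId;  // shared across sub-operations
  private final AtomicInteger retryCount = new AtomicInteger(); // per op

  public OperationTracingContext(String primaryRequestId) {
    this.primaryRequestId = primaryRequestId;
  }

  // Each network call gets its own context: same primary request id
  // for correlation, but a fresh retry count.
  public OperationTracingContext newOperation() {
    return new OperationTracingContext(primaryRequestId);
  }

  public void recordRetry() {
    retryCount.incrementAndGet();
  }

  public String correlationHeader() {
    return primaryRequestId + ":" + retryCount.get();
  }
}
{code}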



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18872) ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations

2023-11-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785639#comment-17785639
 ] 

ASF GitHub Bot commented on HADOOP-18872:
-

steveloughran commented on PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#issuecomment-1808906983

   merged! Anuj - can you do a branch-3.3 backport and retest? Mukund has been 
getting his signing set up for a 3.3.x release




> ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations
> -
>
> Key: HADOOP-18872
> URL: https://issues.apache.org/jira/browse/HADOOP-18872
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.3.6
>Reporter: Anmol Asrani
>Assignee: Anuj Modi
>Priority: Major
>  Labels: Bug, pull-request-available
>
> A bug was identified where the retry count in the client correlation id was 
> wrongly reported for sub-sequential and parallel operations triggered by a 
> single file system call. This was due to reusing the same tracing context 
> for all such calls.
> We create a new tracing context as soon as an HDFS call comes in, and we 
> keep passing that same tracing context for all the client calls.
> For instance, when we get a createFile call, we first call metadata 
> operations. If those metadata operations succeeded only after a few retries, 
> the tracing context will carry that retry count. When the actual create call 
> is made, the same retry count will be used to construct the headers 
> (clientCorrelationId). Although the create operation never failed, we will 
> still see the retry count from the previous request.
> The fix is to use a new tracing context object for each network call made. 
> All the sub-sequential and parallel operations will share the same primary 
> request id to correlate them, yet each will track its own retry count.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18872: [ABFS] [BugFix] Misreporting Retry Count for Sub-sequential and Parallel Operations [hadoop]

2023-11-13 Thread via GitHub


steveloughran commented on PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#issuecomment-1808906983

   merged! Anuj - can you do a branch-3.3 backport and retest? Mukund has been 
getting his signing set up for a 3.3.x release


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18872) ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations

2023-11-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785638#comment-17785638
 ] 

ASF GitHub Bot commented on HADOOP-18872:
-

steveloughran merged PR #6019:
URL: https://github.com/apache/hadoop/pull/6019




> ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations
> -
>
> Key: HADOOP-18872
> URL: https://issues.apache.org/jira/browse/HADOOP-18872
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.3.6
>Reporter: Anmol Asrani
>Assignee: Anuj Modi
>Priority: Major
>  Labels: Bug, pull-request-available
>
> A bug was identified where the retry count in the client correlation id was 
> wrongly reported for sub-sequential and parallel operations triggered by a 
> single file system call. This was due to reusing the same tracing context 
> for all such calls.
> We create a new tracing context as soon as an HDFS call comes in, and we 
> keep passing that same tracing context for all the client calls.
> For instance, when we get a createFile call, we first call metadata 
> operations. If those metadata operations succeeded only after a few retries, 
> the tracing context will carry that retry count. When the actual create call 
> is made, the same retry count will be used to construct the headers 
> (clientCorrelationId). Although the create operation never failed, we will 
> still see the retry count from the previous request.
> The fix is to use a new tracing context object for each network call made. 
> All the sub-sequential and parallel operations will share the same primary 
> request id to correlate them, yet each will track its own retry count.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18872: [ABFS] [BugFix] Misreporting Retry Count for Sub-sequential and Parallel Operations [hadoop]

2023-11-13 Thread via GitHub


steveloughran merged PR #6019:
URL: https://github.com/apache/hadoop/pull/6019


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18872) ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations

2023-11-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785637#comment-17785637
 ] 

ASF GitHub Bot commented on HADOOP-18872:
-

steveloughran commented on PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#issuecomment-1808898251

   > Really sorry for making it difficult for you.
   
   no worries - I need to apologise for not giving the ABFS code enough 
attention, either in reviews or my own work.




> ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations
> -
>
> Key: HADOOP-18872
> URL: https://issues.apache.org/jira/browse/HADOOP-18872
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.3.6
>Reporter: Anmol Asrani
>Assignee: Anuj Modi
>Priority: Major
>  Labels: Bug, pull-request-available
>
> A bug was identified where the retry count in the client correlation id was 
> wrongly reported for sub-sequential and parallel operations triggered by a 
> single file system call. This was due to reusing the same tracing context 
> for all such calls.
> We create a new tracing context as soon as an HDFS call comes in, and we 
> keep passing that same tracing context for all the client calls.
> For instance, when we get a createFile call, we first call metadata 
> operations. If those metadata operations succeeded only after a few retries, 
> the tracing context will carry that retry count. When the actual create call 
> is made, the same retry count will be used to construct the headers 
> (clientCorrelationId). Although the create operation never failed, we will 
> still see the retry count from the previous request.
> The fix is to use a new tracing context object for each network call made. 
> All the sub-sequential and parallel operations will share the same primary 
> request id to correlate them, yet each will track its own retry count.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18872: [ABFS] [BugFix] Misreporting Retry Count for Sub-sequential and Parallel Operations [hadoop]

2023-11-13 Thread via GitHub


steveloughran commented on PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#issuecomment-1808898251

   > Really sorry for making it difficult for you.
   
   no worries - I need to apologise for not giving the ABFS code enough 
attention, either in reviews or my own work.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18971) ABFS: Enable Footer Read Optimizations with Appropriate Footer Read Buffer Size

2023-11-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785635#comment-17785635
 ] 

ASF GitHub Bot commented on HADOOP-18971:
-

hadoop-yetus commented on PR #6270:
URL: https://github.com/apache/hadoop/pull/6270#issuecomment-1808893844

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 19s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6270/3/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 4 new + 2 unchanged - 0 
fixed = 6 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m  3s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  2s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 139m 39s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6270/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6270 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux b9aa07842398 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e438b9446bfaa4cd602fa1c5bc5c83f5b9f5 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6270/3/testReport/ |
   | Max. process+thread count | 554 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6270/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.

Re: [PR] HADOOP-18971: [ABFS] Enable Footer Read Optimizations with Appropriate Footer Read Buffer Size [hadoop]

2023-11-13 Thread via GitHub


hadoop-yetus commented on PR #6270:
URL: https://github.com/apache/hadoop/pull/6270#issuecomment-1808893844

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 19s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6270/3/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 4 new + 2 unchanged - 0 
fixed = 6 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m  3s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  2s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 139m 39s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6270/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6270 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux b9aa07842398 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e438b9446bfaa4cd602fa1c5bc5c83f5b9f5 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6270/3/testReport/ |
   | Max. process+thread count | 554 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6270/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

Re: [PR] HDFS-17249. Fix TestDFSUtil.testIsValidName() unit test failure [hadoop]

2023-11-13 Thread via GitHub


steveloughran merged PR #6249:
URL: https://github.com/apache/hadoop/pull/6249


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18845) Add ability to configure ConnectionTTL of http connections while creating S3 Client.

2023-11-13 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18845:

Description: 
The option fs.s3a.connection.ttl sets the maximum time an idle connection may 
be retained in the http connection pool. 

A lower value: fewer connections kept open, networking problems related to 
long-lived connections less likely
A higher value: less time spent negotiating TLS connections when new 
connections are needed
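
As a rough illustration of the tradeoff described above, a client could tune 
the option like this (a minimal sketch, not from the patch; the unit is 
assumed to be milliseconds here, so check the s3a documentation for your 
release, and the bucket name is a placeholder):

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class S3AConnectionTtlExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Keep idle pooled HTTP connections for at most 30 seconds: a lower TTL
    // means fewer long-lived connections, at the price of more TLS setup.
    conf.set("fs.s3a.connection.ttl", "30000");
    FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf);
    System.out.println("Opened " + fs.getUri());
  }
}
```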



> Add ability to configure ConnectionTTL of http connections while creating S3 
> Client.
> 
>
> Key: HADOOP-18845
> URL: https://issues.apache.org/jira/browse/HADOOP-18845
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.6
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.9
>
>
> The option fs.s3a.connection.ttl sets the maximum time an idle connection may 
> be retained in the http connection pool. 
> A lower value: fewer connections kept open, networking problems related to 
> long-lived connections less likely
> A higher value: less time spent negotiating TLS connections when new 
> connections are needed



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18956) Zookeeper SSL/TLS support in ZKDelegationTokenSecretManager and ZKSignerSecretProvider

2023-11-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785582#comment-17785582
 ] 

ASF GitHub Bot commented on HADOOP-18956:
-

sodonnel commented on PR #6263:
URL: https://github.com/apache/hadoop/pull/6263#issuecomment-1808611320

   Change looks OK to me, but it would be good to get a pass from someone else 
too, as I have never looked at this area before.




> Zookeeper SSL/TLS support in ZKDelegationTokenSecretManager and 
> ZKSignerSecretProvider
> --
>
> Key: HADOOP-18956
> URL: https://issues.apache.org/jira/browse/HADOOP-18956
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Zita Dombi
>Assignee: István Fajth
>Priority: Major
>  Labels: pull-request-available
>
> HADOOP-18709 added support for ZooKeeper to communicate with SSL/TLS enabled 
> in hadoop-common. With those changes we have the necessary parameters that 
> we need to set to enable SSL/TLS in a ZK client. That change also updated 
> ZKCuratorManager, which makes it easy to enable SSL/TLS; for YARN this was 
> done in YARN-11468.
> DelegationTokenAuthenticationFilter currently uses CuratorFrameworkFactory; 
> it would be good to change it to use ZKCuratorManager, and with that we 
> would support SSL/TLS enablement.
> *UPDATE*
> Having investigated this a bit more, it wouldn't be so easy to move to using 
> ZKCuratorManager. 
> DelegationTokenAuthenticationFilter uses ZK in two places: in 
> ZKDelegationTokenSecretManager and in ZKSignerSecretProvider. Both places 
> use CuratorFrameworkFactory, but the attributes and creation differ from 
> ZKCuratorManager's. 
> In ZKDelegationTokenSecretManager it would be easy to add the new config 
> and, based on that, create the ZK client with CuratorFrameworkFactory. But 
> ZKSignerSecretProvider is in the hadoop-auth module, and with my change it 
> would need hadoop-common, which would introduce a circular dependency 
> between the 'hadoop-auth' and 'hadoop-common' modules. I'm still working on 
> a straightforward solution. 
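
For context, a minimal sketch of what enabling SSL/TLS on a Curator-managed 
ZK client involves, using the standard ZooKeeper client system properties 
(this is not the patch itself; paths and the connect string are placeholders):

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class SecureZkClientSketch {
  public static void main(String[] args) {
    // Standard ZooKeeper client SSL/TLS properties; these are what the new
    // hadoop-common configuration keys ultimately have to drive.
    System.setProperty("zookeeper.client.secure", "true");
    System.setProperty("zookeeper.clientCnxnSocket",
        "org.apache.zookeeper.ClientCnxnSocketNetty");
    System.setProperty("zookeeper.ssl.keyStore.location", "/path/to/keystore.jks");
    System.setProperty("zookeeper.ssl.trustStore.location", "/path/to/truststore.jks");

    CuratorFramework client = CuratorFrameworkFactory.builder()
        .connectString("zk1.example.com:2281")   // secure client port
        .retryPolicy(new ExponentialBackoffRetry(1000, 3))
        .build();
    client.start();
  }
}
```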



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18956. Zookeeper SSL/TLS support in ZKDelegationTokenSecretManager and ZKSignerSecretProvider [hadoop]

2023-11-13 Thread via GitHub


sodonnel commented on PR #6263:
URL: https://github.com/apache/hadoop/pull/6263#issuecomment-1808611320

   Change looks OK to me, but it would be good to get a pass from someone else 
too, as I have never looked at this area before.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18971) ABFS: Enable Footer Read Optimizations with Appropriate Footer Read Buffer Size

2023-11-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785581#comment-17785581
 ] 

ASF GitHub Bot commented on HADOOP-18971:
-

anujmodi2021 commented on PR #6270:
URL: https://github.com/apache/hadoop/pull/6270#issuecomment-1808599485

   
    AGGREGATED TEST RESULT 
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 5
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 5
   [INFO] Results:
   [INFO] 
   [ERROR] Failures: 
   [ERROR]   
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testUpdateDeepDirectoryStructureToRemote:259->AbstractContractDistCpTest.distCpUpdateDeepDirectoryStructure:334->AbstractContractDistCpTest.assertCounterInRange:294->Assert.assertTrue:42->Assert.fail:89
 Files Copied value 2 above maximum 1
   [INFO] 
   [ERROR] Tests run: 339, Failures: 1, Errors: 0, Skipped: 41
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 11
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 591, Failures: 0, Errors: 0, Skipped: 274
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 44
   
   AppendBlob-HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 5
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41
   
   Time taken: 25 mins 53 secs.
   




> ABFS: Enable Footer Read Optimizations with Appropriate Footer Read Buffer 
> Size
> ---
>
> Key: HADOOP-18971
> URL: https://issues.apache.org/jira/browse/HADOOP-18971
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.6
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>
> Footer Read Optimization was introduced to hadoop-azure in this Jira: 
> https://issues.apache.org/jira/browse/HADOOP-17347
> and was kept disabled by default.
> This PR is to enable footer reads by default based on the results of the 
> analysis performed below:
> In our scale workload analysis, it was found that workloads working with 
> Parquet (or, for that matter, ORC etc.) have a lot of footer reads. Footer 
> reads here refer to the read operations done by a workload to get the 
> metadata of the parquet file, which is required to understand where the 
> actual data resides in the parquet.
> This whole process takes place in 3 steps:
>  # The workload reads the last 8 bytes of the parquet file to get the offset 
> and size of the metadata, which is present just above these 8 bytes.
>  # Using that offset, the workload reads the metadata to get the exact 
> offset and length of the data it wants to read.
>  # The workload performs the final read operation to get the data it wants 
> to use for its purpose.
> Here the first two steps are metadata reads that can be combined into a 
> single footer read. When the workload tries to read the last few bytes of 
> data (let's say this value is the footer size), the driver will 
> intelligently read some extra bytes beyond the footer size to cater to the 
> next read which is going to come.
> Q. What is the footer size of a file?
> A: 16 KB. Any read request trying to get data within the last 16 KB of the 
> file will qualify for a whole footer read. This value is enough to cater to 
> all types of files, including Parquet, ORC, etc.
> Q. What is the buffer size to use when reading the footer?
> A. Let's call this the footer read buffer size. Prior to this PR the footer 
> read buffer size was the same as the read buffer size (default 4 MB). It was 
> found that for most workloads the required footer size was only 256 KB, i.e. 
> for almost all parquet files the metadata was found to be within the last 
> 256 KB. Keeping this in mind, it does not make sense to read the whole 
> buffer length of 4 MB as part of a footer read. Moreover, reading more data 
> than required incurs additional costs in terms of server and network 
> latencies. Based on this and extensive experimentation, it was observed that 
> a footer read buffer size of 512 KB is ideal for almost all workloads 
> running on Parquet, ORC, etc.
> The following configuration was introduced to configure the footer read 
> buffer size:
> {*}fs.azure.footer.read.request.size{*}: default 512 KB.
> *Quantitative Stats:* For a workload running on parquet files the number of 
> read requests got reduced by 2.3M down

Re: [PR] HADOOP-18971: [ABFS] Enable Footer Read Optimizations with Appropriate Footer Read Buffer Size [hadoop]

2023-11-13 Thread via GitHub


anujmodi2021 commented on PR #6270:
URL: https://github.com/apache/hadoop/pull/6270#issuecomment-1808599485

   
    AGGREGATED TEST RESULT 
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 5
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 5
   [INFO] Results:
   [INFO] 
   [ERROR] Failures: 
   [ERROR]   
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testUpdateDeepDirectoryStructureToRemote:259->AbstractContractDistCpTest.distCpUpdateDeepDirectoryStructure:334->AbstractContractDistCpTest.assertCounterInRange:294->Assert.assertTrue:42->Assert.fail:89
 Files Copied value 2 above maximum 1
   [INFO] 
   [ERROR] Tests run: 339, Failures: 1, Errors: 0, Skipped: 41
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 11
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 591, Failures: 0, Errors: 0, Skipped: 274
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 44
   
   AppendBlob-HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 5
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41
   
   Time taken: 25 mins 53 secs.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18959) Use builder for prefetch CachingBlockManager

2023-11-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785574#comment-17785574
 ] 

ASF GitHub Bot commented on HADOOP-18959:
-

ahmarsuhail commented on code in PR #6240:
URL: https://github.com/apache/hadoop/pull/6240#discussion_r1391235690


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/BlockManagerParams.java:
##
@@ -0,0 +1,169 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hadoop.fs.impl.prefetch;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.LocalDirAllocator;
+import org.apache.hadoop.fs.statistics.DurationTrackerFactory;
+
+/**
+ * This class is used to provide params to {@link BlockManager}.
+ */
+@InterfaceAudience.Private
+public final class BlockManagerParams {

Review Comment:
   nit: could you please rename it to `BlockManagerParameters`?



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/BlockManagerParams.java:
##
@@ -0,0 +1,169 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hadoop.fs.impl.prefetch;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.LocalDirAllocator;
+import org.apache.hadoop.fs.statistics.DurationTrackerFactory;
+
+/**
+ * This class is used to provide params to {@link BlockManager}.
+ */
+@InterfaceAudience.Private
+public final class BlockManagerParams {
+
+  /**
+   * Asynchronous tasks are performed in this pool.
+   */
+  private final ExecutorServiceFuturePool futurePool;

Review Comment:
   nit: could you add a blank line after each of these? It makes for slightly 
easier reading.



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/BlockManagerParams.java:
##
@@ -0,0 +1,169 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hadoop.fs.impl.prefetch;

Review Comment:
   could you refactor this class to be similar to 
[S3ClientFactory.S3ClientCreationParameters](https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ClientFactory.java#L98)?
 That feels like a cleaner way to get and set these parameters. 
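
To make the suggestion concrete, here is a hedged sketch of the fluent-setter 
shape used by S3ClientCreationParameters, applied to two of the fields above 
(names are illustrative, not the final API):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.impl.prefetch.ExecutorServiceFuturePool;

public final class BlockManagerParameters {

  // Pool in which asynchronous prefetch tasks run.
  private ExecutorServiceFuturePool futurePool;

  // Configuration backing the block manager.
  private Configuration conf;

  public BlockManagerParameters withFuturePool(final ExecutorServiceFuturePool pool) {
    this.futurePool = pool;
    return this;
  }

  public BlockManagerParameters withConf(final Configuration configuration) {
    this.conf = configuration;
    return this;
  }

  public ExecutorServiceFuturePool getFuturePool() {
    return futurePool;
  }

  public Configuration getConf() {
    return conf;
  }
}
```

Call sites would then read like 
`new BlockManagerParameters().withConf(conf).withFuturePool(pool)`.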



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/BlockManagerParams.java:
##
@@ -0,0 +1,169 @@
+/*
+ * Licensed to the Apache Software Foundation (

Re: [PR] HADOOP-18959 Use builder for prefetch CachingBlockManager [hadoop]

2023-11-13 Thread via GitHub


ahmarsuhail commented on code in PR #6240:
URL: https://github.com/apache/hadoop/pull/6240#discussion_r1391235690


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/BlockManagerParams.java:
##
@@ -0,0 +1,169 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hadoop.fs.impl.prefetch;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.LocalDirAllocator;
+import org.apache.hadoop.fs.statistics.DurationTrackerFactory;
+
+/**
+ * This class is used to provide params to {@link BlockManager}.
+ */
+@InterfaceAudience.Private
+public final class BlockManagerParams {

Review Comment:
   nit: could you please rename it to `BlockManagerParameters`?



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/BlockManagerParams.java:
##
@@ -0,0 +1,169 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hadoop.fs.impl.prefetch;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.LocalDirAllocator;
+import org.apache.hadoop.fs.statistics.DurationTrackerFactory;
+
+/**
+ * This class is used to provide params to {@link BlockManager}.
+ */
+@InterfaceAudience.Private
+public final class BlockManagerParams {
+
+  /**
+   * Asynchronous tasks are performed in this pool.
+   */
+  private final ExecutorServiceFuturePool futurePool;

Review Comment:
   nit: could you add a blank line after each of these? It makes for slightly 
easier reading.



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/BlockManagerParams.java:
##
@@ -0,0 +1,169 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hadoop.fs.impl.prefetch;

Review Comment:
   could you refactor this class to be similar to 
[S3ClientFactory.S3ClientCreationParameters](https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ClientFactory.java#L98)?
 That feels like a cleaner way to get and set these parameters. 



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/BlockManagerParams.java:
##
@@ -0,0 +1,169 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "Lic

Re: [PR] [HDFS-17244] Avoid affecting the overall logic of checkpoint when I/O… [hadoop]

2023-11-13 Thread via GitHub


hadoop-yetus commented on PR #6237:
URL: https://github.com/apache/hadoop/pull/6237#issuecomment-1808449603

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 29s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 13s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 54s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 53s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 49s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 52s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 197m 41s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6237/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 290m  4s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestDFSUtil |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6237/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6237 |
   | JIRA Issue | HDFS-17244 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 6fcc2c9cbda8 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e078771fe5e6b4e30d361f6be3e16600c5436164 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6237/2/testReport/ |
   | Max. process+thread count | 3456 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6237/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.

[jira] [Created] (HADOOP-18971) ABFS: Enable Footer Read Optimizations with Appropriate Footer Read Buffer Size

2023-11-13 Thread Anuj Modi (Jira)
Anuj Modi created HADOOP-18971:
--

 Summary: ABFS: Enable Footer Read Optimizations with Appropriate 
Footer Read Buffer Size
 Key: HADOOP-18971
 URL: https://issues.apache.org/jira/browse/HADOOP-18971
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.3.6
Reporter: Anuj Modi


Footer Read Optimization was introduced to hadoop-azure in this Jira: 
https://issues.apache.org/jira/browse/HADOOP-17347
and was kept disabled by default.
This PR is to enable footer reads by default based on the results of the 
analysis performed below:

In our scale workload analysis, it was found that workloads working with 
Parquet (or, for that matter, ORC etc.) have a lot of footer reads. Footer 
reads here refer to the read operations done by a workload to get the metadata 
of the parquet file, which is required to understand where the actual data 
resides in the parquet.
This whole process takes place in 3 steps:
 # The workload reads the last 8 bytes of the parquet file to get the offset 
and size of the metadata, which is present just above these 8 bytes.
 # Using that offset, the workload reads the metadata to get the exact offset 
and length of the data it wants to read.
 # The workload performs the final read operation to get the data it wants to 
use for its purpose.

Here the first two steps are metadata reads that can be combined into a single 
footer read. When the workload tries to read the last few bytes of data (let's 
say this value is the footer size), the driver will intelligently read some 
extra bytes beyond the footer size to cater to the next read which is going to 
come.

Q. What is the footer size of a file?
A: 16 KB. Any read request trying to get data within the last 16 KB of the 
file will qualify for a whole footer read. This value is enough to cater to 
all types of files, including Parquet, ORC, etc.

Q. What is the buffer size to use when reading the footer?
A. Let's call this the footer read buffer size. Prior to this PR the footer 
read buffer size was the same as the read buffer size (default 4 MB). It was 
found that for most workloads the required footer size was only 256 KB, i.e. 
for almost all parquet files the metadata was found to be within the last 
256 KB. Keeping this in mind, it does not make sense to read the whole buffer 
length of 4 MB as part of a footer read. Moreover, reading more data than 
required incurs additional costs in terms of server and network latencies. 
Based on this and extensive experimentation, it was observed that a footer 
read buffer size of 512 KB is ideal for almost all workloads running on 
Parquet, ORC, etc.

The following configuration was introduced to configure the footer read buffer 
size:
{*}fs.azure.footer.read.request.size{*}: default 512 KB.
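
As an illustration only, wiring these settings up in client configuration 
might look like the sketch below; the enable-flag key name is an assumption 
based on the existing ABFS footer read option, while the size key is taken 
from the text above:

```java
import org.apache.hadoop.conf.Configuration;

public class AbfsFooterReadConfigExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Enable the footer read optimization (assumed key; on by default after
    // this change).
    conf.setBoolean("fs.azure.read.optimizefooterread", true);
    // Use the 512 KB footer read buffer recommended by the analysis above.
    conf.setLong("fs.azure.footer.read.request.size", 512 * 1024);
  }
}
```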



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] Code Changes to enable footer optimization with new buffer size [hadoop]

2023-11-13 Thread via GitHub


anujmodi2021 commented on code in PR #6270:
URL: https://github.com/apache/hadoop/pull/6270#discussion_r1391274903


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -190,7 +193,8 @@ private void seekReadAndTest(final FileSystem fs, final 
Path testFilePath,
 try (FSDataInputStream iStream = fs.open(testFilePath)) {
   AbfsInputStream abfsInputStream = (AbfsInputStream) iStream
   .getWrappedStream();
-  long bufferSize = abfsInputStream.getBufferSize();
+  long footerReadBufferSize = abfsInputStream.getFooterReadBufferSize();

Review Comment:
   I'm not able to follow this...
   Could you please elaborate?
   
   The footer read buffer size here will be the default one unless the user 
sets it explicitly in the configs.
   Are you recommending that it be hardcoded?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] YARN-11614. [Federation] Add Federation PolicyManager Validation Rules. [hadoop]

2023-11-13 Thread via GitHub


slfan1989 opened a new pull request, #6271:
URL: https://github.com/apache/hadoop/pull/6271

   
   
   ### Description of PR
   
   JIRA: YARN-11614. [Federation] Add Federation PolicyManager Validation Rules.
   
   When entering queue weights in Federation, we need to enhance the 
validation rules: if a policy manager does not support weights, the user 
should be given a clear prompt saying so (see the sketch below).
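   
   A hypothetical sketch of the kind of rule being described; the method names 
on FederationPolicyManager used here are assumptions, not the actual patch:
   
```java
import java.util.Map;

import org.apache.hadoop.yarn.exceptions.YarnException;
import org.apache.hadoop.yarn.server.federation.policies.manager.FederationPolicyManager;

public final class PolicyWeightValidator {

  private PolicyWeightValidator() {
  }

  // isSupportWeighted() is an assumption about the manager API.
  public static void validateWeights(FederationPolicyManager manager,
      Map<String, Float> queueWeights) throws YarnException {
    boolean hasWeights = queueWeights != null && !queueWeights.isEmpty();
    if (hasWeights && !manager.isSupportWeighted()) {
      throw new YarnException("Policy manager "
          + manager.getClass().getSimpleName()
          + " does not support queue weights; remove the weights or choose a "
          + "weight-based policy manager.");
    }
  }
}
```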
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] Code Changes to enable footer optimization with new buffer size [hadoop]

2023-11-13 Thread via GitHub


hadoop-yetus commented on PR #6270:
URL: https://github.com/apache/hadoop/pull/6270#issuecomment-1808276346

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  36m 45s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 22s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6270/2/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 8 new + 2 unchanged - 0 
fixed = 10 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  36m 21s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  1s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 137m 20s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6270/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6270 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 93c32e743192 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 14214f062717f40c41a937222fe9c1326cb6b310 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6270/2/testReport/ |
   | Max. process+thread count | 692 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6270/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Commented] (HADOOP-17377) ABFS: MsiTokenProvider doesn't retry HTTP 429 from the Instance Metadata Service

2023-11-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785506#comment-17785506
 ] 

ASF GitHub Bot commented on HADOOP-17377:
-

nandorKollar commented on PR #5273:
URL: https://github.com/apache/hadoop/pull/5273#issuecomment-1808268461

   I think this PR is great; however, there's still one related open 
problem: the default value (2) for 
`fs.azure.oauth.token.fetch.retry.delta.backoff` is incorrect. The value of 2 
is consistent with the MS recommendation 
(https://docs.microsoft.com/en-us/azure/active-directory/managed-service-identity/how-to-use-vm-token#retry-guidance),
 where it is assumed to be in **seconds**, but as it is used in Thread.sleep 
[here](https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java#L326),
 it will be measured in **milliseconds**. I think we should change the default 
to 2000. @steveloughran @anmolanmol1234 do you think we can implement this 
minimal change in this PR, or should we open a separate one?




> ABFS: MsiTokenProvider doesn't retry HTTP 429 from the Instance Metadata 
> Service
> 
>
> Key: HADOOP-17377
> URL: https://issues.apache.org/jira/browse/HADOOP-17377
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.2.1
>Reporter: Brandon
>Priority: Major
>  Labels: pull-request-available
>
> *Summary*
> The instance metadata service has its own guidance for error handling and 
> retry, which differs from the Blob store's. 
> [https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-use-vm-token#error-handling]
> In particular, it responds with HTTP 429 if request rate is too high. Whereas 
> Blob store will respond with HTTP 503. The retry policy used only accounts 
> for the latter as it will retry any status >=500. This can result in job 
> instability when running multiple processes on the same host.
> *Environment*
>  * Spark talking to an ABFS store
>  * Hadoop 3.2.1
>  * Running on an Azure VM with user-assigned identity, ABFS configured to use 
> MsiTokenProvider
>  * 6 executor processes on each VM
> *Example*
>  Here's an example error message and stack trace. It's always the same stack 
> trace. This appears in logs a few hundred to low thousands of times a day. 
> It's luckily skating by since the download operation is wrapped in 3 retries.
> {noformat}
> AADToken: HTTP connection failed for getting token from AzureAD. Http 
> response: 429 null
> Content-Type: application/json; charset=utf-8 Content-Length: 90 Request ID:  
> Proxies: none
> First 1K of Body: {"error":"invalid_request","error_description":"Temporarily 
> throttled, too many requests"}
>   at 
> org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.executeHttpOperation(AbfsRestOperation.java:190)
>   at 
> org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:125)
>   at 
> org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:506)
>   at 
> org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:489)
>   at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.getIsNamespaceEnabled(AzureBlobFileSystemStore.java:208)
>   at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.getFileStatus(AzureBlobFileSystemStore.java:473)
>   at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.getFileStatus(AzureBlobFileSystem.java:437)
>   at org.apache.hadoop.fs.FileSystem.isFile(FileSystem.java:1717)
>   at org.apache.spark.util.Utils$.fetchHcfsFile(Utils.scala:747)
>   at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:724)
>   at org.apache.spark.util.Utils$.fetchFile(Utils.scala:496)
>   at 
> org.apache.spark.executor.Executor.$anonfun$updateDependencies$7(Executor.scala:812)
>   at 
> org.apache.spark.executor.Executor.$anonfun$updateDependencies$7$adapted(Executor.scala:803)
>   at 
> scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:792)
>   at 
> scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:149)
>   at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237)
>   at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230)
>   at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44)
>   at scala.collection.mutable.HashMap.foreach(HashMap.scala:149)
>   at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:791)
>   at 
> org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:

Re: [PR] HADOOP-17377: ABFS: MsiTokenProvider doesn't retry HTTP 429/410 from the Instance Metadata Service [hadoop]

2023-11-13 Thread via GitHub


nandorKollar commented on PR #5273:
URL: https://github.com/apache/hadoop/pull/5273#issuecomment-1808268461

   I think this PR is great; however, there's still one related open 
problem: the default value (2) for 
`fs.azure.oauth.token.fetch.retry.delta.backoff` is incorrect. The value of 2 
is consistent with the MS recommendation 
(https://docs.microsoft.com/en-us/azure/active-directory/managed-service-identity/how-to-use-vm-token#retry-guidance),
 where it is assumed to be in **seconds**, but as it is used in Thread.sleep 
[here](https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java#L326),
 it will be measured in **milliseconds**. I think we should change the default 
to 2000. @steveloughran @anmolanmol1234 do you think we can implement this 
minimal change in this PR, or should we open a separate one?
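
A toy demonstration of the unit mismatch (values assumed; not code from the 
PR):

```java
public class BackoffUnitsDemo {
  public static void main(String[] args) throws InterruptedException {
    int deltaBackoff = 2;                 // per MS guidance, intended as seconds
    Thread.sleep(deltaBackoff);           // Thread.sleep takes ms: pauses 2 ms
    Thread.sleep(deltaBackoff * 1000L);   // an actual 2-second backoff
    System.out.println("done");
  }
}
```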


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11610. [Federation] Add WeightedHomePolicyManager. [hadoop]

2023-11-13 Thread via GitHub


slfan1989 commented on PR #6256:
URL: https://github.com/apache/hadoop/pull/6256#issuecomment-1808239393

   @goiri Can you help review this PR? Thank you very much!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11577. Improve FederationInterceptorREST Method Result. [hadoop]

2023-11-13 Thread via GitHub


slfan1989 commented on PR #6190:
URL: https://github.com/apache/hadoop/pull/6190#issuecomment-1808238847

   @goiri Can you help review this PR? Thank you very much!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] Code Changes to enable footer optimization with new buffer size [hadoop]

2023-11-13 Thread via GitHub


hadoop-yetus commented on PR #6270:
URL: https://github.com/apache/hadoop/pull/6270#issuecomment-1808206576

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m  3s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m 10s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6270/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 6 new + 2 unchanged - 0 
fixed = 8 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m 14s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  2s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 141m 29s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6270/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6270 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux c4ae513e19be 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5af02c5881f01f9cc89f3ea73dba4f643b616cab |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6270/1/testReport/ |
   | Max. process+thread count | 532 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6270/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

Re: [PR] Code Changes to enable footer optimization with new buffer size [hadoop]

2023-11-13 Thread via GitHub


saxenapranav commented on code in PR #6270:
URL: https://github.com/apache/hadoop/pull/6270#discussion_r1391060543


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -190,7 +193,8 @@ private void seekReadAndTest(final FileSystem fs, final 
Path testFilePath,
 try (FSDataInputStream iStream = fs.open(testFilePath)) {
   AbfsInputStream abfsInputStream = (AbfsInputStream) iStream
   .getWrappedStream();
-  long bufferSize = abfsInputStream.getBufferSize();
+  long footerReadBufferSize = abfsInputStream.getFooterReadBufferSize();

Review Comment:
   +1 on testing different file sizes. Should we have parameterized values for 
getFooterReadBufferSize? Right now, it depends on the test configuration the 
developer has.
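
   A minimal sketch of how such a parameterized test could look (the class 
name, sizes, and assertion here are assumptions for illustration, not code 
from this PR):

   import java.util.Arrays;
   import java.util.Collection;

   import org.junit.Assert;
   import org.junit.Test;
   import org.junit.runner.RunWith;
   import org.junit.runners.Parameterized;

   // Run the footer-read test once per candidate buffer size instead of
   // depending on whatever value is in the developer's test configuration.
   @RunWith(Parameterized.class)
   public class FooterReadBufferSizeSketch {

     @Parameterized.Parameters(name = "footerReadBufferSize={0}")
     public static Collection<Object[]> sizes() {
       return Arrays.asList(new Object[][] {
           {256 * 1024}, {512 * 1024}, {1024 * 1024}});
     }

     @Parameterized.Parameter
     public int footerReadBufferSize;

     @Test
     public void testFooterReadBufferSizeIsHonored() {
       // Placeholder assertion; a real test would open the file system with
       // this size configured and verify AbfsInputStream picks it up.
       Assert.assertTrue(footerReadBufferSize > 0);
     }
   }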



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java:
##
@@ -812,6 +814,11 @@ public int getBufferSize() {
 return bufferSize;
   }
 
+  @VisibleForTesting
+  public int getFooterReadBufferSize() {

Review Comment:
   Let's make it package-private.
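
   A sketch of the suggested change (same getter as in the diff above, with 
default package-private visibility instead of public):

   @VisibleForTesting
   int getFooterReadBufferSize() {
     // Reachable from tests in org.apache.hadoop.fs.azurebfs.services,
     // but not part of the public AbfsInputStream surface.
     return footerReadBufferSize;
   }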



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] Code Changes to enable footer optimization with new buffer size [hadoop]

2023-11-13 Thread via GitHub


anujmodi2021 opened a new pull request, #6270:
URL: https://github.com/apache/hadoop/pull/6270

   WIP : Will add description here.
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18967) Allow secure mode to be enabled with no downtime

2023-11-13 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17785460#comment-17785460
 ] 

Steve Loughran commented on HADOOP-18967:
-

This will need matching HDFS and YARN tickets, assuming it spans them all. If 
it is HDFS-only, then this JIRA can be moved to that project.

> Allow secure mode to be enabled with no downtime
> 
>
> Key: HADOOP-18967
> URL: https://issues.apache.org/jira/browse/HADOOP-18967
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Charles Connell
>Priority: Minor
>
> My employer (HubSpot) recently completed transitioning all of the Hadoop 
> clusters underlying our HBase databases into secure mode. It was important to 
> us that we be able to make this change without impacting the functionality of 
> our SaaS product. To accomplish this, we added some new settings to our fork 
> of Hadoop, and fixed a latent bug. This ticket is my intention to contribute 
> these changes back to the mainline code, so others can benefit. A patch will 
> be incoming.
> The basic theme of the new functionality is the ability to accept incoming 
> secure connections without requiring them or making them outgoing. Secure 
> mode enablement will then be done in two stages.
>  * First, all nodes are given configuration to accept secure connections, and 
> are gracefully rolling-restarted to adopt this new functionality. I'll be 
> adding the new settings to make this stage possible.
>  * Second, all nodes are told to require incoming connections be secure, and 
> to make secure outgoing connections, and the settings added in the first 
> stage are removed. Nodes are again rolling-restarted to adopt this 
> functionality. The settings in this final state will look the same as in any 
> secure Hadoop cluster today.
> I'll include documentation changes explaining how to do this.
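
A sketch of what the stage-one configuration described above might look like. 
The server-side property name is a hypothetical placeholder (the actual new 
settings will arrive with the patch); the client-side fallback property is an 
existing Hadoop setting.

<!-- core-site.xml, stage 1: accept secure connections without requiring
     them, so nodes can be rolling-restarted with no downtime. -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <!-- Hypothetical placeholder: keep accepting insecure connections from
       peers that have not been restarted yet. -->
  <name>ipc.server.allow-insecure-fallback</name>
  <value>true</value>
</property>
<property>
  <!-- Existing setting: let clients fall back to SIMPLE auth while some
       servers do not yet require security. -->
  <name>ipc.client.fallback-to-simple-auth-allowed</name>
  <value>true</value>
</property>

In stage two, the fallback properties are removed and the resulting 
configuration matches any secure Hadoop cluster today.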



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17251. Optimize MountTableResolver#TRASH_PATTERN [hadoop]

2023-11-13 Thread via GitHub


hadoop-yetus commented on PR #6268:
URL: https://github.com/apache/hadoop/pull/6268#issuecomment-1807962354

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  19m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m 19s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m 37s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m 25s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  22m 17s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6268/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 185m 30s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterTrash |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6268/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6268 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 75d7c2fe7f8d 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 02c5687d8279176592dc20e55fd4149b66b18f64 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6268/1/testReport/ |
   | Max. process+thread count | 2612 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6268/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.

[jira] [Updated] (HADOOP-18970) Upgrade hadoop2 docker scripts to latest 2.10.2

2023-11-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18970:

Labels: pull-request-available  (was: )

> Upgrade hadoop2 docker scripts to latest 2.10.2
> ---
>
> Key: HADOOP-18970
> URL: https://issues.apache.org/jira/browse/HADOOP-18970
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> Apply enhancements from {{docker-hadoop-3}} branch, and upgrade to latest 
> Hadoop 2 release: 2.10.2.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17250. EditLogTailer#triggerActiveLogRoll should handle thread Interrupted [hadoop]

2023-11-13 Thread via GitHub


haiyang1987 commented on PR #6266:
URL: https://github.com/apache/hadoop/pull/6266#issuecomment-1807711506

   Hi @Hexiaoqiao @ZanderXu @ayushtkn @zhangshuyan0, would you mind reviewing 
this PR when you have free time? Thank you very much!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17249. Fix TestDFSUtil.testIsValidName() unit test failure [hadoop]

2023-11-13 Thread via GitHub


GauthamBanasandra commented on PR #6249:
URL: https://github.com/apache/hadoop/pull/6249#issuecomment-1807689988

   @steveloughran could you please review this PR?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17242. Make congestion backoff time configurable. [hadoop]

2023-11-13 Thread via GitHub


hfutatzhanghb commented on PR #6227:
URL: https://github.com/apache/hadoop/pull/6227#issuecomment-1807663175

   Hi @ayushtkn, I have fixed the issues. Could you please review this PR again 
when you have free time? Thanks a lot.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] HDFS-17251. Optimize MountTableResolver#TRASH_PATTERN [hadoop]

2023-11-13 Thread via GitHub


hfutatzhanghb opened a new pull request, #6268:
URL: https://github.com/apache/hadoop/pull/6268

   ### Description of PR
   We should give the date string in MountTableResolver#TRASH_PATTERN a fixed 
length, because trash dirs follow the pattern below:
   
   /user/hdfs/.Trash/231113002000
   
   The date string always has a fixed length of 12 digits.
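
   A minimal sketch of the idea (the patterns below are illustrative, not the 
exact TRASH_PATTERN constant, and assume the trash checkpoint name format 
yyMMddHHmmss):

   import java.util.regex.Pattern;

   public class TrashPatternSketch {
     // Variable-length digit run: matches, but lets the regex engine do
     // more work than necessary.
     private static final Pattern LOOSE =
         Pattern.compile("/(Current|[0-9]+)/");

     // Fixed-length digit run: a trash checkpoint date (yyMMddHHmmss) is
     // always exactly 12 digits, so anchor the quantifier to {12}.
     private static final Pattern FIXED =
         Pattern.compile("/(Current|[0-9]{12})/");

     public static void main(String[] args) {
       String path = "/user/hdfs/.Trash/231113002000/dir/file";
       System.out.println(LOOSE.matcher(path).find()); // true
       System.out.println(FIXED.matcher(path).find()); // true
     }
   }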


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org