Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-23 Thread via GitHub


steveloughran merged PR #6763:
URL: https://github.com/apache/hadoop/pull/6763





Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-23 Thread via GitHub


hadoop-yetus commented on PR #6763:
URL: https://github.com/apache/hadoop/pull/6763#issuecomment-2071588551

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   6m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ branch-3.4 Compile Tests _ |
   | +0 :ok: |  mvndep  |   4m  5s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 45s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  compile  |   8m 52s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   8m  6s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   2m  2s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 11s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  shadedclient  |  20m  4s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   8m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m 56s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   7m 56s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 55s |  |  root: The patch generated 
0 new + 5 unchanged - 8 fixed = 5 total (was 13)  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 17s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 10s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 31s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m  8s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 153m 32s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6763/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6763 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux d47908eada51 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.4 / f7f022f8ef957ff32d3f13eaa3e7e7c245b75406 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6763/1/testReport/ |
   | Max. process+thread count | 3153 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure 
U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6763/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-22 Thread via GitHub


saxenapranav commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-2071476916

   Thank you very much, @steveloughran! I have opened a PR against branch-3.4:
   https://github.com/apache/hadoop/pull/6763. Thank you!





Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-22 Thread via GitHub


saxenapranav commented on PR #6763:
URL: https://github.com/apache/hadoop/pull/6763#issuecomment-2071475267

   AGGREGATED TEST RESULT
   
   
   HNS-OAuth
   
   
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 2
   [WARNING] Tests run: 617, Failures: 0, Errors: 0, Skipped: 73
   [WARNING] Tests run: 380, Failures: 0, Errors: 0, Skipped: 57
   
   
   HNS-SharedKey
   
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 3
   [WARNING] Tests run: 617, Failures: 0, Errors: 0, Skipped: 28
   [WARNING] Tests run: 380, Failures: 0, Errors: 0, Skipped: 41
   
   
   NonHNS-SharedKey
   
   
   
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 9
   [WARNING] Tests run: 601, Failures: 0, Errors: 0, Skipped: 268
   [WARNING] Tests run: 380, Failures: 0, Errors: 0, Skipped: 44
   
   
   AppendBlob-HNS-OAuth
   
   
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 2
   [WARNING] Tests run: 617, Failures: 0, Errors: 0, Skipped: 75
   [WARNING] Tests run: 380, Failures: 0, Errors: 0, Skipped: 81
   
   Time taken: 20 mins 36 secs.
   
   azureuser@Hadoop-VM-EAST2:~/hadoop/hadoop-tools/hadoop-azure$ git log
   commit f7f022f8ef957ff32d3f13eaa3e7e7c245b75406 (HEAD -> 
saxenapranav/footerBufferSizeFix-3.4, 
origin/saxenapranav/footerBufferSizeFix-3.4)
   Author: Pranav Saxena <108325433+saxenapra...@users.noreply.github.com>
   Date:   Mon Apr 22 23:06:12 2024 +0530
   
   HADOOP-19102. [ABFS] FooterReadBufferSize should not be greater than 
readBufferSize (#6617)
   
   
   Contributed by  Pranav Saxena
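
   For readers skimming the thread, the commit title states an invariant rather
   than a specific code change. A minimal sketch of that invariant, using
   hypothetical names (the merged patch itself modifies the ABFS input stream
   and configuration classes, not a class like this), is:

```java
// Illustrative only: a hypothetical helper, not part of the merged patch.
public final class FooterBufferSizing {

  private FooterBufferSizing() {
  }

  /** The footer read buffer must never exceed the overall read buffer size. */
  public static int effectiveFooterReadBufferSize(int footerReadBufferSize,
      int readBufferSize) {
    return Math.min(footerReadBufferSize, readBufferSize);
  }
}
```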





Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-22 Thread via GitHub


steveloughran merged PR #6617:
URL: https://github.com/apache/hadoop/pull/6617





Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-22 Thread via GitHub


hadoop-yetus commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-2069841215

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 43s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 10s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  41m  1s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  20m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   5m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 51s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   4m 25s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  43m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 59s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  19m 59s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  21m 55s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   5m 21s |  |  root: The patch generated 
0 new + 5 unchanged - 8 fixed = 5 total (was 13)  |
   | +1 :green_heart: |  mvnsite  |   2m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 13s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   4m 27s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  43m 48s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m 34s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 43s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 295m 37s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/16/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6617 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 0684a4e10c42 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 235426078ec6cb55f0165bc455fda06589e06218 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/16/testReport/ |
   | Max. process+thread count | 1838 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure 
U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/16/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-22 Thread via GitHub


saxenapranav commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-2069027408

   Thanks @steveloughran for the review. I have fixed the import ordering and
   changed `log.error` to `log.debug`.





Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-19 Thread via GitHub


steveloughran commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1572728278


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/functional/FutureIO.java:
##
@@ -141,9 +145,11 @@ public static  List awaitFuture(final 
Collection> collection)
   }
   return results;
 } catch (InterruptedException e) {
+  LOG.error("Execution of future interrupted ", e);

Review Comment:
   let's make these a debug() and let the caller handle the rest.



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/functional/FutureIO.java:
##
@@ -141,9 +145,11 @@ public static  List awaitFuture(final 
Collection> collection)
   }
   return results;
 } catch (InterruptedException e) {
+  LOG.error("Execution of future interrupted ", e);
   throw (InterruptedIOException) new InterruptedIOException(e.toString())
   .initCause(e);
 } catch (ExecutionException e) {
+  LOG.error("Execution of future failed with exception", e.getCause());

Review Comment:
   log this at debug; handlers up the stack can choose what to do, since it may
   be harmless



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/functional/FutureIO.java:
##
@@ -31,6 +32,8 @@
 import java.util.concurrent.Future;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
+import org.slf4j.Logger;

Review Comment:
   this shouldn't be in the same import block as java*.
   
   tip: you can set your IDE up to apply these import-ordering rules
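
   Taken together, the review comments above ask for two things: SLF4J imports
   in their own block (separate from the java.* imports), and the new log
   statements at debug level so callers decide what to surface. A rough,
   self-contained sketch of that shape (an illustration only, not the actual
   FutureIO patch):

```java
import java.io.IOException;
import java.io.InterruptedIOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Standalone illustration of the suggested logging and import style.
public final class AwaitFutureSketch {

  private static final Logger LOG =
      LoggerFactory.getLogger(AwaitFutureSketch.class);

  private AwaitFutureSketch() {
  }

  public static <T> T await(Future<T> future) throws IOException {
    try {
      return future.get();
    } catch (InterruptedException e) {
      // debug, not error: the caller decides how to handle the interruption
      LOG.debug("Execution of future interrupted ", e);
      throw (InterruptedIOException) new InterruptedIOException(e.toString())
          .initCause(e);
    } catch (ExecutionException e) {
      // debug, not error: the failure may be harmless further up the stack
      LOG.debug("Execution of future failed with exception", e.getCause());
      // FutureIO rethrows the inner cause; wrapping it keeps this sketch simple
      throw new IOException(e.getCause());
    }
  }
}
```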






Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-16 Thread via GitHub


saxenapranav commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-2060283113

   Hi @steveloughran, thank you very much for the review. I have addressed the
   comments and now use AssertJ in the new test util class. Class
   `ITestAbfsInputStreamFooter` now extends `AbstractAbfsScaleTest`, so it runs
   only with the -Dscale parameter. Could you please review the PR again? Thank
   you!
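
   For context, this is the pattern the comment describes: subclasses of
   `AbstractAbfsScaleTest` are skipped unless the scale-test profile is
   enabled via -Dscale. A hypothetical example (the class name and test body
   are made up; the real test in this PR is the footer-read test):

```java
package org.apache.hadoop.fs.azurebfs.services;

import org.junit.Test;

import org.apache.hadoop.fs.azurebfs.AbstractAbfsScaleTest;

// Hypothetical example, not part of the patch: extending AbstractAbfsScaleTest
// keeps heavyweight tests out of the default integration-test pass and runs
// them only when the scale profile (-Dscale) is enabled.
public class ITestFooterReadScaleExample extends AbstractAbfsScaleTest {

  public ITestFooterReadScaleExample() throws Exception {
    super();
  }

  @Test
  public void testFooterReadOnLargeFile() throws Exception {
    // a real test would create a large file here and exercise footer reads
  }
}
```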





Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-16 Thread via GitHub


saxenapranav commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1567168026


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStreamTestUtils.java:
##
@@ -0,0 +1,181 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.lang.reflect.Field;
+import java.util.Map;
+import java.util.Random;
+import java.util.UUID;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore;
+import org.apache.hadoop.fs.azurebfs.utils.UriUtils;
+
+import static 
org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest.SHORTENED_GUID_LEN;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotEquals;
+
+public class AbfsInputStreamTestUtils {
+
+  public static final int HUNDRED = 100;
+
+  private final AbstractAbfsIntegrationTest abstractAbfsIntegrationTest;
+
+  public AbfsInputStreamTestUtils(AbstractAbfsIntegrationTest 
abstractAbfsIntegrationTest) {
+this.abstractAbfsIntegrationTest = abstractAbfsIntegrationTest;
+  }
+
+  private Path path(String filepath) throws IOException {
+return abstractAbfsIntegrationTest.getFileSystem().makeQualified(
+new Path(getTestPath(), getUniquePath(filepath)));
+  }
+
+  private Path getTestPath() {
+Path path = new Path(UriUtils.generateUniqueTestPath());
+return path;
+  }
+
+  /**
+   * Generate a unique path using the given filepath.
+   * @param filepath path string
+   * @return unique path created from filepath and a GUID
+   */
+  private Path getUniquePath(String filepath) {
+if (filepath.equals("/")) {
+  return new Path(filepath);
+}
+return new Path(filepath + StringUtils
+.right(UUID.randomUUID().toString(), SHORTENED_GUID_LEN));
+  }
+
+  public AzureBlobFileSystem getFileSystem(boolean readSmallFilesCompletely)
+  throws IOException {
+final AzureBlobFileSystem fs = abstractAbfsIntegrationTest.getFileSystem();
+abstractAbfsIntegrationTest.getAbfsStore(fs).getAbfsConfiguration()
+.setReadSmallFilesCompletely(readSmallFilesCompletely);
+abstractAbfsIntegrationTest.getAbfsStore(fs).getAbfsConfiguration()
+.setOptimizeFooterRead(false);
+abstractAbfsIntegrationTest.getAbfsStore(fs).getAbfsConfiguration()
+.setIsChecksumValidationEnabled(true);
+return fs;
+  }
+
+  public byte[] getRandomBytesArray(int length) {
+final byte[] b = new byte[length];
+new Random().nextBytes(b);
+return b;
+  }
+
+  public Path createFileWithContent(FileSystem fs, String fileName,
+  byte[] fileContent) throws IOException {
+Path testFilePath = path(fileName);
+try (FSDataOutputStream oStream = fs.create(testFilePath)) {
+  oStream.write(fileContent);
+  oStream.flush();
+}
+return testFilePath;
+  }
+
+  public AzureBlobFileSystemStore getAbfsStore(FileSystem fs)
+  throws NoSuchFieldException, IllegalAccessException {
+AzureBlobFileSystem abfs = (AzureBlobFileSystem) fs;
+Field abfsStoreField = AzureBlobFileSystem.class
+.getDeclaredField("abfsStore");
+abfsStoreField.setAccessible(true);
+return (AzureBlobFileSystemStore) abfsStoreField.get(abfs);
+  }
+
+  public Map getInstrumentationMap(FileSystem fs)
+  throws NoSuchFieldException, IllegalAccessException {
+AzureBlobFileSystem abfs = (AzureBlobFileSystem) fs;
+Field abfsCountersField = AzureBlobFileSystem.class
+.getDeclaredField("abfsCounters");
+abfsCountersField.setAccessible(true);
+AbfsCounters abfsCounters = (AbfsCounters) abfsCountersField.get(abfs);
+return abfsCounters.toMap();
+  }
+
+  public void assertContentReadCorrectly(byte[] actualFileContent, int from,

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-16 Thread via GitHub


hadoop-yetus commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-2059255931

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 22s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 23s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  10m 10s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   9m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   2m 15s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m  5s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   9m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   8m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   2m  5s |  |  root: The patch generated 
0 new + 5 unchanged - 8 fixed = 5 total (was 13)  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 30s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 55s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 26s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m  4s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 160m 41s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/15/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6617 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 66597cd82f27 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8aad99518b4073db6dbe140139710aaba4d3398f |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/15/testReport/ |
   | Max. process+thread count | 3152 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure 
U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/15/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-16 Thread via GitHub


hadoop-yetus commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-2059240094

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  12m 22s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 31s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  38m 38s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  17m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   4m 21s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 35s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   4m  1s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  36m 28s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 58s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  17m 58s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  16m 43s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 41s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/13/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 1 new + 5 unchanged - 8 fixed = 6 total (was 13)  
|
   | +1 :green_heart: |  mvnsite  |   2m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   4m 11s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 52s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 46s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 25s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 57s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 270m 52s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/13/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6617 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux b41aa4f4d6ff 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / fbbf9faf71f82e7e90ecc0efd939a8a4f12f6cb1 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/13/testReport/ |
   | Max. process+thread count | 3152 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure 
U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/13/console |
   | versions | git

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-16 Thread via GitHub


hadoop-yetus commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-2059115354

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m  5s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m  0s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   8m 10s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   2m  8s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 31s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   8m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   9m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   2m 12s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/14/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 1 new + 5 unchanged - 8 fixed = 6 total (was 13)  
|
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 29s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  15m 59s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 57s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 157m 19s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/14/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6617 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 617ccb0b28ea 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3cf5c721b0407feb85de216e0a4b942928ad21fd |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/14/testReport/ |
   | Max. process+thread count | 3153 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure 
U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/14/console |
   | versions | git

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-16 Thread via GitHub


hadoop-yetus commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-2058922165

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  22m 58s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 33s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  39m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  20m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   5m  7s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 36s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   4m  2s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  40m 47s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 34s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m  7s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  22m  7s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  20m  5s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   5m 35s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/11/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 3 new + 5 unchanged - 8 fixed = 8 total (was 13)  
|
   | +1 :green_heart: |  mvnsite  |   3m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   4m 20s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 44s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 44s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 309m 57s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6617 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 57dac4ed779c 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / bd4f396e6a563f85aafebc8384c3853bdc1d968c |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/11/testReport/ |
   | Max. process+thread count | 1272 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure 
U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/11/console |
   | versions | git

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-16 Thread via GitHub


hadoop-yetus commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-2058878425

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   6m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 58s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 40s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   9m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   2m 23s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 19s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 20s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   8m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   8m 16s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   2m  4s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/12/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 1 new + 5 unchanged - 8 fixed = 6 total (was 13)  
|
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 32s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m  7s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 56s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 165m 22s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6617 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 5bed7fbf70ff 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5bf2321301b3dc5ab184e7cf6b02291a9eee7538 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/12/testReport/ |
   | Max. process+thread count | 2557 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure 
U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/12/console |
   | versions | git

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-16 Thread via GitHub


saxenapranav commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1567171371


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -54,9 +63,44 @@ public class ITestAbfsInputStreamReadFooter extends 
ITestAbfsInputStream {
   private static final int TEN = 10;
   private static final int TWENTY = 20;
 
+  private static ExecutorService executorService;
+
+  private static final int SIZE_256_KB = 256 * ONE_KB;
+
+  private static final Integer[] FILE_SIZES = {

Review Comment:
   As suggested, I have made the class `ITestAbfsInputStreamFooter` extend
   `AbstractAbfsScaleTest`.






Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-16 Thread via GitHub


saxenapranav commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1567168026


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStreamTestUtils.java:
##
@@ -0,0 +1,181 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.lang.reflect.Field;
+import java.util.Map;
+import java.util.Random;
+import java.util.UUID;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore;
+import org.apache.hadoop.fs.azurebfs.utils.UriUtils;
+
+import static 
org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest.SHORTENED_GUID_LEN;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotEquals;
+
+public class AbfsInputStreamTestUtils {
+
+  public static final int HUNDRED = 100;
+
+  private final AbstractAbfsIntegrationTest abstractAbfsIntegrationTest;
+
+  public AbfsInputStreamTestUtils(AbstractAbfsIntegrationTest 
abstractAbfsIntegrationTest) {
+this.abstractAbfsIntegrationTest = abstractAbfsIntegrationTest;
+  }
+
+  private Path path(String filepath) throws IOException {
+return abstractAbfsIntegrationTest.getFileSystem().makeQualified(
+new Path(getTestPath(), getUniquePath(filepath)));
+  }
+
+  private Path getTestPath() {
+Path path = new Path(UriUtils.generateUniqueTestPath());
+return path;
+  }
+
+  /**
+   * Generate a unique path using the given filepath.
+   * @param filepath path string
+   * @return unique path created from filepath and a GUID
+   */
+  private Path getUniquePath(String filepath) {
+if (filepath.equals("/")) {
+  return new Path(filepath);
+}
+return new Path(filepath + StringUtils
+.right(UUID.randomUUID().toString(), SHORTENED_GUID_LEN));
+  }
+
+  public AzureBlobFileSystem getFileSystem(boolean readSmallFilesCompletely)
+  throws IOException {
+final AzureBlobFileSystem fs = abstractAbfsIntegrationTest.getFileSystem();
+abstractAbfsIntegrationTest.getAbfsStore(fs).getAbfsConfiguration()
+.setReadSmallFilesCompletely(readSmallFilesCompletely);
+abstractAbfsIntegrationTest.getAbfsStore(fs).getAbfsConfiguration()
+.setOptimizeFooterRead(false);
+abstractAbfsIntegrationTest.getAbfsStore(fs).getAbfsConfiguration()
+.setIsChecksumValidationEnabled(true);
+return fs;
+  }
+
+  public byte[] getRandomBytesArray(int length) {
+final byte[] b = new byte[length];
+new Random().nextBytes(b);
+return b;
+  }
+
+  public Path createFileWithContent(FileSystem fs, String fileName,
+  byte[] fileContent) throws IOException {
+Path testFilePath = path(fileName);
+try (FSDataOutputStream oStream = fs.create(testFilePath)) {
+  oStream.write(fileContent);
+  oStream.flush();
+}
+return testFilePath;
+  }
+
+  public AzureBlobFileSystemStore getAbfsStore(FileSystem fs)
+  throws NoSuchFieldException, IllegalAccessException {
+AzureBlobFileSystem abfs = (AzureBlobFileSystem) fs;
+Field abfsStoreField = AzureBlobFileSystem.class
+.getDeclaredField("abfsStore");
+abfsStoreField.setAccessible(true);
+return (AzureBlobFileSystemStore) abfsStoreField.get(abfs);
+  }
+
+  public Map getInstrumentationMap(FileSystem fs)
+  throws NoSuchFieldException, IllegalAccessException {
+AzureBlobFileSystem abfs = (AzureBlobFileSystem) fs;
+Field abfsCountersField = AzureBlobFileSystem.class
+.getDeclaredField("abfsCounters");
+abfsCountersField.setAccessible(true);
+AbfsCounters abfsCounters = (AbfsCounters) abfsCountersField.get(abfs);
+return abfsCounters.toMap();
+  }
+
+  public void assertContentReadCorrectly(byte[] actualFileContent, int from,

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-16 Thread via GitHub


saxenapranav commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1567090171


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStreamTestUtils.java:
##
@@ -0,0 +1,181 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.lang.reflect.Field;
+import java.util.Map;
+import java.util.Random;
+import java.util.UUID;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore;
+import org.apache.hadoop.fs.azurebfs.utils.UriUtils;
+
+import static 
org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest.SHORTENED_GUID_LEN;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotEquals;
+
+public class AbfsInputStreamTestUtils {
+
+  public static final int HUNDRED = 100;
+
+  private final AbstractAbfsIntegrationTest abstractAbfsIntegrationTest;
+
+  public AbfsInputStreamTestUtils(AbstractAbfsIntegrationTest 
abstractAbfsIntegrationTest) {
+this.abstractAbfsIntegrationTest = abstractAbfsIntegrationTest;
+  }
+
+  private Path path(String filepath) throws IOException {
+return abstractAbfsIntegrationTest.getFileSystem().makeQualified(
+new Path(getTestPath(), getUniquePath(filepath)));
+  }
+
+  private Path getTestPath() {
+Path path = new Path(UriUtils.generateUniqueTestPath());
+return path;
+  }
+
+  /**
+   * Generate a unique path using the given filepath.
+   * @param filepath path string
+   * @return unique path created from filepath and a GUID
+   */
+  private Path getUniquePath(String filepath) {
+if (filepath.equals("/")) {
+  return new Path(filepath);
+}
+return new Path(filepath + StringUtils
+.right(UUID.randomUUID().toString(), SHORTENED_GUID_LEN));
+  }
+
+  public AzureBlobFileSystem getFileSystem(boolean readSmallFilesCompletely)
+  throws IOException {
+final AzureBlobFileSystem fs = abstractAbfsIntegrationTest.getFileSystem();
+abstractAbfsIntegrationTest.getAbfsStore(fs).getAbfsConfiguration()
+.setReadSmallFilesCompletely(readSmallFilesCompletely);
+abstractAbfsIntegrationTest.getAbfsStore(fs).getAbfsConfiguration()
+.setOptimizeFooterRead(false);
+abstractAbfsIntegrationTest.getAbfsStore(fs).getAbfsConfiguration()
+.setIsChecksumValidationEnabled(true);
+return fs;
+  }
+
+  public byte[] getRandomBytesArray(int length) {
+final byte[] b = new byte[length];
+new Random().nextBytes(b);
+return b;
+  }
+
+  public Path createFileWithContent(FileSystem fs, String fileName,
+  byte[] fileContent) throws IOException {
+Path testFilePath = path(fileName);
+try (FSDataOutputStream oStream = fs.create(testFilePath)) {
+  oStream.write(fileContent);
+  oStream.flush();
+}
+return testFilePath;
+  }
+
+  public AzureBlobFileSystemStore getAbfsStore(FileSystem fs)
+  throws NoSuchFieldException, IllegalAccessException {
+AzureBlobFileSystem abfs = (AzureBlobFileSystem) fs;
+Field abfsStoreField = AzureBlobFileSystem.class
+.getDeclaredField("abfsStore");
+abfsStoreField.setAccessible(true);
+return (AzureBlobFileSystemStore) abfsStoreField.get(abfs);
+  }
+
+  public Map getInstrumentationMap(FileSystem fs)
+  throws NoSuchFieldException, IllegalAccessException {
+AzureBlobFileSystem abfs = (AzureBlobFileSystem) fs;
+Field abfsCountersField = AzureBlobFileSystem.class
+.getDeclaredField("abfsCounters");
+abfsCountersField.setAccessible(true);
+AbfsCounters abfsCounters = (AbfsCounters) abfsCountersField.get(abfs);
+return abfsCounters.toMap();
+  }
+
+  public void assertContentReadCorrectly(byte[] actualFileContent, int from,

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-16 Thread via GitHub


saxenapranav commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1567089420


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStreamTestUtils.java:
##
@@ -0,0 +1,181 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.lang.reflect.Field;
+import java.util.Map;
+import java.util.Random;
+import java.util.UUID;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore;
+import org.apache.hadoop.fs.azurebfs.utils.UriUtils;
+
+import static 
org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest.SHORTENED_GUID_LEN;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotEquals;
+
+public class AbfsInputStreamTestUtils {
+
+  public static final int HUNDRED = 100;
+
+  private final AbstractAbfsIntegrationTest abstractAbfsIntegrationTest;
+
+  public AbfsInputStreamTestUtils(AbstractAbfsIntegrationTest 
abstractAbfsIntegrationTest) {
+this.abstractAbfsIntegrationTest = abstractAbfsIntegrationTest;
+  }
+
+  private Path path(String filepath) throws IOException {
+return abstractAbfsIntegrationTest.getFileSystem().makeQualified(
+new Path(getTestPath(), getUniquePath(filepath)));
+  }
+
+  private Path getTestPath() {
+Path path = new Path(UriUtils.generateUniqueTestPath());
+return path;
+  }
+
+  /**
+   * Generate a unique path using the given filepath.
+   * @param filepath path string
+   * @return unique path created from filepath and a GUID
+   */
+  private Path getUniquePath(String filepath) {
+if (filepath.equals("/")) {
+  return new Path(filepath);
+}
+return new Path(filepath + StringUtils
+.right(UUID.randomUUID().toString(), SHORTENED_GUID_LEN));
+  }
+
+  public AzureBlobFileSystem getFileSystem(boolean readSmallFilesCompletely)
+  throws IOException {
+final AzureBlobFileSystem fs = abstractAbfsIntegrationTest.getFileSystem();
+abstractAbfsIntegrationTest.getAbfsStore(fs).getAbfsConfiguration()
+.setReadSmallFilesCompletely(readSmallFilesCompletely);
+abstractAbfsIntegrationTest.getAbfsStore(fs).getAbfsConfiguration()
+.setOptimizeFooterRead(false);
+abstractAbfsIntegrationTest.getAbfsStore(fs).getAbfsConfiguration()
+.setIsChecksumValidationEnabled(true);
+return fs;
+  }
+
+  public byte[] getRandomBytesArray(int length) {
+final byte[] b = new byte[length];
+new Random().nextBytes(b);
+return b;
+  }
+
+  public Path createFileWithContent(FileSystem fs, String fileName,
+  byte[] fileContent) throws IOException {
+Path testFilePath = path(fileName);
+try (FSDataOutputStream oStream = fs.create(testFilePath)) {
+  oStream.write(fileContent);
+  oStream.flush();
+}
+return testFilePath;
+  }
+
+  public AzureBlobFileSystemStore getAbfsStore(FileSystem fs)
+  throws NoSuchFieldException, IllegalAccessException {
+AzureBlobFileSystem abfs = (AzureBlobFileSystem) fs;
+Field abfsStoreField = AzureBlobFileSystem.class
+.getDeclaredField("abfsStore");
+abfsStoreField.setAccessible(true);
+return (AzureBlobFileSystemStore) abfsStoreField.get(abfs);
+  }
+
+  public Map getInstrumentationMap(FileSystem fs)
+  throws NoSuchFieldException, IllegalAccessException {
+AzureBlobFileSystem abfs = (AzureBlobFileSystem) fs;
+Field abfsCountersField = AzureBlobFileSystem.class
+.getDeclaredField("abfsCounters");
+abfsCountersField.setAccessible(true);
+AbfsCounters abfsCounters = (AbfsCounters) abfsCountersField.get(abfs);
+return abfsCounters.toMap();
+  }
+
+  public void assertContentReadCorrectly(byte[] actualFileContent, int from,

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-16 Thread via GitHub


saxenapranav commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1567087997


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStreamTestUtils.java:
##
@@ -0,0 +1,181 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.lang.reflect.Field;
+import java.util.Map;
+import java.util.Random;
+import java.util.UUID;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore;
+import org.apache.hadoop.fs.azurebfs.utils.UriUtils;
+
+import static 
org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest.SHORTENED_GUID_LEN;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotEquals;
+
+public class AbfsInputStreamTestUtils {
+
+  public static final int HUNDRED = 100;
+
+  private final AbstractAbfsIntegrationTest abstractAbfsIntegrationTest;
+
+  public AbfsInputStreamTestUtils(AbstractAbfsIntegrationTest 
abstractAbfsIntegrationTest) {
+this.abstractAbfsIntegrationTest = abstractAbfsIntegrationTest;
+  }
+
+  private Path path(String filepath) throws IOException {
+return abstractAbfsIntegrationTest.getFileSystem().makeQualified(
+new Path(getTestPath(), getUniquePath(filepath)));
+  }
+
+  private Path getTestPath() {
+Path path = new Path(UriUtils.generateUniqueTestPath());
+return path;
+  }
+
+  /**
+   * Generate a unique path using the given filepath.
+   * @param filepath path string
+   * @return unique path created from filepath and a GUID
+   */
+  private Path getUniquePath(String filepath) {
+if (filepath.equals("/")) {
+  return new Path(filepath);
+}
+return new Path(filepath + StringUtils
+.right(UUID.randomUUID().toString(), SHORTENED_GUID_LEN));
+  }
+
+  public AzureBlobFileSystem getFileSystem(boolean readSmallFilesCompletely)
+  throws IOException {
+final AzureBlobFileSystem fs = abstractAbfsIntegrationTest.getFileSystem();
+abstractAbfsIntegrationTest.getAbfsStore(fs).getAbfsConfiguration()
+.setReadSmallFilesCompletely(readSmallFilesCompletely);
+abstractAbfsIntegrationTest.getAbfsStore(fs).getAbfsConfiguration()
+.setOptimizeFooterRead(false);
+abstractAbfsIntegrationTest.getAbfsStore(fs).getAbfsConfiguration()
+.setIsChecksumValidationEnabled(true);
+return fs;
+  }
+
+  public byte[] getRandomBytesArray(int length) {
+final byte[] b = new byte[length];
+new Random().nextBytes(b);
+return b;
+  }
+
+  public Path createFileWithContent(FileSystem fs, String fileName,
+  byte[] fileContent) throws IOException {
+Path testFilePath = path(fileName);
+try (FSDataOutputStream oStream = fs.create(testFilePath)) {
+  oStream.write(fileContent);
+  oStream.flush();
+}
+return testFilePath;
+  }
+
+  public AzureBlobFileSystemStore getAbfsStore(FileSystem fs)
+  throws NoSuchFieldException, IllegalAccessException {
+AzureBlobFileSystem abfs = (AzureBlobFileSystem) fs;
+Field abfsStoreField = AzureBlobFileSystem.class
+.getDeclaredField("abfsStore");
+abfsStoreField.setAccessible(true);
+return (AzureBlobFileSystemStore) abfsStoreField.get(abfs);
+  }
+
+  public Map getInstrumentationMap(FileSystem fs)
+  throws NoSuchFieldException, IllegalAccessException {
+AzureBlobFileSystem abfs = (AzureBlobFileSystem) fs;
+Field abfsCountersField = AzureBlobFileSystem.class
+.getDeclaredField("abfsCounters");
+abfsCountersField.setAccessible(true);
+AbfsCounters abfsCounters = (AbfsCounters) abfsCountersField.get(abfs);
+return abfsCounters.toMap();
+  }
+
+  public void assertContentReadCorrectly(byte[] actualFileContent, int from,

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-16 Thread via GitHub


saxenapranav commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1567074053


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStreamTestUtils.java:
##
@@ -0,0 +1,181 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.lang.reflect.Field;
+import java.util.Map;
+import java.util.Random;
+import java.util.UUID;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore;
+import org.apache.hadoop.fs.azurebfs.utils.UriUtils;
+
+import static 
org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest.SHORTENED_GUID_LEN;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotEquals;
+
+public class AbfsInputStreamTestUtils {
+
+  public static final int HUNDRED = 100;
+
+  private final AbstractAbfsIntegrationTest abstractAbfsIntegrationTest;
+
+  public AbfsInputStreamTestUtils(AbstractAbfsIntegrationTest 
abstractAbfsIntegrationTest) {
+this.abstractAbfsIntegrationTest = abstractAbfsIntegrationTest;
+  }
+
+  private Path path(String filepath) throws IOException {
+return abstractAbfsIntegrationTest.getFileSystem().makeQualified(
+new Path(getTestPath(), getUniquePath(filepath)));
+  }
+
+  private Path getTestPath() {
+Path path = new Path(UriUtils.generateUniqueTestPath());
+return path;
+  }
+
+  /**
+   * Generate a unique path using the given filepath.
+   * @param filepath path string
+   * @return unique path created from filepath and a GUID
+   */
+  private Path getUniquePath(String filepath) {
+if (filepath.equals("/")) {
+  return new Path(filepath);
+}
+return new Path(filepath + StringUtils
+.right(UUID.randomUUID().toString(), SHORTENED_GUID_LEN));
+  }
+
+  public AzureBlobFileSystem getFileSystem(boolean readSmallFilesCompletely)
+  throws IOException {
+final AzureBlobFileSystem fs = abstractAbfsIntegrationTest.getFileSystem();
+abstractAbfsIntegrationTest.getAbfsStore(fs).getAbfsConfiguration()
+.setReadSmallFilesCompletely(readSmallFilesCompletely);
+abstractAbfsIntegrationTest.getAbfsStore(fs).getAbfsConfiguration()
+.setOptimizeFooterRead(false);
+abstractAbfsIntegrationTest.getAbfsStore(fs).getAbfsConfiguration()
+.setIsChecksumValidationEnabled(true);
+return fs;
+  }
+
+  public byte[] getRandomBytesArray(int length) {
+final byte[] b = new byte[length];
+new Random().nextBytes(b);
+return b;
+  }
+
+  public Path createFileWithContent(FileSystem fs, String fileName,
+  byte[] fileContent) throws IOException {
+Path testFilePath = path(fileName);
+try (FSDataOutputStream oStream = fs.create(testFilePath)) {
+  oStream.write(fileContent);
+  oStream.flush();
+}
+return testFilePath;
+  }
+
+  public AzureBlobFileSystemStore getAbfsStore(FileSystem fs)

Review Comment:
   I have removed this method. It was only needed to obtain the configuration of 
the given file system. For that, I have added a method 
`AbstractAbfsIntegrationTest#getConfiguration(AzureBlobFileSystem fs)` which 
returns the configuration via `fs.getAbfsStore().getAbfsConfiguration()`.
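
   A minimal sketch of what such an accessor could look like (the method name and 
call chain are taken from the comment above; the exact modifier and placement are 
assumptions, not the committed patch):

   ```java
   // In AbstractAbfsIntegrationTest (sketch only): expose the configuration of a
   // given filesystem instance, assuming getAbfsStore() is reachable from tests.
   protected AbfsConfiguration getConfiguration(AzureBlobFileSystem fs) {
     return fs.getAbfsStore().getAbfsConfiguration();
   }
   ```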



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional 

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-16 Thread via GitHub


saxenapranav commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1567066497


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStreamTestUtils.java:
##
@@ -0,0 +1,181 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.lang.reflect.Field;
+import java.util.Map;
+import java.util.Random;
+import java.util.UUID;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore;
+import org.apache.hadoop.fs.azurebfs.utils.UriUtils;
+
+import static 
org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest.SHORTENED_GUID_LEN;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotEquals;
+
+public class AbfsInputStreamTestUtils {
+
+  public static final int HUNDRED = 100;
+
+  private final AbstractAbfsIntegrationTest abstractAbfsIntegrationTest;
+
+  public AbfsInputStreamTestUtils(AbstractAbfsIntegrationTest 
abstractAbfsIntegrationTest) {
+this.abstractAbfsIntegrationTest = abstractAbfsIntegrationTest;
+  }
+
+  private Path path(String filepath) throws IOException {
+return abstractAbfsIntegrationTest.getFileSystem().makeQualified(
+new Path(getTestPath(), getUniquePath(filepath)));
+  }
+
+  private Path getTestPath() {
+Path path = new Path(UriUtils.generateUniqueTestPath());
+return path;
+  }
+
+  /**
+   * Generate a unique path using the given filepath.
+   * @param filepath path string
+   * @return unique path created from filepath and a GUID
+   */
+  private Path getUniquePath(String filepath) {
+if (filepath.equals("/")) {
+  return new Path(filepath);
+}
+return new Path(filepath + StringUtils
+.right(UUID.randomUUID().toString(), SHORTENED_GUID_LEN));
+  }
+
+  public AzureBlobFileSystem getFileSystem(boolean readSmallFilesCompletely)

Review Comment:
   Added javadocs to all the public methods of the class.
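
   For illustration only, a hypothetical example of such a javadoc on 
`getFileSystem(boolean)` (the wording is an assumption and not quoted from the 
patch; the behaviour described matches the quoted diff above):

   ```java
   /**
    * Returns the test AzureBlobFileSystem with small-file full reads switched
    * on or off per the flag, footer-read optimization disabled and checksum
    * validation enabled.
    *
    * @param readSmallFilesCompletely whether small files should be read in full.
    * @return the configured filesystem.
    * @throws IOException if the filesystem cannot be created.
    */
   public AzureBlobFileSystem getFileSystem(boolean readSmallFilesCompletely)
       throws IOException {
     // ... body as in the quoted diff above ...
   }
   ```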



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-16 Thread via GitHub


saxenapranav commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1567064541


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/functional/FutureIO.java:
##
@@ -114,6 +117,75 @@ public static <T> T awaitFuture(final Future<T> future,
 }
   }
 
+  /**
+   * Evaluates a collection of futures and returns their results as a list.
+   * 
+   * This method blocks until all futures in the collection have completed.
+   * If any future throws an exception during its execution, this method
+   * extracts and rethrows that exception.
+   * 
+   *
+   * @param collection collection of futures to be evaluated
+   * @param <T> type of the result.
+   * @return the list of future's result, if all went well.
+   * @throws InterruptedIOException future was interrupted
+   * @throws IOException if something went wrong
+   * @throws RuntimeException any nested RTE thrown
+   */
+  public static <T> List<T> awaitFuture(final Collection<Future<T>> collection)
+      throws InterruptedIOException, IOException, RuntimeException {
+    List<T> results = new ArrayList<>();
+    try {
+      for (Future<T> future : collection) {
+        results.add(future.get());
+      }
+      return results;
+    } catch (InterruptedException e) {
+      throw (InterruptedIOException) new InterruptedIOException(e.toString())
+          .initCause(e);
+    } catch (ExecutionException e) {
+      return raiseInnerCause(e);
+    }
+  }
+
+  /**
+   * Evaluates a collection of futures and returns their results as a list,
+   * but only waits up to the specified timeout for each future to complete.
+   * 
+   * This method blocks until all futures in the collection have completed or
+   * the timeout expires, whichever happens first. If any future throws an
+   * exception during its execution, this method extracts and rethrows that 
exception.
+   * 
+   *
+   * @param collection collection of futures to be evaluated
+   * @param timeout timeout to wait
+   * @param unit time unit.
+   * @param <T> type of the result.
+   * @return the list of future's result, if all went well.
+   * @throws InterruptedIOException future was interrupted
+   * @throws IOException if something went wrong
+   * @throws RuntimeException any nested RTE thrown
+   * @throws TimeoutException the future timed out.
+   */
+  public static <T> List<T> awaitFuture(final Collection<Future<T>> collection,

Review Comment:
   1. Renamed the method to `awaitAllFutures`.
   2. It now takes a `java.time.Duration` timeout argument. A sketch of the 
resulting signature is shown below.
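
   A rough sketch of how the renamed utility could look after these two changes 
(signature shape only; the javadoc, exact timeout semantics and error handling 
follow the committed patch, not this sketch):

   ```java
   public static <T> List<T> awaitAllFutures(final Collection<Future<T>> collection,
       final Duration timeout)
       throws InterruptedIOException, IOException, RuntimeException, TimeoutException {
     List<T> results = new ArrayList<>();
     try {
       for (Future<T> future : collection) {
         // Assumption: the timeout is applied to each future individually.
         results.add(future.get(timeout.toMillis(), TimeUnit.MILLISECONDS));
       }
       return results;
     } catch (InterruptedException e) {
       throw (InterruptedIOException) new InterruptedIOException(e.toString())
           .initCause(e);
     } catch (ExecutionException e) {
       return raiseInnerCause(e);
     }
   }
   ```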



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/functional/FutureIO.java:
##
@@ -114,6 +117,75 @@ public static <T> T awaitFuture(final Future<T> future,
 }
   }
 
+  /**
+   * Evaluates a collection of futures and returns their results as a list.
+   * 
+   * This method blocks until all futures in the collection have completed.
+   * If any future throws an exception during its execution, this method
+   * extracts and rethrows that exception.
+   * 
+   *
+   * @param collection collection of futures to be evaluated
+   * @param <T> type of the result.
+   * @return the list of future's result, if all went well.
+   * @throws InterruptedIOException future was interrupted
+   * @throws IOException if something went wrong
+   * @throws RuntimeException any nested RTE thrown
+   */
+  public static <T> List<T> awaitFuture(final Collection<Future<T>> collection)

Review Comment:
   Refactored the name.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-16 Thread via GitHub


saxenapranav commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1567063343


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/functional/FutureIO.java:
##
@@ -114,6 +117,75 @@ public static <T> T awaitFuture(final Future<T> future,
 }
   }
 
+  /**
+   * Evaluates a collection of futures and returns their results as a list.
+   * 
+   * This method blocks until all futures in the collection have completed.
+   * If any future throws an exception during its execution, this method
+   * extracts and rethrows that exception.
+   * 
+   *
+   * @param collection collection of futures to be evaluated
+   * @param <T> type of the result.
+   * @return the list of future's result, if all went well.
+   * @throws InterruptedIOException future was interrupted
+   * @throws IOException if something went wrong
+   * @throws RuntimeException any nested RTE thrown
+   */
+  public static <T> List<T> awaitFuture(final Collection<Future<T>> collection)
+      throws InterruptedIOException, IOException, RuntimeException {
+    List<T> results = new ArrayList<>();
+    try {
+      for (Future<T> future : collection) {
+        results.add(future.get());
+      }
+      return results;
+    } catch (InterruptedException e) {
+      throw (InterruptedIOException) new InterruptedIOException(e.toString())
+          .initCause(e);
+    } catch (ExecutionException e) {
+      return raiseInnerCause(e);

Review Comment:
   Added error logs.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-04-03 Thread via GitHub


steveloughran commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1550219003


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/functional/FutureIO.java:
##
@@ -114,6 +117,75 @@ public static <T> T awaitFuture(final Future<T> future,
 }
   }
 
+  /**
+   * Evaluates a collection of futures and returns their results as a list.
+   * 
+   * This method blocks until all futures in the collection have completed.
+   * If any future throws an exception during its execution, this method
+   * extracts and rethrows that exception.
+   * 
+   *
+   * @param collection collection of futures to be evaluated
+   * @param <T> type of the result.
+   * @return the list of future's result, if all went well.
+   * @throws InterruptedIOException future was interrupted
+   * @throws IOException if something went wrong
+   * @throws RuntimeException any nested RTE thrown
+   */
+  public static <T> List<T> awaitFuture(final Collection<Future<T>> collection)
+      throws InterruptedIOException, IOException, RuntimeException {
+    List<T> results = new ArrayList<>();
+    try {
+      for (Future<T> future : collection) {
+        results.add(future.get());
+      }
+      return results;
+    } catch (InterruptedException e) {
+      throw (InterruptedIOException) new InterruptedIOException(e.toString())
+          .initCause(e);
+    } catch (ExecutionException e) {
+      return raiseInnerCause(e);

Review Comment:
   could you do a log at debug of this, as i've discovered how much of a PITA 
it is debugging future-related failures. thanks.
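
   For reference, a minimal sketch of the kind of debug logging being asked for 
(placement and wording are assumptions, not the committed change; LOG is assumed 
to be the class's SLF4J logger):

   ```java
   } catch (ExecutionException e) {
     // Log the inner failure at debug before rethrowing it, to make
     // future-related failures easier to trace.
     LOG.debug("Execution of future failed with exception", e.getCause());
     return raiseInnerCause(e);
   }
   ```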



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStreamTestUtils.java:
##
@@ -0,0 +1,181 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.lang.reflect.Field;
+import java.util.Map;
+import java.util.Random;
+import java.util.UUID;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore;
+import org.apache.hadoop.fs.azurebfs.utils.UriUtils;
+
+import static 
org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest.SHORTENED_GUID_LEN;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotEquals;
+
+public class AbfsInputStreamTestUtils {
+
+  public static final int HUNDRED = 100;
+
+  private final AbstractAbfsIntegrationTest abstractAbfsIntegrationTest;
+
+  public AbfsInputStreamTestUtils(AbstractAbfsIntegrationTest 
abstractAbfsIntegrationTest) {
+this.abstractAbfsIntegrationTest = abstractAbfsIntegrationTest;
+  }
+
+  private Path path(String filepath) throws IOException {
+return abstractAbfsIntegrationTest.getFileSystem().makeQualified(
+new Path(getTestPath(), getUniquePath(filepath)));
+  }
+
+  private Path getTestPath() {
+Path path = new Path(UriUtils.generateUniqueTestPath());
+return path;
+  }
+
+  /**
+   * Generate a unique path using the given filepath.
+   * @param filepath path string
+   * @return unique path created from filepath and a GUID
+   */
+  private Path getUniquePath(String filepath) {
+if (filepath.equals("/")) {
+  return new Path(filepath);
+}
+return new Path(filepath + StringUtils
+.right(UUID.randomUUID().toString(), SHORTENED_GUID_LEN));
+  }
+
+  public AzureBlobFileSystem getFileSystem(boolean readSmallFilesCompletely)
+  throws IOException {
+final AzureBlobFileSystem fs = abstractAbfsIntegrationTest.getFileSystem();
+abstractAbfsIntegrationTest.getAbfsStore(fs).getAbfsConfiguration()
+.setReadSmallFilesCompletely(readSmallFilesCompletely);
+abstractAbfsIntegrationTest.getAbfsStore(fs).getAbfsConfiguration()
+.setOptimizeFooterRead(false);
+abstractAbfsIn

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-27 Thread via GitHub


hadoop-yetus commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-2022372090

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 42s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  36m 13s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  17m 49s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   5m  0s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 30s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m  6s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   4m 35s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  41m 11s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  19m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  17m 32s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 41s |  |  root: The patch generated 
0 new + 5 unchanged - 8 fixed = 5 total (was 13)  |
   | +1 :green_heart: |  mvnsite  |   2m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   4m  8s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 45s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 18s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 41s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 269m  6s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6617 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 3a18d943d000 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5a06792648d7f03aeeaf78a3bec296f040e45cba |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/10/testReport/ |
   | Max. process+thread count | 1275 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure 
U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/10/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automat

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-27 Thread via GitHub


hadoop-yetus commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-2022289775

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m  7s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  37m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  19m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   4m 45s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 25s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   5m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  41m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  20m  6s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m  0s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  18m  0s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 58s |  |  root: The patch generated 
0 new + 5 unchanged - 8 fixed = 5 total (was 13)  |
   | +1 :green_heart: |  mvnsite  |   2m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   5m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m  7s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 31s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 38s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 57s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 274m 46s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6617 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 99411abf0f7d 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2b4f68e481b1b40d9c884ee584415963cd34f2f2 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/9/testReport/ |
   | Max. process+thread count | 1275 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure 
U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/9/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatica

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-27 Thread via GitHub


saxenapranav commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-2022125533

   Hi @steveloughran, thank you very much for the suggestion; I have taken it. As 
part of the change, I have made the following changes:
   1. Added a new util class `AbfsInputStreamTestUtils` which contains the util 
methods of ITestAbfsInputStream that `ITestAbfsInputStreamReadFooter` previously 
inherited. Now, ITestAbfsInputStreamReadFooter does not need to inherit 
ITestAbfsInputStream.
   2. ITestAbfsInputStreamReadFooter now inherits AbstractAbfsScaleTest.
   3. Removed the inheritance of ITestAbfsInputStream from 
ITestAbfsInputStreamSmallFileReads, as it was only inheriting it for the util 
methods. A rough sketch of the resulting structure is shown below.
   
   Thank you very much for the suggestions, requesting your kind review. Thank 
you a lot!
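
   A rough outline of the class relationships described above (a sketch only; the 
base class of ITestAbfsInputStreamSmallFileReads and all method bodies are 
assumptions, not the literal patch):

   ```java
   // Sketch: composition over inheritance, as described in the comment above.
   public class ITestAbfsInputStreamReadFooter extends AbstractAbfsScaleTest {
     private final AbfsInputStreamTestUtils abfsInputStreamTestUtils =
         new AbfsInputStreamTestUtils(this);
     // tests call abfsInputStreamTestUtils.createFileWithContent(...), etc.
   }

   public class ITestAbfsInputStreamSmallFileReads extends AbstractAbfsIntegrationTest {
     private final AbfsInputStreamTestUtils abfsInputStreamTestUtils =
         new AbfsInputStreamTestUtils(this);
   }
   ```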
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-26 Thread via GitHub


saxenapranav commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-2022078864

    AGGREGATED TEST RESULT 
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 2
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testSkipBounds:218->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=43).
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testValidateSeekBounds:269->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=50).
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut 
test timed o...
   [INFO]
   [ERROR] Tests run: 573, Failures: 2, Errors: 1, Skipped: 77
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAbfsTerasort.test_120_terasort:262->executeStage:206 » IO The 
ownership o...
   [INFO]
   [ERROR] Tests run: 340, Failures: 0, Errors: 1, Skipped: 55
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 3
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testValidateSeekBounds:269->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=192).
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut 
test timed o...
   [ERROR]   
ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendWithInfiniteLeaseEnabled:186->twoWriters:154
 » TestTimedOut
   [INFO]
   [ERROR] Tests run: 576, Failures: 1, Errors: 2, Skipped: 34
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 41
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
TestAbfsClientThrottlingAnalyzer.testManySuccessAndErrorsAndWaiting:181->fuzzyValidate:64
 The actual value 9 is not within the expected range: [5.60, 8.40].
   [INFO]
   [ERROR] Tests run: 137, Failures: 1, Errors: 0, Skipped: 9
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut 
test timed o...
   [ERROR]   
ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendWithInfiniteLeaseEnabled:186->twoWriters:154
 » TestTimedOut
   [INFO]
   [ERROR] Tests run: 518, Failures: 0, Errors: 2, Skipped: 267
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 44
   
   AppendBlob-HNS-OAuth
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 2
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testValidateSeekBounds:269->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=20).
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut 
test timed o...
   [ERROR]   
ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendWithInfiniteLeaseEnabled:186->twoWriters:154
 » TestTimedOut
   [INFO]
   [ERROR] Tests run: 576, Failures: 1, Errors: 2, Skipped: 79
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAbfsTerasort.test_120_terasort:262->executeStage:206 » IO The 
ownership o...
   [INFO]
   [ERROR] Tests run: 340, Failures: 0, Errors: 1, Skipped: 55
   
   Time taken: 48 mins 20 secs.
   azureuser@Hadoop-VM-EAST2:~/hadoop/hadoop-tools/hadoop-azure$
   azureuser@Hadoop-VM-EAST2:~/hadoop/hadoop-tools/hadoop-azure$ git log
   commit 5a06792648d7f03aeeaf78a3bec296f040e45cba (HEAD -> 
saxenapranav/footerBufferSizeFix, origin/saxenapranav/footerBufferSizeFix)
   Author: Pranav Saxena <>
   Date:   Tue Mar 26 22:34:45 2024 -0700
   
   removed ITestAbfsInputStream inheritence from 
ITestAbfsInputStreamSmallFileReads; fixed futureAwait API use; javadocs


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-19 Thread via GitHub


steveloughran commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1530854506


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -167,24 +200,55 @@ public void testSeekToEndAndReadWithConfFalse() throws 
Exception {
 
   private void testSeekAndReadWithConf(boolean optimizeFooterRead,

Review Comment:
   nit: javadocs



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-19 Thread via GitHub


steveloughran commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-2007808498

   yeah, spotbugs unrelated; should be fixed in trunk now and for future PRs.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-15 Thread via GitHub


hadoop-yetus commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-1999258859

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 30s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  36m 21s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  17m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 30s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   2m 34s | 
[/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/8/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html)
 |  hadoop-common-project/hadoop-common in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  39m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 44s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  18m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  17m 39s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 35s |  |  root: The patch generated 
0 new + 3 unchanged - 8 fixed = 3 total (was 11)  |
   | +1 :green_heart: |  mvnsite  |   2m 27s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m  3s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 14s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 41s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 56s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 262m 39s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6617 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 7482568b5dac 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / eaa5550bb1d22bab4bb351d805b1caa7b07f89f4 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/8/testReport/ |
   | Max. process+thread count | 1275 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure 
U: . |
   | Console output | 
https://

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-14 Thread via GitHub


saxenapranav commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1525764372


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/functional/FutureIO.java:
##
@@ -114,6 +117,70 @@ public static <T> T awaitFuture(final Future<T> future,
 }
   }
 
+  /**
+   * Given a future, evaluate it.
+   * 
+   * Any exception generated in the future is
+   * extracted and rethrown.
+   * 
+   * @param collection collection of futures to be evaluated
+   * @param <T> type of the result.
+   * @return the list of future's result, if all went well.
+   * @throws InterruptedIOException future was interrupted
+   * @throws IOException if something went wrong
+   * @throws RuntimeException any nested RTE thrown
+   */
+  public static <T> List<T> awaitFuture(final Collection<Future<T>> collection)

Review Comment:
   There is a method `public static <T> T awaitFuture(final Future<T> future)` 
in the class for a single future. Added this method in order to keep the new 
methods in sync with the existing ones.
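
   For illustration, a hedged usage sketch showing the existing single-future call 
next to the proposed collection variant (variable names are placeholders, not from 
the patch):

   ```java
   // Hypothetical usage; 'single' and 'futures' are placeholder variables.
   String one = FutureIO.awaitFuture(single);          // existing single-future API
   List<String> all = FutureIO.awaitFuture(futures);   // proposed collection overload
   ```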



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-14 Thread via GitHub


mukund-thakur commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1525577732


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/functional/FutureIO.java:
##
@@ -114,6 +117,70 @@ public static <T> T awaitFuture(final Future<T> future,
 }
   }
 
+  /**
+   * Given a future, evaluate it.
+   * <p>
+   * Any exception generated in the future is
+   * extracted and rethrown.
+   * </p>
+   * @param collection collection of futures to be evaluated
+   * @param <T> type of the result.
+   * @return the list of future's result, if all went well.
+   * @throws InterruptedIOException future was interrupted
+   * @throws IOException if something went wrong
+   * @throws RuntimeException any nested RTE thrown
+   */
+  public static <T> List<T> awaitFuture(final Collection<Future<T>> collection)

Review Comment:
   Wondering where this will be used without a timeout?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-13 Thread via GitHub


saxenapranav commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-1994792612

   The spotbugs warning is in the hadoop-common trunk code, on a code path that 
is not changed by this PR.
   ```
   Bug type NP_NULL_ON_SOME_PATH_FROM_RETURN_VALUE (click for details)
   In class org.apache.hadoop.crypto.key.kms.ValueQueue
   In method org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String)
   Local variable stored in JVM register ?
   Dereferenced at ValueQueue.java:[line 332]
   Known null at ValueQueue.java:[line 332]
   ```
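   
   For context, a minimal sketch of the code shape this bug category flags and 
the usual guard, assuming a map-backed lookup along the lines of 
`ValueQueue.getSize`; the class and field names below are illustrative, not 
the actual ValueQueue code:
   ```
   import java.util.Map;
   import java.util.Queue;
   import java.util.concurrent.ConcurrentHashMap;

   class SizeLookupSketch {
     private final Map<String, Queue<byte[]>> keyQueues = new ConcurrentHashMap<>();

     // Shape spotbugs flags: Map.get() may return null on some paths, and
     // dereferencing that return value directly raises
     // NP_NULL_ON_SOME_PATH_FROM_RETURN_VALUE.
     int getSizeUnsafe(String keyName) {
       return keyQueues.get(keyName).size();
     }

     // Typical fix: guard the possibly-null return value before using it.
     int getSizeGuarded(String keyName) {
       Queue<byte[]> queue = keyQueues.get(keyName);
       return queue == null ? 0 : queue.size();
     }
   }
   ```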
   
   
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-13 Thread via GitHub


hadoop-yetus commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-1994634016

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  0s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  35m 40s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  17m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 30s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   2m 34s | 
[/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/7/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html)
 |  hadoop-common-project/hadoop-common in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  38m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 13s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  18m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  17m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 29s |  |  root: The patch generated 
0 new + 3 unchanged - 8 fixed = 3 total (was 11)  |
   | +1 :green_heart: |  mvnsite  |   2m 27s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m  3s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m 27s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 11s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 42s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 261m 13s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6617 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 328d65c2fb89 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b19fbede42d5bfef7564c18a891ac2c75ba795ed |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/7/testReport/ |
   | Max. process+thread count | 3206 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure 
U: . |
   | Console output | 
https://

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-13 Thread via GitHub


hadoop-yetus commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-1994080167

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m  7s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  39m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 19s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  19m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 45s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 20s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   2m 34s | 
[/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/6/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html)
 |  hadoop-common-project/hadoop-common in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  40m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  18m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  17m 14s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 34s |  |  root: The patch generated 
0 new + 3 unchanged - 8 fixed = 3 total (was 11)  |
   | +1 :green_heart: |  mvnsite  |   2m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m  1s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m 21s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 49s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 271m 10s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6617 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 595e243ddcef 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6ed12970052f79a32325490ca9f475be647327f6 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/6/testReport/ |
   | Max. process+thread count | 3110 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure 
U: . |
   | Console output | 
https://

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-13 Thread via GitHub


hadoop-yetus commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-1993813395

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 35s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  36m 35s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  17m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 39s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   2m 30s | 
[/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/5/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html)
 |  hadoop-common-project/hadoop-common in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  38m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  18m 48s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 50s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  17m 50s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   5m 12s |  |  root: The patch generated 
0 new + 3 unchanged - 8 fixed = 3 total (was 11)  |
   | +1 :green_heart: |  mvnsite  |   3m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  42m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 17s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 37s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 56s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 267m 12s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6617 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux a217e3ff3522 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 18d88aa6016ec7f2b8cc4f7c770905050126a78d |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/5/testReport/ |
   | Max. process+thread count | 1237 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure 
U: . |
   | Console output | 
https://

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-12 Thread via GitHub


saxenapranav commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1522594974


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -54,9 +63,44 @@ public class ITestAbfsInputStreamReadFooter extends 
ITestAbfsInputStream {
   private static final int TEN = 10;
   private static final int TWENTY = 20;
 
+  private static ExecutorService executorService;
+
+  private static final int SIZE_256_KB = 256 * ONE_KB;
+
+  private static final Integer[] FILE_SIZES = {

Review Comment:
   On trunk, fileSize has the range 256KB, 512KB, 1MB, 2MB, 4MB.
   As part of this PR, fileSize has the range 256KB, 512KB, 1MB, 4MB, and a 
readBufferSize dimension is added [256KB, 512KB, 1MB, 4MB]. With this PR, for a 
given test and fileSize, the file is created only once, and every combination 
of readBufferSize and footerReadBufferSize is tested against that file.
   
   On trunk, running all the tests sequentially takes ~8min47sec; on this PR 
(including the readBufferSize dimension) it takes only ~7min (these test runs 
were done outside the Azure network). So with this PR, the time to run this 
class is reduced.
   
   The 4MB fileSize is included because the default readBufferSize is 4MB. 
Kindly advise whether we should remove the 4MB fileSize from the fileSizes.
   
   Thank you!



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-12 Thread via GitHub


saxenapranav commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1522581623


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -71,22 +115,40 @@ public void 
testMultipleServerCallsAreMadeWhenTheConfIsFalse()
   private void testNumBackendCalls(boolean optimizeFooterRead)
   throws Exception {
 int fileIdx = 0;
-for (int i = 0; i <= 4; i++) {
-  for (int j = 0; j <= 2; j++) {
-int fileSize = (int) Math.pow(2, i) * 256 * ONE_KB;
-int footerReadBufferSize = (int) Math.pow(2, j) * 256 * ONE_KB;
-final AzureBlobFileSystem fs = getFileSystem(
-optimizeFooterRead, fileSize);
-Path testFilePath = createPathAndFileWithContent(
-fs, fileIdx++, fileSize);
+final List futureList = new ArrayList<>();
+for (int fileSize : FILE_SIZES) {
+  final int fileId = fileIdx++;
+  Future future = executorService.submit(() -> {
+try (AzureBlobFileSystem spiedFs = createSpiedFs(
+getRawConfiguration())) {
+  Path testPath = createPathAndFileWithContent(
+  spiedFs, fileId, fileSize);
+  testNumBackendCalls(spiedFs, optimizeFooterRead, fileSize,
+  testPath);
+} catch (Exception ex) {
+  throw new RuntimeException(ex);
+}
+  });
+  futureList.add(future);
+}
+for (Future future : futureList) {

Review Comment:
   Added two new APIs in FutureIO:
   ```
   List<T> awaitFuture(Collection<Future<T>> collection, long timeout, TimeUnit unit)
   ```
   and
   ```
   List<T> awaitFuture(Collection<Future<T>> collection)
   ```
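   
   A brief usage sketch of the timeout variant in a test, assuming the 
signature above; the `FutureIO.awaitFuture(futures, 10, TimeUnit.MINUTES)` 
call shape is taken from this discussion and left as a comment, with a 
plain-JDK equivalent so the sketch compiles standalone:
   ```
   import java.util.ArrayList;
   import java.util.List;
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;
   import java.util.concurrent.Future;
   import java.util.concurrent.TimeUnit;

   public class AwaitAllExample {
     public static void main(String[] args) throws Exception {
       ExecutorService pool = Executors.newFixedThreadPool(4);
       List<Future<Integer>> futures = new ArrayList<>();
       for (int i = 0; i < 4; i++) {
         final int id = i;
         // Stand-in for the per-fileSize test work submitted to the pool.
         futures.add(pool.submit(() -> id * id));
       }
       // Assumed call shape from the discussion above (not verified here):
       // List<Integer> results = FutureIO.awaitFuture(futures, 10, TimeUnit.MINUTES);
       // Plain-JDK equivalent so a hang fails fast instead of hitting the JUnit timeout:
       List<Integer> results = new ArrayList<>();
       for (Future<Integer> f : futures) {
         results.add(f.get(10, TimeUnit.MINUTES));
       }
       System.out.println(results);
       pool.shutdown();
     }
   }
   ```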



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-12 Thread via GitHub


saxenapranav commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1522580701


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -322,28 +434,52 @@ private void testPartialReadWithNoData(final FileSystem 
fs,
 
   @Test
   public void testPartialReadWithSomeData() throws Exception {
-for (int i = 0; i <= 4; i++) {
-  for (int j = 0; j <= 2; j++) {
-int fileSize = (int) Math.pow(2, i) * 256 * ONE_KB;
-int footerReadBufferSize = (int) Math.pow(2, j) * 256 * ONE_KB;
-final AzureBlobFileSystem fs = getFileSystem(true,
-fileSize, footerReadBufferSize);
-String fileName = methodName.getMethodName() + i;
-byte[] fileContent = getRandomBytesArray(fileSize);
-Path testFilePath = createFileWithContent(fs, fileName, fileContent);
-testPartialReadWithSomeData(fs, testFilePath,
-fileSize - AbfsInputStream.FOOTER_SIZE, 
AbfsInputStream.FOOTER_SIZE,
-fileContent, footerReadBufferSize);
+int fileIdx = 0;
+List futureList = new ArrayList<>();
+for (int fileSize : FILE_SIZES) {
+  final int fileId = fileIdx++;
+  futureList.add(executorService.submit(() -> {
+try (AzureBlobFileSystem spiedFs = createSpiedFs(
+getRawConfiguration())) {
+  String fileName = methodName.getMethodName() + fileId;
+  byte[] fileContent = getRandomBytesArray(fileSize);
+  Path testFilePath = createFileWithContent(spiedFs, fileName,
+  fileContent);
+  testParialReadWithSomeData(spiedFs, fileSize, testFilePath,
+  fileContent);
+} catch (Exception ex) {
+  throw new RuntimeException(ex);
+}
+  }));
+}
+for (Future future : futureList) {
+  future.get();
+}
+  }
+
+  private void testParialReadWithSomeData(final AzureBlobFileSystem spiedFs,

Review Comment:
   Taken.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-12 Thread via GitHub


saxenapranav commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1522580484


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -322,28 +434,52 @@ private void testPartialReadWithNoData(final FileSystem 
fs,
 
   @Test
   public void testPartialReadWithSomeData() throws Exception {
-for (int i = 0; i <= 4; i++) {
-  for (int j = 0; j <= 2; j++) {
-int fileSize = (int) Math.pow(2, i) * 256 * ONE_KB;
-int footerReadBufferSize = (int) Math.pow(2, j) * 256 * ONE_KB;
-final AzureBlobFileSystem fs = getFileSystem(true,
-fileSize, footerReadBufferSize);
-String fileName = methodName.getMethodName() + i;
-byte[] fileContent = getRandomBytesArray(fileSize);
-Path testFilePath = createFileWithContent(fs, fileName, fileContent);
-testPartialReadWithSomeData(fs, testFilePath,
-fileSize - AbfsInputStream.FOOTER_SIZE, 
AbfsInputStream.FOOTER_SIZE,
-fileContent, footerReadBufferSize);
+int fileIdx = 0;
+List futureList = new ArrayList<>();
+for (int fileSize : FILE_SIZES) {
+  final int fileId = fileIdx++;
+  futureList.add(executorService.submit(() -> {
+try (AzureBlobFileSystem spiedFs = createSpiedFs(
+getRawConfiguration())) {
+  String fileName = methodName.getMethodName() + fileId;
+  byte[] fileContent = getRandomBytesArray(fileSize);
+  Path testFilePath = createFileWithContent(spiedFs, fileName,
+  fileContent);
+  testParialReadWithSomeData(spiedFs, fileSize, testFilePath,
+  fileContent);
+} catch (Exception ex) {
+  throw new RuntimeException(ex);
+}
+  }));
+}
+for (Future future : futureList) {
+  future.get();
+}
+  }
+
+  private void testParialReadWithSomeData(final AzureBlobFileSystem spiedFs,
+  final int fileSize, final Path testFilePath, final byte[] fileContent)
+  throws IOException {
+for (int readBufferSize : READ_BUFFER_SIZE) {
+  for (int footerReadBufferSize : FOOTER_READ_BUFFER_SIZE) {
+changeFooterConfigs(spiedFs, true,
+fileSize, footerReadBufferSize, readBufferSize);
+
+testPartialReadWithSomeData(spiedFs, testFilePath,
+fileSize - AbfsInputStream.FOOTER_SIZE,
+AbfsInputStream.FOOTER_SIZE,
+fileContent, footerReadBufferSize, readBufferSize);
   }
 }
   }
 
   private void testPartialReadWithSomeData(final FileSystem fs,

Review Comment:
   Taken. Refactored the names of non-test-entry methods.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-12 Thread via GitHub


steveloughran commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1522047711


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -54,9 +63,44 @@ public class ITestAbfsInputStreamReadFooter extends 
ITestAbfsInputStream {
   private static final int TEN = 10;
   private static final int TWENTY = 20;
 
+  private static ExecutorService executorService;
+
+  private static final int SIZE_256_KB = 256 * ONE_KB;
+
+  private static final Integer[] FILE_SIZES = {

Review Comment:
   This is going to make a slower test on remote runs. Does it really have to 
be this big, or is it possible to tune things so that they work with smaller 
files? Because if this is the restriction, then it is going to have to become a 
scale test, which will not be run as often.



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -322,28 +434,52 @@ private void testPartialReadWithNoData(final FileSystem 
fs,
 
   @Test
   public void testPartialReadWithSomeData() throws Exception {
-for (int i = 0; i <= 4; i++) {
-  for (int j = 0; j <= 2; j++) {
-int fileSize = (int) Math.pow(2, i) * 256 * ONE_KB;
-int footerReadBufferSize = (int) Math.pow(2, j) * 256 * ONE_KB;
-final AzureBlobFileSystem fs = getFileSystem(true,
-fileSize, footerReadBufferSize);
-String fileName = methodName.getMethodName() + i;
-byte[] fileContent = getRandomBytesArray(fileSize);
-Path testFilePath = createFileWithContent(fs, fileName, fileContent);
-testPartialReadWithSomeData(fs, testFilePath,
-fileSize - AbfsInputStream.FOOTER_SIZE, 
AbfsInputStream.FOOTER_SIZE,
-fileContent, footerReadBufferSize);
+int fileIdx = 0;
+List futureList = new ArrayList<>();
+for (int fileSize : FILE_SIZES) {
+  final int fileId = fileIdx++;
+  futureList.add(executorService.submit(() -> {
+try (AzureBlobFileSystem spiedFs = createSpiedFs(
+getRawConfiguration())) {
+  String fileName = methodName.getMethodName() + fileId;
+  byte[] fileContent = getRandomBytesArray(fileSize);
+  Path testFilePath = createFileWithContent(spiedFs, fileName,
+  fileContent);
+  testParialReadWithSomeData(spiedFs, fileSize, testFilePath,
+  fileContent);
+} catch (Exception ex) {
+  throw new RuntimeException(ex);
+}
+  }));
+}
+for (Future future : futureList) {
+  future.get();
+}
+  }
+
+  private void testParialReadWithSomeData(final AzureBlobFileSystem spiedFs,

Review Comment:
   nit: typo



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -71,22 +115,40 @@ public void 
testMultipleServerCallsAreMadeWhenTheConfIsFalse()
   private void testNumBackendCalls(boolean optimizeFooterRead)
   throws Exception {
 int fileIdx = 0;
-for (int i = 0; i <= 4; i++) {
-  for (int j = 0; j <= 2; j++) {
-int fileSize = (int) Math.pow(2, i) * 256 * ONE_KB;
-int footerReadBufferSize = (int) Math.pow(2, j) * 256 * ONE_KB;
-final AzureBlobFileSystem fs = getFileSystem(
-optimizeFooterRead, fileSize);
-Path testFilePath = createPathAndFileWithContent(
-fs, fileIdx++, fileSize);
+final List futureList = new ArrayList<>();
+for (int fileSize : FILE_SIZES) {
+  final int fileId = fileIdx++;
+  Future future = executorService.submit(() -> {
+try (AzureBlobFileSystem spiedFs = createSpiedFs(
+getRawConfiguration())) {
+  Path testPath = createPathAndFileWithContent(
+  spiedFs, fileId, fileSize);
+  testNumBackendCalls(spiedFs, optimizeFooterRead, fileSize,
+  testPath);
+} catch (Exception ex) {
+  throw new RuntimeException(ex);
+}
+  });
+  futureList.add(future);
+}
+for (Future future : futureList) {

Review Comment:
   I'm going to suggest that in org.apache.hadoop.util.functional.FutureIO you 
add a new awaitFutures(Collection) method, which iterates through the 
collection and calls awaitFuture on each. And yes, you should be passing down a 
timeout, as when JUnit times out it is less informative.



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -322,28 +434,52 @@ private void testPartialReadWithNoData(final FileSystem 
fs,
 
   @Test
   public void testPartialReadWithSomeData() throws Exception {
-for (int i = 0; i <= 4; i++) {
-  for (int j = 0; j <= 2; j++) {
-int fileSize = (int) Math.pow(2, i) * 256 * ONE_KB;
-int footerReadBufferSize = (int) Math.pow(2, j) * 256 * O

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-11 Thread via GitHub


saxenapranav commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-1989949575

   @steveloughran @mehakmeet @mukund-thakur , requesting your kind review 
please. Thank you!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-11 Thread via GitHub


anmolanmol1234 commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-1988228423

   LGTM !!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-11 Thread via GitHub


hadoop-yetus commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-1988039361

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m 44s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  40m 27s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  
hadoop-tools/hadoop-azure: The patch generated 0 new + 3 unchanged - 8 fixed = 
3 total (was 11)  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 24s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 144m 32s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6617 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 84ddb61de8ca 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e0108f832c31bf082daa7989dcdd0763db8b7a47 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/4/testReport/ |
   | Max. process+thread count | 527 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-11 Thread via GitHub


hadoop-yetus commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-1987998333

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  18m 25s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m 46s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  41m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/3/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 4 new + 3 unchanged - 8 
fixed = 7 total (was 11)  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m 22s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 23s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 162m 15s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6617 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 265bb9e6f5a2 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / fd7189aba922f02d84fbe8a8d6b52338ce82e061 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/3/testReport/ |
   | Max. process+thread count | 606 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the m

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-11 Thread via GitHub


saxenapranav commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-1987916114

   
    AGGREGATED TEST RESULT 
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 2
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut 
test timed o...
   [ERROR]   
ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendWithInfiniteLeaseEnabled:186->twoWriters:154
 » TestTimedOut
   [INFO]
   [ERROR] Tests run: 587, Failures: 0, Errors: 2, Skipped: 79
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAbfsTerasort.test_120_terasort:262->executeStage:206 » IO The 
ownership o...
   [INFO]
   [ERROR] Tests run: 340, Failures: 0, Errors: 1, Skipped: 55
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 3
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testValidateSeekBounds:269->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=659).
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:336 » TestTimedOut 
test timed o...
   [INFO]
   [ERROR] Tests run: 568, Failures: 1, Errors: 1, Skipped: 34
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 41
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
TestAbfsClientThrottlingAnalyzer.testManySuccessAndErrorsAndWaiting:181->fuzzyValidate:64
 The actual value 9 is not within the expected range: [5.60, 8.40].
   [INFO]
   [ERROR] Tests run: 137, Failures: 1, Errors: 0, Skipped: 9
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut 
test timed o...
   [ERROR]   
ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendWithInfiniteLeaseEnabled:186->twoWriters:154
 » TestTimedOut
   [INFO]
   [ERROR] Tests run: 579, Failures: 0, Errors: 2, Skipped: 267
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 44
   
   AppendBlob-HNS-OAuth
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 2
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut 
test timed o...
   [ERROR]   
ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendWithInfiniteLeaseEnabled:186->twoWriters:154
 » TestTimedOut
   [INFO]
   [ERROR] Tests run: 592, Failures: 0, Errors: 2, Skipped: 79
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAbfsTerasort.test_120_terasort:262->executeStage:206 » IO The 
ownership o...
   [INFO]
   [ERROR] Tests run: 340, Failures: 0, Errors: 1, Skipped: 55
   
   Time taken: 45 mins 0 secs.
   azureuser@Hadoop-VM-EAST2:~/hadoop/hadoop-tools/hadoop-azure$ git log
   commit e0108f832c31bf082daa7989dcdd0763db8b7a47 (HEAD -> 
saxenapranav/footerBufferSizeFix, origin/saxenapranav/footerBufferSizeFix)
   Author: Pranav Saxena <>
   Date:   Mon Mar 11 00:40:31 2024 -0700
   
   static consts


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-11 Thread via GitHub


saxenapranav commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-1987812837

   
    AGGREGATED TEST RESULT 
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 2
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testValidateSeekBounds:269->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=168).
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut 
test timed o...
   [INFO]
   [ERROR] Tests run: 590, Failures: 1, Errors: 1, Skipped: 79
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAbfsTerasort.test_120_terasort:262->executeStage:206 » IO The 
ownership o...
   [INFO]
   [ERROR] Tests run: 340, Failures: 0, Errors: 1, Skipped: 55
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 3
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:336 » TestTimedOut 
test timed o...
   [INFO]
   [ERROR] Tests run: 592, Failures: 0, Errors: 1, Skipped: 34
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 41
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 9
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testValidateSeekBounds:269->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=171).
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut 
test timed o...
   [INFO]
   [ERROR] Tests run: 560, Failures: 1, Errors: 1, Skipped: 267
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 44
   
   AppendBlob-HNS-OAuth
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 2
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut 
test timed o...
   [INFO]
   [ERROR] Tests run: 579, Failures: 0, Errors: 1, Skipped: 79
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAbfsTerasort.test_120_terasort:262->executeStage:206 » IO The 
ownership o...
   [INFO]
   [ERROR] Tests run: 340, Failures: 0, Errors: 1, Skipped: 55
   
   Time taken: 45 mins 30 secs.
   azureuser@Hadoop-VM-EAST2:~/hadoop/hadoop-tools/hadoop-azure$ git log
   commit fd7189aba922f02d84fbe8a8d6b52338ce82e061 (HEAD -> 
saxenapranav/footerBufferSizeFix, origin/saxenapranav/footerBufferSizeFix)
   Author: Pranav Saxena <>
   Date:   Mon Mar 11 00:01:15 2024 -0700
   
   review refactors


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-11 Thread via GitHub


saxenapranav commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1519264819


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -167,24 +214,55 @@ public void testSeekToEndAndReadWithConfFalse() throws 
Exception {
 
   private void testSeekAndReadWithConf(boolean optimizeFooterRead,
   SeekTo seekTo) throws Exception {
+int fileIdx = 0;
+List futureList = new ArrayList<>();
+for (int j = 0; j <= 4; j++) {
+  final int fileSize = (int) Math.pow(2, j) * SIZE_256_KB;
+  final int fileId = fileIdx++;
+  futureList.add(executorService.submit(() -> {
+try {
+  try (AzureBlobFileSystem spiedFs = createSpiedFs(
+  getRawConfiguration())) {
+String fileName = methodName.getMethodName() + fileId;
+byte[] fileContent = getRandomBytesArray(fileSize);
+Path testFilePath = createFileWithContent(spiedFs, fileName,
+fileContent);
+for (int i = 0; i <= 4; i++) {

Review Comment:
   ReadBufferSize has a default of 4 MB, so it is kept in the range, though 2 MB 
has been removed from readBufferSize and fileSize. Following are the ranges:
   
   fileSize: 256 KB, 512 KB, 1 MB, 4 MB
   readBufferSize: 256 KB, 512 KB, 1 MB, 4 MB
   footerReadBufferSize: 256 KB, 512 KB, 1 MB,



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-11 Thread via GitHub


saxenapranav commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1519263553


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -443,27 +575,35 @@ private FutureDataInputStreamBuilder 
getParameterizedBuilder(final Path path,
 return builder;
   }
 
-  private AzureBlobFileSystem getFileSystem(final boolean optimizeFooterRead,
-  final int fileSize) throws IOException {
-final AzureBlobFileSystem fs = getFileSystem();
-AzureBlobFileSystemStore store = getAbfsStore(fs);
-store.getAbfsConfiguration().setOptimizeFooterRead(optimizeFooterRead);
-if (fileSize <= store.getAbfsConfiguration().getReadBufferSize()) {
-  store.getAbfsConfiguration().setReadSmallFilesCompletely(false);
+  private void changeFooterConfigs(final AzureBlobFileSystem spiedFs,
+  final boolean optimizeFooterRead, final int fileSize,
+  final int readBufferSize) throws IOException {
+AbfsConfiguration configuration = 
spiedFs.getAbfsStore().getAbfsConfiguration();
+
Mockito.doReturn(optimizeFooterRead).when(configuration).optimizeFooterRead();
+if (fileSize <= readBufferSize) {
+  Mockito.doReturn(false).when(configuration).readSmallFilesCompletely();
 }
-return fs;
   }
 
-  private AzureBlobFileSystem getFileSystem(final boolean optimizeFooterRead,
-  final int fileSize, final int footerReadBufferSize) throws IOException {
-final AzureBlobFileSystem fs = getFileSystem();
-AzureBlobFileSystemStore store = getAbfsStore(fs);
-store.getAbfsConfiguration().setOptimizeFooterRead(optimizeFooterRead);
-store.getAbfsConfiguration().setFooterReadBufferSize(footerReadBufferSize);
-if (fileSize <= store.getAbfsConfiguration().getReadBufferSize()) {
-  store.getAbfsConfiguration().setReadSmallFilesCompletely(false);
+  private AzureBlobFileSystem createSpiedFs(Configuration configuration) 
throws IOException {
+AzureBlobFileSystem spiedFs = Mockito.spy((AzureBlobFileSystem) 
FileSystem.newInstance(configuration));
+AzureBlobFileSystemStore store = Mockito.spy(spiedFs.getAbfsStore());
+Mockito.doReturn(store).when(spiedFs).getAbfsStore();
+AbfsConfiguration spiedConfig = Mockito.spy(store.getAbfsConfiguration());
+Mockito.doReturn(spiedConfig).when(store).getAbfsConfiguration();
+return spiedFs;
+  }
+
+  private void changeFooterConfigs(final AzureBlobFileSystem spiedFs,
+  final boolean optimizeFooterRead, final int fileSize,
+  final int footerReadBufferSize, final int readBufferSize) throws 
IOException {

Review Comment:
   Fixed it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-11 Thread via GitHub


saxenapranav commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1519262705


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -54,9 +63,44 @@ public class ITestAbfsInputStreamReadFooter extends 
ITestAbfsInputStream {
   private static final int TEN = 10;
   private static final int TWENTY = 20;
 
+  private static ExecutorService executorService;
+
+  private final int SIZE_256_KB = 256 * ONE_KB;
+
+  private final Integer[] FILE_SIZES = {
+  SIZE_256_KB,

Review Comment:
   made static.






Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-11 Thread via GitHub


saxenapranav commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1519263077


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -167,24 +214,55 @@ public void testSeekToEndAndReadWithConfFalse() throws 
Exception {
 
   private void testSeekAndReadWithConf(boolean optimizeFooterRead,
   SeekTo seekTo) throws Exception {
+int fileIdx = 0;
+List futureList = new ArrayList<>();
+for (int j = 0; j <= 4; j++) {
+  final int fileSize = (int) Math.pow(2, j) * SIZE_256_KB;
+  final int fileId = fileIdx++;
+  futureList.add(executorService.submit(() -> {
+try {
+  try (AzureBlobFileSystem spiedFs = createSpiedFs(

Review Comment:
   Fixed it.



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -71,22 +94,46 @@ public void 
testMultipleServerCallsAreMadeWhenTheConfIsFalse()
   private void testNumBackendCalls(boolean optimizeFooterRead)
   throws Exception {
 int fileIdx = 0;
+final List futureList = new ArrayList<>();
+for (int i = 0; i <= 4; i++) {
+  final int fileSize = (int) Math.pow(2, i) * SIZE_256_KB;
+  final int fileId = fileIdx++;
+  Future future = executorService.submit(() -> {
+try {
+  try (AzureBlobFileSystem spiedFs = createSpiedFs(

Review Comment:
   Fixed it.






Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-11 Thread via GitHub


saxenapranav commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1519263077


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -167,24 +214,55 @@ public void testSeekToEndAndReadWithConfFalse() throws 
Exception {
 
   private void testSeekAndReadWithConf(boolean optimizeFooterRead,
   SeekTo seekTo) throws Exception {
+int fileIdx = 0;
+List futureList = new ArrayList<>();
+for (int j = 0; j <= 4; j++) {
+  final int fileSize = (int) Math.pow(2, j) * SIZE_256_KB;
+  final int fileId = fileIdx++;
+  futureList.add(executorService.submit(() -> {
+try {
+  try (AzureBlobFileSystem spiedFs = createSpiedFs(

Review Comment:
   Good catch! Fixed it.






Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-11 Thread via GitHub


anmolanmol1234 commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1519250637


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -54,9 +63,44 @@ public class ITestAbfsInputStreamReadFooter extends 
ITestAbfsInputStream {
   private static final int TEN = 10;
   private static final int TWENTY = 20;
 
+  private static ExecutorService executorService;
+
+  private final int SIZE_256_KB = 256 * ONE_KB;
+
+  private final Integer[] FILE_SIZES = {
+  SIZE_256_KB,

Review Comment:
   The naming format doesn't look right: UPPER_SNAKE_CASE is conventionally
   reserved for static final constants, and these variables are not static.
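
   For reference, a minimal sketch of the convention being pointed at (only
   SIZE_256_KB and the first array entry appear in the quoted diff; the other
   sizes below are illustrative, not the actual patch):

       // UPPER_SNAKE_CASE is reserved for constants that are both static and
       // final; a per-instance field would normally be camelCase instead.
       private static final int SIZE_256_KB = 256 * ONE_KB;

       private static final Integer[] FILE_SIZES = {
           SIZE_256_KB,
           2 * SIZE_256_KB,
           4 * SIZE_256_KB
       };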






Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-10 Thread via GitHub


anujmodi2021 commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1519176063


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -71,22 +94,46 @@ public void 
testMultipleServerCallsAreMadeWhenTheConfIsFalse()
   private void testNumBackendCalls(boolean optimizeFooterRead)
   throws Exception {
 int fileIdx = 0;
+final List futureList = new ArrayList<>();
+for (int i = 0; i <= 4; i++) {
+  final int fileSize = (int) Math.pow(2, i) * SIZE_256_KB;
+  final int fileId = fileIdx++;
+  Future future = executorService.submit(() -> {
+try {
+  try (AzureBlobFileSystem spiedFs = createSpiedFs(

Review Comment:
   Why do we need a try nested inside another try?
   If we are catching a general exception anyway, can this be collapsed into a
   single try-with-resources with one catch block?
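
   A rough sketch of what that could look like (createSpiedFs,
   getRawConfiguration and executorService are names taken from the quoted
   patch; the body shown here is illustrative, not the actual change):

       futureList.add(executorService.submit(() -> {
         // A single try-with-resources closes the filesystem and one catch
         // surfaces the failure, instead of a try nested inside another try.
         try (AzureBlobFileSystem spiedFs = createSpiedFs(getRawConfiguration())) {
           // ... exercise the footer read path against spiedFs ...
         } catch (Exception e) {
           throw new RuntimeException(e);
         }
       }));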



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -443,27 +575,35 @@ private FutureDataInputStreamBuilder 
getParameterizedBuilder(final Path path,
 return builder;
   }
 
-  private AzureBlobFileSystem getFileSystem(final boolean optimizeFooterRead,
-  final int fileSize) throws IOException {
-final AzureBlobFileSystem fs = getFileSystem();
-AzureBlobFileSystemStore store = getAbfsStore(fs);
-store.getAbfsConfiguration().setOptimizeFooterRead(optimizeFooterRead);
-if (fileSize <= store.getAbfsConfiguration().getReadBufferSize()) {
-  store.getAbfsConfiguration().setReadSmallFilesCompletely(false);
+  private void changeFooterConfigs(final AzureBlobFileSystem spiedFs,
+  final boolean optimizeFooterRead, final int fileSize,
+  final int readBufferSize) throws IOException {
+AbfsConfiguration configuration = spiedFs.getAbfsStore().getAbfsConfiguration();
+Mockito.doReturn(optimizeFooterRead).when(configuration).optimizeFooterRead();
+if (fileSize <= readBufferSize) {
+  Mockito.doReturn(false).when(configuration).readSmallFilesCompletely();
 }
-return fs;
   }
 
-  private AzureBlobFileSystem getFileSystem(final boolean optimizeFooterRead,
-  final int fileSize, final int footerReadBufferSize) throws IOException {
-final AzureBlobFileSystem fs = getFileSystem();
-AzureBlobFileSystemStore store = getAbfsStore(fs);
-store.getAbfsConfiguration().setOptimizeFooterRead(optimizeFooterRead);
-store.getAbfsConfiguration().setFooterReadBufferSize(footerReadBufferSize);
-if (fileSize <= store.getAbfsConfiguration().getReadBufferSize()) {
-  store.getAbfsConfiguration().setReadSmallFilesCompletely(false);
+  private AzureBlobFileSystem createSpiedFs(Configuration configuration) throws IOException {
+AzureBlobFileSystem spiedFs = Mockito.spy((AzureBlobFileSystem) FileSystem.newInstance(configuration));
+AzureBlobFileSystemStore store = Mockito.spy(spiedFs.getAbfsStore());
+Mockito.doReturn(store).when(spiedFs).getAbfsStore();
+AbfsConfiguration spiedConfig = Mockito.spy(store.getAbfsConfiguration());
+Mockito.doReturn(spiedConfig).when(store).getAbfsConfiguration();
+return spiedFs;
+  }
+
+  private void changeFooterConfigs(final AzureBlobFileSystem spiedFs,
+  final boolean optimizeFooterRead, final int fileSize,
+  final int footerReadBufferSize, final int readBufferSize) throws IOException {

Review Comment:
   nit: IOException is never thrown in the method body, so the throws clause
   can be dropped.
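
   i.e. something like this (parameter list copied from the quoted diff, body
   elided):

       private void changeFooterConfigs(final AzureBlobFileSystem spiedFs,
           final boolean optimizeFooterRead, final int fileSize,
           final int footerReadBufferSize, final int readBufferSize) {
         // Only Mockito.doReturn(...) stubbing happens here; nothing throws
         // IOException, so the throws clause is unnecessary.
       }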



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -167,24 +214,55 @@ public void testSeekToEndAndReadWithConfFalse() throws 
Exception {
 
   private void testSeekAndReadWithConf(boolean optimizeFooterRead,
   SeekTo seekTo) throws Exception {
+int fileIdx = 0;
+List futureList = new ArrayList<>();
+for (int j = 0; j <= 4; j++) {
+  final int fileSize = (int) Math.pow(2, j) * SIZE_256_KB;
+  final int fileId = fileIdx++;
+  futureList.add(executorService.submit(() -> {
+try {
+  try (AzureBlobFileSystem spiedFs = createSpiedFs(

Review Comment:
   Same nested try here as well.



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -167,24 +214,55 @@ public void testSeekToEndAndReadWithConfFalse() throws 
Exception {
 
   private void testSeekAndReadWithConf(boolean optimizeFooterRead,
   SeekTo seekTo) throws Exception {
+int fileIdx = 0;
+List futureList = new ArrayList<>();
+for (int j = 0; j <= 4; j++) {
+  final int fileSize = (int) Math.pow(2, j) * SIZE_256_KB;
+  final int fileId = fileIdx++;
+  futureList.add(executorService.submit(() -> {
+try {
+  try (AzureBlobFileSystem spiedFs = createSpiedFs(
+  getRawConfiguration())) {
+String fileName = methodNa

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-08 Thread via GitHub


hadoop-yetus commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-1985565656

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 56s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m 50s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/2/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 1 new + 3 unchanged - 8 
fixed = 4 total (was 11)  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 25s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 143m 31s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6617 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 62b9990fbb8e 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0a1491a7b1b61778c9433abbaa891d871d6b2f78 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/2/testReport/ |
   | Max. process+thread count | 527 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-08 Thread via GitHub


saxenapranav commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-1985460270

   
    AGGREGATED TEST RESULT 
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 2
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:336 » TestTimedOut 
test timed o...
   [INFO]
   [ERROR] Tests run: 574, Failures: 0, Errors: 1, Skipped: 79
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAbfsTerasort.test_120_terasort:262->executeStage:206 » IO The 
ownership o...
   [INFO]
   [ERROR] Tests run: 340, Failures: 0, Errors: 1, Skipped: 55
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 3
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testValidateSeekBounds:269->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=54).
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut 
test timed o...
   [ERROR]   
ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendWithInfiniteLeaseEnabled:186->twoWriters:154
 » TestTimedOut
   [INFO]
   [ERROR] Tests run: 581, Failures: 1, Errors: 2, Skipped: 32
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 41
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 9
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testSkipBounds:218->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=260).
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:336 » TestTimedOut 
test timed o...
   [INFO]
   [ERROR] Tests run: 579, Failures: 1, Errors: 1, Skipped: 267
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 44
   
   AppendBlob-HNS-OAuth
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 2
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:336 » TestTimedOut 
test timed o...
   [INFO]
   [ERROR] Tests run: 584, Failures: 0, Errors: 1, Skipped: 79
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAbfsTerasort.test_120_terasort:262->executeStage:206 » IO The 
ownership o...
   [INFO]
   [ERROR] Tests run: 340, Failures: 0, Errors: 1, Skipped: 55
   
   Time taken: 46 mins 15 secs.
   azureuser@Hadoop-VM-EAST2:~/hadoop/hadoop-tools/hadoop-azure$ git log
   commit 0a1491a7b1b61778c9433abbaa891d871d6b2f78 (HEAD -> 
saxenapranav/footerBufferSizeFix, origin/saxenapranav/footerBufferSizeFix)
   Author: Pranav Saxena <>
   Date:   Fri Mar 8 01:30:38 2024 -0800
   
   set and unset executorservice; magic num for 256 KB





Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-08 Thread via GitHub


saxenapranav commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-1985271965

    AGGREGATED TEST RESULT 
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 2
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testSkipBounds:218->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=59).
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemE2E.testHttpReadTimeout »  Unexpected 
exception, expec...
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut 
test timed o...
   [ERROR]   
ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendWithInfiniteLeaseEnabled:186->twoWriters:154
 » TestTimedOut
   [INFO]
   [ERROR] Tests run: 585, Failures: 1, Errors: 3, Skipped: 71
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAbfsTerasort.test_120_terasort:262->executeStage:206 » IO The 
ownership o...
   [INFO]
   [ERROR] Tests run: 340, Failures: 0, Errors: 1, Skipped: 55
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 3
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testSkipBounds:218->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=20).
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testValidateSeekBounds:269->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=40).
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:336 » TestTimedOut 
test timed o...
   [INFO]
   [ERROR] Tests run: 580, Failures: 2, Errors: 1, Skipped: 26
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 41
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 9
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testValidateSeekBounds:269->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=339).
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut 
test timed o...
   [INFO]
   [ERROR] Tests run: 543, Failures: 1, Errors: 1, Skipped: 266
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 44
   
   AppendBlob-HNS-OAuth
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 2
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:336 » TestTimedOut 
test timed o...
   [ERROR]   
ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendWithInfiniteLeaseEnabled:186->twoWriters:154
 » TestTimedOut
   [INFO]
   [ERROR] Tests run: 559, Failures: 0, Errors: 2, Skipped: 71
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAbfsTerasort.test_120_terasort:262->executeStage:206 » IO The 
ownership o...
   [INFO]
   [ERROR] Tests run: 340, Failures: 0, Errors: 1, Skipped: 55
   
   Time taken: 43 mins 38 secs.





Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-08 Thread via GitHub


hadoop-yetus commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-1985254282

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m 34s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m 49s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 5 new + 6 unchanged - 5 
fixed = 11 total (was 11)  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  37m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 24s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 140m 35s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6617 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux f55f9b337083 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / dbca78b72f6f24c3db5ac5553bcab229d86439db |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/1/testReport/ |
   | Max. process+thread count | 609 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   

