[hadoop] branch branch-2.10 updated: HDFS-15632. AbstractContractDeleteTest should set recursive parameter to true for recursive test cases. Contributed by Anton Kutuzov.

2021-01-22 Thread shv
This is an automated email from the ASF dual-hosted git repository.

shv pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.10 by this push:
 new 0ed997a  HDFS-15632. AbstractContractDeleteTest should set recursive 
parameter to true for recursive test cases. Contributed by Anton Kutuzov.
0ed997a is described below

commit 0ed997abababef26ac06623e1a663d806fdbb17f
Author: Anton Kutuzov 
AuthorDate: Wed Jan 20 18:38:02 2021 +0300

HDFS-15632. AbstractContractDeleteTest should set recursive parameter to 
true for recursive test cases. Contributed by Anton Kutuzov.

(cherry picked from commit 91d4ba57c5b85379303ac8fb2a1a03ba10b07d4e)
---
 .../org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java  | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java
index 328c8e1..08df1d4 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java
@@ -49,18 +49,17 @@ public abstract class AbstractContractDeleteTest extends
 Path path = path("testDeleteNonexistentPathRecursive");
 assertPathDoesNotExist("leftover", path);
 ContractTestUtils.rejectRootOperation(path);
-assertFalse("Returned true attempting to delete"
+assertFalse("Returned true attempting to recursively delete"
 + " a nonexistent path " + path,
-getFileSystem().delete(path, false));
+getFileSystem().delete(path, true));
   }
 
-
   @Test
   public void testDeleteNonexistentPathNonRecursive() throws Throwable {
 Path path = path("testDeleteNonexistentPathNonRecursive");
 assertPathDoesNotExist("leftover", path);
 ContractTestUtils.rejectRootOperation(path);
-assertFalse("Returned true attempting to recursively delete"
+assertFalse("Returned true attempting to non recursively delete"
 + " a nonexistent path " + path,
 getFileSystem().delete(path, false));
   }
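
Not part of the commit itself, but as a minimal sketch of the FileSystem contract the corrected test exercises: FileSystem.delete() on a nonexistent path should return false whether or not the recursive flag is set, which is why the recursive test case has to pass true. The local filesystem and the temporary path below are illustrative assumptions.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DeleteNonexistentPathSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.getLocal(new Configuration());
    Path missing = new Path("/tmp/no-such-path-" + System.nanoTime());

    // Both calls are expected to return false and must not throw.
    boolean recursiveResult = fs.delete(missing, true);   // the case HDFS-15632 now covers
    boolean plainResult = fs.delete(missing, false);

    System.out.println("recursive delete returned " + recursiveResult);
    System.out.println("non-recursive delete returned " + plainResult);
  }
}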





[hadoop] branch branch-3.1 updated: HDFS-15632. AbstractContractDeleteTest should set recursive parameter to true for recursive test cases. Contributed by Anton Kutuzov.

2021-01-22 Thread shv
This is an automated email from the ASF dual-hosted git repository.

shv pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 397ff30  HDFS-15632. AbstractContractDeleteTest should set recursive 
parameter to true for recursive test cases. Contributed by Anton Kutuzov.
397ff30 is described below

commit 397ff302b323d737909bb6717a849dcfa33d67a0
Author: Anton Kutuzov 
AuthorDate: Wed Jan 20 18:38:02 2021 +0300

HDFS-15632. AbstractContractDeleteTest should set recursive parameter to 
true for recursive test cases. Contributed by Anton Kutuzov.

(cherry picked from commit 91d4ba57c5b85379303ac8fb2a1a03ba10b07d4e)
---
 .../org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java  | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java
index 328c8e1..08df1d4 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java
@@ -49,18 +49,17 @@ public abstract class AbstractContractDeleteTest extends
 Path path = path("testDeleteNonexistentPathRecursive");
 assertPathDoesNotExist("leftover", path);
 ContractTestUtils.rejectRootOperation(path);
-assertFalse("Returned true attempting to delete"
+assertFalse("Returned true attempting to recursively delete"
 + " a nonexistent path " + path,
-getFileSystem().delete(path, false));
+getFileSystem().delete(path, true));
   }
 
-
   @Test
   public void testDeleteNonexistentPathNonRecursive() throws Throwable {
 Path path = path("testDeleteNonexistentPathNonRecursive");
 assertPathDoesNotExist("leftover", path);
 ContractTestUtils.rejectRootOperation(path);
-assertFalse("Returned true attempting to recursively delete"
+assertFalse("Returned true attempting to non recursively delete"
 + " a nonexistent path " + path,
 getFileSystem().delete(path, false));
   }





[hadoop] branch branch-3.2 updated: HDFS-15632. AbstractContractDeleteTest should set recursive parameter to true for recursive test cases. Contributed by Anton Kutuzov.

2021-01-22 Thread shv
This is an automated email from the ASF dual-hosted git repository.

shv pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 16e908e  HDFS-15632. AbstractContractDeleteTest should set recursive 
parameter to true for recursive test cases. Contributed by Anton Kutuzov.
16e908e is described below

commit 16e908e7577d6aea1a7c227e65b96213a9b1
Author: Anton Kutuzov 
AuthorDate: Wed Jan 20 18:38:02 2021 +0300

HDFS-15632. AbstractContractDeleteTest should set recursive parameter to 
true for recursive test cases. Contributed by Anton Kutuzov.

(cherry picked from commit 91d4ba57c5b85379303ac8fb2a1a03ba10b07d4e)
---
 .../org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java  | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java
index 328c8e1..08df1d4 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java
@@ -49,18 +49,17 @@ public abstract class AbstractContractDeleteTest extends
 Path path = path("testDeleteNonexistentPathRecursive");
 assertPathDoesNotExist("leftover", path);
 ContractTestUtils.rejectRootOperation(path);
-assertFalse("Returned true attempting to delete"
+assertFalse("Returned true attempting to recursively delete"
 + " a nonexistent path " + path,
-getFileSystem().delete(path, false));
+getFileSystem().delete(path, true));
   }
 
-
   @Test
   public void testDeleteNonexistentPathNonRecursive() throws Throwable {
 Path path = path("testDeleteNonexistentPathNonRecursive");
 assertPathDoesNotExist("leftover", path);
 ContractTestUtils.rejectRootOperation(path);
-assertFalse("Returned true attempting to recursively delete"
+assertFalse("Returned true attempting to non recursively delete"
 + " a nonexistent path " + path,
 getFileSystem().delete(path, false));
   }





[hadoop] branch branch-3.3 updated: HDFS-15632. AbstractContractDeleteTest should set recursive parameter to true for recursive test cases. Contributed by Anton Kutuzov.

2021-01-22 Thread shv
This is an automated email from the ASF dual-hosted git repository.

shv pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new dcf6d77  HDFS-15632. AbstractContractDeleteTest should set recursive 
parameter to true for recursive test cases. Contributed by Anton Kutuzov.
dcf6d77 is described below

commit dcf6d77279169386837179017dbd75290df17cc8
Author: Anton Kutuzov 
AuthorDate: Wed Jan 20 18:38:02 2021 +0300

HDFS-15632. AbstractContractDeleteTest should set recursive parameter to 
true for recursive test cases. Contributed by Anton Kutuzov.

(cherry picked from commit 91d4ba57c5b85379303ac8fb2a1a03ba10b07d4e)
---
 .../org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java  | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java
index 328c8e1..08df1d4 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java
@@ -49,18 +49,17 @@ public abstract class AbstractContractDeleteTest extends
 Path path = path("testDeleteNonexistentPathRecursive");
 assertPathDoesNotExist("leftover", path);
 ContractTestUtils.rejectRootOperation(path);
-assertFalse("Returned true attempting to delete"
+assertFalse("Returned true attempting to recursively delete"
 + " a nonexistent path " + path,
-getFileSystem().delete(path, false));
+getFileSystem().delete(path, true));
   }
 
-
   @Test
   public void testDeleteNonexistentPathNonRecursive() throws Throwable {
 Path path = path("testDeleteNonexistentPathNonRecursive");
 assertPathDoesNotExist("leftover", path);
 ContractTestUtils.rejectRootOperation(path);
-assertFalse("Returned true attempting to recursively delete"
+assertFalse("Returned true attempting to non recursively delete"
 + " a nonexistent path " + path,
 getFileSystem().delete(path, false));
   }





[hadoop] branch trunk updated: HDFS-15632. AbstractContractDeleteTest should set recursive parameter to true for recursive test cases. Contributed by Anton Kutuzov.

2021-01-22 Thread shv
This is an automated email from the ASF dual-hosted git repository.

shv pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 91d4ba5  HDFS-15632. AbstractContractDeleteTest should set recursive 
parameter to true for recursive test cases. Contributed by Anton Kutuzov.
91d4ba5 is described below

commit 91d4ba57c5b85379303ac8fb2a1a03ba10b07d4e
Author: Anton Kutuzov 
AuthorDate: Wed Jan 20 18:38:02 2021 +0300

HDFS-15632. AbstractContractDeleteTest should set recursive parameter to 
true for recursive test cases. Contributed by Anton Kutuzov.
---
 .../org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java  | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java
index 328c8e1..08df1d4 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractDeleteTest.java
@@ -49,18 +49,17 @@ public abstract class AbstractContractDeleteTest extends
 Path path = path("testDeleteNonexistentPathRecursive");
 assertPathDoesNotExist("leftover", path);
 ContractTestUtils.rejectRootOperation(path);
-assertFalse("Returned true attempting to delete"
+assertFalse("Returned true attempting to recursively delete"
 + " a nonexistent path " + path,
-getFileSystem().delete(path, false));
+getFileSystem().delete(path, true));
   }
 
-
   @Test
   public void testDeleteNonexistentPathNonRecursive() throws Throwable {
 Path path = path("testDeleteNonexistentPathNonRecursive");
 assertPathDoesNotExist("leftover", path);
 ContractTestUtils.rejectRootOperation(path);
-assertFalse("Returned true attempting to recursively delete"
+assertFalse("Returned true attempting to non recursively delete"
 + " a nonexistent path " + path,
 getFileSystem().delete(path, false));
   }





[hadoop] branch branch-3.3 updated: HADOOP-17272. ABFS Streams to support IOStatistics API (#2604)

2021-01-22 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new d20b2de  HADOOP-17272. ABFS Streams to support IOStatistics API (#2604)
d20b2de is described below

commit d20b2deac33796ca3e294726bab806069e2fabc0
Author: Mehakmeet Singh 
AuthorDate: Tue Jan 12 21:18:09 2021 +0530

HADOOP-17272. ABFS Streams to support IOStatistics API (#2604)

Contributed by Mehakmeet Singh.

Change-Id: I3445dec84b9b9e43bb1e41f709944ea05416bd74
---
 .../hadoop/fs/statistics/StreamStatisticNames.java |  72 +
 .../fs/azurebfs/services/AbfsInputStream.java  |  23 ++-
 .../services/AbfsInputStreamStatistics.java|  15 +-
 .../services/AbfsInputStreamStatisticsImpl.java| 162 +
 .../fs/azurebfs/services/AbfsOutputStream.java | 125 ++--
 .../services/AbfsOutputStreamStatistics.java   |  17 ++-
 .../services/AbfsOutputStreamStatisticsImpl.java   | 130 ++---
 .../azurebfs/ITestAbfsInputStreamStatistics.java   |  43 +-
 .../azurebfs/ITestAbfsOutputStreamStatistics.java  |  31 
 .../azurebfs/TestAbfsOutputStreamStatistics.java   |  27 +---
 10 files changed, 444 insertions(+), 201 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/StreamStatisticNames.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/StreamStatisticNames.java
index 02072d4..bbb8517 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/StreamStatisticNames.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/StreamStatisticNames.java
@@ -286,6 +286,78 @@ public final class StreamStatisticNames {
   public static final String STREAM_WRITE_TOTAL_DATA
   = "stream_write_total_data";
 
+  /**
+   * Number of bytes to upload from an OutputStream.
+   */
+  public static final String BYTES_TO_UPLOAD
+  = "bytes_upload";
+
+  /**
+   * Number of bytes uploaded successfully to the object store.
+   */
+  public static final String BYTES_UPLOAD_SUCCESSFUL
+  = "bytes_upload_successfully";
+
+  /**
+   * Number of bytes failed to upload to the object store.
+   */
+  public static final String BYTES_UPLOAD_FAILED
+  = "bytes_upload_failed";
+
+  /**
+   * Total time spent on waiting for a task to complete.
+   */
+  public static final String TIME_SPENT_ON_TASK_WAIT
+  = "time_spent_task_wait";
+
+  /**
+   * Number of task queue shrunk operations.
+   */
+  public static final String QUEUE_SHRUNK_OPS
+  = "queue_shrunk_ops";
+
+  /**
+   * Number of times current buffer is written to the service.
+   */
+  public static final String WRITE_CURRENT_BUFFER_OPERATIONS
+  = "write_current_buffer_ops";
+
+  /**
+   * Total time spent on completing a PUT request.
+   */
+  public static final String TIME_SPENT_ON_PUT_REQUEST
+  = "time_spent_on_put_request";
+
+  /**
+   * Number of seeks in buffer.
+   */
+  public static final String SEEK_IN_BUFFER
+  = "seek_in_buffer";
+
+  /**
+   * Number of bytes read from the buffer.
+   */
+  public static final String BYTES_READ_BUFFER
+  = "bytes_read_buffer";
+
+  /**
+   * Total number of remote read operations performed.
+   */
+  public static final String REMOTE_READ_OP
+  = "remote_read_op";
+
+  /**
+   * Total number of bytes read from readAhead.
+   */
+  public static final String READ_AHEAD_BYTES_READ
+  = "read_ahead_bytes_read";
+
+  /**
+   * Total number of bytes read from remote operations.
+   */
+  public static final String REMOTE_BYTES_READ
+  = "remote_bytes_read";
+
   private StreamStatisticNames() {
   }
 
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
index 1d109f4..c1de031 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
@@ -37,6 +37,11 @@ import org.apache.hadoop.fs.StreamCapabilities;
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsRestOperationException;
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AzureBlobFileSystemException;
 import org.apache.hadoop.fs.azurebfs.utils.CachedSASToken;
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.IOStatisticsSource;
+import org.apache.hadoop.fs.statistics.StoreStatisticNames;
+import org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding;
+import org.apache.hadoop.fs.statistics.impl.IOStatisticsStore;
 
 import static java.lang.Math.max;

[hadoop] 01/06: HADOOP-17296. ABFS: Force reads to be always of buffer size.

2021-01-22 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit a44890eb63f5320a542d5160f140e11e82256932
Author: Sneha Vijayarajan 
AuthorDate: Fri Nov 27 19:52:34 2020 +0530

HADOOP-17296. ABFS: Force reads to be always of buffer size.

Contributed by Sneha Vijayarajan.

(cherry picked from commit 142941b96e221fc1b4524476ce445714d7f6eec3)
---
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |  18 ++
 .../fs/azurebfs/AzureBlobFileSystemStore.java  |   3 +
 .../fs/azurebfs/constants/ConfigurationKeys.java   |   2 +
 .../constants/FileSystemConfigurations.java|   3 +
 .../fs/azurebfs/services/AbfsInputStream.java  |  41 +++-
 .../azurebfs/services/AbfsInputStreamContext.java  |  38 
 .../fs/azurebfs/services/ReadBufferManager.java| 105 -
 .../hadoop-azure/src/site/markdown/abfs.md |  16 ++
 .../fs/azurebfs/AbstractAbfsIntegrationTest.java   |   8 +
 .../ITestAzureBlobFileSystemRandomRead.java| 244 -
 .../fs/azurebfs/services/TestAbfsInputStream.java  | 223 ++-
 11 files changed, 634 insertions(+), 67 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
index c4a2b67..3d09a80 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
@@ -201,6 +201,16 @@ public class AbfsConfiguration{
   DefaultValue = DEFAULT_READ_AHEAD_QUEUE_DEPTH)
   private int readAheadQueueDepth;
 
+  @IntegerConfigurationValidatorAnnotation(ConfigurationKey = 
FS_AZURE_READ_AHEAD_BLOCK_SIZE,
+  MinValue = MIN_BUFFER_SIZE,
+  MaxValue = MAX_BUFFER_SIZE,
+  DefaultValue = DEFAULT_READ_AHEAD_BLOCK_SIZE)
+  private int readAheadBlockSize;
+
+  @BooleanConfigurationValidatorAnnotation(ConfigurationKey = 
FS_AZURE_ALWAYS_READ_BUFFER_SIZE,
+  DefaultValue = DEFAULT_ALWAYS_READ_BUFFER_SIZE)
+  private boolean alwaysReadBufferSize;
+
   @BooleanConfigurationValidatorAnnotation(ConfigurationKey = 
FS_AZURE_ENABLE_FLUSH,
   DefaultValue = DEFAULT_ENABLE_FLUSH)
   private boolean enableFlush;
@@ -599,6 +609,14 @@ public class AbfsConfiguration{
 return this.readAheadQueueDepth;
   }
 
+  public int getReadAheadBlockSize() {
+return this.readAheadBlockSize;
+  }
+
+  public boolean shouldReadBufferSizeAlways() {
+return this.alwaysReadBufferSize;
+  }
+
   public boolean isFlushEnabled() {
 return this.enableFlush;
   }
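
A small illustrative sketch, separate from the patch, of enabling the two options it introduces, using the key strings added to ConfigurationKeys later in this message; the 4 MB read-ahead block size is an arbitrary example value.

import org.apache.hadoop.conf.Configuration;

public final class ReadAheadConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Force every remote read issued by AbfsInputStream to be of the full read-buffer size.
    conf.setBoolean("fs.azure.read.alwaysReadBufferSize", true);
    // Block size used when queueing read-ahead requests (bounded by MIN_BUFFER_SIZE/MAX_BUFFER_SIZE).
    conf.setLong("fs.azure.read.readahead.blocksize", 4 * 1024 * 1024);
    System.out.println(conf.get("fs.azure.read.readahead.blocksize"));
  }
}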
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
index e8f355f..a766c62 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
@@ -644,6 +644,9 @@ public class AzureBlobFileSystemStore implements Closeable {
 
.withReadAheadQueueDepth(abfsConfiguration.getReadAheadQueueDepth())
 .withTolerateOobAppends(abfsConfiguration.getTolerateOobAppends())
 .withStreamStatistics(new AbfsInputStreamStatisticsImpl())
+.withShouldReadBufferSizeAlways(
+abfsConfiguration.shouldReadBufferSizeAlways())
+.withReadAheadBlockSize(abfsConfiguration.getReadAheadBlockSize())
 .build();
   }
 
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
index c15c470..cb9c0de 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
@@ -75,6 +75,8 @@ public final class ConfigurationKeys {
*  Default is empty. **/
   public static final String FS_AZURE_APPEND_BLOB_KEY = 
"fs.azure.appendblob.directories";
   public static final String FS_AZURE_READ_AHEAD_QUEUE_DEPTH = 
"fs.azure.readaheadqueue.depth";
+  public static final String FS_AZURE_ALWAYS_READ_BUFFER_SIZE = 
"fs.azure.read.alwaysReadBufferSize";
+  public static final String FS_AZURE_READ_AHEAD_BLOCK_SIZE = 
"fs.azure.read.readahead.blocksize";
   /** Provides a config control to enable or disable ABFS Flush operations -
*  HFlush and HSync. Default is true. **/
   public static final String FS_AZURE_ENABLE_FLUSH = "fs.azure.enable.flush";
diff --git 

[hadoop] branch branch-3.3 updated (1520b84 -> 4865589)

2021-01-22 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a change to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 1520b84  YARN-10519. Refactor QueueMetricsForCustomResources class to 
move to yarn-common package. Contributed by Minni Mittal
 new a44890e  HADOOP-17296. ABFS: Force reads to be always of buffer size.
 new d3caa15  HADOOP-17413. Release elastic byte buffer pool at close
 new 5f312a0  HADOOP-17422: ABFS: Set default ListMaxResults to max server 
limit (#2535) Contributed by Sumangala Patki
 new f3a0ca6  HADOOP-17407. ABFS: Fix NPE on delete idempotency flow
 new cb67292  HADOOP-17347. ABFS: Read optimizations
 new 4865589  HADOOP-17404. ABFS: Small write - Merge append and flush

The 6 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 hadoop-tools/hadoop-azure/pom.xml  |   2 +
 .../src/config/checkstyle-suppressions.xml |   2 +
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |  54 +++
 .../fs/azurebfs/AzureBlobFileSystemStore.java  |   6 +
 .../fs/azurebfs/constants/AbfsHttpConstants.java   |   1 +
 .../fs/azurebfs/constants/ConfigurationKeys.java   |  13 +
 .../constants/FileSystemConfigurations.java|  12 +-
 .../fs/azurebfs/constants/HttpQueryParams.java |   1 +
 .../services/AppendRequestParameters.java  |  69 +++
 .../hadoop/fs/azurebfs/services/AbfsClient.java|  50 +-
 .../fs/azurebfs/services/AbfsHttpOperation.java|  39 +-
 .../fs/azurebfs/services/AbfsInputStream.java  | 231 -
 .../azurebfs/services/AbfsInputStreamContext.java  |  62 +++
 .../fs/azurebfs/services/AbfsOutputStream.java |  70 ++-
 .../azurebfs/services/AbfsOutputStreamContext.java |  11 +
 .../fs/azurebfs/services/AbfsRestOperation.java|   2 +-
 .../fs/azurebfs/services/ReadBufferManager.java| 105 -
 .../hadoop-azure/src/site/markdown/abfs.md |  18 +-
 .../fs/azurebfs/AbstractAbfsIntegrationTest.java   |   8 +
 .../fs/azurebfs/ITestAbfsNetworkStatistics.java| 339 +++--
 .../azurebfs/ITestAzureBlobFileSystemDelete.java   |  31 +-
 .../ITestAzureBlobFileSystemListStatus.java|   8 +-
 .../ITestAzureBlobFileSystemRandomRead.java| 244 --
 .../fs/azurebfs/ITestSmallWriteOptimization.java   | 523 +
 .../fs/azurebfs/services/ITestAbfsInputStream.java | 256 ++
 .../services/ITestAbfsInputStreamReadFooter.java   | 358 ++
 .../ITestAbfsInputStreamSmallFileReads.java| 326 +
 .../azurebfs/services/ITestAbfsOutputStream.java   |  17 +-
 .../fs/azurebfs/services/TestAbfsClient.java   |  46 ++
 .../fs/azurebfs/services/TestAbfsInputStream.java  | 223 -
 .../fs/azurebfs/services/TestAbfsOutputStream.java | 279 ++-
 .../fs/azurebfs/services/TestAbfsPerfTracker.java  |  13 +
 .../hadoop/fs/azurebfs/utils/TestMockHelpers.java  |  59 +++
 33 files changed, 3031 insertions(+), 447 deletions(-)
 create mode 100644 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/services/AppendRequestParameters.java
 create mode 100644 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestSmallWriteOptimization.java
 create mode 100644 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStream.java
 create mode 100644 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java
 create mode 100644 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamSmallFileReads.java
 create mode 100644 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/utils/TestMockHelpers.java





[hadoop] 02/06: HADOOP-17413. Release elastic byte buffer pool at close

2021-01-22 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit d3caa1552b143b1d578d69d3786ccdedf66e4557
Author: Sneha Vijayarajan 
AuthorDate: Tue Dec 15 10:15:37 2020 +0530

HADOOP-17413. Release elastic byte buffer pool at close

- Contributed by Sneha Vijayarajan

(cherry picked from commit 5bf977e6b16287d7d140dd96dad66d0fce213954)
---
 .../java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java  | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
index b53b2b2..01b2fa5 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
@@ -85,7 +85,7 @@ public class AbfsOutputStream extends OutputStream implements 
Syncable, StreamCa
* blocks. After the data is sent to the service, the buffer is returned
* back to the queue
*/
-  private final ElasticByteBufferPool byteBufferPool
+  private ElasticByteBufferPool byteBufferPool
   = new ElasticByteBufferPool();
 
   private final Statistics statistics;
@@ -297,6 +297,7 @@ public class AbfsOutputStream extends OutputStream 
implements Syncable, StreamCa
   bufferIndex = 0;
   closed = true;
   writeOperations.clear();
+  byteBufferPool = null;
   if (!threadExecutor.isShutdown()) {
 threadExecutor.shutdownNow();
   }





[hadoop] 06/06: HADOOP-17404. ABFS: Small write - Merge append and flush

2021-01-22 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 4865589bb4414c87d9ac02b4323ebbd485348cf7
Author: Sneha Vijayarajan 
AuthorDate: Thu Jan 7 00:13:37 2021 +0530

HADOOP-17404. ABFS: Small write - Merge append and flush

- Contributed by Sneha Vijayarajan

(cherry picked from commit b612c310c26394aa406c99d8598c9cb7621df052)
---
 hadoop-tools/hadoop-azure/pom.xml  |   2 +
 .../src/config/checkstyle-suppressions.xml |   2 +
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |   8 +
 .../fs/azurebfs/AzureBlobFileSystemStore.java  |   1 +
 .../fs/azurebfs/constants/AbfsHttpConstants.java   |   1 +
 .../fs/azurebfs/constants/ConfigurationKeys.java   |   9 +
 .../constants/FileSystemConfigurations.java|   1 +
 .../fs/azurebfs/constants/HttpQueryParams.java |   1 +
 .../services/AppendRequestParameters.java  |  69 +++
 .../hadoop/fs/azurebfs/services/AbfsClient.java|  47 +-
 .../fs/azurebfs/services/AbfsOutputStream.java |  67 ++-
 .../azurebfs/services/AbfsOutputStreamContext.java |  11 +
 .../fs/azurebfs/services/AbfsRestOperation.java|   2 +-
 .../fs/azurebfs/ITestAbfsNetworkStatistics.java| 339 +++--
 .../fs/azurebfs/ITestSmallWriteOptimization.java   | 523 +
 .../azurebfs/services/ITestAbfsOutputStream.java   |  17 +-
 .../fs/azurebfs/services/TestAbfsOutputStream.java | 279 ++-
 17 files changed, 1030 insertions(+), 349 deletions(-)

diff --git a/hadoop-tools/hadoop-azure/pom.xml 
b/hadoop-tools/hadoop-azure/pom.xml
index 7bfce01..6b0599d 100644
--- a/hadoop-tools/hadoop-azure/pom.xml
+++ b/hadoop-tools/hadoop-azure/pom.xml
@@ -555,6 +555,7 @@
 
**/azurebfs/ITestAbfsReadWriteAndSeek.java
 
**/azurebfs/ITestAzureBlobFileSystemListStatus.java
 
**/azurebfs/extensions/ITestAbfsDelegationTokens.java
+
**/azurebfs/ITestSmallWriteOptimization.java
   
 
 
@@ -594,6 +595,7 @@
 
**/azurebfs/ITestAbfsReadWriteAndSeek.java
 
**/azurebfs/ITestAzureBlobFileSystemListStatus.java
 
**/azurebfs/extensions/ITestAbfsDelegationTokens.java
+
**/azurebfs/ITestSmallWriteOptimization.java
   
 
   
diff --git a/hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
b/hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
index c502361..070c8c1 100644
--- a/hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
+++ b/hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
@@ -46,4 +46,6 @@
   
files="org[\\/]apache[\\/]hadoop[\\/]fs[\\/]azurebfs[\\/]AzureBlobFileSystemStore.java"/>
 
+
 
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
index b1c95d2..5a70323 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
@@ -100,6 +100,10 @@ public class AbfsConfiguration{
   DefaultValue = DEFAULT_WRITE_BUFFER_SIZE)
   private int writeBufferSize;
 
+  @BooleanConfigurationValidatorAnnotation(ConfigurationKey = 
AZURE_ENABLE_SMALL_WRITE_OPTIMIZATION,
+  DefaultValue = DEFAULT_AZURE_ENABLE_SMALL_WRITE_OPTIMIZATION)
+  private boolean enableSmallWriteOptimization;
+
   @BooleanConfigurationValidatorAnnotation(
   ConfigurationKey = AZURE_READ_SMALL_FILES_COMPLETELY,
   DefaultValue = DEFAULT_READ_SMALL_FILES_COMPLETELY)
@@ -537,6 +541,10 @@ public class AbfsConfiguration{
 return this.writeBufferSize;
   }
 
+  public boolean isSmallWriteOptimizationEnabled() {
+return this.enableSmallWriteOptimization;
+  }
+
   public boolean readSmallFilesCompletely() {
 return this.readSmallFilesCompletely;
   }
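
Not part of the patch: a hedged sketch of switching the small-write optimization on from client code. The key constant is assumed to live in ConfigurationKeys (its string value is not visible in this excerpt).

import org.apache.hadoop.conf.Configuration;

import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.AZURE_ENABLE_SMALL_WRITE_OPTIMIZATION;

public final class SmallWriteOptimizationSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Merge the final append with the flush when the outstanding data is small.
    conf.setBoolean(AZURE_ENABLE_SMALL_WRITE_OPTIMIZATION, true);
    System.out.println(conf.getBoolean(AZURE_ENABLE_SMALL_WRITE_OPTIMIZATION, false));
  }
}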
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
index 869a6f9..c8dd518 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
@@ -578,6 +578,7 @@ public class AzureBlobFileSystemStore implements Closeable {
 return new 
AbfsOutputStreamContext(abfsConfiguration.getSasTokenRenewPeriodForStreamsInSeconds())
 .withWriteBufferSize(bufferSize)
 .enableFlush(abfsConfiguration.isFlushEnabled())
+

[hadoop] 05/06: HADOOP-17347. ABFS: Read optimizations

2021-01-22 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit cb6729224e15b89bd7fa7877fe045d28b3582f7b
Author: bilaharith <52483117+bilahar...@users.noreply.github.com>
AuthorDate: Sun Jan 3 00:07:10 2021 +0530

HADOOP-17347. ABFS: Read optimizations

- Contributed by Bilahari T H

(cherry picked from commit 1448add08fcd4a23e59eab5f75ef46fca6b1c3d1)
---
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |  28 ++
 .../fs/azurebfs/AzureBlobFileSystemStore.java  |   2 +
 .../fs/azurebfs/constants/ConfigurationKeys.java   |   2 +
 .../constants/FileSystemConfigurations.java|   6 +-
 .../fs/azurebfs/services/AbfsInputStream.java  | 194 +--
 .../azurebfs/services/AbfsInputStreamContext.java  |  24 ++
 .../fs/azurebfs/services/ITestAbfsInputStream.java | 256 +++
 .../services/ITestAbfsInputStreamReadFooter.java   | 358 +
 .../ITestAbfsInputStreamSmallFileReads.java| 326 +++
 9 files changed, 1175 insertions(+), 21 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
index 3d09a80..b1c95d2 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
@@ -100,6 +100,16 @@ public class AbfsConfiguration{
   DefaultValue = DEFAULT_WRITE_BUFFER_SIZE)
   private int writeBufferSize;
 
+  @BooleanConfigurationValidatorAnnotation(
+  ConfigurationKey = AZURE_READ_SMALL_FILES_COMPLETELY,
+  DefaultValue = DEFAULT_READ_SMALL_FILES_COMPLETELY)
+  private boolean readSmallFilesCompletely;
+
+  @BooleanConfigurationValidatorAnnotation(
+  ConfigurationKey = AZURE_READ_OPTIMIZE_FOOTER_READ,
+  DefaultValue = DEFAULT_OPTIMIZE_FOOTER_READ)
+  private boolean optimizeFooterRead;
+
   @IntegerConfigurationValidatorAnnotation(ConfigurationKey = 
AZURE_READ_BUFFER_SIZE,
   MinValue = MIN_BUFFER_SIZE,
   MaxValue = MAX_BUFFER_SIZE,
@@ -527,6 +537,14 @@ public class AbfsConfiguration{
 return this.writeBufferSize;
   }
 
+  public boolean readSmallFilesCompletely() {
+return this.readSmallFilesCompletely;
+  }
+
+  public boolean optimizeFooterRead() {
+return this.optimizeFooterRead;
+  }
+
   public int getReadBufferSize() {
 return this.readBufferSize;
   }
@@ -925,4 +943,14 @@ public class AbfsConfiguration{
 return authority;
   }
 
+  @VisibleForTesting
+  public void setReadSmallFilesCompletely(boolean readSmallFilesCompletely) {
+this.readSmallFilesCompletely = readSmallFilesCompletely;
+  }
+
+  @VisibleForTesting
+  public void setOptimizeFooterRead(boolean optimizeFooterRead) {
+this.optimizeFooterRead = optimizeFooterRead;
+  }
+
 }
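
As a hedged sketch of how the @VisibleForTesting setters above are meant to be used: a test can toggle the two new read paths directly on an AbfsConfiguration instance. Obtaining that instance from the filesystem or store under test is assumed here and not shown.

import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;

public final class ReadOptimizationToggles {

  private ReadOptimizationToggles() {
  }

  /** Turn both new read optimizations on for a test run. */
  public static void enableAll(AbfsConfiguration abfsConfiguration) {
    // Read small files completely into the buffer with a single remote call.
    abfsConfiguration.setReadSmallFilesCompletely(true);
    // Read the footer region eagerly, which helps footer-heavy formats such as Parquet and ORC.
    abfsConfiguration.setOptimizeFooterRead(true);
  }
}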
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
index a766c62..869a6f9 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
@@ -643,6 +643,8 @@ public class AzureBlobFileSystemStore implements Closeable {
 .withReadBufferSize(abfsConfiguration.getReadBufferSize())
 
.withReadAheadQueueDepth(abfsConfiguration.getReadAheadQueueDepth())
 .withTolerateOobAppends(abfsConfiguration.getTolerateOobAppends())
+
.withReadSmallFilesCompletely(abfsConfiguration.readSmallFilesCompletely())
+.withOptimizeFooterRead(abfsConfiguration.optimizeFooterRead())
 .withStreamStatistics(new AbfsInputStreamStatisticsImpl())
 .withShouldReadBufferSizeAlways(
 abfsConfiguration.shouldReadBufferSizeAlways())
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
index cb9c0de..3e1ff80 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
@@ -56,6 +56,8 @@ public final class ConfigurationKeys {
   public static final String AZURE_WRITE_MAX_REQUESTS_TO_QUEUE = 
"fs.azure.write.max.requests.to.queue";
   public static final String AZURE_WRITE_BUFFER_SIZE = 
"fs.azure.write.request.size";
   public static final String AZURE_READ_BUFFER_SIZE = 
"fs.azure.read.request.size";
+  public 

[hadoop] 04/06: HADOOP-17407. ABFS: Fix NPE on delete idempotency flow

2021-01-22 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit f3a0ca66c2d50ac6605010d970a8dbb4ceeeac1d
Author: Sneha Vijayarajan 
AuthorDate: Sat Jan 2 23:52:10 2021 +0530

HADOOP-17407. ABFS: Fix NPE on delete idempotency flow

- Contributed by Sneha Vijayarajan

(cherry picked from commit 5ca1ea89b3f57017768ae4d8002f353e3d240e07)
---
 .../hadoop/fs/azurebfs/services/AbfsClient.java|  3 ++
 .../fs/azurebfs/services/AbfsHttpOperation.java| 39 --
 .../azurebfs/ITestAzureBlobFileSystemDelete.java   | 31 ++--
 .../fs/azurebfs/services/TestAbfsClient.java   | 46 +
 .../fs/azurebfs/services/TestAbfsPerfTracker.java  | 13 +
 .../hadoop/fs/azurebfs/utils/TestMockHelpers.java  | 59 ++
 6 files changed, 183 insertions(+), 8 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
index 7722c62..db2f44f 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
@@ -383,6 +383,7 @@ public class AbfsClient implements Closeable {
   HttpHeaderConfigurations.LAST_MODIFIED);
 
   if (DateTimeUtils.isRecentlyModified(lmt, renameRequestStartTime)) {
+LOG.debug("Returning success response from rename idempotency 
logic");
 return destStatusOp;
   }
 }
@@ -450,6 +451,7 @@ public class AbfsClient implements Closeable {
 String fileLength = destStatusOp.getResult().getResponseHeader(
 HttpHeaderConfigurations.CONTENT_LENGTH);
 if (length <= Long.parseLong(fileLength)) {
+  LOG.debug("Returning success response from append blob idempotency 
code");
   return true;
 }
   }
@@ -627,6 +629,7 @@ public class AbfsClient implements Closeable {
   op.getUrl(),
   op.getRequestHeaders());
   successOp.hardSetResult(HttpURLConnection.HTTP_OK);
+  LOG.debug("Returning success response from delete idempotency logic");
   return successOp;
 }
 
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
index 51d0fb1..720b99b 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
@@ -86,12 +86,23 @@ public class AbfsHttpOperation implements AbfsPerfLoggable {
   private long sendRequestTimeMs;
   private long recvResponseTimeMs;
 
-  public static AbfsHttpOperation getAbfsHttpOperationWithFixedResult(final 
URL url,
-  final String method, final int httpStatus) {
-   return new AbfsHttpOperation(url, method, httpStatus);
+  public static AbfsHttpOperation getAbfsHttpOperationWithFixedResult(
+  final URL url,
+  final String method,
+  final int httpStatus) {
+AbfsHttpOperationWithFixedResult httpOp
+= new AbfsHttpOperationWithFixedResult(url, method, httpStatus);
+return httpOp;
   }
 
-  private AbfsHttpOperation(final URL url, final String method,
+  /**
+   * Constructor for FixedResult instance, avoiding connection init.
+   * @param url request url
+   * @param method Http method
+   * @param httpStatus HttpStatus
+   */
+  protected AbfsHttpOperation(final URL url,
+  final String method,
   final int httpStatus) {
 this.isTraceEnabled = LOG.isTraceEnabled();
 this.url = url;
@@ -547,4 +558,24 @@ public class AbfsHttpOperation implements AbfsPerfLoggable 
{
 return this.maskedEncodedUrl;
   }
 
+  public static class AbfsHttpOperationWithFixedResult extends 
AbfsHttpOperation {
+/**
+ * Creates an instance to represent fixed results.
+ * This is used in idempotency handling.
+ *
+ * @param url The full URL including query string parameters.
+ * @param method The HTTP method (PUT, PATCH, POST, GET, HEAD, or DELETE).
+ * @param httpStatus StatusCode to hard set
+ */
+public AbfsHttpOperationWithFixedResult(final URL url,
+final String method,
+final int httpStatus) {
+  super(url, method, httpStatus);
+}
+
+@Override
+public String getResponseHeader(final String httpHeader) {
+  return "";
+}
+  }
 }
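
A hedged sketch, not part of the patch, of what the fixed-result subclass changes for the idempotency path: such an operation never opened an HTTP connection, so header lookups now return an empty string rather than triggering the NPE being fixed. The URL and header name below are illustrative.

import java.net.HttpURLConnection;
import java.net.URL;

import org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation;

public final class FixedResultSketch {
  public static void main(String[] args) throws Exception {
    URL url = new URL("https://example.dfs.core.windows.net/container/path");
    AbfsHttpOperation successOp =
        AbfsHttpOperation.getAbfsHttpOperationWithFixedResult(
            url, "DELETE", HttpURLConnection.HTTP_OK);
    // Before this patch the lookup dereferenced a null connection; now it returns "".
    System.out.println("request id: [" + successOp.getResponseHeader("x-ms-request-id") + "]");
  }
}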
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelete.java
 

[hadoop] 03/06: HADOOP-17422: ABFS: Set default ListMaxResults to max server limit (#2535) Contributed by Sumangala Patki

2021-01-22 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 5f312a0d854f8d2c84099bb44783f07d84602625
Author: Sumangala 
AuthorDate: Wed Dec 9 15:35:03 2020 +0530

HADOOP-17422: ABFS: Set default ListMaxResults to max server limit (#2535)
Contributed by Sumangala Patki

TEST RESULTS:

namespace.enabled=true
auth.type=SharedKey
---
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
Tests run: 462, Failures: 0, Errors: 0, Skipped: 24
Tests run: 208, Failures: 0, Errors: 0, Skipped: 24

namespace.enabled=true
auth.type=OAuth
---
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
Tests run: 462, Failures: 0, Errors: 0, Skipped: 70
Tests run: 208, Failures: 0, Errors: 0, Skipped: 141

(cherry picked from commit a35fc3871b01d8a3a375f3ae0e330b55a1d9009f)
---
 .../hadoop/fs/azurebfs/constants/FileSystemConfigurations.java| 2 +-
 hadoop-tools/hadoop-azure/src/site/markdown/abfs.md   | 2 +-
 .../hadoop/fs/azurebfs/ITestAzureBlobFileSystemListStatus.java| 8 +++-
 3 files changed, 9 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java
index 49fc58b..27dafd0 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java
@@ -63,7 +63,7 @@ public final class FileSystemConfigurations {
   public static final int MAX_BUFFER_SIZE = 100 * ONE_MB;  // 100 MB
   public static final long MAX_AZURE_BLOCK_SIZE = 256 * 1024 * 1024L; // 
changing default abfs blocksize to 256MB
   public static final String AZURE_BLOCK_LOCATION_HOST_DEFAULT = "localhost";
-  public static final int DEFAULT_AZURE_LIST_MAX_RESULTS = 500;
+  public static final int DEFAULT_AZURE_LIST_MAX_RESULTS = 5000;
 
   public static final int MAX_CONCURRENT_READ_THREADS = 12;
   public static final int MAX_CONCURRENT_WRITE_THREADS = 8;
diff --git a/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md 
b/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md
index a418811..0777f9b 100644
--- a/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md
+++ b/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md
@@ -848,7 +848,7 @@ Please refer the following links for further information.
 listStatus API fetches the FileStatus information from server in a page by page
 manner. The config `fs.azure.list.max.results` used to set the maxResults URI
  param which sets the pagesize(maximum results per call). The value should
- be >  0. By default this will be 500. Server has a maximum value for this
+ be >  0. By default this will be 5000. Server has a maximum value for this
  parameter as 5000. So even if the config is above 5000 the response will only
 contain 5000 entries. Please refer the following link for further information.
 
https://docs.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/list
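
An illustrative sketch, separate from the patch, of overriding the new 5000 default described above; values above the server-side maximum of 5000 are capped by the service anyway.

import org.apache.hadoop.conf.Configuration;

public final class ListMaxResultsSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Smaller pages mean more round trips but smaller responses per listStatus call.
    conf.setInt("fs.azure.list.max.results", 500);
    System.out.println(conf.getInt("fs.azure.list.max.results", 5000));
  }
}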
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemListStatus.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemListStatus.java
index 25a1567..31f92d2 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemListStatus.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemListStatus.java
@@ -29,12 +29,15 @@ import java.util.concurrent.Future;
 
 import org.junit.Test;
 
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.LocatedFileStatus;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.contract.ContractTestUtils;
 
+import static 
org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.AZURE_LIST_MAX_RESULTS;
 import static org.apache.hadoop.fs.contract.ContractTestUtils.assertMkdirs;
 import static org.apache.hadoop.fs.contract.ContractTestUtils.createFile;
 import static org.apache.hadoop.fs.contract.ContractTestUtils.assertPathExists;
@@ -55,7 +58,10 @@ public class ITestAzureBlobFileSystemListStatus extends
 
   @Test
   public void testListPath() throws Exception {
-final AzureBlobFileSystem fs = getFileSystem();
+Configuration config = new Configuration(this.getRawConfiguration());