[hadoop] branch trunk updated: HADOOP-17422: ABFS: Set default ListMaxResults to max server limit (#2535) Contributed by Sumangala Patki

2020-12-20 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new a35fc38  HADOOP-17422: ABFS: Set default ListMaxResults to max server limit (#2535) Contributed by Sumangala Patki
a35fc38 is described below

commit a35fc3871b01d8a3a375f3ae0e330b55a1d9009f
Author: Sumangala 
AuthorDate: Wed Dec 9 15:35:03 2020 +0530

HADOOP-17422: ABFS: Set default ListMaxResults to max server limit (#2535)
Contributed by Sumangala Patki

TEST RESULTS:

namespace.enabled=true
auth.type=SharedKey
---
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
Tests run: 462, Failures: 0, Errors: 0, Skipped: 24
Tests run: 208, Failures: 0, Errors: 0, Skipped: 24

namespace.enabled=true
auth.type=OAuth
---
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
Tests run: 462, Failures: 0, Errors: 0, Skipped: 70
Tests run: 208, Failures: 0, Errors: 0, Skipped: 141
---
 .../hadoop/fs/azurebfs/constants/FileSystemConfigurations.java| 2 +-
 hadoop-tools/hadoop-azure/src/site/markdown/abfs.md   | 2 +-
 .../hadoop/fs/azurebfs/ITestAzureBlobFileSystemListStatus.java| 8 +++-
 3 files changed, 9 insertions(+), 3 deletions(-)
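
For reference, here is a minimal client-side sketch of overriding the new 5000-entry default page size. Only the property name fs.azure.list.max.results and its semantics come from this commit and the documentation change below; the account URI, container, and paths are illustrative placeholders.

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListMaxResultsExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Override the new 5000 default; the value must be > 0 and the server
        // caps it at 5000 regardless of what is configured.
        conf.setInt("fs.azure.list.max.results", 1000);
        FileSystem fs = FileSystem.get(
            URI.create("abfs://mycontainer@myaccount.dfs.core.windows.net/"), conf);
        for (FileStatus status : fs.listStatus(new Path("/data"))) {
          System.out.println(status.getPath());
        }
      }
    }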

diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java
index 49fc58b..27dafd0 100644
--- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java
+++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java
@@ -63,7 +63,7 @@ public final class FileSystemConfigurations {
   public static final int MAX_BUFFER_SIZE = 100 * ONE_MB;  // 100 MB
  public static final long MAX_AZURE_BLOCK_SIZE = 256 * 1024 * 1024L; // changing default abfs blocksize to 256MB
   public static final String AZURE_BLOCK_LOCATION_HOST_DEFAULT = "localhost";
-  public static final int DEFAULT_AZURE_LIST_MAX_RESULTS = 500;
+  public static final int DEFAULT_AZURE_LIST_MAX_RESULTS = 5000;
 
   public static final int MAX_CONCURRENT_READ_THREADS = 12;
   public static final int MAX_CONCURRENT_WRITE_THREADS = 8;
diff --git a/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md b/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md
index a418811..0777f9b 100644
--- a/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md
+++ b/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md
@@ -848,7 +848,7 @@ Please refer the following links for further information.
 listStatus API fetches the FileStatus information from server in a page by page
 manner. The config `fs.azure.list.max.results` used to set the maxResults URI
  param which sets the pagesize(maximum results per call). The value should
- be >  0. By default this will be 500. Server has a maximum value for this
+ be >  0. By default this will be 5000. Server has a maximum value for this
  parameter as 5000. So even if the config is above 5000 the response will only
 contain 5000 entries. Please refer the following link for further information.
 
https://docs.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/list
diff --git a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemListStatus.java b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemListStatus.java
index 25a1567..31f92d2 100644
--- a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemListStatus.java
+++ b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemListStatus.java
@@ -29,12 +29,15 @@ import java.util.concurrent.Future;
 
 import org.junit.Test;
 
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.LocatedFileStatus;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.contract.ContractTestUtils;
 
+import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.AZURE_LIST_MAX_RESULTS;
 import static org.apache.hadoop.fs.contract.ContractTestUtils.assertMkdirs;
 import static org.apache.hadoop.fs.contract.ContractTestUtils.createFile;
 import static org.apache.hadoop.fs.contract.ContractTestUtils.assertPathExists;
@@ -55,7 +58,10 @@ public class ITestAzureBlobFileSystemListStatus extends
 
   @Test
   public v

[hadoop] branch branch-3.3 updated: HADOOP-17397: ABFS: SAS Test updates for version and permission update

2020-12-03 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new a569505  HADOOP-17397: ABFS: SAS Test updates for version and permission update
a569505 is described below

commit a5695057b1894527438d8c9a1bda7005af4fb83d
Author: Thomas Marquardt 
AuthorDate: Tue Dec 1 04:47:46 2020 +

HADOOP-17397: ABFS: SAS Test updates for version and permission update

DETAILS:

The previous commit for HADOOP-17397 was not the correct fix.
DelegationSASGenerator.getDelegationSAS should return sp=p for the
set-permission and set-acl operations.  The tests have also been updated as
follows (a hedged sketch of the resulting permission selection appears after
this list):

1. When saoid and suoid are not specified, skoid must have an RBAC role
   assignment which grants
   Microsoft.Storage/storageAccounts/blobServices/containers/blobs/modifyPermissions/action
   and sp=p to set permissions or set ACL.

2. When saoid or suoid is specified, same as 1), but furthermore the saoid
   or suoid must be an owner of the file or directory in order for the
   operation to succeed.

3. When saoid or suoid is specified, the ownership check is bypassed by also
   including 'o' (ownership) in the SAS permission (for example, sp=op).
   Note that 'o' grants the saoid or suoid the ability to change the file or
   directory owner to themselves, and they can also change the owning group.
   Generally speaking, if a trusted authorizer would like to give a user the
   ability to change the permissions or ACL, then that user should be the
   file or directory owner.
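
Below is a hedged sketch of how the SAS permission value could be chosen for
the three cases above; it is illustrative only, is not the actual
DelegationSASGenerator.getDelegationSAS implementation, and the method and
parameter names are invented for this example.

    /**
     * Illustrative helper: choose the SAS "sp" (signed permissions) value for
     * a set-permission or set-acl request, following cases 1-3 above.
     */
    static String spForSetAclOrSetPermission(String saoid, String suoid,
        boolean bypassOwnershipCheck) {
      if (saoid == null && suoid == null) {
        // Case 1: skoid with the modifyPermissions RBAC action plus sp=p.
        return "p";
      }
      // Case 2: with saoid/suoid, sp=p additionally requires that the
      // saoid/suoid owns the file or directory.
      // Case 3: sp=op bypasses the ownership check by also granting
      // ownership ('o').
      return bypassOwnershipCheck ? "op" : "p";
    }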

TEST RESULTS:

namespace.enabled=true
auth.type=SharedKey
---
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 89, Failures: 0, Errors: 0, Skipped: 0
Tests run: 461, Failures: 0, Errors: 0, Skipped: 24
Tests run: 208, Failures: 0, Errors: 0, Skipped: 24

namespace.enabled=true
auth.type=OAuth
---
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 89, Failures: 0, Errors: 0, Skipped: 0
Tests run: 461, Failures: 0, Errors: 0, Skipped: 70
Tests run: 208, Failures: 0, Errors: 0, Skipped: 141
---
 .../ITestAzureBlobFileSystemDelegationSAS.java | 69 +-
 .../extensions/MockDelegationSASTokenProvider.java | 12 +++-
 .../fs/azurebfs/utils/DelegationSASGenerator.java  |  2 +-
 3 files changed, 78 insertions(+), 5 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java
index 75adaf3..0cff518 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java
@@ -25,6 +25,8 @@ import java.util.Arrays;
 import java.util.List;
 import java.util.UUID;
 
+import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsRestOperationException;
+import org.apache.hadoop.fs.azurebfs.contracts.services.AzureServiceErrorCode;
 import org.assertj.core.api.Assertions;
 import org.junit.Assume;
 import org.junit.Test;
@@ -94,13 +96,16 @@ public class ITestAzureBlobFileSystemDelegationSAS extends 
AbstractAbfsIntegrati
 final AzureBlobFileSystem fs = getFileSystem();
 
 Path rootPath = new Path("/");
+fs.setOwner(rootPath, MockDelegationSASTokenProvider.TEST_OWNER, null);
 fs.setPermission(rootPath, new FsPermission(FsAction.ALL, 
FsAction.READ_EXECUTE, FsAction.EXECUTE));
 FileStatus rootStatus = fs.getFileStatus(rootPath);
 assertEquals("The directory permissions are not expected.", "rwxr-x--x", 
rootStatus.getPermission().toString());
+assertEquals("The directory owner is not expected.",
+MockDelegationSASTokenProvider.TEST_OWNER,
+rootStatus.getOwner());
 
 Path dirPath = new Path(UUID.randomUUID().toString());
 fs.mkdirs(dirPath);
-fs.setOwner(dirPath, MockDelegationSASTokenProvider.TEST_OWNER, null);
 
 Path filePath = new Path(dirPath, "file1");
 fs.create(filePath).close();
@@ -324,8 +329,10 @@ public class ITestAzureBlobFileSystemDelegationSAS extends 
AbstractAbfsIntegrati
 final AzureBlobFileSystem fs = getFileSystem();
 Path rootPath = new Path(AbfsHttpConstants.ROOT_PATH);
 
+fs.setOwner(rootPath, MockDelegationSASTokenProvider.TEST_OWNER, null);
 Fil

[hadoop] branch trunk updated: HADOOP-17397: ABFS: SAS Test updates for version and permission update

2020-12-03 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 717b835  HADOOP-17397: ABFS: SAS Test updates for version and permission update
717b835 is described below

commit 717b8350687e0c5b435e954cc7519779b3f96851
Author: Thomas Marquardt 
AuthorDate: Tue Dec 1 04:47:46 2020 +

HADOOP-17397: ABFS: SAS Test updates for version and permission update

DETAILS:

The previous commit for HADOOP-17397 was not the correct fix.
DelegationSASGenerator.getDelegationSAS should return sp=p for the
set-permission and set-acl operations.  The tests have also been updated as
follows:

1. When saoid and suoid are not specified, skoid must have an RBAC role
   assignment which grants
   Microsoft.Storage/storageAccounts/blobServices/containers/blobs/modifyPermissions/action
   and sp=p to set permissions or set ACL.

2. When saoid or suoid is specified, same as 1), but furthermore the saoid
   or suoid must be an owner of the file or directory in order for the
   operation to succeed.

3. When saoid or suoid is specified, the ownership check is bypassed by also
   including 'o' (ownership) in the SAS permission (for example, sp=op).
   Note that 'o' grants the saoid or suoid the ability to change the file or
   directory owner to themselves, and they can also change the owning group.
   Generally speaking, if a trusted authorizer would like to give a user the
   ability to change the permissions or ACL, then that user should be the
   file or directory owner.

TEST RESULTS:

namespace.enabled=true
auth.type=SharedKey
---
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
Tests run: 462, Failures: 0, Errors: 0, Skipped: 24
Tests run: 208, Failures: 0, Errors: 0, Skipped: 24

namespace.enabled=true
auth.type=OAuth
---
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
Tests run: 462, Failures: 0, Errors: 0, Skipped: 70
Tests run: 208, Failures: 0, Errors: 0, Skipped: 141
---
 .../ITestAzureBlobFileSystemDelegationSAS.java | 69 +-
 .../extensions/MockDelegationSASTokenProvider.java | 12 +++-
 .../fs/azurebfs/utils/DelegationSASGenerator.java  |  2 +-
 3 files changed, 78 insertions(+), 5 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java
index 75adaf3..0cff518 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java
@@ -25,6 +25,8 @@ import java.util.Arrays;
 import java.util.List;
 import java.util.UUID;
 
+import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsRestOperationException;
+import org.apache.hadoop.fs.azurebfs.contracts.services.AzureServiceErrorCode;
 import org.assertj.core.api.Assertions;
 import org.junit.Assume;
 import org.junit.Test;
@@ -94,13 +96,16 @@ public class ITestAzureBlobFileSystemDelegationSAS extends 
AbstractAbfsIntegrati
 final AzureBlobFileSystem fs = getFileSystem();
 
 Path rootPath = new Path("/");
+fs.setOwner(rootPath, MockDelegationSASTokenProvider.TEST_OWNER, null);
 fs.setPermission(rootPath, new FsPermission(FsAction.ALL, 
FsAction.READ_EXECUTE, FsAction.EXECUTE));
 FileStatus rootStatus = fs.getFileStatus(rootPath);
 assertEquals("The directory permissions are not expected.", "rwxr-x--x", 
rootStatus.getPermission().toString());
+assertEquals("The directory owner is not expected.",
+MockDelegationSASTokenProvider.TEST_OWNER,
+rootStatus.getOwner());
 
 Path dirPath = new Path(UUID.randomUUID().toString());
 fs.mkdirs(dirPath);
-fs.setOwner(dirPath, MockDelegationSASTokenProvider.TEST_OWNER, null);
 
 Path filePath = new Path(dirPath, "file1");
 fs.create(filePath).close();
@@ -324,8 +329,10 @@ public class ITestAzureBlobFileSystemDelegationSAS extends 
AbstractAbfsIntegrati
 final AzureBlobFileSystem fs = getFileSystem();
 Path rootPath = new Path(AbfsHttpConstants.ROOT_PATH);
 
+fs.setOwner(rootPath, MockDelegationSASTokenProvider.TEST_OWNER, null);
 FileStatus status = fs.getFil

[hadoop] 06/09: HADOOP-17166. ABFS: configure output stream thread pool (#2179)

2020-10-14 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit f208da286cddebe594c240ed6e4c8c4850f1faeb
Author: bilaharith <52483117+bilahar...@users.noreply.github.com>
AuthorDate: Wed Sep 9 21:11:36 2020 +0530

HADOOP-17166. ABFS: configure output stream thread pool (#2179)


Adds the options to control the size of the per-output-stream thread pool
when writing data through the ABFS connector (see the usage sketch after
this commit message):

* fs.azure.write.max.concurrent.requests
* fs.azure.write.max.requests.to.queue

Contributed by Bilahari T H
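
A brief usage sketch follows; the property names and the fallback defaults are
from this commit, while the chosen values and the class name are illustrative.

    import org.apache.hadoop.conf.Configuration;

    public class AbfsWritePoolConfigExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Size of the per-output-stream write thread pool.
        conf.setInt("fs.azure.write.max.concurrent.requests", 8);
        // Maximum number of write requests allowed to wait in the queue.
        conf.setInt("fs.azure.write.max.requests.to.queue", 16);
        // When either key is unset (or < 1), AbfsConfiguration falls back to
        // 4 * availableProcessors() and 2 * the concurrent-request count.
      }
    }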
---
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  | 22 ++
 .../fs/azurebfs/AzureBlobFileSystemStore.java  |  2 +
 .../fs/azurebfs/constants/ConfigurationKeys.java   |  2 +
 .../fs/azurebfs/services/AbfsOutputStream.java | 18 -
 .../azurebfs/services/AbfsOutputStreamContext.java | 24 +++
 .../hadoop-azure/src/site/markdown/abfs.md | 13 
 .../azurebfs/services/ITestAbfsOutputStream.java   | 78 ++
 .../fs/azurebfs/services/TestAbfsOutputStream.java |  7 +-
 8 files changed, 163 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
index 85bd37a..66d4853 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
@@ -86,6 +86,14 @@ public class AbfsConfiguration{
   DefaultValue = DEFAULT_FS_AZURE_ACCOUNT_IS_HNS_ENABLED)
   private String isNamespaceEnabledAccount;
 
+  @IntegerConfigurationValidatorAnnotation(ConfigurationKey = 
AZURE_WRITE_MAX_CONCURRENT_REQUESTS,
+  DefaultValue = -1)
+  private int writeMaxConcurrentRequestCount;
+
+  @IntegerConfigurationValidatorAnnotation(ConfigurationKey = 
AZURE_WRITE_MAX_REQUESTS_TO_QUEUE,
+  DefaultValue = -1)
+  private int maxWriteRequestsToQueue;
+
   @IntegerConfigurationValidatorAnnotation(ConfigurationKey = 
AZURE_WRITE_BUFFER_SIZE,
   MinValue = MIN_BUFFER_SIZE,
   MaxValue = MAX_BUFFER_SIZE,
@@ -822,6 +830,20 @@ public class AbfsConfiguration{
 oauthTokenFetchRetryDeltaBackoff);
   }
 
+  public int getWriteMaxConcurrentRequestCount() {
+if (this.writeMaxConcurrentRequestCount < 1) {
+  return 4 * Runtime.getRuntime().availableProcessors();
+}
+return this.writeMaxConcurrentRequestCount;
+  }
+
+  public int getMaxWriteRequestsToQueue() {
+if (this.maxWriteRequestsToQueue < 1) {
+  return 2 * getWriteMaxConcurrentRequestCount();
+}
+return this.maxWriteRequestsToQueue;
+  }
+
   @VisibleForTesting
   void setReadBufferSize(int bufferSize) {
 this.readBufferSize = bufferSize;
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
index 9861e3a..23d2b5a 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
@@ -490,6 +490,8 @@ public class AzureBlobFileSystemStore implements Closeable {
 
.disableOutputStreamFlush(abfsConfiguration.isOutputStreamFlushDisabled())
 .withStreamStatistics(new AbfsOutputStreamStatisticsImpl())
 .withAppendBlob(isAppendBlob)
+
.withWriteMaxConcurrentRequestCount(abfsConfiguration.getWriteMaxConcurrentRequestCount())
+
.withMaxWriteRequestsToQueue(abfsConfiguration.getMaxWriteRequestsToQueue())
 .build();
   }
 
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
index 5f1ad31..681390c 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
@@ -52,6 +52,8 @@ public final class ConfigurationKeys {
   public static final String AZURE_OAUTH_TOKEN_FETCH_RETRY_DELTA_BACKOFF = 
"fs.azure.oauth.token.fetch.retry.delta.backoff";
 
   // Read and write buffer sizes defined by the user
+  public static final String AZURE_WRITE_MAX_CONCURRENT_REQUESTS = 
"fs.azure.write.max.concurrent.requests";
+  public static final String AZURE_WRITE_MAX_REQUESTS_TO_QUEUE = 
"fs.azure.write.max.requests.to.queue";
   public static final String AZURE_WRITE_BUFFER_SIZE = 
"fs.azure.wri

[hadoop] 08/09: HADOOP-17279: ABFS: testNegativeScenariosForCreateOverwriteDisabled fails for non-HNS account.

2020-10-14 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit da5db6a5a66d13fd8ba71b127bd6f16a1e3dade8
Author: Sneha Vijayarajan 
AuthorDate: Tue Sep 22 20:58:12 2020 +

HADOOP-17279: ABFS: testNegativeScenariosForCreateOverwriteDisabled fails for non-HNS account.

Contributed by Sneha Vijayarajan

Testing:

namespace.enabled=false
auth.type=SharedKey
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify

Tests run: 87, Failures: 0, Errors: 0, Skipped: 0
Tests run: 457, Failures: 0, Errors: 0, Skipped: 246
Tests run: 207, Failures: 0, Errors: 0, Skipped: 24

namespace.enabled=true
auth.type=SharedKey
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify

Tests run: 87, Failures: 0, Errors: 0, Skipped: 0
Tests run: 457, Failures: 0, Errors: 0, Skipped: 33
Tests run: 207, Failures: 0, Errors: 0, Skipped: 24

namespace.enabled=true
auth.type=OAuth
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify

Tests run: 87, Failures: 0, Errors: 0, Skipped: 0
Tests run: 457, Failures: 0, Errors: 0, Skipped: 74
Tests run: 207, Failures: 0, Errors: 0, Skipped: 140
---
 .../fs/azurebfs/ITestAzureBlobFileSystemCreate.java   | 15 +++
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCreate.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCreate.java
index 981ed25..09304d1 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCreate.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCreate.java
@@ -346,6 +346,7 @@ public class ITestAzureBlobFileSystemCreate extends
 
 AzureBlobFileSystemStore abfsStore = fs.getAbfsStore();
 abfsStore = setAzureBlobSystemStoreField(abfsStore, "client", mockClient);
+boolean isNamespaceEnabled = abfsStore.getIsNamespaceEnabled();
 
 AbfsRestOperation successOp = mock(
 AbfsRestOperation.class);
@@ -363,6 +364,7 @@ public class ITestAzureBlobFileSystemCreate extends
 AbfsRestOperationException preConditionResponseEx
 = getMockAbfsRestOperationException(HTTP_PRECON_FAILED);
 
+// mock for overwrite=false
 doThrow(conflictResponseEx) // Scn1: GFS fails with Http404
 .doThrow(conflictResponseEx) // Scn2: GFS fails with Http500
 .doThrow(
@@ -372,8 +374,10 @@ public class ITestAzureBlobFileSystemCreate extends
 .doThrow(
 serverErrorResponseEx) // Scn5: create overwrite=false fails with 
Http500
 .when(mockClient)
-.createPath(any(String.class), eq(true), eq(false), any(String.class),
-any(String.class), any(boolean.class), eq(null));
+.createPath(any(String.class), eq(true), eq(false),
+isNamespaceEnabled ? any(String.class) : eq(null),
+isNamespaceEnabled ? any(String.class) : eq(null),
+any(boolean.class), eq(null));
 
 doThrow(fileNotFoundResponseEx) // Scn1: GFS fails with Http404
 .doThrow(serverErrorResponseEx) // Scn2: GFS fails with Http500
@@ -382,13 +386,16 @@ public class ITestAzureBlobFileSystemCreate extends
 .when(mockClient)
 .getPathStatus(any(String.class), eq(false));
 
+// mock for overwrite=true
 doThrow(
 preConditionResponseEx) // Scn3: create overwrite=true fails with 
Http412
 .doThrow(
 serverErrorResponseEx) // Scn4: create overwrite=true fails with 
Http500
 .when(mockClient)
-.createPath(any(String.class), eq(true), eq(true), any(String.class),
-any(String.class), any(boolean.class), eq(null));
+.createPath(any(String.class), eq(true), eq(true),
+isNamespaceEnabled ? any(String.class) : eq(null),
+isNamespaceEnabled ? any(String.class) : eq(null),
+any(boolean.class), eq(null));
 
 // Scn1: GFS fails with Http404
 // Sequence of events expected:





[hadoop] 03/09: HADOOP-17149. ABFS: Fixing the testcase ITestGetNameSpaceEnabled

2020-10-14 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit e481d0108aa8f152610ca16b813c1dfaa568f1cc
Author: bilaharith <52483117+bilahar...@users.noreply.github.com>
AuthorDate: Wed Aug 5 22:31:04 2020 +0530

HADOOP-17149. ABFS: Fixing the testcase ITestGetNameSpaceEnabled

- Contributed by Bilahari T H
---
 .../fs/azurebfs/ITestGetNameSpaceEnabled.java  | 23 
 .../hadoop/fs/azurebfs/ITestSharedKeyAuth.java | 61 ++
 2 files changed, 61 insertions(+), 23 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestGetNameSpaceEnabled.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestGetNameSpaceEnabled.java
index 4268ff2..29de126 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestGetNameSpaceEnabled.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestGetNameSpaceEnabled.java
@@ -32,8 +32,6 @@ import org.apache.hadoop.fs.azurebfs.services.AbfsClient;
 import org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
-import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsRestOperationException;
-import org.apache.hadoop.fs.azurebfs.services.AuthType;
 
 import static org.mockito.ArgumentMatchers.anyString;
 import static org.mockito.Mockito.doReturn;
@@ -46,7 +44,6 @@ import static 
org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.D
 import static 
org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.TEST_CONFIGURATION_FILE_NAME;
 import static 
org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION;
 import static 
org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.FS_AZURE_ACCOUNT_IS_HNS_ENABLED;
-import static 
org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.FS_AZURE_ACCOUNT_KEY_PROPERTY_NAME;
 import static 
org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_TEST_NAMESPACE_ENABLED_ACCOUNT;
 import static org.apache.hadoop.test.LambdaTestUtils.intercept;
 
@@ -146,26 +143,6 @@ public class ITestGetNameSpaceEnabled extends 
AbstractAbfsIntegrationTest {
   }
 
   @Test
-  public void testFailedRequestWhenCredentialsNotCorrect() throws Exception {
-Assume.assumeTrue(this.getAuthType() == AuthType.SharedKey);
-Configuration config = this.getRawConfiguration();
-config.setBoolean(AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION, 
false);
-String accountName = this.getAccountName();
-String configkKey = FS_AZURE_ACCOUNT_KEY_PROPERTY_NAME + "." + accountName;
-// Provide a wrong sharedKey
-String secret = config.get(configkKey);
-secret = (char) (secret.charAt(0) + 1) + secret.substring(1);
-config.set(configkKey, secret);
-
-AzureBlobFileSystem fs = this.getFileSystem(config);
-intercept(AbfsRestOperationException.class,
-"\"Server failed to authenticate the request. Make sure the value 
of Authorization header is formed correctly including the signature.\", 403",
-()-> {
-  fs.getIsNamespaceEnabled();
-});
-  }
-
-  @Test
   public void testEnsureGetAclCallIsMadeOnceWhenConfigIsInvalid()
   throws Exception {
 unsetConfAndEnsureGetAclCallIsMadeOnce();
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestSharedKeyAuth.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestSharedKeyAuth.java
new file mode 100644
index 000..ab55ffa
--- /dev/null
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestSharedKeyAuth.java
@@ -0,0 +1,61 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.azurebfs;
+
+import org.junit.Assume;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import 
org.apache.hadoop.fs.azureb

[hadoop] 04/09: Upgrade store REST API version to 2019-12-12

2020-10-14 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 4072323de407e7693ac43f4f8370c51f417ed874
Author: Sneha Vijayarajan 
AuthorDate: Mon Aug 17 22:47:18 2020 +0530

Upgrade store REST API version to 2019-12-12

- Contributed by Sneha Vijayarajan
---
 .../main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
index b4447b9..45c1948 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
@@ -62,7 +62,7 @@ public class AbfsClient implements Closeable {
   public static final Logger LOG = LoggerFactory.getLogger(AbfsClient.class);
   private final URL baseUrl;
   private final SharedKeyCredentials sharedKeyCredentials;
-  private final String xMsVersion = "2018-11-09";
+  private final String xMsVersion = "2019-12-12";
   private final ExponentialRetryPolicy retryPolicy;
   private final String filesystem;
   private final AbfsConfiguration abfsConfiguration;





[hadoop] 02/09: HADOOP-17163. ABFS: Adding debug log for rename failures

2020-10-14 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit f73c90f0b07991c2871ab33fedfa16f0d4c88c74
Author: bilaharith <52483117+bilahar...@users.noreply.github.com>
AuthorDate: Wed Aug 5 22:08:13 2020 +0530

HADOOP-17163. ABFS: Adding debug log for rename failures

- Contributed by Bilahari T H
---
 .../src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java | 1 +
 1 file changed, 1 insertion(+)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
index 84d6068..513150a 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
@@ -323,6 +323,7 @@ public class AzureBlobFileSystem extends FileSystem {
   abfsStore.rename(qualifiedSrcPath, qualifiedDstPath);
   return true;
 } catch(AzureBlobFileSystemException ex) {
+  LOG.debug("Rename operation failed. ", ex);
   checkException(
   src,
   ex,





[hadoop] branch branch-3.3 updated (41a3c9b -> d5b4d04)

2020-10-14 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a change to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


 from 41a3c9b  HDFS-15628. HttpFS server throws NPE if a file is a symlink. Contributed by Ahmed Hussein.
 new fbf151e  HADOOP-17137. ABFS: Makes the test cases in ITestAbfsNetworkStatistics agnostic
 new f73c90f  HADOOP-17163. ABFS: Adding debug log for rename failures
 new e481d01  HADOOP-17149. ABFS: Fixing the testcase ITestGetNameSpaceEnabled
 new 4072323  Upgrade store REST API version to 2019-12-12
 new cc73503  HADOOP-16915. ABFS: Ignoring the test ITestAzureBlobFileSystemRandomRead.testRandomReadPerformance
 new f208da2  HADOOP-17166. ABFS: configure output stream thread pool (#2179)
 new d166420  HADOOP-17215: Support for conditional overwrite.
 new da5db6a  HADOOP-17279: ABFS: testNegativeScenariosForCreateOverwriteDisabled fails for non-HNS account.
 new d5b4d04  HADOOP-17301. ABFS: read-ahead error reporting breaks buffer management (#2369)

The 9 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |  30 +++
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java|   1 +
 .../fs/azurebfs/AzureBlobFileSystemStore.java  | 103 +++-
 .../fs/azurebfs/constants/ConfigurationKeys.java   |   6 +
 .../constants/FileSystemConfigurations.java|   1 +
 ...ConcurrentWriteOperationDetectedException.java} |  17 +-
 .../hadoop/fs/azurebfs/services/AbfsClient.java|   8 +-
 .../fs/azurebfs/services/AbfsOutputStream.java |  18 +-
 .../azurebfs/services/AbfsOutputStreamContext.java |  24 ++
 .../fs/azurebfs/services/ReadBufferManager.java|  36 ++-
 .../hadoop-azure/src/site/markdown/abfs.md |  13 +
 .../fs/azurebfs/ITestAbfsNetworkStatistics.java| 103 +---
 .../azurebfs/ITestAzureBlobFileSystemCreate.java   | 279 +
 .../fs/azurebfs/ITestAzureBlobFileSystemMkDir.java |  60 +
 .../ITestAzureBlobFileSystemRandomRead.java|   2 +
 .../fs/azurebfs/ITestGetNameSpaceEnabled.java  |  23 --
 .../hadoop/fs/azurebfs/ITestSharedKeyAuth.java |  61 +
 .../azurebfs/services/ITestAbfsOutputStream.java   |  78 ++
 .../fs/azurebfs/services/TestAbfsClient.java   |  62 +++--
 .../fs/azurebfs/services/TestAbfsInputStream.java  |  49 
 .../fs/azurebfs/services/TestAbfsOutputStream.java |   7 +-
 21 files changed, 870 insertions(+), 111 deletions(-)
 copy hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/exceptions/{TimeoutException.java => ConcurrentWriteOperationDetectedException.java} (72%)
 create mode 100644 hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestSharedKeyAuth.java
 create mode 100644 hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsOutputStream.java





[hadoop] 07/09: HADOOP-17215: Support for conditional overwrite.

2020-10-14 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit d1664203026c0bb862bb405f3f48c602ef699a2f
Author: Sneha Vijayarajan 
AuthorDate: Wed Aug 26 00:31:35 2020 +0530

HADOOP-17215: Support for conditional overwrite.

Contributed by Sneha Vijayarajan

DETAILS:

This change adds config key "fs.azure.enable.conditional.create.overwrite"
with a default of true.  When enabled, if create(path, overwrite: true) is
invoked and the file exists, the ABFS driver will first obtain its etag and
then attempt to overwrite the file on the condition that the etag matches.
The purpose of this is to mitigate the non-idempotency of this method.
Specifically, in the event of a network error or similar, the client will
retry and this can result in the file being created more than once, which
may result in data loss.  In essence this is like a poor man's file handle,
and will be addressed more thoroughly in the future when support for lease
is added to ABFS.
TEST RESULTS:

namespace.enabled=true
auth.type=SharedKey
---
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 87, Failures: 0, Errors: 0, Skipped: 0
Tests run: 457, Failures: 0, Errors: 0, Skipped: 42
Tests run: 207, Failures: 0, Errors: 0, Skipped: 24

namespace.enabled=true
auth.type=OAuth
---
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 87, Failures: 0, Errors: 0, Skipped: 0
Tests run: 457, Failures: 0, Errors: 0, Skipped: 74
Tests run: 207, Failures: 0, Errors: 0, Skipped: 140
---
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |   8 +
 .../fs/azurebfs/AzureBlobFileSystemStore.java  | 101 +++-
 .../fs/azurebfs/constants/ConfigurationKeys.java   |   4 +
 .../constants/FileSystemConfigurations.java|   1 +
 ...ConcurrentWriteOperationDetectedException.java} |  32 +--
 .../hadoop/fs/azurebfs/services/AbfsClient.java|   6 +-
 .../fs/azurebfs/ITestAbfsNetworkStatistics.java|  40 ++-
 .../azurebfs/ITestAzureBlobFileSystemCreate.java   | 272 +
 .../fs/azurebfs/ITestAzureBlobFileSystemMkDir.java |  60 +
 .../fs/azurebfs/services/TestAbfsClient.java   |  62 +++--
 10 files changed, 515 insertions(+), 71 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
index 66d4853..72a8a43 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
@@ -181,6 +181,10 @@ public class AbfsConfiguration{
   DefaultValue = DEFAULT_FS_AZURE_ATOMIC_RENAME_DIRECTORIES)
   private String azureAtomicDirs;
 
+  @BooleanConfigurationValidatorAnnotation(ConfigurationKey = 
FS_AZURE_ENABLE_CONDITIONAL_CREATE_OVERWRITE,
+  DefaultValue = DEFAULT_FS_AZURE_ENABLE_CONDITIONAL_CREATE_OVERWRITE)
+  private boolean enableConditionalCreateOverwrite;
+
   @StringConfigurationValidatorAnnotation(ConfigurationKey = 
FS_AZURE_APPEND_BLOB_KEY,
   DefaultValue = DEFAULT_FS_AZURE_APPEND_BLOB_DIRECTORIES)
   private String azureAppendBlobDirs;
@@ -573,6 +577,10 @@ public class AbfsConfiguration{
 return this.azureAtomicDirs;
   }
 
+  public boolean isConditionalCreateOverwriteEnabled() {
+return this.enableConditionalCreateOverwrite;
+  }
+
   public String getAppendBlobDirs() {
 return this.azureAppendBlobDirs;
   }
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
index 23d2b5a..d2a1d53 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
@@ -66,6 +66,7 @@ import 
org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations;
 import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsRestOperationException;
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AzureBlobFileSystemException;
+import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.ConcurrentWriteOperationDetectedException;
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.FileSystemOperationUnhandledExc

[hadoop] 01/09: HADOOP-17137. ABFS: Makes the test cases in ITestAbfsNetworkStatistics agnostic

2020-10-14 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit fbf151ef6f608e5acc0c478325434c88359724da
Author: bilaharith <52483117+bilahar...@users.noreply.github.com>
AuthorDate: Sat Aug 1 00:57:57 2020 +0530

HADOOP-17137. ABFS: Makes the test cases in ITestAbfsNetworkStatistics agnostic

- Contributed by Bilahari T H
---
 .../fs/azurebfs/ITestAbfsNetworkStatistics.java| 63 +-
 1 file changed, 38 insertions(+), 25 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsNetworkStatistics.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsNetworkStatistics.java
index e3a97b3..f6ee7a9 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsNetworkStatistics.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsNetworkStatistics.java
@@ -33,6 +33,9 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream;
 import org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation;
 
+import static org.apache.hadoop.fs.azurebfs.AbfsStatistic.CONNECTIONS_MADE;
+import static org.apache.hadoop.fs.azurebfs.AbfsStatistic.SEND_REQUESTS;
+
 public class ITestAbfsNetworkStatistics extends AbstractAbfsIntegrationTest {
 
   private static final Logger LOG =
@@ -57,6 +60,11 @@ public class ITestAbfsNetworkStatistics extends 
AbstractAbfsIntegrationTest {
 String testNetworkStatsString = "http_send";
 long connectionsMade, requestsSent, bytesSent;
 
+metricMap = fs.getInstrumentationMap();
+long connectionsMadeBeforeTest = metricMap
+.get(CONNECTIONS_MADE.getStatName());
+long requestsMadeBeforeTest = metricMap.get(SEND_REQUESTS.getStatName());
+
 /*
  * Creating AbfsOutputStream will result in 1 connection made and 1 send
  * request.
@@ -76,27 +84,26 @@ public class ITestAbfsNetworkStatistics extends 
AbstractAbfsIntegrationTest {
   /*
* Testing the network stats with 1 write operation.
*
-   * connections_made : 3(getFileSystem()) + 1(AbfsOutputStream) + 
2(flush).
+   * connections_made : (connections made above) + 2(flush).
*
-   * send_requests : 1(getFileSystem()) + 1(AbfsOutputStream) + 2(flush).
+   * send_requests : (requests sent above) + 2(flush).
*
* bytes_sent : bytes wrote in AbfsOutputStream.
*/
-  if 
(fs.getAbfsStore().isAppendBlobKey(fs.makeQualified(sendRequestPath).toString()))
 {
+  long extraCalls = 0;
+  if (!fs.getAbfsStore()
+  .isAppendBlobKey(fs.makeQualified(sendRequestPath).toString())) {
 // no network calls are made for hflush in case of appendblob
-connectionsMade = assertAbfsStatistics(AbfsStatistic.CONNECTIONS_MADE,
-5, metricMap);
-requestsSent = assertAbfsStatistics(AbfsStatistic.SEND_REQUESTS, 3,
-metricMap);
-  } else {
-connectionsMade = assertAbfsStatistics(AbfsStatistic.CONNECTIONS_MADE,
-6, metricMap);
-requestsSent = assertAbfsStatistics(AbfsStatistic.SEND_REQUESTS, 4,
-metricMap);
+extraCalls++;
   }
+  long expectedConnectionsMade = connectionsMadeBeforeTest + extraCalls + 
2;
+  long expectedRequestsSent = requestsMadeBeforeTest + extraCalls + 2;
+  connectionsMade = assertAbfsStatistics(CONNECTIONS_MADE,
+  expectedConnectionsMade, metricMap);
+  requestsSent = assertAbfsStatistics(SEND_REQUESTS, expectedRequestsSent,
+  metricMap);
   bytesSent = assertAbfsStatistics(AbfsStatistic.BYTES_SENT,
   testNetworkStatsString.getBytes().length, metricMap);
-
 }
 
 // To close the AbfsOutputStream 1 connection is made and 1 request is 
sent.
@@ -136,14 +143,14 @@ public class ITestAbfsNetworkStatistics extends 
AbstractAbfsIntegrationTest {
*/
   if 
(fs.getAbfsStore().isAppendBlobKey(fs.makeQualified(sendRequestPath).toString()))
 {
 // no network calls are made for hflush in case of appendblob
-assertAbfsStatistics(AbfsStatistic.CONNECTIONS_MADE,
+assertAbfsStatistics(CONNECTIONS_MADE,
 connectionsMade + 1 + LARGE_OPERATIONS, metricMap);
-assertAbfsStatistics(AbfsStatistic.SEND_REQUESTS,
+assertAbfsStatistics(SEND_REQUESTS,
 requestsSent + 1 + LARGE_OPERATIONS, metricMap);
   } else {
-assertAbfsStatistics(AbfsStatistic.CONNECTIONS_MADE,
+assertAbfsStatistics(CONNECTIONS_MADE,
 connectionsMade + 1 + LARGE_OPERATIONS * 2, metricMap);
-assertAbfsStatistics(AbfsStatistic.SEND_REQUESTS,
+assertAbfsStatistics(SEND_REQUESTS,
 requestsSent + 1 + LARGE_OPERATIONS * 2, metricMap);
   }
   asse

[hadoop] 09/09: HADOOP-17301. ABFS: read-ahead error reporting breaks buffer management (#2369)

2020-10-14 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit d5b4d04b0df9661e23904bb30f369552cc0ec95d
Author: Sneha Vijayarajan 
AuthorDate: Tue Oct 13 21:00:34 2020 +0530

HADOOP-17301. ABFS: read-ahead error reporting breaks buffer management (#2369)


Fixes read-ahead buffer management issues introduced by HADOOP-16852,
 "ABFS: Send error back to client for Read Ahead request failure".

Contributed by Sneha Vijayarajan
---
 .../fs/azurebfs/services/ReadBufferManager.java| 36 ++--
 .../fs/azurebfs/services/TestAbfsInputStream.java  | 49 ++
 2 files changed, 82 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferManager.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferManager.java
index 73c23b0..d7e031b 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferManager.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferManager.java
@@ -22,6 +22,7 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
+import java.util.ArrayList;
 import java.util.Collection;
 import java.util.LinkedList;
 import java.util.Queue;
@@ -218,6 +219,8 @@ final class ReadBufferManager {
   return false;  // there are no evict-able buffers
 }
 
+long currentTimeInMs = currentTimeMillis();
+
 // first, try buffers where all bytes have been consumed (approximated as 
first and last bytes consumed)
 for (ReadBuffer buf : completedReadList) {
   if (buf.isFirstByteConsumed() && buf.isLastByteConsumed()) {
@@ -242,14 +245,30 @@ final class ReadBufferManager {
 }
 
 // next, try any old nodes that have not been consumed
+// Failed read buffers (with buffer index=-1) that are older than
+// thresholdAge should be cleaned up, but at the same time should not
+// report successful eviction.
+// Queue logic expects that a buffer is freed up for read ahead when
+// eviction is successful, whereas a failed ReadBuffer would have released
+// its buffer when its status was set to READ_FAILED.
 long earliestBirthday = Long.MAX_VALUE;
+ArrayList oldFailedBuffers = new ArrayList<>();
 for (ReadBuffer buf : completedReadList) {
-  if (buf.getTimeStamp() < earliestBirthday) {
+  if ((buf.getBufferindex() != -1)
+  && (buf.getTimeStamp() < earliestBirthday)) {
 nodeToEvict = buf;
 earliestBirthday = buf.getTimeStamp();
+  } else if ((buf.getBufferindex() == -1)
+  && (currentTimeInMs - buf.getTimeStamp()) > 
thresholdAgeMilliseconds) {
+oldFailedBuffers.add(buf);
   }
 }
-if ((currentTimeMillis() - earliestBirthday > thresholdAgeMilliseconds) && 
(nodeToEvict != null)) {
+
+for (ReadBuffer buf : oldFailedBuffers) {
+  evict(buf);
+}
+
+if ((currentTimeInMs - earliestBirthday > thresholdAgeMilliseconds) && 
(nodeToEvict != null)) {
   return evict(nodeToEvict);
 }
 
@@ -417,7 +436,6 @@ final class ReadBufferManager {
   if (result == ReadBufferStatus.AVAILABLE && bytesActuallyRead > 0) {
 buffer.setStatus(ReadBufferStatus.AVAILABLE);
 buffer.setLength(bytesActuallyRead);
-completedReadList.add(buffer);
   } else {
 freeList.push(buffer.getBufferindex());
 // buffer will be deleted as per the eviction policy.
@@ -464,4 +482,16 @@ final class ReadBufferManager {
   void callTryEvict() {
 tryEvict();
   }
+
+  /**
+   * Test method that can mimic no free buffers scenario and also add a 
ReadBuffer
+   * into completedReadList. This readBuffer will get picked up by TryEvict()
+   * next time a new queue request comes in.
+   * @param buf that needs to be added to completedReadlist
+   */
+  @VisibleForTesting
+  void testMimicFullUseAndAddFailedBuffer(ReadBuffer buf) {
+freeList.clear();
+completedReadList.add(buf);
+  }
 }
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java
index c9dacd6..ae72c5a 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java
@@ -23,9 +23,12 @@ import java.io.IOException;
 import org.junit.Assert;
 import org.junit.Test;
 
+import org.assertj.core.api.Assertions;
+
 import org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest;
 import 
org.ap

[hadoop] 05/09: HADOOP-16915. ABFS: Ignoring the test ITestAzureBlobFileSystemRandomRead.testRandomReadPerformance

2020-10-14 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit cc7350302f4bde082d8dd5a01bf7d71b195089a2
Author: bilaharith <52483117+bilahar...@users.noreply.github.com>
AuthorDate: Tue Aug 25 00:30:55 2020 +0530

HADOOP-16915. ABFS: Ignoring the test ITestAzureBlobFileSystemRandomRead.testRandomReadPerformance

- Contributed by Bilahari T H
---
 .../apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemRandomRead.java   | 2 ++
 1 file changed, 2 insertions(+)

diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemRandomRead.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemRandomRead.java
index e5f64b5..f582763 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemRandomRead.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemRandomRead.java
@@ -23,6 +23,7 @@ import java.util.Random;
 import java.util.concurrent.Callable;
 
 import org.junit.Assume;
+import org.junit.Ignore;
 import org.junit.Test;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -412,6 +413,7 @@ public class ITestAzureBlobFileSystemRandomRead extends
   }
 
   @Test
+  @Ignore("HADOOP-16915")
   public void testRandomReadPerformance() throws Exception {
 Assume.assumeFalse("This test does not support namespace enabled account",
 this.getFileSystem().getIsNamespaceEnabled());





[hadoop] branch trunk updated: HADOOP-17279: ABFS: testNegativeScenariosForCreateOverwriteDisabled fails for non-HNS account.

2020-09-23 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new c3a90dd  HADOOP-17279: ABFS: testNegativeScenariosForCreateOverwriteDisabled fails for non-HNS account.
c3a90dd is described below

commit c3a90dd9186b664594238131596ae2de17bf70fc
Author: Sneha Vijayarajan 
AuthorDate: Tue Sep 22 20:58:12 2020 +

HADOOP-17279: ABFS: testNegativeScenariosForCreateOverwriteDisabled fails for non-HNS account.

Contributed by Sneha Vijayarajan

Testing:

namespace.enabled=false
auth.type=SharedKey
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify

Tests run: 87, Failures: 0, Errors: 0, Skipped: 0
Tests run: 457, Failures: 0, Errors: 0, Skipped: 246
Tests run: 207, Failures: 0, Errors: 0, Skipped: 24

namespace.enabled=true
auth.type=SharedKey
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify

Tests run: 87, Failures: 0, Errors: 0, Skipped: 0
Tests run: 457, Failures: 0, Errors: 0, Skipped: 33
Tests run: 207, Failures: 0, Errors: 0, Skipped: 24

namespace.enabled=true
auth.type=OAuth
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify

Tests run: 87, Failures: 0, Errors: 0, Skipped: 0
Tests run: 457, Failures: 0, Errors: 0, Skipped: 74
Tests run: 207, Failures: 0, Errors: 0, Skipped: 140
---
 .../fs/azurebfs/ITestAzureBlobFileSystemCreate.java   | 15 +++
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCreate.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCreate.java
index 981ed25..09304d1 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCreate.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCreate.java
@@ -346,6 +346,7 @@ public class ITestAzureBlobFileSystemCreate extends
 
 AzureBlobFileSystemStore abfsStore = fs.getAbfsStore();
 abfsStore = setAzureBlobSystemStoreField(abfsStore, "client", mockClient);
+boolean isNamespaceEnabled = abfsStore.getIsNamespaceEnabled();
 
 AbfsRestOperation successOp = mock(
 AbfsRestOperation.class);
@@ -363,6 +364,7 @@ public class ITestAzureBlobFileSystemCreate extends
 AbfsRestOperationException preConditionResponseEx
 = getMockAbfsRestOperationException(HTTP_PRECON_FAILED);
 
+// mock for overwrite=false
 doThrow(conflictResponseEx) // Scn1: GFS fails with Http404
 .doThrow(conflictResponseEx) // Scn2: GFS fails with Http500
 .doThrow(
@@ -372,8 +374,10 @@ public class ITestAzureBlobFileSystemCreate extends
 .doThrow(
 serverErrorResponseEx) // Scn5: create overwrite=false fails with 
Http500
 .when(mockClient)
-.createPath(any(String.class), eq(true), eq(false), any(String.class),
-any(String.class), any(boolean.class), eq(null));
+.createPath(any(String.class), eq(true), eq(false),
+isNamespaceEnabled ? any(String.class) : eq(null),
+isNamespaceEnabled ? any(String.class) : eq(null),
+any(boolean.class), eq(null));
 
 doThrow(fileNotFoundResponseEx) // Scn1: GFS fails with Http404
 .doThrow(serverErrorResponseEx) // Scn2: GFS fails with Http500
@@ -382,13 +386,16 @@ public class ITestAzureBlobFileSystemCreate extends
 .when(mockClient)
 .getPathStatus(any(String.class), eq(false));
 
+// mock for overwrite=true
 doThrow(
 preConditionResponseEx) // Scn3: create overwrite=true fails with 
Http412
 .doThrow(
 serverErrorResponseEx) // Scn4: create overwrite=true fails with 
Http500
 .when(mockClient)
-.createPath(any(String.class), eq(true), eq(true), any(String.class),
-any(String.class), any(boolean.class), eq(null));
+.createPath(any(String.class), eq(true), eq(true),
+isNamespaceEnabled ? any(String.class) : eq(null),
+isNamespaceEnabled ? any(String.class) : eq(null),
+any(boolean.class), eq(null));
 
 // Scn1: GFS fails with Http404
 // Sequence of events expected:





[hadoop] branch trunk updated: HADOOP-17215: Support for conditional overwrite.

2020-09-18 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new e31a636  HADOOP-17215: Support for conditional overwrite.
e31a636 is described below

commit e31a636e922a8fdbe0aa7cca53f6de7175e97254
Author: Sneha Vijayarajan 
AuthorDate: Wed Aug 26 00:31:35 2020 +0530

HADOOP-17215: Support for conditional overwrite.

Contributed by Sneha Vijayarajan

DETAILS:

This change adds config key "fs.azure.enable.conditional.create.overwrite"
with a default of true.  When enabled, if create(path, overwrite: true) is
invoked and the file exists, the ABFS driver will first obtain its etag and
then attempt to overwrite the file on the condition that the etag matches.
The purpose of this is to mitigate the non-idempotency of this method.
Specifically, in the event of a network error or similar, the client will
retry and this can result in the file being created more than once, which
may result in data loss.  In essence this is like a poor man's file handle,
and will be addressed more thoroughly in the future when support for lease
is added to ABFS.

TEST RESULTS:

namespace.enabled=true
auth.type=SharedKey
---
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 87, Failures: 0, Errors: 0, Skipped: 0
Tests run: 457, Failures: 0, Errors: 0, Skipped: 42
Tests run: 207, Failures: 0, Errors: 0, Skipped: 24

namespace.enabled=true
auth.type=OAuth
---
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 87, Failures: 0, Errors: 0, Skipped: 0
Tests run: 457, Failures: 0, Errors: 0, Skipped: 74
Tests run: 207, Failures: 0, Errors: 0, Skipped: 140
---
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |   8 +
 .../fs/azurebfs/AzureBlobFileSystemStore.java  | 101 +++-
 .../fs/azurebfs/constants/ConfigurationKeys.java   |   4 +
 .../constants/FileSystemConfigurations.java|   1 +
 ...ConcurrentWriteOperationDetectedException.java} |  32 +--
 .../hadoop/fs/azurebfs/services/AbfsClient.java|   6 +-
 .../fs/azurebfs/ITestAbfsNetworkStatistics.java|  40 ++-
 .../azurebfs/ITestAzureBlobFileSystemCreate.java   | 272 +
 .../fs/azurebfs/ITestAzureBlobFileSystemMkDir.java |  60 +
 .../fs/azurebfs/services/TestAbfsClient.java   |  62 +++--
 10 files changed, 515 insertions(+), 71 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
index 66d4853..72a8a43 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
@@ -181,6 +181,10 @@ public class AbfsConfiguration{
   DefaultValue = DEFAULT_FS_AZURE_ATOMIC_RENAME_DIRECTORIES)
   private String azureAtomicDirs;
 
+  @BooleanConfigurationValidatorAnnotation(ConfigurationKey = 
FS_AZURE_ENABLE_CONDITIONAL_CREATE_OVERWRITE,
+  DefaultValue = DEFAULT_FS_AZURE_ENABLE_CONDITIONAL_CREATE_OVERWRITE)
+  private boolean enableConditionalCreateOverwrite;
+
   @StringConfigurationValidatorAnnotation(ConfigurationKey = 
FS_AZURE_APPEND_BLOB_KEY,
   DefaultValue = DEFAULT_FS_AZURE_APPEND_BLOB_DIRECTORIES)
   private String azureAppendBlobDirs;
@@ -573,6 +577,10 @@ public class AbfsConfiguration{
 return this.azureAtomicDirs;
   }
 
+  public boolean isConditionalCreateOverwriteEnabled() {
+return this.enableConditionalCreateOverwrite;
+  }
+
   public String getAppendBlobDirs() {
 return this.azureAppendBlobDirs;
   }
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
index 23d2b5a..d2a1d53 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
@@ -66,6 +66,7 @@ import 
org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations;
 import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsRestOperationException;
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AzureBlobFileSystemException;
+import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.ConcurrentWriteOperationDetectedException;

[hadoop] branch trunk updated (fc2435c -> 0dc54d0)

2020-09-18 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from fc2435c  HADOOP-15136. Correct typos in filesystem.md (#2314)
 add 0dc54d0  HADOOP-17203: Revert HADOOP-17183. ABFS: Enabling checkaccess 
on ABFS

No new revisions were added by this update.

Summary of changes:
 .../constants/FileSystemConfigurations.java|   2 +-
 .../ITestAzureBlobFileSystemCheckAccess.java   | 110 -
 2 files changed, 40 insertions(+), 72 deletions(-)





[hadoop] 06/09: HADOOP-17065. Add Network Counters to ABFS (#2056)

2020-07-25 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit bbd3278d09a86faef47e3aebf4000aacbe76f02d
Author: Mehakmeet Singh 
AuthorDate: Fri Jun 19 18:33:49 2020 +0530

HADOOP-17065. Add Network Counters to ABFS (#2056)


Contributed by Mehakmeet Singh.
---
 ...sInstrumentation.java => AbfsCountersImpl.java} |  13 +-
 .../apache/hadoop/fs/azurebfs/AbfsStatistic.java   |  20 +-
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java|  15 +-
 .../fs/azurebfs/AzureBlobFileSystemStore.java  |  15 +-
 .../hadoop/fs/azurebfs/services/AbfsClient.java|  25 +-
 .../services/AbfsClientThrottlingAnalyzer.java |   7 +-
 .../services/AbfsClientThrottlingIntercept.java|  14 +-
 .../fs/azurebfs/services/AbfsRestOperation.java|  24 +-
 .../fs/azurebfs/AbstractAbfsIntegrationTest.java   |   3 +-
 .../fs/azurebfs/ITestAbfsNetworkStatistics.java| 253 +
 .../hadoop/fs/azurebfs/ITestAbfsStatistics.java|   2 +-
 .../fs/azurebfs/TestAbfsNetworkStatistics.java |  67 ++
 .../hadoop/fs/azurebfs/TestAbfsStatistics.java |   2 +-
 .../fs/azurebfs/services/TestAbfsClient.java   |   4 +-
 14 files changed, 430 insertions(+), 34 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsInstrumentation.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsCountersImpl.java
similarity index 96%
rename from 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsInstrumentation.java
rename to 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsCountersImpl.java
index 9094c40..57cc3ea 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsInstrumentation.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsCountersImpl.java
@@ -41,7 +41,7 @@ import static org.apache.hadoop.fs.azurebfs.AbfsStatistic.*;
 /**
  * Instrumentation of Abfs counters.
  */
-public class AbfsInstrumentation implements AbfsCounters {
+public class AbfsCountersImpl implements AbfsCounters {
 
   /**
* Single context for all the Abfs counters to separate them from other
@@ -78,10 +78,17 @@ public class AbfsInstrumentation implements AbfsCounters {
   DIRECTORIES_DELETED,
   FILES_CREATED,
   FILES_DELETED,
-  ERROR_IGNORED
+  ERROR_IGNORED,
+  CONNECTIONS_MADE,
+  SEND_REQUESTS,
+  GET_RESPONSES,
+  BYTES_SENT,
+  BYTES_RECEIVED,
+  READ_THROTTLES,
+  WRITE_THROTTLES
   };
 
-  public AbfsInstrumentation(URI uri) {
+  public AbfsCountersImpl(URI uri) {
 UUID fileSystemInstanceId = UUID.randomUUID();
 registry.tag(REGISTRY_ID,
 "A unique identifier for the instance",
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsStatistic.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsStatistic.java
index a9867aa..2935cd7 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsStatistic.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsStatistic.java
@@ -22,7 +22,7 @@ import 
org.apache.hadoop.fs.StorageStatistics.CommonStatisticNames;
 
 /**
  * Statistic which are collected in Abfs.
- * Available as metrics in {@link AbfsInstrumentation}.
+ * Available as metrics in {@link AbfsCountersImpl}.
  */
 public enum AbfsStatistic {
 
@@ -57,7 +57,23 @@ public enum AbfsStatistic {
   FILES_DELETED("files_deleted",
   "Total number of files deleted from the object store."),
   ERROR_IGNORED("error_ignored",
-  "Errors caught and ignored.");
+  "Errors caught and ignored."),
+
+  //Network statistics.
+  CONNECTIONS_MADE("connections_made",
+  "Total number of times a connection was made with the data store."),
+  SEND_REQUESTS("send_requests",
+  "Total number of times http requests were sent to the data store."),
+  GET_RESPONSES("get_responses",
+  "Total number of times a response was received."),
+  BYTES_SENT("bytes_sent",
+  "Total bytes uploaded."),
+  BYTES_RECEIVED("bytes_received",
+  "Total bytes received."),
+  READ_THROTTLES("read_throttles",
+  "Total number of times a read operation is throttled."),
+  WRITE_THROTTLES("write_throttles",
+  "Total number of times a write operation is throttled.");
 
   private String statName;
   private String statDescription;
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java

[hadoop] 09/09: Hadoop 17132. ABFS: Fix Rename and Delete Idempotency check trigger

2020-07-25 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 18ca80331c8c7028f8016142966c80249cd8da6a
Author: Sneha Vijayarajan 
AuthorDate: Tue Jul 21 21:52:38 2020 +0530

Hadoop 17132. ABFS: Fix Rename and Delete Idempotency check trigger

- Contributed by Sneha Vijayarajan
---
 .../hadoop/fs/azurebfs/services/AbfsClient.java| 55 +++-
 .../fs/azurebfs/services/AbfsRestOperation.java|  7 +-
 .../azurebfs/ITestAzureBlobFileSystemDelete.java   | 65 +-
 .../azurebfs/ITestAzureBlobFileSystemRename.java   | 68 +++
 .../fs/azurebfs/services/TestAbfsClient.java   | 76 ++
 5 files changed, 253 insertions(+), 18 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
index f747bd0..e1ea75e 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
@@ -336,10 +336,19 @@ public class AbfsClient implements Closeable {
 url,
 requestHeaders);
 Instant renameRequestStartTime = Instant.now();
-op.execute();
-
-if (op.getResult().getStatusCode() != HttpURLConnection.HTTP_OK) {
-  return renameIdempotencyCheckOp(renameRequestStartTime, op, destination);
+try {
+  op.execute();
+} catch (AzureBlobFileSystemException e) {
+final AbfsRestOperation idempotencyOp = renameIdempotencyCheckOp(
+renameRequestStartTime, op, destination);
+if (idempotencyOp.getResult().getStatusCode()
+== op.getResult().getStatusCode()) {
+  // idempotency did not return different result
+  // throw back the exception
+  throw e;
+} else {
+  return idempotencyOp;
+}
 }
 
 return op;
@@ -369,14 +378,21 @@ public class AbfsClient implements Closeable {
   // exists. Check on destination status and if it has a recent LMT 
timestamp.
   // If yes, return success, else fall back to original rename request 
failure response.
 
-  final AbfsRestOperation destStatusOp = getPathStatus(destination, false);
-  if (destStatusOp.getResult().getStatusCode() == 
HttpURLConnection.HTTP_OK) {
-String lmt = destStatusOp.getResult().getResponseHeader(
-HttpHeaderConfigurations.LAST_MODIFIED);
-
-if (DateTimeUtils.isRecentlyModified(lmt, renameRequestStartTime)) {
-  return destStatusOp;
+  try {
+final AbfsRestOperation destStatusOp = getPathStatus(destination,
+false);
+if (destStatusOp.getResult().getStatusCode()
+== HttpURLConnection.HTTP_OK) {
+  String lmt = destStatusOp.getResult().getResponseHeader(
+  HttpHeaderConfigurations.LAST_MODIFIED);
+
+  if (DateTimeUtils.isRecentlyModified(lmt, renameRequestStartTime)) {
+return destStatusOp;
+  }
 }
+  } catch (AzureBlobFileSystemException e) {
+// GetFileStatus on the destination failed, return original op
+return op;
   }
 }
 
@@ -570,10 +586,18 @@ public class AbfsClient implements Closeable {
 HTTP_METHOD_DELETE,
 url,
 requestHeaders);
+try {
 op.execute();
-
-if (op.getResult().getStatusCode() != HttpURLConnection.HTTP_OK) {
-  return deleteIdempotencyCheckOp(op);
+} catch (AzureBlobFileSystemException e) {
+  final AbfsRestOperation idempotencyOp = deleteIdempotencyCheckOp(op);
+  if (idempotencyOp.getResult().getStatusCode()
+  == op.getResult().getStatusCode()) {
+// idempotency did not return different result
+// throw back the exception
+throw e;
+  } else {
+return idempotencyOp;
+  }
 }
 
 return op;
@@ -822,7 +846,8 @@ public class AbfsClient implements Closeable {
 return createRequestUrl(EMPTY_STRING, query);
   }
 
-  private URL createRequestUrl(final String path, final String query)
+  @VisibleForTesting
+  protected URL createRequestUrl(final String path, final String query)
   throws AzureBlobFileSystemException {
 final String base = baseUrl.toString();
 String encodedPath = path;
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java
index f3986d4..936267a 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java

[hadoop] 07/09: HADOOP-16961. ABFS: Adding metrics to AbfsInputStream (#2076)

2020-07-25 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 7c9b45978637898a1ea8d8ff197bc493705c3fd8
Author: Mehakmeet Singh 
AuthorDate: Fri Jul 3 16:11:35 2020 +0530

HADOOP-16961. ABFS: Adding metrics to AbfsInputStream (#2076)


Contributed by Mehakmeet Singh.
---
 .../fs/azurebfs/AzureBlobFileSystemStore.java  |   2 +
 .../fs/azurebfs/services/AbfsInputStream.java  |  68 +
 .../azurebfs/services/AbfsInputStreamContext.java  |  12 +
 .../services/AbfsInputStreamStatistics.java|  93 +++
 .../services/AbfsInputStreamStatisticsImpl.java| 205 ++
 .../azurebfs/ITestAbfsInputStreamStatistics.java   | 297 +
 .../fs/azurebfs/TestAbfsInputStreamStatistics.java |  55 
 7 files changed, 732 insertions(+)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
index 27ca207..9aba59b 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
@@ -87,6 +87,7 @@ import org.apache.hadoop.fs.azurebfs.services.AbfsCounters;
 import org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation;
 import org.apache.hadoop.fs.azurebfs.services.AbfsInputStream;
 import org.apache.hadoop.fs.azurebfs.services.AbfsInputStreamContext;
+import org.apache.hadoop.fs.azurebfs.services.AbfsInputStreamStatisticsImpl;
 import org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream;
 import org.apache.hadoop.fs.azurebfs.services.AbfsOutputStreamContext;
 import org.apache.hadoop.fs.azurebfs.services.AbfsOutputStreamStatisticsImpl;
@@ -512,6 +513,7 @@ public class AzureBlobFileSystemStore implements Closeable {
 .withReadBufferSize(abfsConfiguration.getReadBufferSize())
 
.withReadAheadQueueDepth(abfsConfiguration.getReadAheadQueueDepth())
 .withTolerateOobAppends(abfsConfiguration.getTolerateOobAppends())
+.withStreamStatistics(new AbfsInputStreamStatisticsImpl())
 .build();
   }
 
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
index 50380c9..a809bde 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
@@ -68,6 +68,9 @@ public class AbfsInputStream extends FSInputStream implements 
CanUnbuffer,
   //  of valid bytes in 
buffer)
   private boolean closed = false;
 
+  /** Stream statistics. */
+  private final AbfsInputStreamStatistics streamStatistics;
+
   public AbfsInputStream(
   final AbfsClient client,
   final Statistics statistics,
@@ -86,6 +89,7 @@ public class AbfsInputStream extends FSInputStream implements 
CanUnbuffer,
 this.readAheadEnabled = true;
 this.cachedSasToken = new CachedSASToken(
 abfsInputStreamContext.getSasTokenRenewPeriodForStreamsInSeconds());
+this.streamStatistics = abfsInputStreamContext.getStreamStatistics();
   }
 
   public String getPath() {
@@ -105,10 +109,21 @@ public class AbfsInputStream extends FSInputStream 
implements CanUnbuffer,
 
   @Override
   public synchronized int read(final byte[] b, final int off, final int len) 
throws IOException {
+// check if buffer is null before logging the length
+if (b != null) {
+  LOG.debug("read requested b.length = {} offset = {} len = {}", b.length,
+  off, len);
+} else {
+  LOG.debug("read requested b = null offset = {} len = {}", off, len);
+}
+
 int currentOff = off;
 int currentLen = len;
 int lastReadBytes;
 int totalReadBytes = 0;
+if (streamStatistics != null) {
+  streamStatistics.readOperationStarted(off, len);
+}
 incrementReadOps();
 do {
   lastReadBytes = readOneBlock(b, currentOff, currentLen);
@@ -130,6 +145,8 @@ public class AbfsInputStream extends FSInputStream 
implements CanUnbuffer,
 }
 
 Preconditions.checkNotNull(b);
+LOG.debug("read one block requested b.length = {} off {} len {}", b.length,
+off, len);
 
 if (len == 0) {
   return 0;
@@ -155,6 +172,7 @@ public class AbfsInputStream extends FSInputStream 
implements CanUnbuffer,
   bCursor = 0;
   limit = 0;
   if (buffer == null) {
+LOG.debug("created new buffer size {}", bufferSize);
 buffer = new byte[bufferSize];

[hadoop] 04/09: HADOOP-17053. ABFS: Fix Account-specific OAuth config setting parsing

2020-07-25 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 903935da0f94a23b73c14584efc31a163389bcbc
Author: Sneha Vijayarajan 
AuthorDate: Wed May 27 13:56:09 2020 -0700

HADOOP-17053. ABFS: Fix Account-specific OAuth config setting parsing

Contributed by Sneha Vijayarajan
---
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |  91 --
 .../fs/azurebfs/TestAccountConfiguration.java  | 193 -
 2 files changed, 265 insertions(+), 19 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
index c56aebf..091b1c7 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
@@ -342,25 +342,73 @@ public class AbfsConfiguration{
   }
 
   /**
-   * Returns the account-specific Class if it exists, then looks for an
-   * account-agnostic value, and finally tries the default value.
+   * Returns account-specific token provider class if it exists, else checks if
+   * an account-agnostic setting is present for token provider class if 
AuthType
+   * matches with authType passed.
+   * @param authType AuthType effective on the account
* @param name Account-agnostic configuration key
* @param defaultValue Class returned if none is configured
* @param xface Interface shared by all possible values
+   * @param  Interface class type
* @return Highest-precedence Class object that was found
*/
-  public  Class getClass(String name, Class 
defaultValue, Class xface) {
+  public  Class getTokenProviderClass(AuthType authType,
+  String name,
+  Class defaultValue,
+  Class xface) {
+Class tokenProviderClass = getAccountSpecificClass(name, defaultValue,
+xface);
+
+// If there is none set specific for account
+// fall back to generic setting if Auth Type matches
+if ((tokenProviderClass == null)
+&& (authType == getAccountAgnosticEnum(
+FS_AZURE_ACCOUNT_AUTH_TYPE_PROPERTY_NAME, AuthType.SharedKey))) {
+  tokenProviderClass = getAccountAgnosticClass(name, defaultValue, xface);
+}
+
+return (tokenProviderClass == null)
+? null
+: tokenProviderClass.asSubclass(xface);
+  }
+
+  /**
+   * Returns the account-specific class if it exists, else returns default 
value.
+   * @param name Account-agnostic configuration key
+   * @param defaultValue Class returned if none is configured
+   * @param xface Interface shared by all possible values
+   * @param  Interface class type
+   * @return Account specific Class object that was found
+   */
+  public  Class getAccountSpecificClass(String name,
+  Class defaultValue,
+  Class xface) {
 return rawConfig.getClass(accountConf(name),
-rawConfig.getClass(name, defaultValue, xface),
+defaultValue,
 xface);
   }
 
   /**
-   * Returns the account-specific password in string form if it exists, then
+   * Returns account-agnostic Class if it exists, else returns the default 
value.
+   * @param name Account-agnostic configuration key
+   * @param defaultValue Class returned if none is configured
+   * @param xface Interface shared by all possible values
+   * @param  Interface class type
+   * @return Account-Agnostic Class object that was found
+   */
+  public  Class getAccountAgnosticClass(String name,
+  Class defaultValue,
+  Class xface) {
+return rawConfig.getClass(name, defaultValue, xface);
+  }
+
+  /**
+   * Returns the account-specific enum value if it exists, then
* looks for an account-agnostic value.
* @param name Account-agnostic configuration key
* @param defaultValue Value returned if none is configured
-   * @return value in String form if one exists, else null
+   * @param  Enum type
+   * @return enum value if one exists, else null
*/
   public > T getEnum(String name, T defaultValue) {
 return rawConfig.getEnum(accountConf(name),
@@ -368,6 +416,18 @@ public class AbfsConfiguration{
   }
 
   /**
+   * Returns the account-agnostic enum value if it exists, else
+   * return default.
+   * @param name Account-agnostic configuration key
+   * @param defaultValue Value returned if none is configured
+   * @param  Enum type
+   * @return enum value if one exists, else null
+   */
+  public > T getAccountAgnosticEnum(String name, T 
defaultValue) {
+return rawConfig.getEnum(name, defaultValue);
+  }
+
+  /**
* Unsets parameter in the underlying Configuration object.
* Provided only as a convenience; does not add any account logic.
* @param key Configuration key
@@ -577,8 +637,10 @@ public class AbfsConfiguration{

[hadoop] branch branch-3.3 updated (7ec692a -> 18ca803)

2020-07-25 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a change to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 7ec692a  YARN-4771. Some containers can be skipped during log 
aggregation after NM restart. Contributed by Jason Lowe and Jim Brennan.
 new eed06b4  Hadoop-17015. ABFS: Handling Rename and Delete idempotency
 new 27b20f9  HADOOP-17054. ABFS: Fix test AbfsClient authentication 
instance
 new 869a68b  HADOOP-16852: Report read-ahead error back
 new 903935d  HADOOP-17053. ABFS: Fix Account-specific OAuth config setting 
parsing
 new 8b7e774  HDFS-15168: ABFS enhancement to translate AAD to Linux 
identities. (#1978)
 new bbd3278  HADOOP-17065. Add Network Counters to ABFS (#2056)
 new 7c9b459  HADOOP-16961. ABFS: Adding metrics to AbfsInputStream (#2076)
 new f24e2ec  HADOOP-17058. ABFS: Support for AppendBlob in Hadoop ABFS 
Driver
 new 18ca803  Hadoop 17132. ABFS: Fix Rename and Delete Idempotency check 
trigger

The 9 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  | 104 -
 ...sInstrumentation.java => AbfsCountersImpl.java} |  13 +-
 .../apache/hadoop/fs/azurebfs/AbfsStatistic.java   |  20 +-
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java|  15 +-
 .../fs/azurebfs/AzureBlobFileSystemStore.java  | 102 +++--
 .../fs/azurebfs/constants/AbfsHttpConstants.java   |   2 +
 .../fs/azurebfs/constants/ConfigurationKeys.java   |  10 +
 .../constants/FileSystemConfigurations.java|   5 +
 .../fs/azurebfs/constants/HttpQueryParams.java |   1 +
 .../fs/azurebfs/oauth2/IdentityTransformer.java|  10 +-
 .../oauth2/IdentityTransformerInterface.java   |  62 +++
 .../azurebfs/oauth2/LocalIdentityTransformer.java  |  72 
 .../hadoop/fs/azurebfs/services/AbfsClient.java| 187 -
 .../services/AbfsClientThrottlingAnalyzer.java |   7 +-
 .../services/AbfsClientThrottlingIntercept.java|  14 +-
 .../fs/azurebfs/services/AbfsHttpOperation.java|  23 ++
 .../fs/azurebfs/services/AbfsInputStream.java  |  80 
 .../azurebfs/services/AbfsInputStreamContext.java  |  12 +
 .../services/AbfsInputStreamStatistics.java|  93 +
 .../services/AbfsInputStreamStatisticsImpl.java| 205 ++
 .../fs/azurebfs/services/AbfsOutputStream.java |  63 ++-
 .../azurebfs/services/AbfsOutputStreamContext.java |  12 +
 .../fs/azurebfs/services/AbfsRestOperation.java|  51 ++-
 .../hadoop/fs/azurebfs/services/ReadBuffer.java|  16 +
 .../fs/azurebfs/services/ReadBufferManager.java|  90 -
 .../fs/azurebfs/services/ReadBufferWorker.java |  12 +-
 .../hadoop/fs/azurebfs/utils/DateTimeUtils.java|  71 
 .../hadoop/fs/azurebfs/utils/IdentityHandler.java  |  42 ++
 .../utils/TextFileBasedIdentityHandler.java| 195 +
 .../hadoop-azure/src/site/markdown/abfs.md |  12 +
 .../fs/azurebfs/AbstractAbfsIntegrationTest.java   |   7 +-
 .../azurebfs/ITestAbfsInputStreamStatistics.java   | 297 ++
 .../fs/azurebfs/ITestAbfsNetworkStatistics.java| 281 +
 .../azurebfs/ITestAbfsOutputStreamStatistics.java  |   4 +
 .../fs/azurebfs/ITestAbfsReadWriteAndSeek.java |   3 +
 .../hadoop/fs/azurebfs/ITestAbfsStatistics.java|   2 +-
 .../fs/azurebfs/ITestAbfsStreamStatistics.java |  11 +-
 .../azurebfs/ITestAzureBlobFileSystemCreate.java   |  20 +-
 .../azurebfs/ITestAzureBlobFileSystemDelete.java   | 116 ++
 .../fs/azurebfs/ITestAzureBlobFileSystemE2E.java   |   3 +
 .../fs/azurebfs/ITestAzureBlobFileSystemFlush.java |  30 +-
 .../azurebfs/ITestAzureBlobFileSystemRename.java   | 181 -
 .../TestAbfsConfigurationFieldsValidation.java |   9 +-
 .../fs/azurebfs/TestAbfsInputStreamStatistics.java |  55 +++
 .../fs/azurebfs/TestAbfsNetworkStatistics.java |  67 +++
 .../hadoop/fs/azurebfs/TestAbfsStatistics.java |   2 +-
 .../fs/azurebfs/TestAccountConfiguration.java  | 193 -
 .../azurebfs/constants/TestConfigurationKeys.java  |   1 +
 .../fs/azurebfs/services/TestAbfsClient.java   | 109 -
 .../fs/azurebfs/services/TestAbfsInputStream.java  | 450 +
 .../fs/azurebfs/services/TestAbfsOutputStream.java | 430 
 .../services/TestTextFileBasedIdentityHandler.java | 149 +++
 .../fs/azurebfs/utils/TestCachedSASToken.java  |  34 ++
 53 files changed, 3935 insertions(+), 120 deletions(-)
 rename 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/{AbfsInstrumentation.java
 => AbfsCountersImpl.java} (96%)
 create mode 100644 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/IdentityTransformerInterface.java

[hadoop] 03/09: HADOOP-16852: Report read-ahead error back

2020-07-25 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 869a68b81e80a37986c6487e95b2890a79d091dd
Author: Sneha Vijayarajan 
AuthorDate: Wed May 27 13:51:42 2020 -0700

HADOOP-16852: Report read-ahead error back

Contributed by Sneha Vijayarajan
---
 .../fs/azurebfs/services/AbfsInputStream.java  |  12 +
 .../hadoop/fs/azurebfs/services/ReadBuffer.java|  16 +
 .../fs/azurebfs/services/ReadBufferManager.java|  90 -
 .../fs/azurebfs/services/ReadBufferWorker.java |  12 +-
 .../fs/azurebfs/services/TestAbfsInputStream.java  | 450 +
 .../fs/azurebfs/utils/TestCachedSASToken.java  |  34 ++
 6 files changed, 604 insertions(+), 10 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
index 422fa3b..50380c9 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
@@ -24,6 +24,10 @@ import java.io.IOException;
 import java.net.HttpURLConnection;
 
 import com.google.common.base.Preconditions;
+import com.google.common.annotations.VisibleForTesting;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import org.apache.hadoop.fs.CanUnbuffer;
 import org.apache.hadoop.fs.FSExceptionMessages;
@@ -41,6 +45,7 @@ import static org.apache.hadoop.util.StringUtils.toLowerCase;
  */
 public class AbfsInputStream extends FSInputStream implements CanUnbuffer,
 StreamCapabilities {
+  private static final Logger LOG = 
LoggerFactory.getLogger(AbfsInputStream.class);
 
   private final AbfsClient client;
   private final Statistics statistics;
@@ -239,6 +244,7 @@ public class AbfsInputStream extends FSInputStream 
implements CanUnbuffer,
 final AbfsRestOperation op;
 AbfsPerfTracker tracker = client.getAbfsPerfTracker();
 try (AbfsPerfInfo perfInfo = new AbfsPerfInfo(tracker, "readRemote", 
"read")) {
+  LOG.trace("Trigger client.read for path={} position={} offset={} 
length={}", path, position, offset, length);
   op = client.read(path, position, b, offset, length, tolerateOobAppends ? 
"*" : eTag, cachedSasToken.get());
   cachedSasToken.update(op.getSasToken());
   perfInfo.registerResult(op.getResult()).registerSuccess(true);
@@ -431,4 +437,10 @@ public class AbfsInputStream extends FSInputStream 
implements CanUnbuffer,
   byte[] getBuffer() {
 return buffer;
   }
+
+  @VisibleForTesting
+  protected void setCachedSasToken(final CachedSASToken cachedSasToken) {
+this.cachedSasToken = cachedSasToken;
+  }
+
 }
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBuffer.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBuffer.java
index 00e4f00..5d55726 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBuffer.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBuffer.java
@@ -18,10 +18,13 @@
 
 package org.apache.hadoop.fs.azurebfs.services;
 
+import java.io.IOException;
 import java.util.concurrent.CountDownLatch;
 
 import org.apache.hadoop.fs.azurebfs.contracts.services.ReadBufferStatus;
 
+import static 
org.apache.hadoop.fs.azurebfs.contracts.services.ReadBufferStatus.READ_FAILED;
+
 class ReadBuffer {
 
   private AbfsInputStream stream;
@@ -40,6 +43,8 @@ class ReadBuffer {
   private boolean isLastByteConsumed = false;
   private boolean isAnyByteConsumed = false;
 
+  private IOException errException = null;
+
   public AbfsInputStream getStream() {
 return stream;
   }
@@ -88,12 +93,23 @@ class ReadBuffer {
 this.bufferindex = bufferindex;
   }
 
+  public IOException getErrException() {
+return errException;
+  }
+
+  public void setErrException(final IOException errException) {
+this.errException = errException;
+  }
+
   public ReadBufferStatus getStatus() {
 return status;
   }
 
   public void setStatus(ReadBufferStatus status) {
 this.status = status;
+if (status == READ_FAILED) {
+  bufferindex = -1;
+}
   }
 
   public CountDownLatch getLatch() {
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferManager.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferManager.java
index 5b71cf0..73c23b0 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferManager.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferManager.java

[hadoop] 02/09: HADOOP-17054. ABFS: Fix test AbfsClient authentication instance

2020-07-25 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 27b20f96893e00223c90b87ccb643d3cb6fb3941
Author: Sneha Vijayarajan 
AuthorDate: Tue May 26 15:26:28 2020 -0700

HADOOP-17054. ABFS: Fix test AbfsClient authentication instance

Contributed by Sneha Vijayarajan
---
 .../fs/azurebfs/services/TestAbfsClient.java   | 42 +-
 1 file changed, 25 insertions(+), 17 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsClient.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsClient.java
index ce9c032..0fd65fb 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsClient.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsClient.java
@@ -246,21 +246,29 @@ public final class TestAbfsClient {
   AbfsClient baseAbfsClientInstance,
   AbfsConfiguration abfsConfig)
   throws AzureBlobFileSystemException {
-  AbfsPerfTracker tracker = new AbfsPerfTracker("test",
-  abfsConfig.getAccountName(),
-  abfsConfig);
-
-  // Create test AbfsClient
-  AbfsClient testClient = new AbfsClient(
-  baseAbfsClientInstance.getBaseUrl(),
-  new SharedKeyCredentials(abfsConfig.getAccountName().substring(0,
-  abfsConfig.getAccountName().indexOf(DOT)),
-  abfsConfig.getStorageAccountKey()),
-  abfsConfig,
-  new ExponentialRetryPolicy(abfsConfig.getMaxIoRetries()),
-  abfsConfig.getTokenProvider(),
-  tracker);
-
-  return testClient;
-}
+AuthType currentAuthType = abfsConfig.getAuthType(
+abfsConfig.getAccountName());
+
+AbfsPerfTracker tracker = new AbfsPerfTracker("test",
+abfsConfig.getAccountName(),
+abfsConfig);
+
+// Create test AbfsClient
+AbfsClient testClient = new AbfsClient(
+baseAbfsClientInstance.getBaseUrl(),
+(currentAuthType == AuthType.SharedKey
+? new SharedKeyCredentials(
+abfsConfig.getAccountName().substring(0,
+abfsConfig.getAccountName().indexOf(DOT)),
+abfsConfig.getStorageAccountKey())
+: null),
+abfsConfig,
+new ExponentialRetryPolicy(abfsConfig.getMaxIoRetries()),
+(currentAuthType == AuthType.OAuth
+? abfsConfig.getTokenProvider()
+: null),
+tracker);
+
+return testClient;
+  }
 }





[hadoop] 01/09: Hadoop-17015. ABFS: Handling Rename and Delete idempotency

2020-07-25 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit eed06b46ebeae15f2b492733d7383f7fab1b299d
Author: Sneha Vijayarajan 
AuthorDate: Tue May 19 12:30:07 2020 -0700

Hadoop-17015. ABFS: Handling Rename and Delete idempotency

Contributed by Sneha Vijayarajan.
---
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |   5 +
 .../fs/azurebfs/AzureBlobFileSystemStore.java  |  20 +---
 .../constants/FileSystemConfigurations.java|   3 +
 .../hadoop/fs/azurebfs/services/AbfsClient.java|  87 
 .../fs/azurebfs/services/AbfsHttpOperation.java|  13 +++
 .../fs/azurebfs/services/AbfsRestOperation.java|  20 +++-
 .../hadoop/fs/azurebfs/utils/DateTimeUtils.java|  71 +
 .../hadoop-azure/src/site/markdown/abfs.md |  12 +++
 .../azurebfs/ITestAzureBlobFileSystemDelete.java   |  53 ++
 .../azurebfs/ITestAzureBlobFileSystemRename.java   | 113 -
 .../TestAbfsConfigurationFieldsValidation.java |   7 ++
 .../fs/azurebfs/services/TestAbfsClient.java   |  23 +
 12 files changed, 409 insertions(+), 18 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
index 74f98a0..c56aebf 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
@@ -778,6 +778,11 @@ public class AbfsConfiguration{
   }
 
   @VisibleForTesting
+  void setMaxBackoffIntervalMilliseconds(int maxBackoffInterval) {
+this.maxBackoffInterval = maxBackoffInterval;
+  }
+
+  @VisibleForTesting
   void setIsNamespaceEnabledAccount(String isNamespaceEnabledAccount) {
 this.isNamespaceEnabledAccount = isNamespaceEnabledAccount;
   }
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
index 35fd439..ca51cc7 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
@@ -34,7 +34,6 @@ import java.nio.charset.Charset;
 import java.nio.charset.CharsetDecoder;
 import java.nio.charset.CharsetEncoder;
 import java.nio.charset.StandardCharsets;
-import java.text.ParseException;
 import java.text.SimpleDateFormat;
 import java.time.Instant;
 import java.util.ArrayList;
@@ -97,6 +96,7 @@ import org.apache.hadoop.fs.azurebfs.services.AbfsPerfTracker;
 import org.apache.hadoop.fs.azurebfs.services.AbfsPerfInfo;
 import org.apache.hadoop.fs.azurebfs.utils.Base64;
 import org.apache.hadoop.fs.azurebfs.utils.CRC64;
+import org.apache.hadoop.fs.azurebfs.utils.DateTimeUtils;
 import org.apache.hadoop.fs.azurebfs.utils.UriUtils;
 import org.apache.hadoop.fs.permission.AclEntry;
 import org.apache.hadoop.fs.permission.AclStatus;
@@ -129,7 +129,6 @@ public class AzureBlobFileSystemStore implements Closeable {
   private URI uri;
   private String userName;
   private String primaryUserGroup;
-  private static final String DATE_TIME_PATTERN = "E, dd MMM  HH:mm:ss z";
   private static final String TOKEN_DATE_PATTERN = 
"-MM-dd'T'HH:mm:ss.SSS'Z'";
   private static final String XMS_PROPERTIES_ENCODING = "ISO-8859-1";
   private static final int GET_SET_AGGREGATE_COUNT = 2;
@@ -673,7 +672,7 @@ public class AzureBlobFileSystemStore implements Closeable {
   resourceIsDir,
   1,
   blockSize,
-  parseLastModifiedTime(lastModified),
+  DateTimeUtils.parseLastModifiedTime(lastModified),
   path,
   eTag);
 }
@@ -749,7 +748,8 @@ public class AzureBlobFileSystemStore implements Closeable {
   long contentLength = entry.contentLength() == null ? 0 : 
entry.contentLength();
   boolean isDirectory = entry.isDirectory() == null ? false : 
entry.isDirectory();
   if (entry.lastModified() != null && !entry.lastModified().isEmpty()) 
{
-lastModifiedMillis = parseLastModifiedTime(entry.lastModified());
+lastModifiedMillis = DateTimeUtils.parseLastModifiedTime(
+entry.lastModified());
   }
 
   Path entryPath = new Path(File.separator + entry.name());
@@ -1240,18 +1240,6 @@ public class AzureBlobFileSystemStore implements 
Closeable {
 && resourceType.equalsIgnoreCase(AbfsHttpConstants.DIRECTORY);
   }
 
-  private long parseLastModifiedTime(final String lastModifiedTime

[hadoop] 05/09: HDFS-15168: ABFS enhancement to translate AAD to Linux identities. (#1978)

2020-07-25 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 8b7e77443df9781cfc3a4d3a42abc8849012cf10
Author: Karthik Amarnath 
AuthorDate: Thu May 28 19:00:23 2020 -0700

HDFS-15168: ABFS enhancement to translate AAD to Linux identities. (#1978)
---
 .../fs/azurebfs/AzureBlobFileSystemStore.java  |  15 +-
 .../fs/azurebfs/constants/AbfsHttpConstants.java   |   1 +
 .../fs/azurebfs/constants/ConfigurationKeys.java   |   7 +
 .../fs/azurebfs/oauth2/IdentityTransformer.java|  10 +-
 .../oauth2/IdentityTransformerInterface.java   |  62 +++
 .../azurebfs/oauth2/LocalIdentityTransformer.java  |  72 
 .../hadoop/fs/azurebfs/utils/IdentityHandler.java  |  42 +
 .../utils/TextFileBasedIdentityHandler.java| 195 +
 .../services/TestTextFileBasedIdentityHandler.java | 149 
 9 files changed, 547 insertions(+), 6 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
index ca51cc7..f478c4d 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
@@ -21,6 +21,7 @@ import java.io.Closeable;
 import java.io.File;
 import java.io.IOException;
 import java.io.OutputStream;
+import java.lang.reflect.InvocationTargetException;
 import java.io.UnsupportedEncodingException;
 import java.net.HttpURLConnection;
 import java.net.MalformedURLException;
@@ -79,6 +80,7 @@ import 
org.apache.hadoop.fs.azurebfs.extensions.ExtensionHelper;
 import org.apache.hadoop.fs.azurebfs.oauth2.AccessTokenProvider;
 import org.apache.hadoop.fs.azurebfs.oauth2.AzureADAuthenticator;
 import org.apache.hadoop.fs.azurebfs.oauth2.IdentityTransformer;
+import org.apache.hadoop.fs.azurebfs.oauth2.IdentityTransformerInterface;
 import org.apache.hadoop.fs.azurebfs.services.AbfsAclHelper;
 import org.apache.hadoop.fs.azurebfs.services.AbfsClient;
 import org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation;
@@ -116,6 +118,7 @@ import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.ROOT_PAT
 import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.SINGLE_WHITE_SPACE;
 import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.TOKEN_VERSION;
 import static 
org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.AZURE_ABFS_ENDPOINT;
+import static 
org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.FS_AZURE_IDENTITY_TRANSFORM_CLASS;
 
 /**
  * Provides the bridging logic between Hadoop's abstract filesystem and Azure 
Storage.
@@ -138,7 +141,7 @@ public class AzureBlobFileSystemStore implements Closeable {
   private Trilean isNamespaceEnabled;
   private final AuthType authType;
   private final UserGroupInformation userGroupInformation;
-  private final IdentityTransformer identityTransformer;
+  private final IdentityTransformerInterface identityTransformer;
   private final AbfsPerfTracker abfsPerfTracker;
 
   public AzureBlobFileSystemStore(URI uri, boolean isSecureScheme, 
Configuration configuration)
@@ -181,7 +184,15 @@ public class AzureBlobFileSystemStore implements Closeable 
{
 boolean useHttps = (usingOauth || abfsConfiguration.isHttpsAlwaysUsed()) ? 
true : isSecureScheme;
 this.abfsPerfTracker = new AbfsPerfTracker(fileSystemName, accountName, 
this.abfsConfiguration);
 initializeClient(uri, fileSystemName, accountName, useHttps);
-this.identityTransformer = new 
IdentityTransformer(abfsConfiguration.getRawConfiguration());
+final Class 
identityTransformerClass =
+configuration.getClass(FS_AZURE_IDENTITY_TRANSFORM_CLASS, 
IdentityTransformer.class,
+IdentityTransformerInterface.class);
+try {
+  this.identityTransformer =
+  
identityTransformerClass.getConstructor(Configuration.class).newInstance(configuration);
+} catch (IllegalAccessException | InstantiationException | 
IllegalArgumentException | InvocationTargetException | NoSuchMethodException e) 
{
+  throw new IOException(e);
+}
 LOG.trace("IdentityTransformer init complete");
   }
 
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/AbfsHttpConstants.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/AbfsHttpConstants.java
index 42dc923..8d45513 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/AbfsHttpConstants.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/AbfsHttpConstants.java
@@ -74,6 +74,7 @@ public final class AbfsHttpConstants {
 

[hadoop] 08/09: HADOOP-17058. ABFS: Support for AppendBlob in Hadoop ABFS Driver

2020-07-25 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit f24e2ec487b43110bab5909d9fd15b08d59f08f4
Author: ishaniahuja <50942176+ishaniah...@users.noreply.github.com>
AuthorDate: Sun Jul 5 01:55:14 2020 +0530

HADOOP-17058. ABFS: Support for AppendBlob in Hadoop ABFS Driver

- Contributed by Ishani Ahuja
---
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |   8 +
 .../fs/azurebfs/AzureBlobFileSystemStore.java  |  50 ++-
 .../fs/azurebfs/constants/AbfsHttpConstants.java   |   1 +
 .../fs/azurebfs/constants/ConfigurationKeys.java   |   3 +
 .../constants/FileSystemConfigurations.java|   2 +
 .../fs/azurebfs/constants/HttpQueryParams.java |   1 +
 .../hadoop/fs/azurebfs/services/AbfsClient.java|  46 ++-
 .../fs/azurebfs/services/AbfsHttpOperation.java|  10 +
 .../fs/azurebfs/services/AbfsOutputStream.java |  63 ++-
 .../azurebfs/services/AbfsOutputStreamContext.java |  12 +
 .../fs/azurebfs/AbstractAbfsIntegrationTest.java   |   4 +
 .../fs/azurebfs/ITestAbfsNetworkStatistics.java|  52 ++-
 .../azurebfs/ITestAbfsOutputStreamStatistics.java  |   4 +
 .../fs/azurebfs/ITestAbfsReadWriteAndSeek.java |   3 +
 .../fs/azurebfs/ITestAbfsStreamStatistics.java |  11 +-
 .../azurebfs/ITestAzureBlobFileSystemCreate.java   |  20 +-
 .../fs/azurebfs/ITestAzureBlobFileSystemE2E.java   |   3 +
 .../fs/azurebfs/ITestAzureBlobFileSystemFlush.java |  30 +-
 .../TestAbfsConfigurationFieldsValidation.java |   2 +-
 .../azurebfs/constants/TestConfigurationKeys.java  |   1 +
 .../fs/azurebfs/services/TestAbfsOutputStream.java | 430 +
 21 files changed, 714 insertions(+), 42 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
index 091b1c7..85bd37a 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
@@ -173,6 +173,10 @@ public class AbfsConfiguration{
   DefaultValue = DEFAULT_FS_AZURE_ATOMIC_RENAME_DIRECTORIES)
   private String azureAtomicDirs;
 
+  @StringConfigurationValidatorAnnotation(ConfigurationKey = 
FS_AZURE_APPEND_BLOB_KEY,
+  DefaultValue = DEFAULT_FS_AZURE_APPEND_BLOB_DIRECTORIES)
+  private String azureAppendBlobDirs;
+
   @BooleanConfigurationValidatorAnnotation(ConfigurationKey = 
AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION,
   DefaultValue = 
DEFAULT_AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION)
   private boolean createRemoteFileSystemDuringInitialization;
@@ -561,6 +565,10 @@ public class AbfsConfiguration{
 return this.azureAtomicDirs;
   }
 
+  public String getAppendBlobDirs() {
+return this.azureAppendBlobDirs;
+  }
+
   public boolean getCreateRemoteFileSystemDuringInitialization() {
 // we do not support creating the filesystem when AuthType is SAS
 return this.createRemoteFileSystemDuringInitialization
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
index 9aba59b..59c2e26 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
@@ -62,6 +62,7 @@ import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
 import org.apache.hadoop.fs.azurebfs.constants.FileSystemUriSchemes;
+import org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations;
 import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsRestOperationException;
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AzureBlobFileSystemException;
@@ -146,6 +147,11 @@ public class AzureBlobFileSystemStore implements Closeable 
{
   private final IdentityTransformerInterface identityTransformer;
   private final AbfsPerfTracker abfsPerfTracker;
 
+  /**
+   * The set of directories where we should store files as append blobs.
+   */
+  private Set appendBlobDirSet;
+
   public AzureBlobFileSystemStore(URI uri, boolean isSecureScheme,
   Configuration configuration,
   AbfsCounters abfsCounters) throws 
IOException {
@@ -197,6 +203,23 @@ public class AzureBlobFileSystemStore implements Closeable 
{
   throw new IOException(e);
 }
 LOG.trace("IdentityTransformer init complete");
+
+// Extract the 

[hadoop] 01/02: HADOOP-17089: WASB: Update azure-storage-java SDK Contributed by Thomas Marquardt

2020-06-25 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 0d4f9c778967ce0f83663c63389987335d47c3ea
Author: Thomas Marquardt 
AuthorDate: Wed Jun 24 18:37:25 2020 +

HADOOP-17089: WASB: Update azure-storage-java SDK
Contributed by Thomas Marquardt

DETAILS: WASB depends on the Azure Storage Java SDK. There is a concurrency
bug in the Azure Storage Java SDK that can cause the results of a list blobs
operation to appear empty. This causes the Filesystem listStatus and similar
APIs to return empty results. This has been seen in Spark workloads when jobs
use more than one executor core.

See Azure/azure-storage-java#546 for details on the bug in the Azure 
Storage SDK.
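
The failure mode can be checked with a simple concurrent listing loop. The
sketch below is a condensed, hypothetical version of the kind of test this
change adds (class name, thread count and helper names are illustrative, not
the exact test code): every thread listing a directory that contains exactly
one file must see exactly one entry.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ConcurrentListSketch {
  private static final int THREADS = 16;

  static void verifyConcurrentListing(FileSystem fs, Path dir) throws Exception {
    List<Callable<Integer>> tasks = new ArrayList<>(THREADS);
    for (int i = 0; i < THREADS; i++) {
      // Each task lists the directory and reports how many entries it saw.
      tasks.add(() -> fs.listStatus(dir).length);
    }
    ExecutorService es = Executors.newFixedThreadPool(THREADS);
    try {
      for (Future<Integer> result : es.invokeAll(tasks)) {
        int count = result.get();  // rethrows any exception from listStatus
        if (count != 1) {
          // With the buggy SDK, a listing could intermittently come back empty.
          throw new AssertionError("expected 1 entry, got " + count);
        }
      }
    } finally {
      es.shutdownNow();
    }
  }
}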

TESTS: A new test was added to validate the fix. All tests are passing:

$mvn -T 1C -Dparallel-tests=wasb -Dscale -DtestsThreadCount=8 clean verify
Tests run: 231, Failures: 0, Errors: 0, Skipped: 4
Tests run: 588, Failures: 0, Errors: 0, Skipped: 12
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0

$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 37, Failures: 0, Errors: 0, Skipped: 0
Tests run: 407, Failures: 0, Errors: 0, Skipped: 34
Tests run: 151, Failures: 0, Errors: 0, Skipped: 19
Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
---
 hadoop-project/pom.xml |  7 +--
 .../ITestNativeAzureFileSystemConcurrencyLive.java | 59 +-
 2 files changed, 58 insertions(+), 8 deletions(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 5c9fbf6..aa3db91 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1131,7 +1131,7 @@
   
 com.microsoft.azure
 azure-storage
-7.0.0
+7.0.1
   
 
   
@@ -1189,11 +1189,6 @@
1.46
test
  
-  
-com.microsoft.azure
-azure-storage
-5.4.0
-
   
  
 joda-time
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestNativeAzureFileSystemConcurrencyLive.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestNativeAzureFileSystemConcurrencyLive.java
index e72fff2..3a4d20f 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestNativeAzureFileSystemConcurrencyLive.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestNativeAzureFileSystemConcurrencyLive.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.fs.azure;
 
 
 import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.junit.Assert;
@@ -130,15 +131,56 @@ public class ITestNativeAzureFileSystemConcurrencyLive
 }
   }
 
+  /**
+   * Validate the bug fix for HADOOP-17089.  Please note that we were never
+   * able to reproduce this except during a Spark job that ran for multiple 
days
+   * and in a hacked-up azure-storage SDK that added sleep before and after
+   * the call to factory.setNamespaceAware(true) as shown in the description of
+   *
+   * @see <a href="https://github.com/Azure/azure-storage-java/pull/546">https://github.com/Azure/azure-storage-java/pull/546</a>
+   */
+  @Test(timeout = TEST_EXECUTION_TIMEOUT)
+  public void testConcurrentList() throws Exception {
+final Path testDir = new 
Path("/tmp/data-loss/11230174258112/_temporary/0/_temporary/attempt_20200624190514_0006_m_0");
+final Path testFile = new Path(testDir, 
"part-4-15ea87b1-312c-4fdf-1820-95afb3dfc1c3-a010.snappy.parquet");
+fs.create(testFile).close();
+List tasks = new ArrayList<>(THREAD_COUNT);
+
+for (int i = 0; i < THREAD_COUNT; i++) {
+  tasks.add(new ListTask(fs, testDir));
+}
+
+ExecutorService es = null;
+try {
+  es = Executors.newFixedThreadPool(THREAD_COUNT);
+
+  List> futures = es.invokeAll(tasks);
+
+  for (Future future : futures) {
+Assert.assertTrue(future.isDone());
+
+// we are using Callable, so if an exception
+// occurred during the operation, it will be thrown
+// when we call get
+long fileCount = future.get();
+assertEquals("The list should always contain 1 file.", 1, fileCount);
+  }
+} finally {
+  if (es != null) {
+es.shutdownNow();
+  }
+}
+  }
+
   abstract class FileSystemTask implements Callable {
 private final FileSystem fileSystem;
 private final Path path;
 
-protected FileSystem getFileSystem() {
+FileSystem getFileSystem() {
   return this.fileSystem;
 }
 
-protected Path getFilePath() {
+Path getFilePath() {
   return this.path;
 }
 
@@ -182,4 +224,17 @@ publ

[hadoop] 02/02: HADOOP-16845: Disable ITestAbfsClient.testContinuationTokenHavingEqualSign due to ADLS Gen2 service bug. Contributed by Sneha Vijayarajan.

2020-06-25 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit a6b0892539a89a27161e2cbecc9863d75d33ae13
Author: Sneha Vijayarajan 
AuthorDate: Thu Feb 6 18:41:06 2020 +

HADOOP-16845: Disable ITestAbfsClient.testContinuationTokenHavingEqualSign 
due to ADLS Gen2 service bug. Contributed by Sneha Vijayarajan.
---
 .../src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsClient.java| 2 ++
 1 file changed, 2 insertions(+)

diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsClient.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsClient.java
index eb34999..182664f 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsClient.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsClient.java
@@ -44,6 +44,8 @@ public final class ITestAbfsClient extends 
AbstractAbfsIntegrationTest {
 super();
   }
 
+  @Ignore("HADOOP-16845: Invalid continuation tokens are ignored by the ADLS "
+  + "Gen2 service, so we are disabling this test until the service is 
fixed.")
   @Test
   public void testContinuationTokenHavingEqualSign() throws Exception {
 final AzureBlobFileSystem fs = this.getFileSystem();





[hadoop] branch branch-2.10 updated (e81002b -> a6b0892)

2020-06-25 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a change to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from e81002b  SPNEGO TLS verification
 new 0d4f9c7  HADOOP-17089: WASB: Update azure-storage-java SDK Contributed 
by Thomas Marquardt
 new a6b0892  HADOOP-16845: Disable 
ITestAbfsClient.testContinuationTokenHavingEqualSign due to ADLS Gen2 service 
bug. Contributed by Sneha Vijayarajan.

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 hadoop-project/pom.xml |  7 +--
 .../ITestNativeAzureFileSystemConcurrencyLive.java | 59 +-
 .../apache/hadoop/fs/azurebfs/ITestAbfsClient.java |  2 +
 3 files changed, 60 insertions(+), 8 deletions(-)





[hadoop] branch branch-3.3 updated: HADOOP-17089: WASB: Update azure-storage-java SDK Contributed by Thomas Marquardt

2020-06-24 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new ee192c4  HADOOP-17089: WASB: Update azure-storage-java SDK Contributed 
by Thomas Marquardt
ee192c4 is described below

commit ee192c48265fe7dcf23bc33f6a6698bb41477ca9
Author: Thomas Marquardt 
AuthorDate: Wed Jun 24 18:37:25 2020 +

HADOOP-17089: WASB: Update azure-storage-java SDK
Contributed by Thomas Marquardt

DETAILS: WASB depends on the Azure Storage Java SDK. There is a concurrency
bug in the Azure Storage Java SDK that can cause the results of a list blobs
operation to appear empty. This causes the Filesystem listStatus and similar
APIs to return empty results. This has been seen in Spark work loads when 
jobs
use more than one executor core.

See Azure/azure-storage-java#546 for details on the bug in the Azure 
Storage SDK.

TESTS: A new test was added to validate the fix. All tests are passing:

wasb:
mvn -T 1C -Dparallel-tests=wasb -Dscale -DtestsThreadCount=8 clean verify
Tests run: 248, Failures: 0, Errors: 0, Skipped: 11
Tests run: 651, Failures: 0, Errors: 0, Skipped: 65

abfs:
mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 64, Failures: 0, Errors: 0, Skipped: 0
Tests run: 437, Failures: 0, Errors: 0, Skipped: 33
Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
---
 hadoop-project/pom.xml |  2 +-
 .../ITestNativeAzureFileSystemConcurrencyLive.java | 59 +-
 2 files changed, 58 insertions(+), 3 deletions(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 2805a39..f8b8274 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1402,7 +1402,7 @@
       <dependency>
         <groupId>com.microsoft.azure</groupId>
         <artifactId>azure-storage</artifactId>
-        <version>7.0.0</version>
+        <version>7.0.1</version>
  
 
   
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestNativeAzureFileSystemConcurrencyLive.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestNativeAzureFileSystemConcurrencyLive.java
index 1c868ea..2c99b84 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestNativeAzureFileSystemConcurrencyLive.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestNativeAzureFileSystemConcurrencyLive.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.fs.azure;
 
 
 import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.junit.Assert;
@@ -130,15 +131,56 @@ public class ITestNativeAzureFileSystemConcurrencyLive
 }
   }
 
+  /**
+   * Validate the bug fix for HADOOP-17089.  Please note that we were never
+   * able to reproduce this except during a Spark job that ran for multiple 
days
+   * and in a hacked-up azure-storage SDK that added sleep before and after
+   * the call to factory.setNamespaceAware(true) as shown in the description of
+   *
+   * @see <a href="https://github.com/Azure/azure-storage-java/pull/546">https://github.com/Azure/azure-storage-java/pull/546</a>
+   */
+  @Test(timeout = TEST_EXECUTION_TIMEOUT)
+  public void testConcurrentList() throws Exception {
+final Path testDir = new 
Path("/tmp/data-loss/11230174258112/_temporary/0/_temporary/attempt_20200624190514_0006_m_0");
+final Path testFile = new Path(testDir, 
"part-4-15ea87b1-312c-4fdf-1820-95afb3dfc1c3-a010.snappy.parquet");
+fs.create(testFile).close();
+List<ListTask> tasks = new ArrayList<>(THREAD_COUNT);
+
+for (int i = 0; i < THREAD_COUNT; i++) {
+  tasks.add(new ListTask(fs, testDir));
+}
+
+ExecutorService es = null;
+try {
+  es = Executors.newFixedThreadPool(THREAD_COUNT);
+
+  List<Future<Long>> futures = es.invokeAll(tasks);
+
+  for (Future<Long> future : futures) {
+Assert.assertTrue(future.isDone());
+
+// we are using Callable, so if an exception
+// occurred during the operation, it will be thrown
+// when we call get
+long fileCount = future.get();
+assertEquals("The list should always contain 1 file.", 1, fileCount);
+  }
+} finally {
+  if (es != null) {
+es.shutdownNow();
+  }
+}
+  }
+
  abstract class FileSystemTask<V> implements Callable<V> {
 private final FileSystem fileSystem;
 private final Path path;
 
-protected FileSystem getFileSystem() {
+FileSystem getFileSystem() {
   return this.fileSystem;
 }
 
-protected Path getFilePath() {
+Path getFilePath() {
   return this.path;
 }
 
@@ -182,4 +224,17 @@ public class ITestNativeAzureFileSystemConcurrencyLive
   return null;

[hadoop] branch trunk updated: HADOOP-17089: WASB: Update azure-storage-java SDK Contributed by Thomas Marquardt

2020-06-24 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 4b5b54c  HADOOP-17089: WASB: Update azure-storage-java SDK Contributed 
by Thomas Marquardt
4b5b54c is described below

commit 4b5b54c73f2fd9146237087a59453e2b5d70f9ed
Author: Thomas Marquardt 
AuthorDate: Wed Jun 24 18:37:25 2020 +

HADOOP-17089: WASB: Update azure-storage-java SDK
Contributed by Thomas Marquardt

DETAILS: WASB depends on the Azure Storage Java SDK. There is a concurrency
bug in the Azure Storage Java SDK that can cause the results of a list blobs
operation to appear empty. This causes the Filesystem listStatus and similar
APIs to return empty results. This has been seen in Spark work loads when 
jobs
use more than one executor core.

See Azure/azure-storage-java#546 for details on the bug in the Azure 
Storage SDK.
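
A minimal standalone sketch of the concurrent-listing pattern the new test exercises: create one file, list its parent directory from several threads at once, and require every listing to see exactly one entry. It assumes a local FileSystem, and the paths and thread count are illustrative values, not the ones used in the committed test.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ConcurrentListSketch {
  public static void main(String[] args) throws Exception {
    final int threads = 8;                               // illustrative value
    FileSystem fs = FileSystem.getLocal(new Configuration());
    Path dir = new Path("/tmp/concurrent-list-sketch");  // illustrative path
    Path file = new Path(dir, "part-00000");
    fs.create(file).close();

    // Each task lists the directory and reports how many entries it saw.
    List<Callable<Integer>> tasks = new ArrayList<>();
    for (int i = 0; i < threads; i++) {
      tasks.add(() -> fs.listStatus(dir).length);
    }

    ExecutorService es = Executors.newFixedThreadPool(threads);
    try {
      for (Future<Integer> f : es.invokeAll(tasks)) {
        if (f.get() != 1) {                              // the SDK bug made listings appear empty
          throw new AssertionError("listing unexpectedly empty");
        }
      }
    } finally {
      es.shutdownNow();
      fs.delete(dir, true);
    }
  }
}
```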

TESTS: A new test was added to validate the fix. All tests are passing:

wasb:
mvn -T 1C -Dparallel-tests=wasb -Dscale -DtestsThreadCount=8 clean verify
Tests run: 248, Failures: 0, Errors: 0, Skipped: 11
Tests run: 651, Failures: 0, Errors: 0, Skipped: 65

abfs:
mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 64, Failures: 0, Errors: 0, Skipped: 0
Tests run: 437, Failures: 0, Errors: 0, Skipped: 33
Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
---
 hadoop-project/pom.xml |  2 +-
 .../ITestNativeAzureFileSystemConcurrencyLive.java | 59 +-
 2 files changed, 58 insertions(+), 3 deletions(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 48928b5..4e819cd 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1419,7 +1419,7 @@
       <dependency>
         <groupId>com.microsoft.azure</groupId>
         <artifactId>azure-storage</artifactId>
-        <version>7.0.0</version>
+        <version>7.0.1</version>
  
 
   
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestNativeAzureFileSystemConcurrencyLive.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestNativeAzureFileSystemConcurrencyLive.java
index 1c868ea..2c99b84 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestNativeAzureFileSystemConcurrencyLive.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestNativeAzureFileSystemConcurrencyLive.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.fs.azure;
 
 
 import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.junit.Assert;
@@ -130,15 +131,56 @@ public class ITestNativeAzureFileSystemConcurrencyLive
 }
   }
 
+  /**
+   * Validate the bug fix for HADOOP-17089.  Please note that we were never
+   * able to reproduce this except during a Spark job that ran for multiple 
days
+   * and in a hacked-up azure-storage SDK that added sleep before and after
+   * the call to factory.setNamespaceAware(true) as shown in the description of
+   *
+   * @see <a href="https://github.com/Azure/azure-storage-java/pull/546">https://github.com/Azure/azure-storage-java/pull/546</a>
+   */
+  @Test(timeout = TEST_EXECUTION_TIMEOUT)
+  public void testConcurrentList() throws Exception {
+final Path testDir = new 
Path("/tmp/data-loss/11230174258112/_temporary/0/_temporary/attempt_20200624190514_0006_m_0");
+final Path testFile = new Path(testDir, 
"part-4-15ea87b1-312c-4fdf-1820-95afb3dfc1c3-a010.snappy.parquet");
+fs.create(testFile).close();
+List<ListTask> tasks = new ArrayList<>(THREAD_COUNT);
+
+for (int i = 0; i < THREAD_COUNT; i++) {
+  tasks.add(new ListTask(fs, testDir));
+}
+
+ExecutorService es = null;
+try {
+  es = Executors.newFixedThreadPool(THREAD_COUNT);
+
+  List<Future<Long>> futures = es.invokeAll(tasks);
+
+  for (Future<Long> future : futures) {
+Assert.assertTrue(future.isDone());
+
+// we are using Callable, so if an exception
+// occurred during the operation, it will be thrown
+// when we call get
+long fileCount = future.get();
+assertEquals("The list should always contain 1 file.", 1, fileCount);
+  }
+} finally {
+  if (es != null) {
+es.shutdownNow();
+  }
+}
+  }
+
  abstract class FileSystemTask<V> implements Callable<V> {
 private final FileSystem fileSystem;
 private final Path path;
 
-protected FileSystem getFileSystem() {
+FileSystem getFileSystem() {
   return this.fileSystem;
 }
 
-protected Path getFilePath() {
+Path getFilePath() {
   return this.path;
 }
 
@@ -182,4 +224,17 @@ public class ITestNativeAzureFileSystemConcurrencyLive
   return null;

[hadoop] branch branch-3.3 updated (7613191 -> 63d236c)

2020-06-19 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a change to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 7613191  HDFS-15372. Files in snapshots no longer see attribute 
provider permissions. Contributed by Stephen O'Donnell.
 new 76ee7e5  HADOOP-17002. ABFS: Adding config to determine if the account 
is HNS enabled or not
 new a2f4434  HADOOP-17018. Intermittent failing of 
ITestAbfsStreamStatistics in ABFS (#1990)
 new af98f32  HADOOP-16916: ABFS: Delegation SAS generator for integration 
with Ranger
 new 11307f3  HADOOP-17004. ABFS: Improve the ABFS driver documentation
 new d639c11  HADOOP-17004. Fixing a formatting issue
 new 63d236c  HADOOP-17076: ABFS: Delegation SAS Generator Updates 
Contributed by Thomas Marquardt.

The 6 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../hadoop-azure/dev-support/findbugs-exclude.xml  |  21 +-
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |  23 ++
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java|  10 +-
 .../fs/azurebfs/AzureBlobFileSystemStore.java  | 166 +
 .../fs/azurebfs/constants/ConfigurationKeys.java   |  10 +
 .../constants/FileSystemConfigurations.java|   4 +
 ...eption.java => TrileanConversionException.java} |  19 +-
 .../apache/hadoop/fs/azurebfs/enums/Trilean.java   |  80 +
 .../hadoop/fs/azurebfs/enums}/package-info.java|   2 +-
 .../fs/azurebfs/extensions/SASTokenProvider.java   |  25 +-
 .../fs/azurebfs/oauth2/AzureADAuthenticator.java   |   8 +-
 .../hadoop/fs/azurebfs/services/AbfsClient.java| 115 --
 .../fs/azurebfs/services/AbfsInputStream.java  |   8 +-
 .../azurebfs/services/AbfsInputStreamContext.java  |   3 +-
 .../fs/azurebfs/services/AbfsOutputStream.java |  12 +-
 .../azurebfs/services/AbfsOutputStreamContext.java |   3 +-
 .../fs/azurebfs/services/AbfsRestOperation.java|  32 +-
 .../fs/azurebfs/services/AbfsStreamContext.java|  13 +
 .../hadoop/fs/azurebfs/utils/CachedSASToken.java   | 207 +++
 .../hadoop-azure/src/site/markdown/abfs.md | 157 -
 .../src/site/markdown/testing_azure.md |  75 +++-
 .../fs/azurebfs/AbstractAbfsIntegrationTest.java   |  21 +-
 .../fs/azurebfs/ITestAbfsStreamStatistics.java |  19 +-
 .../ITestAzureBlobFileSystemAuthorization.java |   9 +
 .../ITestAzureBlobFileSystemCheckAccess.java   |   2 +
 .../ITestAzureBlobFileSystemDelegationSAS.java | 384 +
 .../fs/azurebfs/ITestGetNameSpaceEnabled.java  | 141 +++-
 .../apache/hadoop/fs/azurebfs/TrileanTests.java|  92 +
 .../azurebfs/constants/TestConfigurationKeys.java  |   8 +
 .../extensions/MockDelegationSASTokenProvider.java | 142 
 .../azurebfs/extensions/MockSASTokenProvider.java  |   6 +-
 .../fs/azurebfs/utils/DelegationSASGenerator.java  | 192 +++
 .../hadoop/fs/azurebfs/utils/SASGenerator.java | 112 +++---
 ...{SASGenerator.java => ServiceSASGenerator.java} |  75 ++--
 .../fs/azurebfs/utils/TestCachedSASToken.java  | 162 +
 35 files changed, 2086 insertions(+), 272 deletions(-)
 copy 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/exceptions/{InvalidUriException.java
 => TrileanConversionException.java} (70%)
 create mode 100644 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/enums/Trilean.java
 copy 
{hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token
 => 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/enums}/package-info.java
 (95%)
 create mode 100644 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/utils/CachedSASToken.java
 create mode 100644 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java
 create mode 100644 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TrileanTests.java
 create mode 100644 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/extensions/MockDelegationSASTokenProvider.java
 create mode 100644 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/utils/DelegationSASGenerator.java
 copy 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/utils/{SASGenerator.java
 => ServiceSASGenerator.java} (54%)
 create mode 100644 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/utils/TestCachedSASToken.java





[hadoop] 02/06: HADOOP-17018. Intermittent failing of ITestAbfsStreamStatistics in ABFS (#1990)

2020-06-19 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit a2f44344c346601607b9ac1de4598b754f9f2d72
Author: Mehakmeet Singh 
AuthorDate: Thu May 7 16:45:28 2020 +0530

HADOOP-17018. Intermittent failing of ITestAbfsStreamStatistics in ABFS 
(#1990)


Contributed by: Mehakmeet Singh

In some cases, ABFS-prefetch thread runs in the background which returns 
some bytes from the buffer and gives an extra readOp. Thus, making readOps 
values arbitrary and giving intermittent failures in some cases. Hence, readOps 
values of 2 or 3 are seen in different setups.
---
 .../hadoop/fs/azurebfs/ITestAbfsStreamStatistics.java | 19 ++-
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStreamStatistics.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStreamStatistics.java
index b749f49..51531f6 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStreamStatistics.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStreamStatistics.java
@@ -84,12 +84,21 @@ public class ITestAbfsStreamStatistics extends 
AbstractAbfsIntegrationTest {
 
   LOG.info("Result of Read operation : {}", result);
   /*
-  Testing if 2 read_ops value is coming after reading full content from a
-  file (3 if anything to read from Buffer too).
-  Reason: read() call gives read_ops=1,
-  reading from AbfsClient(http GET) gives read_ops=2.
+   * Testing if 2 read_ops value is coming after reading full content
+   * from a file (3 if anything to read from Buffer too). Reason: read()
+   * call gives read_ops=1, reading from AbfsClient(http GET) gives
+   * read_ops=2.
+   *
+   * In some cases ABFS-prefetch thread runs in the background which
+   * returns some bytes from buffer and gives an extra readOp.
+   * Thus, making readOps values arbitrary and giving intermittent
+   * failures in some cases. Hence, readOps values of 2 or 3 is seen in
+   * different setups.
+   *
*/
-  assertReadWriteOps("read", 2, statistics.getReadOps());
+  assertTrue(String.format("The actual value of %d was not equal to the "
+  + "expected value of 2 or 3", statistics.getReadOps()),
+  statistics.getReadOps() == 2 || statistics.getReadOps() == 3);
 
 } finally {
   IOUtils.cleanupWithLogger(LOG, inForOneOperation,





[hadoop] 03/06: HADOOP-16916: ABFS: Delegation SAS generator for integration with Ranger

2020-06-19 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit af98f32f7dbb9d71915690b66f12c33758011450
Author: Thomas Marquardt 
AuthorDate: Tue May 12 17:32:52 2020 +

HADOOP-16916: ABFS: Delegation SAS generator for integration with Ranger

Contributed by Thomas Marquardt.

DETAILS:

Previously we had a SASGenerator class which generated Service SAS, but we 
need to add DelegationSASGenerator.
I separated SASGenerator into a base class and two subclasses 
ServiceSASGenerator and DelegationSASGenerator.  The
code in ServiceSASGenerator is copied from SASGenerator but the 
DelegationSASGenerator code is new.  The
DelegationSASGenerator code demonstrates how to use Delegation SAS with 
minimal permissions, as would be used
by an authorization service such as Apache Ranger.  Adding this to the 
tests helps us lock in this behavior.

Added a MockDelegationSASTokenProvider for testing User Delegation SAS.

Fixed the ITestAzureBlobFileSystemCheckAccess tests to assume oauth client 
ID so that they are ignored when that
is not configured.

To improve performance, AbfsInputStream/AbfsOutputStream re-use SAS tokens 
until the expiry is within 120 seconds.
After this a new SAS will be requested.  The default period of 120 seconds 
can be changed using the configuration
setting "fs.azure.sas.token.renew.period.for.streams".

The SASTokenProvider operation names were updated to correspond better with 
the ADLS Gen2 REST API, since these
operations must be provided tokens with appropriate SAS parameters to 
succeed.

Support for the version 2.0 AAD authentication endpoint was added to 
AzureADAuthenticator.

The getFileStatus method was mistakenly calling the ADLS Gen2 Get 
Properties API which requires read permission
while the getFileStatus call only requires execute permission.  ADLS Gen2 
Get Status API is supposed to be used
for this purpose, so the underlying AbfsClient.getPathStatus API was 
updated with a includeProperties
parameter which is set to false for getFileStatus and true for getXAttr.
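
A hedged sketch of the two call paths being distinguished, using the public FileSystem API; the path and attribute name are illustrative, and the comments simply restate the permission difference described above.

```java
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StatusVersusPropertiesSketch {
  static void show(FileSystem abfs) throws Exception {
    Path p = new Path("/data/sample.txt");            // illustrative path

    // Maps to the ADLS Gen2 Get Status call after this change: execute permission suffices.
    FileStatus status = abfs.getFileStatus(p);

    // Needs the user-defined properties, so it maps to Get Properties (read permission).
    byte[] mime = abfs.getXAttr(p, "user.mime_type");

    System.out.println(status.getLen() + " bytes, mime=" + new String(mime, "UTF-8"));
  }
}
```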

Added SASTokenProvider support for delete recursive.

Fixed bugs in AzureBlobFileSystem where public methods were not validating 
the Path by calling makeQualified.  This is
necessary to avoid passing null paths and to convert relative paths into 
absolute paths.
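
A small sketch of the makeQualified pattern referred to above, assuming the standard FileSystem.makeQualified(Path) helper; the argument handling is illustrative, not the exact code added by the commit.

```java
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class QualifyPathSketch {
  static Path validateAndQualify(FileSystem fs, Path userSupplied) {
    if (userSupplied == null) {
      throw new IllegalArgumentException("path must not be null");
    }
    // Resolves a relative path like "data/file.txt" against the filesystem URI
    // and working directory, yielding an absolute, fully qualified path.
    return fs.makeQualified(userSupplied);
  }
}
```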

Canonicalized the path used for root path internally so that root path can 
be used with SAS tokens, which requires
that the path in the URL and the path in the SAS token match.  Internally 
the code was using
"//" instead of "/" for the root path, sometimes.  Also related to this, 
the AzureBlobFileSystemStore.getRelativePath
API was updated so that we no longer remove and then add back a preceding 
forward / to paths.

To run ITestAzureBlobFileSystemDelegationSAS tests follow the instructions 
in testing_azure.md under the heading
"To run Delegation SAS test cases".  You also need to set 
"fs.azure.enable.check.access" to true.

TEST RESULTS:

namespace.enabled=true
auth.type=SharedKey
---
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 63, Failures: 0, Errors: 0, Skipped: 0
Tests run: 432, Failures: 0, Errors: 0, Skipped: 41
Tests run: 206, Failures: 0, Errors: 0, Skipped: 24

namespace.enabled=false
auth.type=SharedKey
---
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 63, Failures: 0, Errors: 0, Skipped: 0
Tests run: 432, Failures: 0, Errors: 0, Skipped: 244
Tests run: 206, Failures: 0, Errors: 0, Skipped: 24

namespace.enabled=true
auth.type=SharedKey
sas.token.provider.type=MockDelegationSASTokenProvider
enable.check.access=true
---
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 63, Failures: 0, Errors: 0, Skipped: 0
Tests run: 432, Failures: 0, Errors: 0, Skipped: 33
Tests run: 206, Failures: 0, Errors: 0, Skipped: 24

namespace.enabled=true
auth.type=OAuth
---
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 63, Failures: 0, Errors: 0, Skipped: 0
Tests run: 432, Failures: 0, Errors: 1, Skipped: 74
Tests run: 206, Failures: 0, Errors: 0, Skipped: 140
---
 .../hadoop-azure/dev-support/findbugs-exclude.xml  |  21 +-
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |   9 +
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java|  10 +-
 .../fs/azurebfs/AzureBlobFileSystemStore.java  | 111 ---
 .../fs/azurebfs/constants/ConfigurationKeys.java   | 

[hadoop] 05/06: HADOOP-17004. Fixing a formatting issue

2020-06-19 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit d639c119867cf382815da0e427776c59aba0f5c8
Author: bilaharith <52483117+bilahar...@users.noreply.github.com>
AuthorDate: Thu May 21 00:21:48 2020 +0530

HADOOP-17004. Fixing a formatting issue

Contributed by Bilahari T H.
---
 hadoop-tools/hadoop-azure/src/site/markdown/abfs.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md 
b/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md
index 93141f1..6aa030b 100644
--- a/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md
+++ b/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md
@@ -751,7 +751,7 @@ The following configs are related to read and write 
operations.
 `fs.azure.io.retry.max.retries`: Sets the number of retries for IO operations.
 Currently this is used only for the server call retry logic. Used within
 AbfsClient class as part of the ExponentialRetryPolicy. The value should be
->= 0.
+greater than or equal to 0.
 
 `fs.azure.write.request.size`: To set the write buffer size. Specify the value
 in bytes. The value should be between 16384 to 104857600 both inclusive (16 KB





[hadoop] 04/06: HADOOP-17004. ABFS: Improve the ABFS driver documentation

2020-06-19 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 11307f3be9be494ec880e036c78705c41ca8ceae
Author: bilaharith <52483117+bilahar...@users.noreply.github.com>
AuthorDate: Tue May 19 09:15:54 2020 +0530

HADOOP-17004. ABFS: Improve the ABFS driver documentation

Contributed by Bilahari T H.
---
 .../hadoop-azure/src/site/markdown/abfs.md | 133 -
 1 file changed, 130 insertions(+), 3 deletions(-)

diff --git a/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md 
b/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md
index 89f52e7..93141f1 100644
--- a/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md
+++ b/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md
@@ -257,7 +257,8 @@ will have the URL 
`abfs://contain...@abfswales1.dfs.core.windows.net/`
 
 
 You can create a new container through the ABFS connector, by setting the 
option
- `fs.azure.createRemoteFileSystemDuringInitialization` to `true`.
+ `fs.azure.createRemoteFileSystemDuringInitialization` to `true`. Though the
+  same is not supported when AuthType is SAS.
 
 If the container does not exist, an attempt to list it with `hadoop fs -ls`
 will fail
@@ -317,8 +318,13 @@ driven by them.
 
 What can be changed is what secrets/credentials are used to authenticate the 
caller.
 
-The authentication mechanism is set in `fs.azure.account.auth.type` (or the 
account specific variant),
-and, for the various OAuth options `fs.azure.account.oauth.provider.type`
+The authentication mechanism is set in `fs.azure.account.auth.type` (or the
+account specific variant). The possible values are SharedKey, OAuth, Custom
+and SAS. For the various OAuth options use the config `fs.azure.account
+.oauth.provider.type`. Following are the implementations supported
+ClientCredsTokenProvider, UserPasswordTokenProvider, MsiTokenProvider and
+RefreshTokenBasedTokenProvider. An IllegalArgumentException is thrown if
+the specified provider type is not one of the supported.
 
 All secrets can be stored in JCEKS files. These are encrypted and password
 protected —use them or a compatible Hadoop Key Management Store wherever
@@ -350,6 +356,15 @@ the password, "key", retrieved from the XML/JCECKs 
configuration files.
 *Note*: The source of the account key can be changed through a custom key 
provider;
 one exists to execute a shell script to retrieve it.
 
+A custom key provider class can be provided with the config
+`fs.azure.account.keyprovider`. If a key provider class is specified the same
+will be used to get account key. Otherwise the Simple key provider will be used
+which will use the key specified for the config `fs.azure.account.key`.
+
+To retrieve using shell script, specify the path to the script for the config
+`fs.azure.shellkeyprovider.script`. ShellDecryptionKeyProvider class use the
+script specified to retrieve the key.
+
 ###  OAuth 2.0 Client Credentials
 
 OAuth 2.0 credentials of (client id, client secret, endpoint) are provided in 
the configuration/JCEKS file.
@@ -466,6 +481,13 @@ With an existing Oauth 2.0 token, make a request of the 
Active Directory endpoin
   
 
 
+  fs.azure.account.oauth2.refresh.endpoint
+  
+  
+  Refresh token endpoint
+  
+
+
   fs.azure.account.oauth2.client.id
   
   
@@ -507,6 +529,13 @@ The Azure Portal/CLI is used to create the service 
identity.
   
 
 
+  fs.azure.account.oauth2.msi.endpoint
+  
+  
+   MSI endpoint
+  
+
+
   fs.azure.account.oauth2.client.id
   
   
@@ -542,6 +571,26 @@ and optionally 
`org.apache.hadoop.fs.azurebfs.extensions.BoundDTExtension`.
 
 The declared class also holds responsibility to implement retry logic while 
fetching access tokens.
 
+###  Delegation Token 
Provider
+
+A delegation token provider supplies the ABFS connector with delegation tokens,
+helps renew and cancel the tokens by implementing the
+CustomDelegationTokenManager interface.
+
+```xml
+<property>
+  <name>fs.azure.enable.delegation.token</name>
+  <value>true</value>
+  <description>Make this true to use delegation token provider</description>
+</property>
+<property>
+  <name>fs.azure.delegation.token.provider.type</name>
+  <value>{fully-qualified-class-name-for-implementation-of-CustomDelegationTokenManager-interface}</value>
+</property>
+```
+In case delegation token is enabled, and the config `fs.azure.delegation.token
+.provider.type` is not provided then an IllegalArgumentException is thrown.
+
 ### Shared Access Signature (SAS) Token Provider
 
 A Shared Access Signature (SAS) token provider supplies the ABFS connector 
with SAS
@@ -691,6 +740,84 @@ Config `fs.azure.account.hns.enabled` provides an option 
to specify whether
 Config `fs.azure.enable.check.access` needs to be set true to enable
  the AzureBlobFileSystem.access().
 
+###  Primary User Group Options
+The group name which is part of FileStatus and AclStatus will be set the same 
as
+the username if the following config is set to true
+`fs.azure.skipUserGroupMetadataDuringInitialization`

[hadoop] 06/06: HADOOP-17076: ABFS: Delegation SAS Generator Updates Contributed by Thomas Marquardt.

2020-06-19 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 63d236c019909d321f7824bb0e043c7bddd60bf0
Author: Thomas Marquardt 
AuthorDate: Wed Jun 17 23:12:22 2020 +

HADOOP-17076: ABFS: Delegation SAS Generator Updates
Contributed by Thomas Marquardt.

DETAILS:
1) The authentication version in the service has been updated from Dec19 to 
Feb20, so need to update the client.
2) Add support and test cases for getXattr and setXAttr.
3) Update DelegationSASGenerator and related to use Duration instead of int 
for time periods.
4) Cleanup DelegationSASGenerator switch/case statement that maps 
operations to permissions.
5) Cleanup SASGenerator classes to use String.equals instead of == (see the sketch after this list).
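
A tiny sketch of the pitfall behind item 5, with illustrative values: == compares object identity, while equals compares contents, so two equal tokens built at runtime can compare unequal under ==.

```java
public class StringCompareSketch {
  public static void main(String[] args) {
    String expected = "sv=2020-02-10";
    String actual = new String("sv=2020-02-10"); // same contents, distinct object

    System.out.println(expected == actual);       // false: identity comparison
    System.out.println(expected.equals(actual));  // true: value comparison
  }
}
```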

TESTS:
Added tests for getXAttr and setXAttr.

All tests are passing against my account in eastus2euap:

 $mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
 Tests run: 76, Failures: 0, Errors: 0, Skipped: 0
 Tests run: 441, Failures: 0, Errors: 0, Skipped: 33
 Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
---
 .../fs/azurebfs/extensions/SASTokenProvider.java   |  2 +-
 .../ITestAzureBlobFileSystemDelegationSAS.java | 16 +++
 .../extensions/MockDelegationSASTokenProvider.java |  4 +--
 .../fs/azurebfs/utils/DelegationSASGenerator.java  | 32 ++
 .../hadoop/fs/azurebfs/utils/SASGenerator.java |  8 --
 .../fs/azurebfs/utils/ServiceSASGenerator.java |  6 ++--
 6 files changed, 42 insertions(+), 26 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/extensions/SASTokenProvider.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/extensions/SASTokenProvider.java
index 2cd44f1..a2cd292 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/extensions/SASTokenProvider.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/extensions/SASTokenProvider.java
@@ -33,6 +33,7 @@ import org.apache.hadoop.security.AccessControlException;
 public interface SASTokenProvider {
 
   String CHECK_ACCESS_OPERATION = "check-access";
+  String CREATE_DIRECTORY_OPERATION = "create-directory";
   String CREATE_FILE_OPERATION = "create-file";
   String DELETE_OPERATION = "delete";
   String DELETE_RECURSIVE_OPERATION = "delete-recursive";
@@ -40,7 +41,6 @@ public interface SASTokenProvider {
   String GET_STATUS_OPERATION = "get-status";
   String GET_PROPERTIES_OPERATION = "get-properties";
   String LIST_OPERATION = "list";
-  String CREATE_DIRECTORY_OPERATION = "create-directory";
   String READ_OPERATION = "read";
   String RENAME_SOURCE_OPERATION = "rename-source";
   String RENAME_DESTINATION_OPERATION = "rename-destination";
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java
index 07b5804..c2c691e 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java
@@ -25,6 +25,7 @@ import java.util.Arrays;
 import java.util.List;
 import java.util.UUID;
 
+import org.junit.Assert;
 import org.junit.Assume;
 import org.junit.Test;
 import org.slf4j.Logger;
@@ -365,4 +366,19 @@ public class ITestAzureBlobFileSystemDelegationSAS extends 
AbstractAbfsIntegrati
 }
 assertEquals(0, count);
   }
+
+  @Test
+  // Test filesystem operations getXAttr and setXAttr
+  public void testProperties() throws Exception {
+final AzureBlobFileSystem fs = getFileSystem();
+Path reqPath = new Path(UUID.randomUUID().toString());
+
+fs.create(reqPath).close();
+
+final String propertyName = "user.mime_type";
+final byte[] propertyValue = "text/plain".getBytes("utf-8");
+fs.setXAttr(reqPath, propertyName, propertyValue);
+
+assertArrayEquals(propertyValue, fs.getXAttr(reqPath, propertyName));
+  }
 }
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/extensions/MockDelegationSASTokenProvider.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/extensions/MockDelegationSASTokenProvider.java
index fa50bef..121256c 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/extensions/MockDelegationSASTokenProvider.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/extensions/MockDelegationSASTokenProvider.java

[hadoop] 01/06: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not

2020-06-19 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 76ee7e5494579b6f8adf1d86b17e97a63a8576ad
Author: bilaharith <52483117+bilahar...@users.noreply.github.com>
AuthorDate: Fri Apr 24 06:16:18 2020 +0530

HADOOP-17002. ABFS: Adding config to determine if the account is HNS 
enabled or not

Contributed by Bilahari T H.
---
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |  14 ++
 .../fs/azurebfs/AzureBlobFileSystemStore.java  |  59 ++---
 .../fs/azurebfs/constants/ConfigurationKeys.java   |   7 +
 .../constants/FileSystemConfigurations.java|   3 +
 .../exceptions/TrileanConversionException.java |  36 ++
 .../apache/hadoop/fs/azurebfs/enums/Trilean.java   |  80 
 .../hadoop/fs/azurebfs/enums/package-info.java |  22 
 .../hadoop-azure/src/site/markdown/abfs.md |   5 +
 .../fs/azurebfs/ITestGetNameSpaceEnabled.java  | 141 -
 .../apache/hadoop/fs/azurebfs/TrileanTests.java|  92 ++
 10 files changed, 438 insertions(+), 21 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
index 78d6260..d60bc37 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
@@ -47,6 +47,7 @@ import 
org.apache.hadoop.fs.azurebfs.diagnostics.BooleanConfigurationBasicValida
 import 
org.apache.hadoop.fs.azurebfs.diagnostics.IntegerConfigurationBasicValidator;
 import 
org.apache.hadoop.fs.azurebfs.diagnostics.LongConfigurationBasicValidator;
 import 
org.apache.hadoop.fs.azurebfs.diagnostics.StringConfigurationBasicValidator;
+import org.apache.hadoop.fs.azurebfs.enums.Trilean;
 import org.apache.hadoop.fs.azurebfs.extensions.CustomTokenProviderAdaptee;
 import org.apache.hadoop.fs.azurebfs.extensions.SASTokenProvider;
 import org.apache.hadoop.fs.azurebfs.oauth2.AccessTokenProvider;
@@ -81,6 +82,10 @@ public class AbfsConfiguration{
   private final boolean isSecure;
   private static final Logger LOG = 
LoggerFactory.getLogger(AbfsConfiguration.class);
 
+  @StringConfigurationValidatorAnnotation(ConfigurationKey = 
FS_AZURE_ACCOUNT_IS_HNS_ENABLED,
+  DefaultValue = DEFAULT_FS_AZURE_ACCOUNT_IS_HNS_ENABLED)
+  private String isNamespaceEnabledAccount;
+
   @IntegerConfigurationValidatorAnnotation(ConfigurationKey = 
AZURE_WRITE_BUFFER_SIZE,
   MinValue = MIN_BUFFER_SIZE,
   MaxValue = MAX_BUFFER_SIZE,
@@ -232,6 +237,10 @@ public class AbfsConfiguration{
 }
   }
 
+  public Trilean getIsNamespaceEnabledAccount() {
+return Trilean.getTrilean(isNamespaceEnabledAccount);
+  }
+
   /**
* Gets the Azure Storage account name corresponding to this instance of 
configuration.
* @return the Azure Storage account name
@@ -746,6 +755,11 @@ public class AbfsConfiguration{
 this.maxIoRetries = maxIoRetries;
   }
 
+  @VisibleForTesting
+  void setIsNamespaceEnabledAccount(String isNamespaceEnabledAccount) {
+this.isNamespaceEnabledAccount = isNamespaceEnabledAccount;
+  }
+
   private String getTrimmedPasswordString(String key, String defaultValue) 
throws IOException {
 String value = getPasswordString(key);
 if (StringUtils.isBlank(value)) {
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
index 62145e1..d37ceb3 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
@@ -73,6 +73,8 @@ import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.InvalidUriException;
 import org.apache.hadoop.fs.azurebfs.contracts.services.AzureServiceErrorCode;
 import org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema;
 import org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema;
+import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.TrileanConversionException;
+import org.apache.hadoop.fs.azurebfs.enums.Trilean;
 import org.apache.hadoop.fs.azurebfs.extensions.SASTokenProvider;
 import org.apache.hadoop.fs.azurebfs.extensions.ExtensionHelper;
 import org.apache.hadoop.fs.azurebfs.oauth2.AccessTokenProvider;
@@ -133,8 +135,7 @@ public class AzureBlobFileSystemStore implements Closeable {
 
   private final AbfsConfiguration abfsConfiguration;
   private final Set<String> azureAtomicRenameDirSet;
-  private boolean isNamespaceEnabledSet;
-  private boolean isNamespaceEnabled;
+  private Trilean isNamespaceEnabled;

[hadoop] branch trunk updated: HADOOP-17076: ABFS: Delegation SAS Generator Updates Contributed by Thomas Marquardt.

2020-06-17 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new caf3995  HADOOP-17076: ABFS: Delegation SAS Generator Updates 
Contributed by Thomas Marquardt.
caf3995 is described below

commit caf3995ac2bbc3241896babb9a607272462f70ca
Author: Thomas Marquardt 
AuthorDate: Wed Jun 17 23:12:22 2020 +

HADOOP-17076: ABFS: Delegation SAS Generator Updates
Contributed by Thomas Marquardt.

DETAILS:
1) The authentication version in the service has been updated from Dec19 to 
Feb20, so need to update the client.
2) Add support and test cases for getXattr and setXAttr.
3) Update DelegationSASGenerator and related to use Duration instead of int 
for time periods.
4) Cleanup DelegationSASGenerator switch/case statement that maps 
operations to permissions.
5) Cleanup SASGenerator classes to use String.equals instead of ==.

TESTS:
Added tests for getXAttr and setXAttr.

All tests are passing against my account in eastus2euap:

 $mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
 Tests run: 76, Failures: 0, Errors: 0, Skipped: 0
 Tests run: 441, Failures: 0, Errors: 0, Skipped: 33
 Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
---
 .../fs/azurebfs/extensions/SASTokenProvider.java   |  2 +-
 .../ITestAzureBlobFileSystemDelegationSAS.java | 16 +++
 .../extensions/MockDelegationSASTokenProvider.java |  4 +--
 .../fs/azurebfs/utils/DelegationSASGenerator.java  | 32 ++
 .../hadoop/fs/azurebfs/utils/SASGenerator.java |  8 --
 .../fs/azurebfs/utils/ServiceSASGenerator.java |  6 ++--
 6 files changed, 42 insertions(+), 26 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/extensions/SASTokenProvider.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/extensions/SASTokenProvider.java
index 2cd44f1..a2cd292 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/extensions/SASTokenProvider.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/extensions/SASTokenProvider.java
@@ -33,6 +33,7 @@ import org.apache.hadoop.security.AccessControlException;
 public interface SASTokenProvider {
 
   String CHECK_ACCESS_OPERATION = "check-access";
+  String CREATE_DIRECTORY_OPERATION = "create-directory";
   String CREATE_FILE_OPERATION = "create-file";
   String DELETE_OPERATION = "delete";
   String DELETE_RECURSIVE_OPERATION = "delete-recursive";
@@ -40,7 +41,6 @@ public interface SASTokenProvider {
   String GET_STATUS_OPERATION = "get-status";
   String GET_PROPERTIES_OPERATION = "get-properties";
   String LIST_OPERATION = "list";
-  String CREATE_DIRECTORY_OPERATION = "create-directory";
   String READ_OPERATION = "read";
   String RENAME_SOURCE_OPERATION = "rename-source";
   String RENAME_DESTINATION_OPERATION = "rename-destination";
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java
index 07b5804..c2c691e 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java
@@ -25,6 +25,7 @@ import java.util.Arrays;
 import java.util.List;
 import java.util.UUID;
 
+import org.junit.Assert;
 import org.junit.Assume;
 import org.junit.Test;
 import org.slf4j.Logger;
@@ -365,4 +366,19 @@ public class ITestAzureBlobFileSystemDelegationSAS extends 
AbstractAbfsIntegrati
 }
 assertEquals(0, count);
   }
+
+  @Test
+  // Test filesystem operations getXAttr and setXAttr
+  public void testProperties() throws Exception {
+final AzureBlobFileSystem fs = getFileSystem();
+Path reqPath = new Path(UUID.randomUUID().toString());
+
+fs.create(reqPath).close();
+
+final String propertyName = "user.mime_type";
+final byte[] propertyValue = "text/plain".getBytes("utf-8");
+fs.setXAttr(reqPath, propertyName, propertyValue);
+
+assertArrayEquals(propertyValue, fs.getXAttr(reqPath, propertyName));
+  }
 }
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/extensions/MockDelegationSASTokenProvider.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/extensions/MockDelegationSASTokenProvider.java
index fa50bef..121256c 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/extensions/MockDelegationSASTokenProvider.java

[hadoop] branch trunk updated: HADOOP-16916: ABFS: Delegation SAS generator for integration with Ranger

2020-05-12 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new b214bbd  HADOOP-16916: ABFS: Delegation SAS generator for integration 
with Ranger
b214bbd is described below

commit b214bbd2d92a0c02b71d352dba85f3b87317933c
Author: Thomas Marquardt 
AuthorDate: Tue May 12 17:32:52 2020 +

HADOOP-16916: ABFS: Delegation SAS generator for integration with Ranger

Contributed by Thomas Marquardt.

DETAILS:

Previously we had a SASGenerator class which generated Service SAS, but we 
need to add DelegationSASGenerator.
I separated SASGenerator into a base class and two subclasses 
ServiceSASGenerator and DelegationSASGenerator.  The
code in ServiceSASGenerator is copied from SASGenerator but the 
DelegationSASGenerator code is new.  The
DelegationSASGenerator code demonstrates how to use Delegation SAS with 
minimal permissions, as would be used
by an authorization service such as Apache Ranger.  Adding this to the 
tests helps us lock in this behavior.

Added a MockDelegationSASTokenProvider for testing User Delegation SAS.

Fixed the ITestAzureBlobFileSystemCheckAccess tests to assume oauth client 
ID so that they are ignored when that
is not configured.

To improve performance, AbfsInputStream/AbfsOutputStream re-use SAS tokens 
until the expiry is within 120 seconds.
After this a new SAS will be requested.  The default period of 120 seconds 
can be changed using the configuration
setting "fs.azure.sas.token.renew.period.for.streams".

The SASTokenProvider operation names were updated to correspond better with 
the ADLS Gen2 REST API, since these
operations must be provided tokens with appropriate SAS parameters to 
succeed.

Support for the version 2.0 AAD authentication endpoint was added to 
AzureADAuthenticator.

The getFileStatus method was mistakenly calling the ADLS Gen2 Get 
Properties API which requires read permission
while the getFileStatus call only requires execute permission.  ADLS Gen2 
Get Status API is supposed to be used
for this purpose, so the underlying AbfsClient.getPathStatus API was 
updated with a includeProperties
parameter which is set to false for getFileStatus and true for getXAttr.

Added SASTokenProvider support for delete recursive.

Fixed bugs in AzureBlobFileSystem where public methods were not validating 
the Path by calling makeQualified.  This is
necessary to avoid passing null paths and to convert relative paths into 
absolute paths.

Canonicalized the path used for root path internally so that root path can 
be used with SAS tokens, which requires
that the path in the URL and the path in the SAS token match.  Internally 
the code was using
"//" instead of "/" for the root path, sometimes.  Also related to this, 
the AzureBlobFileSystemStore.getRelativePath
API was updated so that we no longer remove and then add back a preceding 
forward / to paths.

To run ITestAzureBlobFileSystemDelegationSAS tests follow the instructions 
in testing_azure.md under the heading
"To run Delegation SAS test cases".  You also need to set 
"fs.azure.enable.check.access" to true.

TEST RESULTS:

namespace.enabled=true
auth.type=SharedKey
---
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 63, Failures: 0, Errors: 0, Skipped: 0
Tests run: 432, Failures: 0, Errors: 0, Skipped: 41
Tests run: 206, Failures: 0, Errors: 0, Skipped: 24

namespace.enabled=false
auth.type=SharedKey
---
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 63, Failures: 0, Errors: 0, Skipped: 0
Tests run: 432, Failures: 0, Errors: 0, Skipped: 244
Tests run: 206, Failures: 0, Errors: 0, Skipped: 24

namespace.enabled=true
auth.type=SharedKey
sas.token.provider.type=MockDelegationSASTokenProvider
enable.check.access=true
---
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 63, Failures: 0, Errors: 0, Skipped: 0
Tests run: 432, Failures: 0, Errors: 0, Skipped: 33
Tests run: 206, Failures: 0, Errors: 0, Skipped: 24

namespace.enabled=true
auth.type=OAuth
---
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 63, Failures: 0, Errors: 0, Skipped: 0
Tests run: 432, Failures: 0, Errors: 1, Skipped: 74
Tests run: 206, Failures: 0, Errors: 0, Skipped: 140
---
 .../hadoop-azure/dev-support/findbugs-exclude.xml  |  21 +-
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |   9 

[hadoop] branch trunk updated: HADOOP-16730: ABFS: Support for Shared Access Signatures (SAS). Contributed by Sneha Vijayarajan.

2020-02-27 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 791270a  HADOOP-16730: ABFS: Support for Shared Access Signatures 
(SAS). Contributed by Sneha Vijayarajan.
791270a is described below

commit 791270a2e5e31546ff5c1ef4fa8bad6852b906dc
Author: Sneha Vijayarajan 
AuthorDate: Thu Feb 27 17:00:15 2020 +

HADOOP-16730: ABFS: Support for Shared Access Signatures (SAS). Contributed 
by Sneha Vijayarajan.
---
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |  70 ++--
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java|  58 +--
 .../fs/azurebfs/AzureBlobFileSystemStore.java  |  19 +-
 .../fs/azurebfs/constants/ConfigurationKeys.java   |   3 +-
 .../exceptions/SASTokenProviderException.java} |  23 +-
 .../fs/azurebfs/extensions/AbfsAuthorizer.java |  57 ---
 .../fs/azurebfs/extensions/SASTokenProvider.java   |  74 
 .../hadoop/fs/azurebfs/services/AbfsClient.java|  96 -
 .../fs/azurebfs/services/AbfsRestOperation.java|  30 +-
 .../fs/azurebfs/services/AbfsUriQueryBuilder.java  |  17 +-
 .../hadoop/fs/azurebfs/services/AuthType.java  |   3 +-
 .../hadoop-azure/src/site/markdown/abfs.md |   2 +-
 .../fs/azurebfs/AbstractAbfsIntegrationTest.java   |  60 ++-
 .../fs/azurebfs/ITestAbfsIdentityTransformer.java  |   1 -
 .../ITestAzureBlobFileSystemAuthorization.java | 402 ++---
 .../azurebfs/constants/TestConfigurationKeys.java  |   3 +
 .../fs/azurebfs/extensions/MockAbfsAuthorizer.java |  87 -
 .../extensions/MockErrorSASTokenProvider.java  |  63 
 .../azurebfs/extensions/MockSASTokenProvider.java  |  85 +
 .../fs/azurebfs/services/TestAbfsClient.java   |   9 +-
 .../hadoop/fs/azurebfs/utils/SASGenerator.java | 129 +++
 .../hadoop-azure/src/test/resources/azure-test.xml |   2 +-
 22 files changed, 799 insertions(+), 494 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
index 81e4191..779f524 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
@@ -20,10 +20,10 @@ package org.apache.hadoop.fs.azurebfs;
 
 import java.io.IOException;
 import java.lang.reflect.Field;
-import java.lang.reflect.InvocationTargetException;
 import java.util.Map;
 
 import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
 
 import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.classification.InterfaceAudience;
@@ -40,15 +40,15 @@ import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AzureBlobFileSystemExc
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.ConfigurationPropertyNotFoundException;
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.InvalidConfigurationValueException;
 import org.apache.hadoop.fs.azurebfs.contracts.exceptions.KeyProviderException;
+import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.SASTokenProviderException;
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException;
 import 
org.apache.hadoop.fs.azurebfs.diagnostics.Base64StringConfigurationBasicValidator;
 import 
org.apache.hadoop.fs.azurebfs.diagnostics.BooleanConfigurationBasicValidator;
 import 
org.apache.hadoop.fs.azurebfs.diagnostics.IntegerConfigurationBasicValidator;
 import 
org.apache.hadoop.fs.azurebfs.diagnostics.LongConfigurationBasicValidator;
 import 
org.apache.hadoop.fs.azurebfs.diagnostics.StringConfigurationBasicValidator;
-import org.apache.hadoop.fs.azurebfs.extensions.AbfsAuthorizationException;
-import org.apache.hadoop.fs.azurebfs.extensions.AbfsAuthorizer;
 import org.apache.hadoop.fs.azurebfs.extensions.CustomTokenProviderAdaptee;
+import org.apache.hadoop.fs.azurebfs.extensions.SASTokenProvider;
 import org.apache.hadoop.fs.azurebfs.oauth2.AccessTokenProvider;
 import org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider;
 import org.apache.hadoop.fs.azurebfs.oauth2.CustomTokenProviderAdapter;
@@ -170,9 +170,6 @@ public class AbfsConfiguration{
   DefaultValue = DEFAULT_ENABLE_DELEGATION_TOKEN)
   private boolean enableDelegationToken;
 
-  @StringConfigurationValidatorAnnotation(ConfigurationKey = 
ABFS_EXTERNAL_AUTHORIZATION_CLASS,
-  DefaultValue = "")
-  private String abfsExternalAuthorizationClass;
 
   @BooleanConfigurationValidatorAnnotation(ConfigurationKey = 
FS_AZURE_ALWAYS_USE_HTTPS,
   DefaultValue = DEFAULT_ENABLE_HTTPS)
@@ -218,6 +215,14 @@ public class AbfsConfiguration{
   }
 
   /**
* Gets the Azure Storage account name corresponding to this instance of configuration.

[hadoop] 02/02: HADOOP-16825: ITestAzureBlobFileSystemCheckAccess failing. Contributed by Bilahari T H.

2020-02-06 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 5944d28130925fe1452f545e96b5e44f064bc69e
Author: bilaharith <52483117+bilahar...@users.noreply.github.com>
AuthorDate: Thu Feb 6 18:48:00 2020 +

HADOOP-16825: ITestAzureBlobFileSystemCheckAccess failing.
Contributed by Bilahari T H.
---
 .../ITestAzureBlobFileSystemCheckAccess.java| 21 +
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCheckAccess.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCheckAccess.java
index cc273e9..bc5fc59 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCheckAccess.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCheckAccess.java
@@ -53,7 +53,7 @@ public class ITestAzureBlobFileSystemCheckAccess
 
   private static final String TEST_FOLDER_PATH = "CheckAccessTestFolder";
   private final FileSystem superUserFs;
-  private final FileSystem testUserFs;
+  private FileSystem testUserFs;
   private final String testUserGuid;
   private final boolean isCheckAccessEnabled;
   private final boolean isHNSEnabled;
@@ -63,13 +63,15 @@ public class ITestAzureBlobFileSystemCheckAccess
 this.superUserFs = getFileSystem();
 testUserGuid = getConfiguration()
 .get(FS_AZURE_BLOB_FS_CHECKACCESS_TEST_USER_GUID);
-this.testUserFs = getTestUserFs();
 this.isCheckAccessEnabled = getConfiguration().isCheckAccessEnabled();
 this.isHNSEnabled = getConfiguration()
 .getBoolean(FS_AZURE_TEST_NAMESPACE_ENABLED_ACCOUNT, false);
   }
 
-  private FileSystem getTestUserFs() throws Exception {
+  private void setTestUserFs() throws Exception {
+if (this.testUserFs != null) {
+  return;
+}
 String orgClientId = getConfiguration().get(FS_AZURE_BLOB_FS_CLIENT_ID);
 String orgClientSecret = getConfiguration()
 .get(FS_AZURE_BLOB_FS_CLIENT_SECRET);
@@ -88,7 +90,7 @@ public class ITestAzureBlobFileSystemCheckAccess
 getRawConfiguration()
 .setBoolean(AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION,
 orgCreateFileSystemDurungInit);
-return fs;
+this.testUserFs = fs;
   }
 
   @Test(expected = IllegalArgumentException.class)
@@ -106,6 +108,7 @@ public class ITestAzureBlobFileSystemCheckAccess
   @Test(expected = FileNotFoundException.class)
   public void testCheckAccessForNonExistentFile() throws Exception {
 assumeHNSAndCheckAccessEnabled();
+setTestUserFs();
 Path nonExistentFile = setupTestDirectoryAndUserAccess(
 "/nonExistentFile1.txt", FsAction.ALL);
 superUserFs.delete(nonExistentFile, true);
@@ -149,12 +152,16 @@ public class ITestAzureBlobFileSystemCheckAccess
 Assume.assumeFalse(FS_AZURE_TEST_NAMESPACE_ENABLED_ACCOUNT + " is true",
 getConfiguration()
 .getBoolean(FS_AZURE_TEST_NAMESPACE_ENABLED_ACCOUNT, true));
+Assume.assumeTrue(FS_AZURE_ENABLE_CHECK_ACCESS + " is false",
+isCheckAccessEnabled);
+setTestUserFs();
 testUserFs.access(new Path("/"), FsAction.READ);
   }
 
   @Test
   public void testFsActionNONE() throws Exception {
 assumeHNSAndCheckAccessEnabled();
+setTestUserFs();
 Path testFilePath = setupTestDirectoryAndUserAccess("/test2.txt",
 FsAction.NONE);
 assertInaccessible(testFilePath, FsAction.EXECUTE);
@@ -169,6 +176,7 @@ public class ITestAzureBlobFileSystemCheckAccess
   @Test
   public void testFsActionEXECUTE() throws Exception {
 assumeHNSAndCheckAccessEnabled();
+setTestUserFs();
 Path testFilePath = setupTestDirectoryAndUserAccess("/test3.txt",
 FsAction.EXECUTE);
 assertAccessible(testFilePath, FsAction.EXECUTE);
@@ -184,6 +192,7 @@ public class ITestAzureBlobFileSystemCheckAccess
   @Test
   public void testFsActionREAD() throws Exception {
 assumeHNSAndCheckAccessEnabled();
+setTestUserFs();
 Path testFilePath = setupTestDirectoryAndUserAccess("/test4.txt",
 FsAction.READ);
 assertAccessible(testFilePath, FsAction.READ);
@@ -199,6 +208,7 @@ public class ITestAzureBlobFileSystemCheckAccess
   @Test
   public void testFsActionWRITE() throws Exception {
 assumeHNSAndCheckAccessEnabled();
+setTestUserFs();
 Path testFilePath = setupTestDirectoryAndUserAccess("/test5.txt",
 FsAction.WRITE);
 assertAccessible(testFilePath, FsAction.WRITE);
@@ -214,6 +224,7 @@ public class ITestAzureBlobFileSystemCheckAccess
   @Test
   public void testFsActionREADEXECUTE() throws Exception {
 assumeHNSAndCheckAccessEnabl
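For context, the change above defers creation of the test-user filesystem until a test that actually needs it has passed its assumption checks, and makes that initialization idempotent. A stand-alone sketch of the pattern follows; the class name, URI and configuration are placeholders, not the real test plumbing.

import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Hypothetical illustration of the lazy, idempotent initialization used above.
public class LazyTestUserFs {
  private FileSystem testUserFs;
  private final Configuration conf = new Configuration();

  // Creates the filesystem on first use only; later callers reuse the same instance,
  // so each test can call this after its Assume checks without paying the cost twice.
  void setTestUserFs() throws IOException {
    if (testUserFs != null) {
      return;
    }
    testUserFs = FileSystem.newInstance(
        URI.create("abfs://container@account.dfs.core.windows.net/"), conf);
  }
}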

[hadoop] branch trunk updated (146ca0f -> 5944d28)

2020-02-06 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 146ca0f  HADOOP-16832. S3Guard testing doc: Add required parameters 
for S3Guard testing in IDE. (#1822). Contributed by Mukund Thakur.
 new 55f2421  HADOOP-16845: Disable 
ITestAbfsClient.testContinuationTokenHavingEqualSign due to ADLS Gen2 service 
bug. Contributed by Sneha Vijayarajan.
 new 5944d28  HADOOP-16825: ITestAzureBlobFileSystemCheckAccess failing. 
Contributed by Bilahari T H.

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../apache/hadoop/fs/azurebfs/ITestAbfsClient.java  |  2 ++
 .../ITestAzureBlobFileSystemCheckAccess.java| 21 +
 2 files changed, 19 insertions(+), 4 deletions(-)





[hadoop] 01/02: HADOOP-16845: Disable ITestAbfsClient.testContinuationTokenHavingEqualSign due to ADLS Gen2 service bug. Contributed by Sneha Vijayarajan.

2020-02-06 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 55f2421580678a6793c8cb6ad10fee3f4ec833aa
Author: Sneha Vijayarajan 
AuthorDate: Thu Feb 6 18:41:06 2020 +

HADOOP-16845: Disable ITestAbfsClient.testContinuationTokenHavingEqualSign 
due to ADLS Gen2 service bug.
Contributed by Sneha Vijayarajan.
---
 .../src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsClient.java| 2 ++
 1 file changed, 2 insertions(+)

diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsClient.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsClient.java
index bc05e7d..3d6869d 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsClient.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsClient.java
@@ -43,6 +43,8 @@ public final class ITestAbfsClient extends 
AbstractAbfsIntegrationTest {
 super();
   }
 
+  @Ignore("HADOOP-16845: Invalid continuation tokens are ignored by the ADLS "
+  + "Gen2 service, so we are disabling this test until the service is 
fixed.")
   @Test
   public void testContinuationTokenHavingEqualSign() throws Exception {
 final AzureBlobFileSystem fs = this.getFileSystem();





[hadoop] 03/09: HADOOP-16251. ABFS: add FSMainOperationsBaseTest

2019-08-28 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 605c812749324c63a664ce2f91bc0b18968d84b1
Author: DadanielZ 
AuthorDate: Tue Apr 30 19:38:48 2019 +

HADOOP-16251. ABFS: add FSMainOperationsBaseTest

Author: Da Zhou
---
 .../ITestAzureBlobFileSystemMainOperation.java | 78 ++
 1 file changed, 78 insertions(+)

diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemMainOperation.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemMainOperation.java
new file mode 100644
index 000..41abfe8
--- /dev/null
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemMainOperation.java
@@ -0,0 +1,78 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import org.junit.Ignore;
+
+import org.apache.hadoop.fs.FSMainOperationsBaseTest;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.azurebfs.contract.ABFSContractTestBinding;
+
+/**
+ * Test AzureBlobFileSystem main operations.
+ * */
+public class ITestAzureBlobFileSystemMainOperation extends 
FSMainOperationsBaseTest {
+
+  private static final String TEST_ROOT_DIR =
+  "/tmp/TestAzureBlobFileSystemMainOperations";
+
+  private final ABFSContractTestBinding binding;
+
+  public ITestAzureBlobFileSystemMainOperation () throws Exception {
+super(TEST_ROOT_DIR);
+// Note: There are shared resources in this test suite (eg: 
"test/new/newfile")
+// To make sure this test suite can be ran in parallel, different 
containers
+// will be used for each test.
+binding = new ABFSContractTestBinding(false);
+  }
+
+  @Override
+  public void setUp() throws Exception {
+binding.setup();
+fSys = binding.getFileSystem();
+  }
+
+  @Override
+  public void tearDown() throws Exception {
+// Note: Because "tearDown()" is called during the testing,
+// here we should not call binding.tearDown() to destroy the container.
+// Instead we should remove the test containers manually with
+// AbfsTestUtils.
+super.tearDown();
+  }
+
+  @Override
+  protected FileSystem createFileSystem() throws Exception {
+return fSys;
+  }
+
+  @Override
+  @Ignore("Permission check for getFileInfo doesn't match the 
HdfsPermissionsGuide")
+  public void testListStatusThrowsExceptionForUnreadableDir() {
+// Permission Checks:
+// 
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html
+  }
+
+  @Override
+  @Ignore("Permission check for getFileInfo doesn't match the 
HdfsPermissionsGuide")
+  public void testGlobStatusThrowsExceptionForUnreadableDir() {
+// Permission Checks:
+// 
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html
+  }
+}





[hadoop] 01/09: HADOOP-16242. ABFS: add bufferpool to AbfsOutputStream.

2019-08-28 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit e5bb8498df2b22495ae58829e7125d1377cc765a
Author: Da Zhou 
AuthorDate: Mon Apr 29 13:27:28 2019 +0100

HADOOP-16242. ABFS: add bufferpool to AbfsOutputStream.

Contributed by Da Zhou.
---
 .../hadoop/fs/azurebfs/services/AbfsOutputStream.java   | 17 ++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
index 5764bcb..679f22e 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
@@ -23,6 +23,7 @@ import java.io.IOException;
 import java.io.InterruptedIOException;
 import java.io.OutputStream;
 import java.net.HttpURLConnection;
+import java.nio.ByteBuffer;
 import java.util.Locale;
 import java.util.concurrent.ConcurrentLinkedDeque;
 import java.util.concurrent.LinkedBlockingQueue;
@@ -37,6 +38,7 @@ import com.google.common.base.Preconditions;
 
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsRestOperationException;
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AzureBlobFileSystemException;
+import org.apache.hadoop.io.ElasticByteBufferPool;
 import org.apache.hadoop.fs.FSExceptionMessages;
 import org.apache.hadoop.fs.StreamCapabilities;
 import org.apache.hadoop.fs.Syncable;
@@ -64,6 +66,15 @@ public class AbfsOutputStream extends OutputStream 
implements Syncable, StreamCa
   private final ThreadPoolExecutor threadExecutor;
   private final ExecutorCompletionService completionService;
 
+  /**
+   * Queue storing buffers with the size of the Azure block ready for
+   * reuse. The pool allows reusing the blocks instead of allocating new
+   * blocks. After the data is sent to the service, the buffer is returned
+   * back to the queue
+   */
+  private final ElasticByteBufferPool byteBufferPool
+  = new ElasticByteBufferPool();
+
   public AbfsOutputStream(
   final AbfsClient client,
   final String path,
@@ -78,7 +89,7 @@ public class AbfsOutputStream extends OutputStream implements 
Syncable, StreamCa
 this.lastError = null;
 this.lastFlushOffset = 0;
 this.bufferSize = bufferSize;
-this.buffer = new byte[bufferSize];
+this.buffer = byteBufferPool.getBuffer(false, bufferSize).array();
 this.bufferIndex = 0;
 this.writeOperations = new ConcurrentLinkedDeque<>();
 
@@ -268,8 +279,7 @@ public class AbfsOutputStream extends OutputStream 
implements Syncable, StreamCa
 
 final byte[] bytes = buffer;
 final int bytesLength = bufferIndex;
-
-buffer = new byte[bufferSize];
+buffer = byteBufferPool.getBuffer(false, bufferSize).array();
 bufferIndex = 0;
 final long offset = position;
 position += bytesLength;
@@ -283,6 +293,7 @@ public class AbfsOutputStream extends OutputStream 
implements Syncable, StreamCa
   public Void call() throws Exception {
 client.append(path, offset, bytes, 0,
 bytesLength);
+byteBufferPool.putBuffer(ByteBuffer.wrap(bytes));
 return null;
   }
 });


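For readers unfamiliar with the pool used in the change above, here is a small stand-alone sketch of the get/put cycle; the buffer size and the "upload" step are placeholders, not the actual AbfsOutputStream logic.

import java.nio.ByteBuffer;
import org.apache.hadoop.io.ElasticByteBufferPool;

public class BufferPoolDemo {
  public static void main(String[] args) {
    ElasticByteBufferPool pool = new ElasticByteBufferPool();
    int bufferSize = 4 * 1024 * 1024; // placeholder block size

    // Borrow a heap buffer (direct = false) and work with its backing array,
    // the way the output stream fills its write buffer.
    byte[] buffer = pool.getBuffer(false, bufferSize).array();
    buffer[0] = 42; // stand-in for filling the buffer and sending it to the service

    // Return the buffer once the data has been sent, so the next block can reuse it
    // instead of allocating a fresh byte[bufferSize].
    pool.putBuffer(ByteBuffer.wrap(buffer));
  }
}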



[hadoop] 04/09: Revert "HADOOP-16251. ABFS: add FSMainOperationsBaseTest"

2019-08-28 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 67fe5398fd8400287e4eeaf360cffa4e8788447e
Author: Aaron Fabbri 
AuthorDate: Fri May 10 13:55:56 2019 -0700

Revert "HADOOP-16251. ABFS: add FSMainOperationsBaseTest"

This reverts commit 7c2d7c05a9a9cc981674f97cc3825e917a17b1f7.

Git Commit name and email were incorrect. Will re-commit.
---
 .../hadoop/fs/azurebfs/ITestAzureBlobFileSystemMainOperation.java  | 7 ---
 1 file changed, 7 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemMainOperation.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemMainOperation.java
index 41abfe8..38682b3 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemMainOperation.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemMainOperation.java
@@ -68,11 +68,4 @@ public class ITestAzureBlobFileSystemMainOperation extends 
FSMainOperationsBaseT
 // Permission Checks:
 // 
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html
   }
-
-  @Override
-  @Ignore("Permission check for getFileInfo doesn't match the 
HdfsPermissionsGuide")
-  public void testGlobStatusThrowsExceptionForUnreadableDir() {
-// Permission Checks:
-// 
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html
-  }
 }





[hadoop] 07/09: HADOOP-16479. ABFS FileStatus.getModificationTime returns localized time instead of UTC.

2019-08-28 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 1376f2ed3dc26b98cb9c92b10797ad0f99377b58
Author: bilaharith 
AuthorDate: Thu Aug 8 19:08:04 2019 +0100

HADOOP-16479. ABFS FileStatus.getModificationTime returns localized time 
instead of UTC.

Contributed by Bilahari T H

Change-Id: I532055baaadfd7c324710e4b25f60cdf0378bdc0
---
 .../hadoop/fs/azurebfs/AzureBlobFileSystemStore.java   |  2 +-
 .../azurebfs/ITestAzureBlobFileSystemFileStatus.java   | 18 ++
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
index f36153d..27ba202 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
@@ -115,7 +115,7 @@ public class AzureBlobFileSystemStore {
   private URI uri;
   private String userName;
   private String primaryUserGroup;
-  private static final String DATE_TIME_PATTERN = "E, dd MMM  HH:mm:ss 
'GMT'";
+  private static final String DATE_TIME_PATTERN = "E, dd MMM  HH:mm:ss z";
   private static final String TOKEN_DATE_PATTERN = 
"-MM-dd'T'HH:mm:ss.SSS'Z'";
   private static final String XMS_PROPERTIES_ENCODING = "ISO-8859-1";
   private static final int LIST_MAX_RESULTS = 500;
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java
index f514696..421fa9a 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java
@@ -122,4 +122,22 @@ public class ITestAzureBlobFileSystemFileStatus extends
 assertEquals(pathWithHost2.getName(), fileStatus2.getPath().getName());
   }
 
+  @Test
+  public void testLastModifiedTime() throws IOException {
+AzureBlobFileSystem fs = this.getFileSystem();
+Path testFilePath = new Path("childfile1.txt");
+long createStartTime = System.currentTimeMillis();
+long minCreateStartTime = (createStartTime / 1000) * 1000 - 1;
+//  Dividing and multiplying by 1000 to make last 3 digits 0.
+//  It is observed that modification time is returned with last 3
+//  digits 0 always.
+fs.create(testFilePath);
+long createEndTime = System.currentTimeMillis();
+FileStatus fStat = fs.getFileStatus(testFilePath);
+long lastModifiedTime = fStat.getModificationTime();
+assertTrue("lastModifiedTime should be after minCreateStartTime",
+minCreateStartTime < lastModifiedTime);
+assertTrue("lastModifiedTime should be before createEndTime",
+createEndTime > lastModifiedTime);
+  }
 }


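The behaviour fixed above can be reproduced with a short sketch: when the trailing GMT is a quoted literal, SimpleDateFormat interprets the wall-clock time in the JVM's default time zone, whereas the z token parses it as a real zone designator. The header value below is illustrative, and the full patterns (including the yyyy year field) are assumed from the RFC 1123 format.

import java.text.SimpleDateFormat;
import java.util.Locale;

public class LastModifiedParsingDemo {
  public static void main(String[] args) throws Exception {
    String lastModified = "Thu, 08 Aug 2019 18:08:04 GMT"; // illustrative Last-Modified header

    // Old pattern: 'GMT' is a literal, so the timestamp is taken in the default zone.
    SimpleDateFormat literal =
        new SimpleDateFormat("E, dd MMM yyyy HH:mm:ss 'GMT'", Locale.US);
    // New pattern: z consumes "GMT" as a time zone, giving the correct UTC instant.
    SimpleDateFormat zoned =
        new SimpleDateFormat("E, dd MMM yyyy HH:mm:ss z", Locale.US);

    System.out.println(literal.parse(lastModified).getTime()); // shifts with the JVM zone offset
    System.out.println(zoned.parse(lastModified).getTime());   // 1565287684000L in any zone
  }
}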



[hadoop] 09/09: HADOOP-16460: ABFS: fix for Sever Name Indication (SNI)

2019-08-28 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit f56972bfe1c79c11a13a88284ca5286709967b4a
Author: Sneha Vijayarajan 
AuthorDate: Tue Jul 30 15:18:15 2019 +

HADOOP-16460: ABFS: fix for Sever Name Indication (SNI)

Contributed by Sneha Vijayarajan 
---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 4c0464a..f437ec3 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1121,7 +1121,7 @@
   
 org.wildfly.openssl
 wildfly-openssl
-1.0.4.Final
+1.0.7.Final
   
 
   





[hadoop] 08/09: HADOOP-16315. ABFS: transform full UPN for named user in AclStatus

2019-08-28 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 6cb74629ec7f2a8f11afe5abd5c6587ee071d7aa
Author: Da Zhou 
AuthorDate: Fri Aug 9 12:37:27 2019 +0100

HADOOP-16315. ABFS: transform full UPN for named user in AclStatus

Contributed by Da Zhou

Change-Id: Ibc78322415fcbeff89c06c8586c53f5695550290
---
 .../fs/azurebfs/AzureBlobFileSystemStore.java  | 17 ++---
 .../fs/azurebfs/oauth2/IdentityTransformer.java| 75 +++---
 .../fs/azurebfs/ITestAbfsIdentityTransformer.java  | 58 -
 3 files changed, 131 insertions(+), 19 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
index 27ba202..2694565 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
@@ -721,8 +721,8 @@ public class AzureBlobFileSystemStore {
 path.toString(),
 AclEntry.aclSpecToString(aclSpec));
 
-final List transformedAclEntries = 
identityTransformer.transformAclEntriesForSetRequest(aclSpec);
-final Map modifyAclEntries = 
AbfsAclHelper.deserializeAclSpec(AclEntry.aclSpecToString(transformedAclEntries));
+identityTransformer.transformAclEntriesForSetRequest(aclSpec);
+final Map modifyAclEntries = 
AbfsAclHelper.deserializeAclSpec(AclEntry.aclSpecToString(aclSpec));
 boolean useUpn = AbfsAclHelper.isUpnFormatAclEntries(modifyAclEntries);
 
 final AbfsRestOperation op = 
client.getAclStatus(AbfsHttpConstants.FORWARD_SLASH + getRelativePath(path, 
true), useUpn);
@@ -748,8 +748,8 @@ public class AzureBlobFileSystemStore {
 path.toString(),
 AclEntry.aclSpecToString(aclSpec));
 
-final List transformedAclEntries = 
identityTransformer.transformAclEntriesForSetRequest(aclSpec);
-final Map removeAclEntries = 
AbfsAclHelper.deserializeAclSpec(AclEntry.aclSpecToString(transformedAclEntries));
+identityTransformer.transformAclEntriesForSetRequest(aclSpec);
+final Map removeAclEntries = 
AbfsAclHelper.deserializeAclSpec(AclEntry.aclSpecToString(aclSpec));
 boolean isUpnFormat = 
AbfsAclHelper.isUpnFormatAclEntries(removeAclEntries);
 
 final AbfsRestOperation op = 
client.getAclStatus(AbfsHttpConstants.FORWARD_SLASH + getRelativePath(path, 
true), isUpnFormat);
@@ -827,8 +827,8 @@ public class AzureBlobFileSystemStore {
 path.toString(),
 AclEntry.aclSpecToString(aclSpec));
 
-final List transformedAclEntries = 
identityTransformer.transformAclEntriesForSetRequest(aclSpec);
-final Map aclEntries = 
AbfsAclHelper.deserializeAclSpec(AclEntry.aclSpecToString(transformedAclEntries));
+identityTransformer.transformAclEntriesForSetRequest(aclSpec);
+final Map aclEntries = 
AbfsAclHelper.deserializeAclSpec(AclEntry.aclSpecToString(aclSpec));
 final boolean isUpnFormat = 
AbfsAclHelper.isUpnFormatAclEntries(aclEntries);
 
 final AbfsRestOperation op = 
client.getAclStatus(AbfsHttpConstants.FORWARD_SLASH + getRelativePath(path, 
true), isUpnFormat);
@@ -867,7 +867,8 @@ public class AzureBlobFileSystemStore {
 final String permissions = 
result.getResponseHeader(HttpHeaderConfigurations.X_MS_PERMISSIONS);
 final String aclSpecString = 
op.getResult().getResponseHeader(HttpHeaderConfigurations.X_MS_ACL);
 
-final List processedAclEntries = 
AclEntry.parseAclSpec(AbfsAclHelper.processAclString(aclSpecString), true);
+final List aclEntries = 
AclEntry.parseAclSpec(AbfsAclHelper.processAclString(aclSpecString), true);
+identityTransformer.transformAclEntriesForGetRequest(aclEntries, userName, 
primaryUserGroup);
 final FsPermission fsPermission = permissions == null ? new 
AbfsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL)
 : AbfsPermission.valueOf(permissions);
 
@@ -877,7 +878,7 @@ public class AzureBlobFileSystemStore {
 
 aclStatusBuilder.setPermission(fsPermission);
 aclStatusBuilder.stickyBit(fsPermission.getStickyBit());
-aclStatusBuilder.addEntries(processedAclEntries);
+aclStatusBuilder.addEntries(aclEntries);
 return aclStatusBuilder.build();
   }
 
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/IdentityTransformer.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/IdentityTransformer.java
index 343b233..6844afb 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/IdentityTransformer.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/IdentityTransformer.java
@@ -81,6 +81,7

[hadoop] 02/09: HADOOP-16269. ABFS: add listFileStatus with StartFrom.

2019-08-28 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 50953c586b1a96e46119fdb6a3366309da389985
Author: Da Zhou 
AuthorDate: Wed May 8 17:20:46 2019 +0100

HADOOP-16269. ABFS: add listFileStatus with StartFrom.

Author:Da Zhou
---
 .../fs/azurebfs/AzureBlobFileSystemStore.java  | 107 ++-
 .../fs/azurebfs/constants/AbfsHttpConstants.java   |   9 ++
 .../org/apache/hadoop/fs/azurebfs/utils/CRC64.java |  60 
 .../fs/azurebfs/AbstractAbfsIntegrationTest.java   |   7 +-
 ...zureBlobFileSystemStoreListStatusWithRange.java | 151 +
 .../apache/hadoop/fs/azurebfs/TestAbfsCrc64.java   |  38 ++
 6 files changed, 363 insertions(+), 9 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
index 4d11563..f36153d 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
@@ -31,6 +31,7 @@ import java.nio.charset.CharacterCodingException;
 import java.nio.charset.Charset;
 import java.nio.charset.CharsetDecoder;
 import java.nio.charset.CharsetEncoder;
+import java.nio.charset.StandardCharsets;
 import java.text.ParseException;
 import java.text.SimpleDateFormat;
 import java.util.ArrayList;
@@ -46,6 +47,7 @@ import java.util.Set;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
+import com.google.common.base.Strings;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
@@ -79,6 +81,7 @@ import org.apache.hadoop.fs.azurebfs.services.AuthType;
 import org.apache.hadoop.fs.azurebfs.services.ExponentialRetryPolicy;
 import org.apache.hadoop.fs.azurebfs.services.SharedKeyCredentials;
 import org.apache.hadoop.fs.azurebfs.utils.Base64;
+import org.apache.hadoop.fs.azurebfs.utils.CRC64;
 import org.apache.hadoop.fs.azurebfs.utils.UriUtils;
 import org.apache.hadoop.fs.permission.AclEntry;
 import org.apache.hadoop.fs.permission.AclStatus;
@@ -89,7 +92,17 @@ import org.apache.http.client.utils.URIBuilder;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.CHAR_EQUALS;
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.CHAR_FORWARD_SLASH;
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.CHAR_HYPHEN;
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.CHAR_PLUS;
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.CHAR_STAR;
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.CHAR_UNDERSCORE;
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.ROOT_PATH;
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.SINGLE_WHITE_SPACE;
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.TOKEN_VERSION;
 import static 
org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.AZURE_ABFS_ENDPOINT;
+
 /**
  * Provides the bridging logic between Hadoop's abstract filesystem and Azure 
Storage.
  */
@@ -103,6 +116,7 @@ public class AzureBlobFileSystemStore {
   private String userName;
   private String primaryUserGroup;
   private static final String DATE_TIME_PATTERN = "E, dd MMM  HH:mm:ss 
'GMT'";
+  private static final String TOKEN_DATE_PATTERN = 
"-MM-dd'T'HH:mm:ss.SSS'Z'";
   private static final String XMS_PROPERTIES_ENCODING = "ISO-8859-1";
   private static final int LIST_MAX_RESULTS = 500;
 
@@ -514,15 +528,43 @@ public class AzureBlobFileSystemStore {
 eTag);
   }
 
+  /**
+   * @param path The list path.
+   * @return the entries in the path.
+   * */
   public FileStatus[] listStatus(final Path path) throws IOException {
-LOG.debug("listStatus filesystem: {} path: {}",
+return listStatus(path, null);
+  }
+
+  /**
+   * @param path Path the list path.
+   * @param startFrom the entry name that list results should start with.
+   *  For example, if folder "/folder" contains four files: 
"afile", "bfile", "hfile", "ifile".
+   *  Then listStatus(Path("/folder"), "hfile") will return 
"/folder/hfile" and "folder/ifile"
+   *  Notice that if startFrom is a non-existent entry name, 
then the list response contains
+   *  all entries after this non-existent entry in lexical 
o

[hadoop] 06/09: HADOOP-16404. ABFS default blocksize change(256MB from 512MB)

2019-08-28 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 8044415ea1936c0dec4dba8b429c32a96e567e9a
Author: Arun Singh 
AuthorDate: Fri Jul 19 20:21:28 2019 -0700

HADOOP-16404. ABFS default blocksize change(256MB from 512MB)

Contributed by: Arun Singh
---
 .../hadoop/fs/azurebfs/constants/FileSystemConfigurations.java  | 2 +-
 .../hadoop/fs/azurebfs/TestAbfsConfigurationFieldsValidation.java   | 6 ++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java
index 9744307..a2a0064 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java
@@ -44,7 +44,7 @@ public final class FileSystemConfigurations {
   public static final int DEFAULT_READ_BUFFER_SIZE = 4 * ONE_MB;  // 4 MB
   public static final int MIN_BUFFER_SIZE = 16 * ONE_KB;  // 16 KB
   public static final int MAX_BUFFER_SIZE = 100 * ONE_MB;  // 100 MB
-  public static final long MAX_AZURE_BLOCK_SIZE = 512 * 1024 * 1024L;
+  public static final long MAX_AZURE_BLOCK_SIZE = 256 * 1024 * 1024L; // 
changing default abfs blocksize to 256MB
   public static final String AZURE_BLOCK_LOCATION_HOST_DEFAULT = "localhost";
 
   public static final int MAX_CONCURRENT_READ_THREADS = 12;
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsConfigurationFieldsValidation.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsConfigurationFieldsValidation.java
index a78602b..2a65263 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsConfigurationFieldsValidation.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsConfigurationFieldsValidation.java
@@ -144,6 +144,12 @@ public class TestAbfsConfigurationFieldsValidation {
   }
 
   @Test
+  public void testConfigBlockSizeInitialized() throws Exception {
+// test the block size annotated field has been initialized in the 
constructor
+assertEquals(MAX_AZURE_BLOCK_SIZE, abfsConfiguration.getAzureBlockSize());
+  }
+
+  @Test
   public void testGetAccountKey() throws Exception {
 String accountKey = abfsConfiguration.getStorageAccountKey();
 assertEquals(this.encodedAccountKey, accountKey);





[hadoop] 05/09: HADOOP-16376. ABFS: Override access() to no-op.

2019-08-28 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 42816bf698f7c312dce18ff1749e64598973f09d
Author: Da Zhou 
AuthorDate: Sun Jun 16 19:20:46 2019 +0100

HADOOP-16376. ABFS: Override access() to no-op.

Contributed by Da Zhou.

Change-Id: Ia0024bba32250189a87eb6247808b2473c331ed0
---
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java| 23 --
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
index e321e9e..1663ed9 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
@@ -38,12 +38,12 @@ import java.util.concurrent.Future;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
-import org.apache.hadoop.fs.azurebfs.services.AbfsClient;
-import org.apache.hadoop.fs.azurebfs.services.AbfsClientThrottlingIntercept;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.commons.lang3.ArrayUtils;
+import org.apache.hadoop.fs.azurebfs.services.AbfsClient;
+import org.apache.hadoop.fs.azurebfs.services.AbfsClientThrottlingIntercept;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.BlockLocation;
@@ -70,6 +70,7 @@ import org.apache.hadoop.fs.permission.AclEntry;
 import org.apache.hadoop.fs.permission.AclStatus;
 import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.util.Progressable;
@@ -839,6 +840,24 @@ public class AzureBlobFileSystem extends FileSystem {
 }
   }
 
+  /**
+   * Checks if the user can access a path.  The mode specifies which access
+   * checks to perform.  If the requested permissions are granted, then the
+   * method returns normally.  If access is denied, then the method throws an
+   * {@link AccessControlException}.
+   *
+   * @param path Path to check
+   * @param mode type of access to check
+   * @throws AccessControlExceptionif access is denied
+   * @throws java.io.FileNotFoundException if the path does not exist
+   * @throws IOException   see specific implementation
+   */
+  @Override
+  public void access(final Path path, FsAction mode) throws IOException {
+// TODO: make it no-op to unblock hive permission issue for now.
+// Will add a long term fix similar to the implementation in AdlFileSystem.
+  }
+
   private FileStatus tryGetFileStatus(final Path f) {
 try {
   return getFileStatus(f);





[hadoop] branch branch-2 updated (8e8c16d -> f56972b)

2019-08-28 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a change to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 8e8c16d  YARN-9756: Create metric that sums total memory/vcores 
preempted per round. Contributed by  Manikandan R (manirajv06).
 new e5bb849  HADOOP-16242. ABFS: add bufferpool to AbfsOutputStream.
 new 50953c5  HADOOP-16269. ABFS: add listFileStatus with StartFrom.
 new 605c812  HADOOP-16251. ABFS: add FSMainOperationsBaseTest
 new 67fe539  Revert "HADOOP-16251. ABFS: add FSMainOperationsBaseTest"
 new 42816bf  HADOOP-16376. ABFS: Override access() to no-op.
 new 8044415  HADOOP-16404. ABFS default blocksize change(256MB from 512MB)
 new 1376f2e  HADOOP-16479. ABFS FileStatus.getModificationTime returns 
localized time instead of UTC.
 new 6cb7462  HADOOP-16315. ABFS: transform full UPN for named user in 
AclStatus
 new f56972b  HADOOP-16460: ABFS: fix for Sever Name Indication (SNI)

The 9 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 hadoop-project/pom.xml |   2 +-
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java|  23 +++-
 .../fs/azurebfs/AzureBlobFileSystemStore.java  | 126 +++--
 .../fs/azurebfs/constants/AbfsHttpConstants.java   |   9 ++
 .../constants/FileSystemConfigurations.java|   2 +-
 .../fs/azurebfs/oauth2/IdentityTransformer.java|  75 --
 .../fs/azurebfs/services/AbfsOutputStream.java |  17 ++-
 .../org/apache/hadoop/fs/azurebfs/utils/CRC64.java |  60 
 .../fs/azurebfs/AbstractAbfsIntegrationTest.java   |   7 +-
 .../fs/azurebfs/ITestAbfsIdentityTransformer.java  |  58 +++-
 .../ITestAzureBlobFileSystemFileStatus.java|  18 +++
 .../ITestAzureBlobFileSystemMainOperation.java |  71 ++
 ...zureBlobFileSystemStoreListStatusWithRange.java | 151 +
 .../TestAbfsConfigurationFieldsValidation.java |   6 +
 .../apache/hadoop/fs/azurebfs/TestAbfsCrc64.java   |  26 ++--
 15 files changed, 601 insertions(+), 50 deletions(-)
 create mode 100644 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/utils/CRC64.java
 create mode 100644 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemMainOperation.java
 create mode 100644 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemStoreListStatusWithRange.java
 copy 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestJUnitSetup.java
 => 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsCrc64.java
 (59%)





[hadoop] 04/07: HADOOP-16340. ABFS driver continues to retry on IOException responses from REST operations.

2019-08-27 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit ce23e971b427b561e10c93c88ceade9cc9efa190
Author: Robert Levas 
AuthorDate: Wed Jun 19 17:43:14 2019 +0100

HADOOP-16340. ABFS driver continues to retry on IOException responses from 
REST operations.

Contributed by Robert Levas.

This makes the HttpException constructor protected rather than public, so 
it is possible
to implement custom subclasses of this exception: exceptions which will not 
be retried.

Change-Id: Ie8aaa23a707233c2db35948784908b6778ff3a8f
---
 .../org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java| 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java
index df7b199..1d3a122 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java
@@ -164,6 +164,8 @@ public final class AzureADAuthenticator {
* requestId and error message, it is thrown when AzureADAuthenticator
* failed to get the Azure Active Directory token.
*/
+  @InterfaceAudience.LimitedPrivate("authorization-subsystems")
+  @InterfaceStability.Unstable
   public static class HttpException extends IOException {
 private int httpErrorCode;
 private String requestId;
@@ -184,7 +186,7 @@ public final class AzureADAuthenticator {
   return this.requestId;
 }
 
-HttpException(int httpErrorCode, String requestId, String message) {
+protected HttpException(int httpErrorCode, String requestId, String 
message) {
   super(message);
   this.httpErrorCode = httpErrorCode;
   this.requestId = requestId;


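As a rough illustration of what the protected constructor permits, an authorization subsystem could now define its own non-retried variant by subclassing. The class name and package below are hypothetical; only the extends/super relationship reflects the change above.

import org.apache.hadoop.fs.azurebfs.oauth2.AzureADAuthenticator.HttpException;

// Hypothetical subclass: carries the same HTTP error code, request id and message,
// but can be treated by a custom retry policy as a failure that should not be retried.
public class NonRetriableAuthException extends HttpException {
  public NonRetriableAuthException(int httpErrorCode, String requestId, String message) {
    super(httpErrorCode, requestId, message);
  }
}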



[hadoop] branch branch-3.2 updated (d255efa -> 2d8799f)

2019-08-27 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a change to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from d255efa  HDFS-14779. Fix logging error in 
TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns
 new 006ae25  HADOOP-16163. NPE in setup/teardown of 
ITestAbfsDelegationTokens.
 new dd63612  HADOOP-16269. ABFS: add listFileStatus with StartFrom.
 new a6d50a9  HADOOP-16376. ABFS: Override access() to no-op.
 new ce23e97  HADOOP-16340. ABFS driver continues to retry on IOException 
responses from REST operations.
 new 3b3c0c4  HADOOP-16479. ABFS FileStatus.getModificationTime returns 
localized time instead of UTC.
 new 9d722c6  HADOOP-16460: ABFS: fix for Sever Name Indication (SNI)
 new 2d8799f  HADOOP-15832. Upgrade BouncyCastle to 1.60. Contributed by 
Robert Kanter.

The 7 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../hadoop-client-check-invariants/pom.xml |   2 +
 .../hadoop-client-check-test-invariants/pom.xml|   2 +
 .../hadoop-client-minicluster/pom.xml  |   2 +
 .../hadoop-client-runtime/pom.xml  |   2 +
 hadoop-common-project/hadoop-common/pom.xml|   2 +-
 hadoop-common-project/hadoop-kms/pom.xml   |   2 +-
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml |   2 +-
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml|   2 +-
 hadoop-hdfs-project/hadoop-hdfs/pom.xml|   2 +-
 .../hadoop-mapreduce-client-app/pom.xml|  20 +++
 .../hadoop-mapreduce-client-jobclient/pom.xml  |   7 +-
 hadoop-project/pom.xml |  14 +-
 hadoop-tools/hadoop-azure/pom.xml  |   2 +
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java|  23 +++-
 .../fs/azurebfs/AzureBlobFileSystemStore.java  | 109 ++-
 .../fs/azurebfs/constants/AbfsHttpConstants.java   |   9 ++
 .../fs/azurebfs/oauth2/AzureADAuthenticator.java   |   4 +-
 .../org/apache/hadoop/fs/azurebfs/utils/CRC64.java |  60 
 .../fs/azurebfs/AbstractAbfsIntegrationTest.java   |   7 +-
 .../ITestAzureBlobFileSystemFileStatus.java|  18 +++
 ...zureBlobFileSystemStoreListStatusWithRange.java | 151 +
 .../apache/hadoop/fs/azurebfs/TestAbfsCrc64.java   |  26 ++--
 .../hadoop-yarn/hadoop-yarn-common/pom.xml |   2 +-
 .../pom.xml|   2 +-
 .../hadoop-yarn-server-tests/pom.xml   |   2 +-
 .../hadoop-yarn-server-web-proxy/pom.xml   |   8 ++
 26 files changed, 442 insertions(+), 40 deletions(-)
 create mode 100644 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/utils/CRC64.java
 create mode 100644 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemStoreListStatusWithRange.java
 copy 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestJUnitSetup.java
 => 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsCrc64.java
 (59%)





[hadoop] 01/07: HADOOP-16163. NPE in setup/teardown of ITestAbfsDelegationTokens.

2019-08-27 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 006ae258b3b8e95cfa4a4f6e16b9d56f9149be12
Author: Da Zhou 
AuthorDate: Tue Mar 5 14:01:21 2019 +

HADOOP-16163. NPE in setup/teardown of ITestAbfsDelegationTokens.

Contributed by Da Zhou.

Signed-off-by: Steve Loughran 
---
 hadoop-tools/hadoop-azure/pom.xml | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/hadoop-tools/hadoop-azure/pom.xml 
b/hadoop-tools/hadoop-azure/pom.xml
index 01562fd..832fa95 100644
--- a/hadoop-tools/hadoop-azure/pom.xml
+++ b/hadoop-tools/hadoop-azure/pom.xml
@@ -566,6 +566,7 @@
 
**/azurebfs/ITestAzureBlobFileSystemE2EScale.java
 
**/azurebfs/ITestAbfsReadWriteAndSeek.java
 
**/azurebfs/ITestAzureBlobFileSystemListStatus.java
+
**/azurebfs/extensions/ITestAbfsDelegationTokens.java
   
 
 
@@ -604,6 +605,7 @@
 
**/azurebfs/ITestAzureBlobFileSystemE2EScale.java
 
**/azurebfs/ITestAbfsReadWriteAndSeek.java
 
**/azurebfs/ITestAzureBlobFileSystemListStatus.java
+
**/azurebfs/extensions/ITestAbfsDelegationTokens.java
   
 
   





[hadoop] 05/07: HADOOP-16479. ABFS FileStatus.getModificationTime returns localized time instead of UTC.

2019-08-27 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 3b3c0c4b8790ec4c96072a4704b320812296074b
Author: bilaharith 
AuthorDate: Thu Aug 8 19:08:04 2019 +0100

HADOOP-16479. ABFS FileStatus.getModificationTime returns localized time 
instead of UTC.

Contributed by Bilahari T H

Change-Id: I532055baaadfd7c324710e4b25f60cdf0378bdc0
---
 .../hadoop/fs/azurebfs/AzureBlobFileSystemStore.java   |  2 +-
 .../azurebfs/ITestAzureBlobFileSystemFileStatus.java   | 18 ++
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
index 06a819a..ce0d411 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
@@ -115,7 +115,7 @@ public class AzureBlobFileSystemStore {
   private URI uri;
   private String userName;
   private String primaryUserGroup;
-  private static final String DATE_TIME_PATTERN = "E, dd MMM  HH:mm:ss 
'GMT'";
+  private static final String DATE_TIME_PATTERN = "E, dd MMM  HH:mm:ss z";
   private static final String TOKEN_DATE_PATTERN = 
"-MM-dd'T'HH:mm:ss.SSS'Z'";
   private static final String XMS_PROPERTIES_ENCODING = "ISO-8859-1";
   private static final int LIST_MAX_RESULTS = 500;
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java
index f514696..421fa9a 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java
@@ -122,4 +122,22 @@ public class ITestAzureBlobFileSystemFileStatus extends
 assertEquals(pathWithHost2.getName(), fileStatus2.getPath().getName());
   }
 
+  @Test
+  public void testLastModifiedTime() throws IOException {
+AzureBlobFileSystem fs = this.getFileSystem();
+Path testFilePath = new Path("childfile1.txt");
+long createStartTime = System.currentTimeMillis();
+long minCreateStartTime = (createStartTime / 1000) * 1000 - 1;
+//  Dividing and multiplying by 1000 to make last 3 digits 0.
+//  It is observed that modification time is returned with last 3
+//  digits 0 always.
+fs.create(testFilePath);
+long createEndTime = System.currentTimeMillis();
+FileStatus fStat = fs.getFileStatus(testFilePath);
+long lastModifiedTime = fStat.getModificationTime();
+assertTrue("lastModifiedTime should be after minCreateStartTime",
+minCreateStartTime < lastModifiedTime);
+assertTrue("lastModifiedTime should be before createEndTime",
+createEndTime > lastModifiedTime);
+  }
 }





[hadoop] 02/07: HADOOP-16269. ABFS: add listFileStatus with StartFrom.

2019-08-27 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit dd636127e9c3d80885b228631b44d7a1bc83ab8c
Author: Da Zhou 
AuthorDate: Wed May 8 17:20:46 2019 +0100

HADOOP-16269. ABFS: add listFileStatus with StartFrom.

Author:Da Zhou
---
 .../fs/azurebfs/AzureBlobFileSystemStore.java  | 107 ++-
 .../fs/azurebfs/constants/AbfsHttpConstants.java   |   9 ++
 .../org/apache/hadoop/fs/azurebfs/utils/CRC64.java |  60 
 .../fs/azurebfs/AbstractAbfsIntegrationTest.java   |   7 +-
 ...zureBlobFileSystemStoreListStatusWithRange.java | 151 +
 .../apache/hadoop/fs/azurebfs/TestAbfsCrc64.java   |  38 ++
 6 files changed, 363 insertions(+), 9 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
index bfab487..06a819a 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
@@ -31,6 +31,7 @@ import java.nio.charset.CharacterCodingException;
 import java.nio.charset.Charset;
 import java.nio.charset.CharsetDecoder;
 import java.nio.charset.CharsetEncoder;
+import java.nio.charset.StandardCharsets;
 import java.text.ParseException;
 import java.text.SimpleDateFormat;
 import java.util.ArrayList;
@@ -46,6 +47,7 @@ import java.util.Set;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
+import com.google.common.base.Strings;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
@@ -79,6 +81,7 @@ import org.apache.hadoop.fs.azurebfs.services.AuthType;
 import org.apache.hadoop.fs.azurebfs.services.ExponentialRetryPolicy;
 import org.apache.hadoop.fs.azurebfs.services.SharedKeyCredentials;
 import org.apache.hadoop.fs.azurebfs.utils.Base64;
+import org.apache.hadoop.fs.azurebfs.utils.CRC64;
 import org.apache.hadoop.fs.azurebfs.utils.UriUtils;
 import org.apache.hadoop.fs.permission.AclEntry;
 import org.apache.hadoop.fs.permission.AclStatus;
@@ -89,7 +92,17 @@ import org.apache.http.client.utils.URIBuilder;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.CHAR_EQUALS;
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.CHAR_FORWARD_SLASH;
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.CHAR_HYPHEN;
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.CHAR_PLUS;
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.CHAR_STAR;
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.CHAR_UNDERSCORE;
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.ROOT_PATH;
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.SINGLE_WHITE_SPACE;
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.TOKEN_VERSION;
 import static 
org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.AZURE_ABFS_ENDPOINT;
+
 /**
  * Provides the bridging logic between Hadoop's abstract filesystem and Azure 
Storage.
  */
@@ -103,6 +116,7 @@ public class AzureBlobFileSystemStore {
   private String userName;
   private String primaryUserGroup;
   private static final String DATE_TIME_PATTERN = "E, dd MMM  HH:mm:ss 
'GMT'";
+  private static final String TOKEN_DATE_PATTERN = 
"-MM-dd'T'HH:mm:ss.SSS'Z'";
   private static final String XMS_PROPERTIES_ENCODING = "ISO-8859-1";
   private static final int LIST_MAX_RESULTS = 500;
 
@@ -514,15 +528,43 @@ public class AzureBlobFileSystemStore {
 eTag);
   }
 
+  /**
+   * @param path The list path.
+   * @return the entries in the path.
+   * */
   public FileStatus[] listStatus(final Path path) throws IOException {
-LOG.debug("listStatus filesystem: {} path: {}",
+return listStatus(path, null);
+  }
+
+  /**
+   * @param path Path the list path.
+   * @param startFrom the entry name that list results should start with.
+   *  For example, if folder "/folder" contains four files: 
"afile", "bfile", "hfile", "ifile".
+   *  Then listStatus(Path("/folder"), "hfile") will return 
"/folder/hfile" and "folder/ifile"
+   *  Notice that if startFrom is a non-existent entry name, 
then the list response contains
+   *  all entries after this non-existent entry in lexical 
o

[hadoop] 07/07: HADOOP-15832. Upgrade BouncyCastle to 1.60. Contributed by Robert Kanter.

2019-08-27 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 2d8799f4bc2297b0414b7f9b30c7e465deaf76d4
Author: Akira Ajisaka 
AuthorDate: Wed Oct 10 10:16:57 2018 +0900

HADOOP-15832. Upgrade BouncyCastle to 1.60. Contributed by Robert Kanter.
---
 .../hadoop-client-check-invariants/pom.xml   |  2 ++
 .../hadoop-client-check-test-invariants/pom.xml  |  2 ++
 .../hadoop-client-minicluster/pom.xml|  2 ++
 hadoop-client-modules/hadoop-client-runtime/pom.xml  |  2 ++
 hadoop-common-project/hadoop-common/pom.xml  |  2 +-
 hadoop-common-project/hadoop-kms/pom.xml |  2 +-
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml   |  2 +-
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml  |  2 +-
 hadoop-hdfs-project/hadoop-hdfs/pom.xml  |  2 +-
 .../hadoop-mapreduce-client-app/pom.xml  | 20 
 .../hadoop-mapreduce-client-jobclient/pom.xml|  7 ++-
 hadoop-project/pom.xml   | 12 +---
 .../hadoop-yarn/hadoop-yarn-common/pom.xml   |  2 +-
 .../pom.xml  |  2 +-
 .../hadoop-yarn-server-tests/pom.xml |  2 +-
 .../hadoop-yarn-server-web-proxy/pom.xml |  8 
 16 files changed, 59 insertions(+), 12 deletions(-)

diff --git a/hadoop-client-modules/hadoop-client-check-invariants/pom.xml 
b/hadoop-client-modules/hadoop-client-check-invariants/pom.xml
index 89ea837..4c94a69 100644
--- a/hadoop-client-modules/hadoop-client-check-invariants/pom.xml
+++ b/hadoop-client-modules/hadoop-client-check-invariants/pom.xml
@@ -90,6 +90,8 @@
 log4j:log4j
 
 com.google.code.findbugs:jsr305
+
+org.bouncycastle:*
   
 
 
diff --git a/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml 
b/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
index 99ec36e..586ccee 100644
--- a/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
+++ b/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
@@ -98,6 +98,8 @@
  org.hamcrest:hamcrest-core
 
 com.google.code.findbugs:jsr305
+
+org.bouncycastle:*
   
 
 
diff --git a/hadoop-client-modules/hadoop-client-minicluster/pom.xml 
b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
index dcf3da9..964fed0 100644
--- a/hadoop-client-modules/hadoop-client-minicluster/pom.xml
+++ b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
@@ -667,6 +667,8 @@
   com.google.code.findbugs:jsr305
   log4j:log4j
   
+  
+  org.bouncycastle:*
 
   
   
diff --git a/hadoop-client-modules/hadoop-client-runtime/pom.xml 
b/hadoop-client-modules/hadoop-client-runtime/pom.xml
index 80fd3b6..8c2130c 100644
--- a/hadoop-client-modules/hadoop-client-runtime/pom.xml
+++ b/hadoop-client-modules/hadoop-client-runtime/pom.xml
@@ -158,6 +158,8 @@
   
   com.google.code.findbugs:jsr305
   io.dropwizard.metrics:metrics-core
+  
+  org.bouncycastle:*
 
   
   
diff --git a/hadoop-common-project/hadoop-common/pom.xml 
b/hadoop-common-project/hadoop-common/pom.xml
index e2b096d..369c5d8 100644
--- a/hadoop-common-project/hadoop-common/pom.xml
+++ b/hadoop-common-project/hadoop-common/pom.xml
@@ -298,7 +298,7 @@
 
 
   org.bouncycastle
-  bcprov-jdk16
+  bcprov-jdk15on
   test
 
 
diff --git a/hadoop-common-project/hadoop-kms/pom.xml 
b/hadoop-common-project/hadoop-kms/pom.xml
index 21ad81d..b7f996a 100644
--- a/hadoop-common-project/hadoop-kms/pom.xml
+++ b/hadoop-common-project/hadoop-kms/pom.xml
@@ -171,7 +171,7 @@
 
 
   org.bouncycastle
-  bcprov-jdk16
+  bcprov-jdk15on
   test
 
 
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
index 3379aa4..4223272 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
@@ -204,7 +204,7 @@
 
 
   org.bouncycastle
-  bcprov-jdk16
+  bcprov-jdk15on
   test
 
   
diff --git a/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml 
b/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
index 30f4bea..96b7c3c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
@@ -165,7

[hadoop] 06/07: HADOOP-16460: ABFS: fix for Sever Name Indication (SNI)

2019-08-27 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 9d722c637eb863aeaf05bf3b528ab8dc32470eb7
Author: Sneha Vijayarajan 
AuthorDate: Tue Jul 30 15:18:15 2019 +

HADOOP-16460: ABFS: fix for Sever Name Indication (SNI)

Contributed by Sneha Vijayarajan 
---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 5c03ad5..b096f93 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1249,7 +1249,7 @@
   
 org.wildfly.openssl
 wildfly-openssl
-1.0.4.Final
+1.0.7.Final
   
 
   





[hadoop] 03/07: HADOOP-16376. ABFS: Override access() to no-op.

2019-08-27 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit a6d50a90542f1cb141f45b24d864cae42c2c2274
Author: Da Zhou 
AuthorDate: Sun Jun 16 19:20:46 2019 +0100

HADOOP-16376. ABFS: Override access() to no-op.

Contributed by Da Zhou.

Change-Id: Ia0024bba32250189a87eb6247808b2473c331ed0
---
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java| 23 --
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
index e321e9e..1663ed9 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
@@ -38,12 +38,12 @@ import java.util.concurrent.Future;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
-import org.apache.hadoop.fs.azurebfs.services.AbfsClient;
-import org.apache.hadoop.fs.azurebfs.services.AbfsClientThrottlingIntercept;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.commons.lang3.ArrayUtils;
+import org.apache.hadoop.fs.azurebfs.services.AbfsClient;
+import org.apache.hadoop.fs.azurebfs.services.AbfsClientThrottlingIntercept;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.BlockLocation;
@@ -70,6 +70,7 @@ import org.apache.hadoop.fs.permission.AclEntry;
 import org.apache.hadoop.fs.permission.AclStatus;
 import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.util.Progressable;
@@ -839,6 +840,24 @@ public class AzureBlobFileSystem extends FileSystem {
 }
   }
 
+  /**
+   * Checks if the user can access a path.  The mode specifies which access
+   * checks to perform.  If the requested permissions are granted, then the
+   * method returns normally.  If access is denied, then the method throws an
+   * {@link AccessControlException}.
+   *
+   * @param path Path to check
+   * @param mode type of access to check
+   * @throws AccessControlExceptionif access is denied
+   * @throws java.io.FileNotFoundException if the path does not exist
+   * @throws IOException   see specific implementation
+   */
+  @Override
+  public void access(final Path path, FsAction mode) throws IOException {
+// TODO: make it no-op to unblock hive permission issue for now.
+// Will add a long term fix similar to the implementation in AdlFileSystem.
+  }
+
   private FileStatus tryGetFileStatus(final Path f) {
 try {
   return getFileStatus(f);

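For illustration only (not part of the commit): a minimal standalone sketch of what a caller sees once access() is a no-op on ABFS. The account/container URI and the path below are placeholders.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;

public class AbfsAccessSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder account/container URI; a real run needs valid ABFS credentials in conf.
    FileSystem fs = FileSystem.get(
        URI.create("abfs://container@account.dfs.core.windows.net/"), conf);
    // With the no-op override above, this call returns normally regardless of the
    // requested mode; it no longer throws AccessControlException for ABFS paths.
    fs.access(new Path("/data/input"), FsAction.WRITE);
  }
}

The trade-off, as the TODO in the patch notes, is that permission checks are deferred to the actual read/write operations until a longer-term fix lands.
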




[hadoop] branch trunk updated: HADOOP-16460: ABFS: fix for Server Name Indication (SNI)

2019-07-30 Thread tmarquardt
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 12a526c  HADOOP-16460: ABFS: fix for Server Name Indication (SNI)
12a526c is described below

commit 12a526c080ea37d74f1bc1e543943dc847e2d823
Author: Sneha Vijayarajan 
AuthorDate: Tue Jul 30 15:18:15 2019 +

HADOOP-16460: ABFS: fix for Server Name Indication (SNI)

Contributed by Sneha Vijayarajan 
---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 13c2fb4..4692f38 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1306,7 +1306,7 @@
       <dependency>
         <groupId>org.wildfly.openssl</groupId>
         <artifactId>wildfly-openssl</artifactId>
-        <version>1.0.4.Final</version>
+        <version>1.0.7.Final</version>
       </dependency>
 
   





[03/50] [abbrv] hadoop git commit: HDDS-462. Optimize ContainerStateMap#getMatchingContainerIDs in SCM. Contributed by Nanda kumar.

2018-09-17 Thread tmarquardt
HDDS-462. Optimize ContainerStateMap#getMatchingContainerIDs in SCM. 
Contributed by Nanda kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c9fa0818
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c9fa0818
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c9fa0818

Branch: refs/heads/HADOOP-15407
Commit: c9fa081897df34dba1c2989f597e67a1f384a4e3
Parents: a65c3ea
Author: Nanda kumar 
Authored: Sat Sep 15 23:11:39 2018 +0530
Committer: Nanda kumar 
Committed: Sat Sep 15 23:11:39 2018 +0530

--
 .../hadoop/hdds/scm/container/ContainerID.java  |  26 +++--
 .../container/common/helpers/ContainerInfo.java |  10 +-
 .../scm/container/ContainerStateManager.java|   4 +-
 .../scm/container/states/ContainerQueryKey.java | 110 +++
 .../scm/container/states/ContainerStateMap.java |  42 ++-
 .../scm/node/states/TestNode2ContainerMap.java  |   7 +-
 .../genesis/BenchMarkContainerStateMap.java |  24 +++-
 .../genesis/BenchMarkDatanodeDispatcher.java|  42 +++
 .../apache/hadoop/ozone/genesis/Genesis.java|   9 +-
 9 files changed, 224 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c9fa0818/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerID.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerID.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerID.java
index 9845c04..49af297 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerID.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerID.java
@@ -19,7 +19,9 @@
 package org.apache.hadoop.hdds.scm.container;
 
 import com.google.common.base.Preconditions;
-import org.apache.commons.math3.util.MathUtils;
+import org.apache.commons.lang3.builder.CompareToBuilder;
+import org.apache.commons.lang3.builder.EqualsBuilder;
+import org.apache.commons.lang3.builder.HashCodeBuilder;
 
 /**
  * Container ID is an integer that is a value between 1..MAX_CONTAINER ID.
@@ -48,7 +50,6 @@ public class ContainerID implements Comparable {
* @return ContainerID.
*/
   public static ContainerID valueof(long containerID) {
-Preconditions.checkState(containerID > 0);
 return new ContainerID(containerID);
   }
 
@@ -66,28 +67,37 @@ public class ContainerID implements Comparable {
 if (this == o) {
   return true;
 }
+
 if (o == null || getClass() != o.getClass()) {
   return false;
 }
 
 ContainerID that = (ContainerID) o;
 
-return id == that.id;
+return new EqualsBuilder()
+.append(getId(), that.getId())
+.isEquals();
   }
 
   @Override
   public int hashCode() {
-return MathUtils.hash(id);
+return new HashCodeBuilder(61, 71)
+.append(getId())
+.toHashCode();
   }
 
   @Override
   public int compareTo(Object o) {
 Preconditions.checkNotNull(o);
-if (o instanceof ContainerID) {
-  return Long.compare(((ContainerID) o).getId(), this.getId());
+if(getClass() != o.getClass()) {
+  throw new ClassCastException("ContainerID class expected. found:" +
+  o.getClass().toString());
 }
-throw new IllegalArgumentException("Object O, should be an instance " +
-"of ContainerID");
+
+ContainerID that = (ContainerID) o;
+return new CompareToBuilder()
+.append(this.getId(), that.getId())
+.build();
   }
 
   @Override

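As a side note, a self-contained sketch of the commons-lang3 builder pattern adopted above; the LongId class here is illustrative only and is not part of the patch.

import org.apache.commons.lang3.builder.CompareToBuilder;
import org.apache.commons.lang3.builder.EqualsBuilder;
import org.apache.commons.lang3.builder.HashCodeBuilder;

// Illustrative value type mirroring the equals/hashCode/compareTo pattern above.
final class LongId implements Comparable<LongId> {
  private final long id;

  LongId(long id) {
    this.id = id;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (o == null || getClass() != o.getClass()) {
      return false;
    }
    return new EqualsBuilder().append(id, ((LongId) o).id).isEquals();
  }

  @Override
  public int hashCode() {
    // Seeds must be non-zero odd numbers; 61 and 71 match the values used in the patch.
    return new HashCodeBuilder(61, 71).append(id).toHashCode();
  }

  @Override
  public int compareTo(LongId that) {
    return new CompareToBuilder().append(this.id, that.id).toComparison();
  }
}

The builders keep the three methods consistent with each other, which matters once ContainerID instances are used as keys in the container state maps.
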
http://git-wip-us.apache.org/repos/asf/hadoop/blob/c9fa0818/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ContainerInfo.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ContainerInfo.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ContainerInfo.java
index ed0e0aa..5abcd14 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ContainerInfo.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ContainerInfo.java
@@ -106,6 +106,13 @@ public class ContainerInfo implements 
Comparator,
 this.replicationType = repType;
   }
 
+  public ContainerInfo(ContainerInfo info) {
+this(info.getContainerID(), info.getState(), info.getPipelineID(),
+info.getAllocatedBytes(), info.getUsedBytes(), info.getNumberOfKeys(),
+info.getStateEnterTime(), info.getOwner(),
+info.getDeleteTransactionId(), info.getReplicationFactor(),
+info.getReplicationType());
+  }
   /**
  

[14/50] [abbrv] hadoop git commit: HDDS-463. Fix the release packaging of the ozone distribution. Contributed by Elek Marton.

2018-09-17 Thread tmarquardt
HDDS-463. Fix the release packaging of the ozone distribution. Contributed by 
Elek Marton.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3d89c3e7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3d89c3e7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3d89c3e7

Branch: refs/heads/HADOOP-15407
Commit: 3d89c3e73eba280b8780228fcd097809271b4c8a
Parents: 8af8453
Author: Bharat Viswanadham 
Authored: Mon Sep 17 11:49:09 2018 -0700
Committer: Bharat Viswanadham 
Committed: Mon Sep 17 11:49:09 2018 -0700

--
 dev-support/bin/ozone-dist-layout-stitching | 11 +++-
 .../assemblies/hadoop-src-with-hdds.xml | 56 
 .../assemblies/hadoop-src-with-hdsl.xml | 56 
 hadoop-dist/src/main/ozone/README.txt   | 51 ++
 4 files changed, 116 insertions(+), 58 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3d89c3e7/dev-support/bin/ozone-dist-layout-stitching
--
diff --git a/dev-support/bin/ozone-dist-layout-stitching 
b/dev-support/bin/ozone-dist-layout-stitching
index b4d94b3..8f1f169 100755
--- a/dev-support/bin/ozone-dist-layout-stitching
+++ b/dev-support/bin/ozone-dist-layout-stitching
@@ -122,7 +122,6 @@ run mkdir "ozone-${HDDS_VERSION}"
 run cd "ozone-${HDDS_VERSION}"
 run cp -p "${ROOT}/LICENSE.txt" .
 run cp -p "${ROOT}/NOTICE.txt" .
-run cp -p "${ROOT}/README.txt" .
 
 # Copy hadoop-common first so that it have always have all dependencies.
 # Remaining projects will copy only libraries which are not present already in 
'share' directory.
@@ -162,6 +161,14 @@ cp -r 
"${ROOT}/hadoop-ozone/docs/target/classes/webapps/docs" ./
 rm sbin/*all.sh
 rm sbin/*all.cmd
 
+#remove test and java sources
+find . -name "*tests.jar" | xargs rm
+find . -name "*sources.jar" | xargs rm
+find . -name jdiff -type d | xargs rm -rf
+
+#add ozone specific readme
+
+run cp "${ROOT}/hadoop-dist/src/main/ozone/README.txt" README.txt
 #Copy docker compose files
 run cp -p -r "${ROOT}/hadoop-dist/src/main/compose" .
 
@@ -169,5 +176,5 @@ mkdir -p ./share/hadoop/mapreduce
 mkdir -p ./share/hadoop/yarn
 mkdir -p ./share/hadoop/hdfs
 echo
-echo "Hadoop Ozone dist layout available at: ${BASEDIR}/ozone"
+echo "Hadoop Ozone dist layout available at: ${BASEDIR}/ozone-${HDDS_VERSION}"
 echo

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3d89c3e7/hadoop-assemblies/src/main/resources/assemblies/hadoop-src-with-hdds.xml
--
diff --git 
a/hadoop-assemblies/src/main/resources/assemblies/hadoop-src-with-hdds.xml 
b/hadoop-assemblies/src/main/resources/assemblies/hadoop-src-with-hdds.xml
new file mode 100644
index 000..b1e039f
--- /dev/null
+++ b/hadoop-assemblies/src/main/resources/assemblies/hadoop-src-with-hdds.xml
@@ -0,0 +1,56 @@
+
+http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3";
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
+  
xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3
 http://maven.apache.org/xsd/assembly-1.1.3.xsd";>
+  hadoop-src
+  
+tar.gz
+  
+  true
+  
+
+  .
+  
+LICENCE.txt
+README.txt
+NOTICE.txt
+  
+
+
+  .
+  true
+  
+.git/**
+**/.gitignore
+**/.svn
+**/*.iws
+**/*.ipr
+**/*.iml
+**/.classpath
+**/.project
+**/.settings
+**/target/**
+
+**/*.log
+**/build/**
+**/file:/**
+**/SecurityAuth.audit*
+  
+
+  
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3d89c3e7/hadoop-assemblies/src/main/resources/assemblies/hadoop-src-with-hdsl.xml
--
diff --git 
a/hadoop-assemblies/src/main/resources/assemblies/hadoop-src-with-hdsl.xml 
b/hadoop-assemblies/src/main/resources/assemblies/hadoop-src-with-hdsl.xml
deleted file mode 100644
index b1e039f..000
--- a/hadoop-assemblies/src/main/resources/assemblies/hadoop-src-with-hdsl.xml
+++ /dev/null
@@ -1,56 +0,0 @@
-
-http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3";
-  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
-  
xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3
 http://maven.apache.org/xsd/assembly-1.1.3.xsd";>
-  hadoop-src
-  
-tar.gz
-  
-  true
-  
-
-  .
-  
-LICENCE.txt
-README.txt
-NOTICE.txt
-  
-
-
-  .
-  true
-  
-.git/**
-**/.gitignore
-**/.svn
-**/*.iws
-**/*.ipr
-**/*.iml
-**/.classpath
-

[04/50] [abbrv] hadoop git commit: HDDS-465. Suppress group mapping lookup warnings for ozone. Contributed by Xiaoyu Yao.

2018-09-17 Thread tmarquardt
HDDS-465. Suppress group mapping lookup warnings for ozone. Contributed by 
Xiaoyu Yao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/87e2c0f4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/87e2c0f4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/87e2c0f4

Branch: refs/heads/HADOOP-15407
Commit: 87e2c0f4258f2e46183f796d7d904c0b27030df0
Parents: c9fa081
Author: Nanda kumar 
Authored: Sat Sep 15 23:14:57 2018 +0530
Committer: Nanda kumar 
Committed: Sat Sep 15 23:14:57 2018 +0530

--
 hadoop-dist/src/main/compose/ozone/docker-config | 1 +
 hadoop-dist/src/main/compose/ozoneperf/docker-config | 1 +
 hadoop-hdds/common/src/main/conf/log4j.properties| 1 +
 3 files changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/87e2c0f4/hadoop-dist/src/main/compose/ozone/docker-config
--
diff --git a/hadoop-dist/src/main/compose/ozone/docker-config 
b/hadoop-dist/src/main/compose/ozone/docker-config
index 0def70e..0bf76a3 100644
--- a/hadoop-dist/src/main/compose/ozone/docker-config
+++ b/hadoop-dist/src/main/compose/ozone/docker-config
@@ -31,6 +31,7 @@ 
LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
 LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{-MM-dd 
HH:mm:ss} %-5p %c{1}:%L - %m%n
 LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
 LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN
+LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR
 
 #Enable this variable to print out all hadoop rpc traffic to the stdout. See 
http://byteman.jboss.org/ to define your own instrumentation.
 
#BYTEMAN_SCRIPT_URL=https://raw.githubusercontent.com/apache/hadoop/trunk/dev-support/byteman/hadooprpc.btm

http://git-wip-us.apache.org/repos/asf/hadoop/blob/87e2c0f4/hadoop-dist/src/main/compose/ozoneperf/docker-config
--
diff --git a/hadoop-dist/src/main/compose/ozoneperf/docker-config 
b/hadoop-dist/src/main/compose/ozoneperf/docker-config
index 309adee..acfdb86 100644
--- a/hadoop-dist/src/main/compose/ozoneperf/docker-config
+++ b/hadoop-dist/src/main/compose/ozoneperf/docker-config
@@ -31,4 +31,5 @@ 
LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{-MM-dd HH
 
HADOOP_OPTS=-javaagent:/opt/jmxpromo.jar=port=0:consulHost=consul:consulMode=node
 LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
 LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN
+LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/87e2c0f4/hadoop-hdds/common/src/main/conf/log4j.properties
--
diff --git a/hadoop-hdds/common/src/main/conf/log4j.properties 
b/hadoop-hdds/common/src/main/conf/log4j.properties
index 87c8da8..663e254 100644
--- a/hadoop-hdds/common/src/main/conf/log4j.properties
+++ b/hadoop-hdds/common/src/main/conf/log4j.properties
@@ -154,3 +154,4 @@ log4j.logger.org.apache.commons.beanutils=WARN
 
 log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
 log4j.logger.org.apache.ratis.conf.ConfUtils=WARN
+log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR

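For reference, the same suppression can be applied programmatically through the log4j 1.x API; this snippet is illustrative only and is not part of the patch.

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class SuppressGroupMappingWarnings {
  public static void main(String[] args) {
    // Equivalent of the log4j.properties line added above: only ERROR and above
    // from ShellBasedUnixGroupsMapping will be logged.
    Logger.getLogger("org.apache.hadoop.security.ShellBasedUnixGroupsMapping")
        .setLevel(Level.ERROR);
  }
}
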




[30/50] [abbrv] hadoop git commit: HADOOP-15740. ABFS: Check variable names during initialization of AbfsClientThrottlingIntercept. Contributed by Sneha Varma.

2018-09-17 Thread tmarquardt
HADOOP-15740. ABFS: Check variable names during initialization of 
AbfsClientThrottlingIntercept.
Contributed by Sneha Varma.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/13c70e9b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/13c70e9b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/13c70e9b

Branch: refs/heads/HADOOP-15407
Commit: 13c70e9ba3c168b6aa2184e183291411b346d531
Parents: 9475fd9
Author: Thomas Marquardt 
Authored: Wed Sep 12 21:53:09 2018 +
Committer: Thomas Marquardt 
Committed: Mon Sep 17 19:54:01 2018 +

--
 .../fs/azurebfs/services/AbfsClientThrottlingIntercept.java  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/13c70e9b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientThrottlingIntercept.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientThrottlingIntercept.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientThrottlingIntercept.java
index 0892219..97ea2a6 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientThrottlingIntercept.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientThrottlingIntercept.java
@@ -48,8 +48,8 @@ public final class AbfsClientThrottlingIntercept {
 writeThrottler = new AbfsClientThrottlingAnalyzer("write");
   }
 
-  public static synchronized void initializeSingleton(boolean 
isAutoThrottlingEnabled) {
-if (!isAutoThrottlingEnabled) {
+  public static synchronized void initializeSingleton(boolean 
enableAutoThrottling) {
+if (!enableAutoThrottling) {
   return;
 }
 if (singleton == null) {





[35/50] [abbrv] hadoop git commit: HADOOP-15407. HADOOP-15540. Support Windows Azure Storage - Blob file system "ABFS" in Hadoop: Core Commit.

2018-09-17 Thread tmarquardt
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f044deed/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/contract/ITestAbfsFileSystemContractRename.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/contract/ITestAbfsFileSystemContractRename.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/contract/ITestAbfsFileSystemContractRename.java
new file mode 100644
index 000..6d1c4ae
--- /dev/null
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/contract/ITestAbfsFileSystemContractRename.java
@@ -0,0 +1,63 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.contract;
+
+import java.util.Arrays;
+
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.contract.AbstractContractRenameTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+
+/**
+ * Contract test for rename operation.
+ */
+@RunWith(Parameterized.class)
+public class ITestAbfsFileSystemContractRename extends 
AbstractContractRenameTest {
+  @Parameterized.Parameters(name = "SecureMode={0}")
+  public static Iterable secure() {
+return Arrays.asList(new Object[][] { {true}, {false} });
+  }
+
+  private final boolean isSecure;
+  private final DependencyInjectedContractTest dependencyInjectedContractTest;
+
+  public ITestAbfsFileSystemContractRename(final boolean secure) throws 
Exception {
+this.isSecure = secure;
+dependencyInjectedContractTest = new 
DependencyInjectedContractTest(this.isSecure);
+  }
+
+  @Override
+  public void setup() throws Exception {
+dependencyInjectedContractTest.initialize();
+super.setup();
+  }
+
+  @Override
+  protected Configuration createConfiguration() {
+return this.dependencyInjectedContractTest.getConfiguration();
+  }
+
+  @Override
+  protected AbstractFSContract createContract(final Configuration conf) {
+return new ITestAbfsFileSystemContract(conf, this.isSecure);
+  }
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f044deed/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/contract/ITestAbfsFileSystemContractRootDirectory.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/contract/ITestAbfsFileSystemContractRootDirectory.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/contract/ITestAbfsFileSystemContractRootDirectory.java
new file mode 100644
index 000..46072ad
--- /dev/null
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/contract/ITestAbfsFileSystemContractRootDirectory.java
@@ -0,0 +1,67 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.azurebfs.contract;
+
+import java.util.Arrays;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.junit.Ignore;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+/**
+ * Contract test for root directory operation.
+ */
+@RunWith(Parameterized.class)
+public class ITestA

[07/50] [abbrv] hadoop git commit: HDDS-362. Modify functions impacted by SCM chill mode in ScmBlockLocationProtocol. Contributed by Ajay Kumar.

2018-09-17 Thread tmarquardt
HDDS-362. Modify functions impacted by SCM chill mode in 
ScmBlockLocationProtocol. Contributed by Ajay Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/95231f17
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/95231f17
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/95231f17

Branch: refs/heads/HADOOP-15407
Commit: 95231f1749301b011fe48c9399953f774c40513d
Parents: 07385f8
Author: Xiaoyu Yao 
Authored: Sun Sep 16 17:55:46 2018 -0700
Committer: Xiaoyu Yao 
Committed: Sun Sep 16 17:55:46 2018 -0700

--
 hadoop-hdds/common/src/main/proto/hdds.proto|  7 ++
 .../hadoop/hdds/scm/block/BlockManagerImpl.java | 49 +--
 .../replication/ReplicationActivityStatus.java  | 55 +---
 .../hadoop/hdds/scm/events/SCMEvents.java   |  2 +
 .../hdds/scm/server/ChillModePrecheck.java  | 54 
 .../apache/hadoop/hdds/scm/server/Precheck.java | 29 +++
 .../hdds/scm/server/SCMChillModeManager.java| 49 ++-
 .../scm/server/StorageContainerManager.java |  7 +-
 .../hadoop/hdds/scm/block/TestBlockManager.java | 89 +++-
 .../TestReplicationActivityStatus.java  | 63 ++
 10 files changed, 360 insertions(+), 44 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/95231f17/hadoop-hdds/common/src/main/proto/hdds.proto
--
diff --git a/hadoop-hdds/common/src/main/proto/hdds.proto 
b/hadoop-hdds/common/src/main/proto/hdds.proto
index 89c928b..41f1851 100644
--- a/hadoop-hdds/common/src/main/proto/hdds.proto
+++ b/hadoop-hdds/common/src/main/proto/hdds.proto
@@ -171,6 +171,13 @@ enum ReplicationFactor {
 THREE = 3;
 }
 
+enum ScmOps {
+allocateBlock = 1;
+keyBlocksInfoList = 2;
+getScmInfo = 3;
+deleteBlock = 4;
+}
+
 /**
  * Block ID that uniquely identify a block by SCM.
  */

http://git-wip-us.apache.org/repos/asf/hadoop/blob/95231f17/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
index e4e33c7..8322b73 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
@@ -18,6 +18,7 @@ package org.apache.hadoop.hdds.scm.block;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.conf.StorageUnit;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ScmOps;
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.scm.container.Mapping;
 import org.apache.hadoop.hdds.scm.container.common.helpers.AllocatedBlock;
@@ -28,6 +29,9 @@ import org.apache.hadoop.hdds.scm.node.NodeManager;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType;
+import org.apache.hadoop.hdds.scm.server.ChillModePrecheck;
+import org.apache.hadoop.hdds.scm.server.Precheck;
+import org.apache.hadoop.hdds.server.events.EventHandler;
 import org.apache.hadoop.hdds.server.events.EventPublisher;
 import org.apache.hadoop.metrics2.util.MBeans;
 import org.apache.hadoop.hdds.client.BlockID;
@@ -61,7 +65,8 @@ import static org.apache.hadoop.ozone.OzoneConfigKeys
 .OZONE_BLOCK_DELETING_SERVICE_TIMEOUT_DEFAULT;
 
 /** Block Manager manages the block access for SCM. */
-public class BlockManagerImpl implements BlockManager, BlockmanagerMXBean {
+public class BlockManagerImpl implements EventHandler,
+BlockManager, BlockmanagerMXBean {
   private static final Logger LOG =
   LoggerFactory.getLogger(BlockManagerImpl.class);
   // TODO : FIX ME : Hard coding the owner.
@@ -80,6 +85,7 @@ public class BlockManagerImpl implements BlockManager, 
BlockmanagerMXBean {
   private final int containerProvisionBatchSize;
   private final Random rand;
   private ObjectName mxBean;
+  private ChillModePrecheck chillModePrecheck;
 
   /**
* Constructor.
@@ -125,6 +131,7 @@ public class BlockManagerImpl implements BlockManager, 
BlockmanagerMXBean {
 blockDeletingService =
 new SCMBlockDeletingService(deletedBlockLog, containerManager,
 nodeManager, eventPublisher, svcInterval, serviceTimeout, conf);
+chillModePrecheck = new ChillModePrecheck();
   }
 
   /**
@@ -187,19 +194,13 @@ public class BlockManagerImpl implements BlockManager, 
BlockmanagerMXBean {
   ReplicationType type, Re

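To show the shape of the guard introduced here, a simplified standalone sketch follows; the interface, guard class, and exception below are illustrative stand-ins, not the actual SCM classes added by the patch.

// Illustrative stand-ins: an operation is validated against chill-mode state
// before the real work (e.g. block allocation) is allowed to proceed.
interface OperationPrecheck {
  void check(String operation);
}

class ChillModeGuard implements OperationPrecheck {
  private volatile boolean inChillMode = true;

  void setInChillMode(boolean inChillMode) {
    this.inChillMode = inChillMode;
  }

  @Override
  public void check(String operation) {
    if (inChillMode) {
      throw new IllegalStateException(
          "Rejecting " + operation + ": SCM is in chill mode");
    }
  }
}

class SketchBlockManager {
  private final ChillModeGuard guard = new ChillModeGuard();

  void allocateBlock(long size) {
    guard.check("allocateBlock");
    // ... proceed with allocation once chill mode has been exited.
  }

  public static void main(String[] args) {
    SketchBlockManager manager = new SketchBlockManager();
    manager.guard.setInChillMode(false); // leave chill mode
    manager.allocateBlock(1024);         // now permitted
  }
}
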
[19/50] [abbrv] hadoop git commit: HADOOP-15560. ABFS: removed dependency injection and unnecessary dependencies. Contributed by Da Zhou.

2018-09-17 Thread tmarquardt
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a271fd0e/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/contract/ITestAzureBlobFileSystemBasics.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/contract/ITestAzureBlobFileSystemBasics.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/contract/ITestAzureBlobFileSystemBasics.java
index e148a05..9f3b4a7 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/contract/ITestAzureBlobFileSystemBasics.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/contract/ITestAzureBlobFileSystemBasics.java
@@ -23,6 +23,7 @@ import org.apache.hadoop.fs.FileSystemContractBaseTest;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.Path;
 
+import org.junit.After;
 import org.junit.Before;
 import org.junit.Ignore;
 import org.junit.Test;
@@ -37,7 +38,7 @@ public class ITestAzureBlobFileSystemBasics extends 
FileSystemContractBaseTest {
   private final DependencyInjectedContractTest dependencyInjectedContractTest;
 
   public ITestAzureBlobFileSystemBasics() throws Exception {
-// If contract tests are running in parallel, some root level tests in 
this file will fail
+// If all contract tests are running in parallel, some root level tests in 
FileSystemContractBaseTest will fail
 // due to the race condition. Hence for this contract test it should be 
tested in different container
 dependencyInjectedContractTest = new DependencyInjectedContractTest(false, 
false);
   }
@@ -48,6 +49,14 @@ public class ITestAzureBlobFileSystemBasics extends 
FileSystemContractBaseTest {
 fs = this.dependencyInjectedContractTest.getFileSystem();
   }
 
+  @After
+  public void testCleanup() throws Exception {
+// This contract test is not using existing container for test,
+// instead it creates its own temp container for test, hence we need to 
destroy
+// it after the test.
+this.dependencyInjectedContractTest.testCleanup();
+  }
+
   @Test
   public void testListOnFolderWithNoChildren() throws IOException {
 assertTrue(fs.mkdirs(path("testListStatus/c/1")));

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a271fd0e/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsHttpServiceImpl.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsHttpServiceImpl.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsHttpServiceImpl.java
deleted file mode 100644
index 588df20..000
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsHttpServiceImpl.java
+++ /dev/null
@@ -1,122 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.azurebfs.services;
-
-import java.util.Hashtable;
-
-import org.junit.Assert;
-import org.junit.Ignore;
-import org.junit.Test;
-
-import org.apache.hadoop.fs.FSDataInputStream;
-import org.apache.hadoop.fs.FSDataOutputStream;
-import org.apache.hadoop.fs.FileStatus;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem;
-import org.apache.hadoop.fs.azurebfs.DependencyInjectedTest;
-import org.apache.hadoop.fs.azurebfs.contracts.services.AbfsHttpService;
-
-import static org.junit.Assert.assertEquals;
-
-/**
- * Test AbfsHttpServiceImpl.
- */
-public class ITestAbfsHttpServiceImpl extends DependencyInjectedTest {
-  private static final int TEST_DATA = 100;
-  private static final Path TEST_PATH = new Path("/testfile");
-  public ITestAbfsHttpServiceImpl() {
-super();
-  }
-
-  @Test
-  public void testReadWriteBytesToFileAndEnsureThreadPoolCleanup() throws 
Exception {
-final AzureBlobFileSystem fs = this.getFileSystem();
-testWriteOneByteToFileAndEnsureThreadPoolCleanup();
-
-FSDataInputStream inputStream = fs.open(TEST_PATH, 4 * 1024 * 1024);
-int i = inputStream.read();
-
-assertEquals(TEST_DATA,

[21/50] [abbrv] hadoop git commit: HADOOP-15560. ABFS: removed dependency injection and unnecessary dependencies. Contributed by Da Zhou.

2018-09-17 Thread tmarquardt
HADOOP-15560. ABFS: removed dependency injection and unnecessary dependencies.
Contributed by Da Zhou.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a271fd0e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a271fd0e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a271fd0e

Branch: refs/heads/HADOOP-15407
Commit: a271fd0eca75cef8b8ba940cdac8ad4fd21b4462
Parents: f044dee
Author: Steve Loughran 
Authored: Tue Jul 3 18:55:10 2018 +0200
Committer: Thomas Marquardt 
Committed: Mon Sep 17 19:54:01 2018 +

--
 hadoop-tools/hadoop-azure/pom.xml   |  18 -
 .../src/config/checkstyle-suppressions.xml  |   2 +-
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java |  88 ++-
 .../fs/azurebfs/AzureBlobFileSystemStore.java   | 701 +++
 .../exceptions/ServiceResolutionException.java  |  36 -
 .../services/AbfsHttpClientFactory.java |  39 --
 .../contracts/services/AbfsHttpService.java | 162 -
 .../contracts/services/AbfsServiceProvider.java |  40 --
 .../services/ConfigurationService.java  | 143 
 .../contracts/services/InjectableService.java   |  30 -
 .../contracts/services/TracingService.java  |  66 --
 .../hadoop/fs/azurebfs/services/AbfsClient.java |   7 +-
 .../fs/azurebfs/services/AbfsConfiguration.java | 297 
 .../services/AbfsHttpClientFactoryImpl.java | 116 ---
 .../azurebfs/services/AbfsHttpServiceImpl.java  | 693 --
 .../services/AbfsServiceInjectorImpl.java   |  81 ---
 .../services/AbfsServiceProviderImpl.java   |  96 ---
 .../services/ConfigurationServiceImpl.java  | 317 -
 .../services/ExponentialRetryPolicy.java|   9 +-
 .../azurebfs/services/LoggerSpanReceiver.java   |  74 --
 .../azurebfs/services/TracingServiceImpl.java   | 134 
 .../fs/azurebfs/DependencyInjectedTest.java |  59 +-
 .../azurebfs/ITestAzureBlobFileSystemE2E.java   |   7 +-
 .../ITestAzureBlobFileSystemRandomRead.java |   7 +-
 .../azurebfs/ITestFileSystemInitialization.java |  23 +-
 .../fs/azurebfs/ITestFileSystemProperties.java  | 126 
 .../azurebfs/ITestFileSystemRegistration.java   |  23 +-
 .../ITestAzureBlobFileSystemBasics.java |  11 +-
 .../services/ITestAbfsHttpServiceImpl.java  | 122 
 .../services/ITestReadWriteAndSeek.java |   8 +-
 .../services/ITestTracingServiceImpl.java   |  79 ---
 .../services/MockAbfsHttpClientFactoryImpl.java |  69 --
 .../services/MockAbfsServiceInjectorImpl.java   |  50 --
 .../services/MockServiceProviderImpl.java   |  36 -
 .../TestAbfsConfigurationFieldsValidation.java  | 149 
 ...estConfigurationServiceFieldsValidation.java | 149 
 .../utils/CleanUpAbfsTestContainer.java |  68 ++
 37 files changed, 1432 insertions(+), 2703 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a271fd0e/hadoop-tools/hadoop-azure/pom.xml
--
diff --git a/hadoop-tools/hadoop-azure/pom.xml 
b/hadoop-tools/hadoop-azure/pom.xml
index d4046ef..cbd4dfb 100644
--- a/hadoop-tools/hadoop-azure/pom.xml
+++ b/hadoop-tools/hadoop-azure/pom.xml
@@ -150,12 +150,6 @@
     </dependency>

     <dependency>
-      <groupId>org.threadly</groupId>
-      <artifactId>threadly</artifactId>
-      <scope>compile</scope>
-    </dependency>
-
-    <dependency>
       <groupId>com.fasterxml.jackson.core</groupId>
       <artifactId>jackson-core</artifactId>
       <scope>compile</scope>
@@ -186,18 +180,6 @@
 
 
 
-  org.apache.htrace
-  htrace-core
-  compile
-
-
-
-  org.apache.htrace
-  htrace-core4
-  compile
-
-
-
   com.google.inject
   guice
   compile

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a271fd0e/hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
--
diff --git a/hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
b/hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
index 0204355..751a227 100644
--- a/hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
+++ b/hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
@@ -43,5 +43,5 @@
 
 
 
+  
files="org[\\/]apache[\\/]hadoop[\\/]fs[\\/]azurebfs[\\/]AzureBlobFileSystemStore.java"/>
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a271fd0e/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
index 707c81e..cf5acbb 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
+++ 
b/

[34/50] [abbrv] hadoop git commit: HADOOP-15692. ABFS: extensible support for custom oauth. Contributed by Junhua Gu and Rajeev Bansal.

2018-09-17 Thread tmarquardt
HADOOP-15692. ABFS: extensible support for custom oauth.
Contributed by Junhua Gu and Rajeev Bansal.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/df57c6c3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/df57c6c3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/df57c6c3

Branch: refs/heads/HADOOP-15407
Commit: df57c6c3b12117788b78c30b9d0703c5e9d88458
Parents: dd2b22f
Author: Thomas Marquardt 
Authored: Wed Aug 29 05:59:44 2018 +
Committer: Thomas Marquardt 
Committed: Mon Sep 17 19:54:01 2018 +

--
 .../hadoop/fs/azurebfs/AbfsConfiguration.java   | 63 +++--
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java | 28 ++
 .../azurebfs/constants/ConfigurationKeys.java   |  3 +
 .../constants/FileSystemConfigurations.java |  1 +
 .../CustomDelegationTokenManager.java   | 66 ++
 .../extensions/CustomTokenProviderAdaptee.java  | 75 +++
 .../fs/azurebfs/extensions/package-info.java| 21 +
 .../oauth2/CustomTokenProviderAdaptee.java  | 75 ---
 .../oauth2/CustomTokenProviderAdapter.java  |  1 +
 .../security/AbfsDelegationTokenIdentifier.java | 49 ++
 .../security/AbfsDelegationTokenManager.java| 88 ++
 .../fs/azurebfs/security/AbfsTokenRenewer.java  | 96 
 .../fs/azurebfs/security/package-info.java  | 23 +
 ...apache.hadoop.security.token.TokenIdentifier |  1 +
 ...rg.apache.hadoop.security.token.TokenRenewer |  1 +
 .../ITestAzureBlobFileSystemFileStatus.java |  3 +-
 16 files changed, 492 insertions(+), 102 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/df57c6c3/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
index f26f562..069f17a 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
@@ -27,7 +27,6 @@ import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations;
 import 
org.apache.hadoop.fs.azurebfs.contracts.annotations.ConfigurationValidationAnnotations.IntegerConfigurationValidatorAnnotation;
 import 
org.apache.hadoop.fs.azurebfs.contracts.annotations.ConfigurationValidationAnnotations.LongConfigurationValidatorAnnotation;
 import 
org.apache.hadoop.fs.azurebfs.contracts.annotations.ConfigurationValidationAnnotations.StringConfigurationValidatorAnnotation;
@@ -43,13 +42,14 @@ import 
org.apache.hadoop.fs.azurebfs.diagnostics.BooleanConfigurationBasicValida
 import 
org.apache.hadoop.fs.azurebfs.diagnostics.IntegerConfigurationBasicValidator;
 import 
org.apache.hadoop.fs.azurebfs.diagnostics.LongConfigurationBasicValidator;
 import 
org.apache.hadoop.fs.azurebfs.diagnostics.StringConfigurationBasicValidator;
+import org.apache.hadoop.fs.azurebfs.extensions.CustomTokenProviderAdaptee;
 import org.apache.hadoop.fs.azurebfs.oauth2.AccessTokenProvider;
 import org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider;
-import org.apache.hadoop.fs.azurebfs.oauth2.CustomTokenProviderAdaptee;
 import org.apache.hadoop.fs.azurebfs.oauth2.CustomTokenProviderAdapter;
 import org.apache.hadoop.fs.azurebfs.oauth2.MsiTokenProvider;
 import org.apache.hadoop.fs.azurebfs.oauth2.RefreshTokenBasedTokenProvider;
 import org.apache.hadoop.fs.azurebfs.oauth2.UserPasswordTokenProvider;
+import org.apache.hadoop.fs.azurebfs.security.AbfsDelegationTokenManager;
 import org.apache.hadoop.fs.azurebfs.services.AuthType;
 import org.apache.hadoop.fs.azurebfs.services.KeyProvider;
 import org.apache.hadoop.fs.azurebfs.services.SimpleKeyProvider;
@@ -57,7 +57,7 @@ import org.apache.hadoop.fs.azurebfs.utils.SSLSocketFactoryEx;
 import org.apache.hadoop.util.ReflectionUtils;
 
 import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.*;
-import static 
org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.DEFAULT_FS_AZURE_SSL_CHANNEL_MODE;
+import static 
org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.*;
 
 /**
  * Configuration for Azure Blob FileSystem.
@@ -69,83 +69,86 @@ public class AbfsConfiguration{
   private final boolean isSecure;
 
   @IntegerConfigurationValidatorAnnotation(ConfigurationKey = 
AZURE_WR

[36/50] [abbrv] hadoop git commit: HADOOP-15407. HADOOP-15540. Support Windows Azure Storage - Blob file system "ABFS" in Hadoop: Core Commit.

2018-09-17 Thread tmarquardt
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f044deed/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemListStatus.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemListStatus.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemListStatus.java
new file mode 100644
index 000..6059766
--- /dev/null
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemListStatus.java
@@ -0,0 +1,132 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import java.io.FileNotFoundException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.assertFalse;
+
+/**
+ * Test listStatus operation.
+ */
+public class ITestAzureBlobFileSystemListStatus extends DependencyInjectedTest 
{
+  private static final int TEST_FILES_NUMBER = 6000;
+  public ITestAzureBlobFileSystemListStatus() {
+super();
+  }
+
+  @Test
+  public void testListPath() throws Exception {
+final AzureBlobFileSystem fs = this.getFileSystem();
+final List tasks = new ArrayList<>();
+
+ExecutorService es = Executors.newFixedThreadPool(10);
+for (int i = 0; i < TEST_FILES_NUMBER; i++) {
+  final Path fileName = new Path("/test" + i);
+  Callable callable = new Callable() {
+@Override
+public Void call() throws Exception {
+  fs.create(fileName);
+  return null;
+}
+  };
+
+  tasks.add(es.submit(callable));
+}
+
+for (Future task : tasks) {
+  task.get();
+}
+
+es.shutdownNow();
+FileStatus[] files = fs.listStatus(new Path("/"));
+Assert.assertEquals(files.length, TEST_FILES_NUMBER + 1 /* user directory 
*/);
+  }
+
+  @Test
+  public void testListFileVsListDir() throws Exception {
+final AzureBlobFileSystem fs = this.getFileSystem();
+fs.create(new Path("/testFile"));
+
+FileStatus[] testFiles = fs.listStatus(new Path("/testFile"));
+Assert.assertEquals(testFiles.length, 1);
+Assert.assertFalse(testFiles[0].isDirectory());
+  }
+
+  @Test
+  public void testListFileVsListDir2() throws Exception {
+final AzureBlobFileSystem fs = this.getFileSystem();
+fs.mkdirs(new Path("/testFolder"));
+fs.mkdirs(new Path("/testFolder/testFolder2"));
+fs.mkdirs(new Path("/testFolder/testFolder2/testFolder3"));
+fs.create(new Path("/testFolder/testFolder2/testFolder3/testFile"));
+
+FileStatus[] testFiles = fs.listStatus(new 
Path("/testFolder/testFolder2/testFolder3/testFile"));
+Assert.assertEquals(testFiles.length, 1);
+Assert.assertEquals(testFiles[0].getPath(), new Path(this.getTestUrl(),
+"/testFolder/testFolder2/testFolder3/testFile"));
+Assert.assertFalse(testFiles[0].isDirectory());
+  }
+
+  @Test(expected = FileNotFoundException.class)
+  public void testListNonExistentDir() throws Exception {
+final AzureBlobFileSystem fs = this.getFileSystem();
+fs.listStatus(new Path("/testFile/"));
+  }
+
+  @Test
+  public void testListFiles() throws Exception {
+final AzureBlobFileSystem fs = this.getFileSystem();
+fs.mkdirs(new Path("/test"));
+
+FileStatus[] fileStatuses = fs.listStatus(new Path("/"));
+assertEquals(fileStatuses.length, 2);
+
+fs.mkdirs(new Path("/test/sub"));
+fileStatuses = fs.listStatus(new Path("/test"));
+assertEquals(fileStatuses.length, 1);
+assertEquals(fileStatuses[0].getPath().getName(), "sub");
+assertTrue(fileStatuses[0].isDirectory());
+assertEquals(fileStatuses[0].getLen(), 0);
+
+fs.create(new Path("/test/f"));
+fileStatuses = fs.listStatus

[22/50] [abbrv] hadoop git commit: HADOOP-15663. ABFS: Simplify configuration. Contributed by Da Zhou.

2018-09-17 Thread tmarquardt
HADOOP-15663. ABFS: Simplify configuration.
Contributed by Da Zhou.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/81dc4a99
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/81dc4a99
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/81dc4a99

Branch: refs/heads/HADOOP-15407
Commit: 81dc4a995c3837b721a0c1f897698b5ed47b8fb0
Parents: df57c6c
Author: Thomas Marquardt 
Authored: Fri Aug 31 03:24:42 2018 +
Committer: Thomas Marquardt 
Committed: Mon Sep 17 19:54:01 2018 +

--
 .../src/main/resources/core-default.xml |  12 ++
 .../hadoop/fs/azurebfs/AbfsConfiguration.java   |   4 -
 .../fs/azurebfs/AzureBlobFileSystemStore.java   |  24 ++-
 .../azurebfs/constants/ConfigurationKeys.java   |   5 +-
 .../hadoop/fs/azurebfs/utils/UriUtils.java  |  15 +-
 .../src/site/markdown/testing_azure.md  | 209 ++-
 .../fs/azure/AzureBlobStorageTestAccount.java   |  22 +-
 ...TestFileSystemOperationExceptionMessage.java |   3 +-
 .../fs/azure/ITestWasbUriAndConfiguration.java  |  26 ---
 .../azure/integration/AzureTestConstants.java   |   6 +-
 .../fs/azure/integration/AzureTestUtils.java|  18 ++
 .../azure/metrics/TestRollingWindowAverage.java |   4 +-
 .../azurebfs/AbstractAbfsIntegrationTest.java   |  73 ---
 .../azurebfs/AbstractAbfsTestWithTimeout.java   |  70 +++
 .../ITestAzureBlobFileSystemBackCompat.java |   2 +-
 .../fs/azurebfs/ITestWasbAbfsCompatibility.java |   2 +-
 .../TestAbfsConfigurationFieldsValidation.java  |   3 +-
 .../constants/TestConfigurationKeys.java|  13 +-
 .../contract/ABFSContractTestBinding.java   |   3 +
 .../TestConfigurationValidators.java|   1 -
 .../hadoop/fs/azurebfs/utils/AbfsTestUtils.java |  80 +++
 .../utils/CleanUpAbfsTestContainer.java |  77 ---
 .../hadoop/fs/azurebfs/utils/TestUriUtils.java  |  12 +-
 .../src/test/resources/azure-bfs-test.xml   | 188 -
 .../src/test/resources/azure-test.xml   |  56 ++---
 25 files changed, 473 insertions(+), 455 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/81dc4a99/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml 
b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
index 471dacc..3fcdecb 100644
--- a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
+++ b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
@@ -1619,6 +1619,18 @@
 </property>

 <property>
+  <name>fs.AbstractFileSystem.wasb.impl</name>
+  <value>org.apache.hadoop.fs.azure.Wasb</value>
+  <description>AbstractFileSystem implementation class of wasb://</description>
+</property>
+
+<property>
+  <name>fs.AbstractFileSystem.wasbs.impl</name>
+  <value>org.apache.hadoop.fs.azure.Wasbs</value>
+  <description>AbstractFileSystem implementation class of wasbs://</description>
+</property>
+
+<property>
   <name>fs.wasb.impl</name>
   <value>org.apache.hadoop.fs.azure.NativeAzureFileSystem</value>
   <description>The implementation class of the Native Azure Filesystem</description>

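For illustration (not part of the patch): with the fs.AbstractFileSystem.wasb.impl binding above in place, FileContext-based code can resolve wasb:// URIs. The container and account names below are placeholders.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

public class WasbFileContextSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder container/account; a real run needs valid WASB credentials in conf.
    URI wasbUri = URI.create("wasb://container@account.blob.core.windows.net/");
    // FileContext consults fs.AbstractFileSystem.wasb.impl to find the Wasb binding.
    FileContext fc = FileContext.getFileContext(wasbUri, conf);
    System.out.println(fc.getFileStatus(new Path("/")).isDirectory());
  }
}
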
http://git-wip-us.apache.org/repos/asf/hadoop/blob/81dc4a99/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
index 069f17a..924bc3e 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
@@ -173,10 +173,6 @@ public class AbfsConfiguration{
 }
   }
 
-  public boolean isEmulator() {
-return this.getConfiguration().getBoolean(FS_AZURE_EMULATOR_ENABLED, 
false);
-  }
-
   public boolean isSecureMode() {
 return isSecure;
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/81dc4a99/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
index fc60127..6542a64 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
@@ -77,6 +77,7 @@ import 
org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation;
 import org.apache.hadoop.fs.azurebfs.services.AuthType;
 import o

[16/50] [abbrv] hadoop git commit: HADOOP-15660. ABFS: Add support for OAuth Contributed by Da Zhou, Rajeev Bansal, and Junhua Gu.

2018-09-17 Thread tmarquardt
HADOOP-15660. ABFS: Add support for OAuth
Contributed by Da Zhou, Rajeev Bansal, and Junhua Gu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9149b970
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9149b970
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9149b970

Branch: refs/heads/HADOOP-15407
Commit: 9149b9703e3ab09abdc087db129e82ad3f4cefa1
Parents: d6a4f39
Author: Thomas Marquardt 
Authored: Sat Aug 18 18:53:32 2018 +
Committer: Thomas Marquardt 
Committed: Mon Sep 17 19:54:01 2018 +

--
 .../hadoop/fs/azurebfs/AbfsConfiguration.java   | 149 ++--
 .../fs/azurebfs/AzureBlobFileSystemStore.java   |  26 +-
 .../azurebfs/constants/ConfigurationKeys.java   |  19 +
 .../TokenAccessProviderException.java   |  36 ++
 .../services/AzureServiceErrorCode.java |   1 +
 .../services/ListResultEntrySchema.java |  89 -
 .../fs/azurebfs/oauth2/AccessTokenProvider.java |  98 ++
 .../azurebfs/oauth2/AzureADAuthenticator.java   | 344 +++
 .../hadoop/fs/azurebfs/oauth2/AzureADToken.java |  47 +++
 .../oauth2/ClientCredsTokenProvider.java|  62 
 .../oauth2/CustomTokenProviderAdaptee.java  |  75 
 .../oauth2/CustomTokenProviderAdapter.java  |  57 +++
 .../fs/azurebfs/oauth2/MsiTokenProvider.java|  48 +++
 .../hadoop/fs/azurebfs/oauth2/QueryParams.java  |  69 
 .../oauth2/RefreshTokenBasedTokenProvider.java  |  57 +++
 .../oauth2/UserPasswordTokenProvider.java   |  66 
 .../hadoop/fs/azurebfs/oauth2/package-info.java |  18 +
 .../hadoop/fs/azurebfs/services/AbfsClient.java |  18 +-
 .../fs/azurebfs/services/AbfsHttpHeader.java|   2 +-
 .../fs/azurebfs/services/AbfsRestOperation.java |  19 +-
 .../hadoop/fs/azurebfs/services/AuthType.java   |  27 ++
 .../azurebfs/AbstractAbfsIntegrationTest.java   |  35 +-
 .../hadoop/fs/azurebfs/ITestAbfsClient.java |   2 +-
 .../ITestAzureBlobFileSystemBackCompat.java |   4 +
 .../ITestAzureBlobFileSystemFileStatus.java |   3 -
 .../ITestAzureBlobFileSystemFinalize.java   |   8 +-
 .../azurebfs/ITestAzureBlobFileSystemFlush.java |   8 +-
 .../azurebfs/ITestAzureBlobFileSystemOauth.java | 176 ++
 .../ITestAzureBlobFileSystemRandomRead.java |   3 +
 .../azurebfs/ITestFileSystemInitialization.java |   5 +-
 .../azurebfs/ITestFileSystemRegistration.java   |  11 +-
 .../fs/azurebfs/ITestWasbAbfsCompatibility.java |   2 +
 .../constants/TestConfigurationKeys.java|   6 +
 .../contract/ABFSContractTestBinding.java   |  14 +-
 .../ITestAbfsFileSystemContractAppend.java  |  19 +-
 .../ITestAbfsFileSystemContractConcat.java  |  17 +-
 .../ITestAbfsFileSystemContractCreate.java  |  17 +-
 .../ITestAbfsFileSystemContractDelete.java  |  17 +-
 .../ITestAbfsFileSystemContractDistCp.java  |   2 +-
 ...TestAbfsFileSystemContractGetFileStatus.java |  17 +-
 .../ITestAbfsFileSystemContractMkdir.java   |  17 +-
 .../ITestAbfsFileSystemContractOpen.java|  17 +-
 .../ITestAbfsFileSystemContractRename.java  |  17 +-
 ...TestAbfsFileSystemContractRootDirectory.java |  16 +-
 ...ITestAbfsFileSystemContractSecureDistCp.java |   2 +-
 .../ITestAbfsFileSystemContractSeek.java|  17 +-
 .../ITestAbfsFileSystemContractSetTimes.java|  17 +-
 .../ITestAzureBlobFileSystemBasics.java |   2 +-
 .../fs/azurebfs/services/TestAbfsClient.java|   6 +-
 .../fs/azurebfs/services/TestQueryParams.java   |  72 
 .../utils/CleanUpAbfsTestContainer.java |  13 +-
 .../src/test/resources/azure-bfs-test.xml   | 128 ++-
 52 files changed, 1768 insertions(+), 249 deletions(-)
--

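For orientation, a hedged sketch of a client-credentials (service principal) setup using the OAuth support this commit introduces; the key names follow the later abfs.md documentation and the tenant, client id, and secret values are placeholders, so treat this as illustrative rather than taken from the patch.

import org.apache.hadoop.conf.Configuration;

public class AbfsOAuthConfigSketch {
  public static Configuration clientCredsConf() {
    Configuration conf = new Configuration();
    conf.set("fs.azure.account.auth.type", "OAuth");
    conf.set("fs.azure.account.oauth.provider.type",
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider");
    // Placeholder tenant endpoint and credentials.
    conf.set("fs.azure.account.oauth2.client.endpoint",
        "https://login.microsoftonline.com/<tenant-id>/oauth2/token");
    conf.set("fs.azure.account.oauth2.client.id", "<client-id>");
    conf.set("fs.azure.account.oauth2.client.secret", "<client-secret>");
    return conf;
  }
}
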

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9149b970/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
index e647ae8..f26f562 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.fs.azurebfs;
 
+import java.io.IOException;
 import java.lang.reflect.Field;
 import java.util.Map;
 
@@ -26,7 +27,6 @@ import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys;
 import org.apache.ha

[45/50] [abbrv] hadoop git commit: HADOOP-15753. ABFS: support path "abfs://mycluster/file/path" Contributed by Da Zhou.

2018-09-17 Thread tmarquardt
HADOOP-15753. ABFS: support path "abfs://mycluster/file/path"
Contributed by Da Zhou.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/26211019
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/26211019
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/26211019

Branch: refs/heads/HADOOP-15407
Commit: 26211019c80e6180297dd94abcefe718b70e8cd9
Parents: e5593cb
Author: Thomas Marquardt 
Authored: Fri Sep 14 16:50:26 2018 +
Committer: Thomas Marquardt 
Committed: Mon Sep 17 19:54:01 2018 +

--
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java | 23 +++
 .../ITestAzureBlobFileSystemFileStatus.java | 24 
 2 files changed, 47 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/26211019/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
index 7cbf4d7..2e8de78 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
@@ -366,6 +366,29 @@ public class AzureBlobFileSystem extends FileSystem {
 }
   }
 
+  /**
+   * Qualify a path to one which uses this FileSystem and, if relative,
+   * made absolute.
+   * @param path to qualify.
+   * @return this path if it contains a scheme and authority and is absolute, 
or
+   * a new path that includes a scheme and authority and is fully qualified
+   * @see Path#makeQualified(URI, Path)
+   * @throws IllegalArgumentException if the path has a scheme/URI different
+   * from this FileSystem.
+   */
+  @Override
+  public Path makeQualified(Path path) {
+// To support the format abfs://{dfs.nameservices}/file/path, the path needs
+// to be converted to a URI first to get the raw path string, during which
+// the {dfs.nameservices} authority is omitted.
+if (path != null) {
+  String uriPath = path.toUri().getPath();
+  path = uriPath.isEmpty() ? path : new Path(uriPath);
+}
+return super.makeQualified(path);
+  }
+
+
   @Override
   public Path getWorkingDirectory() {
 return this.workingDir;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/26211019/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java
index b08b920..02f938f 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java
@@ -98,4 +98,28 @@ public class ITestAzureBlobFileSystemFileStatus extends
 validateStatus(fs, TEST_FOLDER, true);
   }
 
+  @Test
+  public void testAbfsPathWithHost() throws IOException {
+AzureBlobFileSystem fs = this.getFileSystem();
+Path pathWithHost1 = new Path("abfs://mycluster/abfs/file1.txt");
+Path pathWithoutHost1 = new Path("/abfs/file1.txt");
+
+Path pathWithHost2 = new Path("abfs://mycluster/abfs/file2.txt");
+Path pathWithoutHost2 = new Path("/abfs/file2.txt");
+
+// verify compatibility of this path format
+fs.create(pathWithHost1);
+assertTrue(fs.exists(pathWithoutHost1));
+
+fs.create(pathWithoutHost2);
+assertTrue(fs.exists(pathWithHost2));
+
+// verify get
+FileStatus fileStatus1 = fs.getFileStatus(pathWithHost1);
+assertEquals(pathWithoutHost1.getName(), fileStatus1.getPath().getName());
+
+FileStatus fileStatus2 = fs.getFileStatus(pathWithoutHost2);
+assertEquals(pathWithHost2.getName(), fileStatus2.getPath().getName());
+  }
+
 }
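
Stepping outside the commit for a moment: the makeQualified() override and the test above rely on java.net.URI dropping the "mycluster"-style authority when only the raw path is taken. A minimal, standalone sketch of that behavior (class name and main method are illustrative, not part of the change):

import java.net.URI;

public class MakeQualifiedSketch {
  public static void main(String[] args) {
    // A path carrying a {dfs.nameservices}-style authority, as in the new test.
    URI withHost = URI.create("abfs://mycluster/abfs/file1.txt");
    // Taking the raw path omits the "mycluster" authority, so the path can be
    // re-qualified against the filesystem's own URI.
    System.out.println(withHost.getPath());   // prints /abfs/file1.txt
  }
}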





[01/50] [abbrv] hadoop git commit: HDDS-409. Ozone acceptance-test and integration-test packages have undefined hadoop component. Contributed by Dinesh Chitlangia. [Forced Update!]

2018-09-17 Thread tmarquardt
Repository: hadoop
Updated Branches:
  refs/heads/HADOOP-15407 8873d29d3 -> b4c23043d (forced update)


HDDS-409. Ozone acceptance-test and integration-test packages have undefined 
hadoop component. Contributed by Dinesh Chitlangia.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/985f3bf3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/985f3bf3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/985f3bf3

Branch: refs/heads/HADOOP-15407
Commit: 985f3bf3fb2e9ba6cccf0420cd91cb4b9394d750
Parents: b95aa56
Author: Márton Elek 
Authored: Sat Sep 15 13:16:59 2018 +0200
Committer: Márton Elek 
Committed: Sat Sep 15 13:21:32 2018 +0200

--
 hadoop-ozone/acceptance-test/pom.xml  | 6 ++
 hadoop-ozone/integration-test/pom.xml | 5 +
 2 files changed, 11 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/985f3bf3/hadoop-ozone/acceptance-test/pom.xml
--
diff --git a/hadoop-ozone/acceptance-test/pom.xml 
b/hadoop-ozone/acceptance-test/pom.xml
index fc11c07..a60d4b0 100644
--- a/hadoop-ozone/acceptance-test/pom.xml
+++ b/hadoop-ozone/acceptance-test/pom.xml
@@ -27,6 +27,12 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd";>
   Apache Hadoop Ozone Acceptance Tests
   Apache Hadoop Ozone Acceptance Tests
   pom
+
+  
+ozone
+true
+  
+
   
 
   ozone-acceptance-test

http://git-wip-us.apache.org/repos/asf/hadoop/blob/985f3bf3/hadoop-ozone/integration-test/pom.xml
--
diff --git a/hadoop-ozone/integration-test/pom.xml 
b/hadoop-ozone/integration-test/pom.xml
index d7a3bc0..993e91f 100644
--- a/hadoop-ozone/integration-test/pom.xml
+++ b/hadoop-ozone/integration-test/pom.xml
@@ -28,6 +28,11 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd";>
   Apache Hadoop Ozone Integration Tests
   jar
 
+  
+ozone
+true
+  
+
   
 
 





[25/50] [abbrv] hadoop git commit: HADOOP-15659. Code changes for bug fix and new tests. Contributed by Da Zhou.

2018-09-17 Thread tmarquardt
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b54b0c1b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFinalize.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFinalize.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFinalize.java
new file mode 100644
index 000..e4acbae
--- /dev/null
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFinalize.java
@@ -0,0 +1,60 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import java.lang.ref.WeakReference;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+
+/**
+ * Test finalize() method when "fs.abfs.impl.disable.cache" is enabled.
+ */
+public class ITestAzureBlobFileSystemFinalize extends AbstractAbfsScaleTest{
+  static final String DISABLE_CACHE_KEY = "fs.abfs.impl.disable.cache";
+
+  public ITestAzureBlobFileSystemFinalize() throws Exception {
+super();
+  }
+
+  @Test
+  public void testFinalize() throws Exception {
+// Disable the cache for filesystem to make sure there is no reference.
+Configuration configuration = this.getConfiguration();
+configuration.setBoolean(this.DISABLE_CACHE_KEY, true);
+
+AzureBlobFileSystem fs = (AzureBlobFileSystem) 
FileSystem.get(configuration);
+
+WeakReference<Object> ref = new WeakReference<>(fs);
+fs = null;
+
+int i = 0;
+int maxTries = 1000;
+while (ref.get() != null && i < maxTries) {
+  System.gc();
+  System.runFinalization();
+  i++;
+}
+
+Assert.assertTrue("testFinalizer didn't get cleaned up within maxTries", 
ref.get() == null);
+  }
+}
\ No newline at end of file
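
As an aside, the test above is an instance of a general pattern: drop the last strong reference, then poll a WeakReference while prompting collection, bounded by a retry limit. A self-contained sketch of that pattern, with an arbitrary target object and an illustrative class name:

import java.lang.ref.WeakReference;

public class WeakRefWaitSketch {
  public static void main(String[] args) {
    Object target = new Object();
    WeakReference<Object> ref = new WeakReference<>(target);
    target = null;                       // no strong references remain

    int i = 0;
    final int maxTries = 1000;
    while (ref.get() != null && i < maxTries) {
      System.gc();                       // a request, not a guarantee
      System.runFinalization();
      i++;
    }
    System.out.println("cleared=" + (ref.get() == null) + " after " + i + " tries");
  }
}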

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b54b0c1b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFlush.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFlush.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFlush.java
index d90f018..2f40b64 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFlush.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFlush.java
@@ -20,12 +20,20 @@ package org.apache.hadoop.fs.azurebfs;
 
 import java.util.ArrayList;
 import java.util.List;
+import java.util.EnumSet;
 import java.util.Random;
 import java.util.concurrent.Callable;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
+import java.io.IOException;
 
+import com.microsoft.azure.storage.blob.BlockEntry;
+import com.microsoft.azure.storage.blob.BlockListingFilter;
+import com.microsoft.azure.storage.blob.CloudBlockBlob;
+import org.apache.hadoop.fs.azure.AzureBlobStorageTestAccount;
+import org.hamcrest.core.IsEqual;
+import org.hamcrest.core.IsNot;
 import org.junit.Test;
 
 import org.apache.hadoop.fs.FSDataInputStream;
@@ -46,6 +54,8 @@ public class ITestAzureBlobFileSystemFlush extends 
AbstractAbfsScaleTest {
   private static final int THREAD_SLEEP_TIME = 6000;
 
   private static final Path TEST_FILE_PATH = new Path("/testfile");
+  private static final int TEST_FILE_LENGTH = 1024 * 1024 * 8;
+  private static final int WAITING_TIME = 4000;
 
   public ITestAzureBlobFileSystemFlush() {
 super();
@@ -55,7 +65,7 @@ public class ITestAzureBlobFileSystemFlush extends 
AbstractAbfsScaleTest {
   public void testAbfsOutputStreamAsyncFlushWithRetainUncommittedData() throws 
Exception {
 final AzureBlobFileSystem fs = getFileSystem();
 final byte[] b;
-try(final FSDataOutputStream stream = fs.create(TEST_FILE_PATH)) {
+try (FSDataOutputStream stream = fs.create(TEST_FILE_PATH)) {
   b 

[26/50] [abbrv] hadoop git commit: HADOOP-15659. Code changes for bug fix and new tests. Contributed by Da Zhou.

2018-09-17 Thread tmarquardt
HADOOP-15659. Code changes for bug fix and new tests.
Contributed by Da Zhou.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b54b0c1b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b54b0c1b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b54b0c1b

Branch: refs/heads/HADOOP-15407
Commit: b54b0c1b676c616aef9574e4e88ea30c314c79dc
Parents: ce03a93
Author: Thomas Marquardt 
Authored: Sat Aug 11 00:10:26 2018 +
Committer: Thomas Marquardt 
Committed: Mon Sep 17 19:54:01 2018 +

--
 hadoop-tools/hadoop-azure/pom.xml   |  26 +-
 .../hadoop/fs/azurebfs/AbfsConfiguration.java   | 356 +++
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java |  55 ++-
 .../fs/azurebfs/AzureBlobFileSystemStore.java   |  39 +-
 .../azurebfs/constants/ConfigurationKeys.java   |   6 +
 .../constants/FileSystemConfigurations.java |   4 +-
 .../exceptions/KeyProviderException.java|  42 +++
 .../services/AzureServiceErrorCode.java |   1 +
 .../services/ListResultEntrySchema.java |   2 +-
 .../contracts/services/ListResultSchema.java|   2 +-
 .../hadoop/fs/azurebfs/services/AbfsClient.java |  26 +-
 .../fs/azurebfs/services/AbfsConfiguration.java | 297 
 .../fs/azurebfs/services/AbfsHttpOperation.java |  19 +-
 .../fs/azurebfs/services/AbfsInputStream.java   |   2 +-
 .../fs/azurebfs/services/AbfsOutputStream.java  |  25 +-
 .../fs/azurebfs/services/AbfsRestOperation.java |   2 +-
 .../azurebfs/services/AbfsUriQueryBuilder.java  |   8 +-
 .../fs/azurebfs/services/KeyProvider.java   |  42 +++
 .../services/ShellDecryptionKeyProvider.java|  63 
 .../fs/azurebfs/services/SimpleKeyProvider.java |  54 +++
 .../azurebfs/AbstractAbfsIntegrationTest.java   |  17 +-
 .../hadoop/fs/azurebfs/ITestAbfsClient.java |  45 +++
 .../fs/azurebfs/ITestAbfsReadWriteAndSeek.java  |  89 +
 .../azurebfs/ITestAzureBlobFileSystemE2E.java   |   2 +-
 .../ITestAzureBlobFileSystemE2EScale.java   |   4 +-
 .../ITestAzureBlobFileSystemFinalize.java   |  60 
 .../azurebfs/ITestAzureBlobFileSystemFlush.java | 136 ++-
 .../ITestAzureBlobFileSystemInitAndCreate.java  |   4 +-
 .../ITestAzureBlobFileSystemRename.java |   3 +-
 .../fs/azurebfs/ITestFileSystemProperties.java  |   4 -
 .../TestAbfsConfigurationFieldsValidation.java  | 149 
 .../contract/AbfsFileSystemContract.java|   5 +-
 .../services/ITestAbfsReadWriteAndSeek.java |  91 -
 .../fs/azurebfs/services/TestAbfsClient.java|  60 
 .../TestAbfsConfigurationFieldsValidation.java  | 147 
 .../TestShellDecryptionKeyProvider.java |  89 +
 36 files changed, 1344 insertions(+), 632 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b54b0c1b/hadoop-tools/hadoop-azure/pom.xml
--
diff --git a/hadoop-tools/hadoop-azure/pom.xml 
b/hadoop-tools/hadoop-azure/pom.xml
index cbd4dfb..7d0406c 100644
--- a/hadoop-tools/hadoop-azure/pom.xml
+++ b/hadoop-tools/hadoop-azure/pom.xml
@@ -149,17 +149,6 @@
   provided
 
 
-
-  com.fasterxml.jackson.core
-  jackson-core
-  compile
-
-
-
-  com.fasterxml.jackson.core
-  jackson-databind
-  compile
-
 
 
   org.apache.httpcomponents
@@ -198,17 +187,24 @@
 
 
 
-  joda-time
-  joda-time
+  org.eclipse.jetty
+  jetty-util-ajax
   compile
 
 
 
-  org.eclipse.jetty
-  jetty-util-ajax
+  org.codehaus.jackson
+  jackson-mapper-asl
+  compile
+
+
+  org.codehaus.jackson
+  jackson-core-asl
   compile
 
 
+
+
 
 
   junit

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b54b0c1b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
new file mode 100644
index 000..1fb5df9
--- /dev/null
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
@@ -0,0 +1,356 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.

[11/50] [abbrv] hadoop git commit: HDFS-13919. Documentation: Improper formatting in Disk Balancer for Settings. Contributed by Ayush Saxena.

2018-09-17 Thread tmarquardt
HDFS-13919. Documentation: Improper formatting in Disk Balancer for Settings.
Contributed by Ayush Saxena.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fdf5a3fd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fdf5a3fd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fdf5a3fd

Branch: refs/heads/HADOOP-15407
Commit: fdf5a3fd63a24b2cb2acafbc30ae4f993ff33145
Parents: 8469366
Author: Anu Engineer 
Authored: Mon Sep 17 10:08:23 2018 -0700
Committer: Anu Engineer 
Committed: Mon Sep 17 10:08:23 2018 -0700

--
 .../hadoop-hdfs/src/site/markdown/HDFSDiskbalancer.md   | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fdf5a3fd/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSDiskbalancer.md
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSDiskbalancer.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSDiskbalancer.md
index ed0233a..5dd6ffc 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSDiskbalancer.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSDiskbalancer.md
@@ -127,6 +127,7 @@ There is a set of diskbalancer settings that can be 
controlled via hdfs-site.xml
 |`dfs.disk.balancer.block.tolerance.percent`| The tolerance percent specifies 
when we have reached a good enough value for any copy step. For example, if you 
specify 10% then getting close to 10% of the target value is good enough.|
 |`dfs.disk.balancer.plan.threshold.percent`| The percentage threshold value 
for volume Data Density in a plan. If the absolute value of volume Data Density 
which is out of threshold value in a node, it means that the volumes 
corresponding to the disks should do the balancing in the plan. The default 
value is 10.|
 |`dfs.disk.balancer.plan.valid.interval`| Maximum amount of time disk balancer 
plan is valid. Supports the following suffixes (case insensitive): ms(millis), 
s(sec), m(min), h(hour), d(day) to specify the time (such as 2s, 2m, 1h, etc.). 
If no suffix is specified then milliseconds is assumed. Default value is 1d|
+
  Debugging
 -
 





[48/50] [abbrv] hadoop git commit: HADOOP-15446. ABFS: tune imports & javadocs; stabilise tests. Contributed by Steve Loughran and Da Zhou.

2018-09-17 Thread tmarquardt
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ce03a93f/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemBackCompat.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemBackCompat.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemBackCompat.java
index d107c9d..d696481 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemBackCompat.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemBackCompat.java
@@ -27,13 +27,11 @@ import org.junit.Test;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.Path;
 
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
-
 /**
  * Test AzureBlobFileSystem back compatibility with WASB.
  */
-public class ITestAzureBlobFileSystemBackCompat extends DependencyInjectedTest 
{
+public class ITestAzureBlobFileSystemBackCompat extends
+AbstractAbfsIntegrationTest {
   public ITestAzureBlobFileSystemBackCompat() {
 super();
   }
@@ -54,13 +52,13 @@ public class ITestAzureBlobFileSystemBackCompat extends 
DependencyInjectedTest {
 blockBlob.uploadText("");
 
 FileStatus[] fileStatuses = fs.listStatus(new Path("/test/10/"));
-assertEquals(fileStatuses.length, 2);
-assertEquals(fileStatuses[0].getPath().getName(), "10");
+assertEquals(2, fileStatuses.length);
+assertEquals("10", fileStatuses[0].getPath().getName());
 assertTrue(fileStatuses[0].isDirectory());
-assertEquals(fileStatuses[0].getLen(), 0);
-assertEquals(fileStatuses[1].getPath().getName(), "123");
+assertEquals(0, fileStatuses[0].getLen());
+assertEquals("123", fileStatuses[1].getPath().getName());
 assertTrue(fileStatuses[1].isDirectory());
-assertEquals(fileStatuses[1].getLen(), 0);
+assertEquals(0, fileStatuses[1].getLen());
   }
 
   private String getBlobConnectionString() {
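
Setting the diff aside briefly: the reordering above matters because JUnit's assertEquals(expected, actual) builds its failure message from the argument order. A small illustrative sketch (class and values hypothetical):

import static org.junit.Assert.assertEquals;

public class AssertOrderSketch {
  // With the expected value first, a failure reads "expected:<2> but was:<3>",
  // which matches reality; with the arguments swapped the report is misleading.
  static void checkListingSize(int actualLength) {
    assertEquals(2, actualLength);
  }

  public static void main(String[] args) {
    checkListingSize(2);   // passes; change the argument to see the message
  }
}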

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ce03a93f/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCopy.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCopy.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCopy.java
index c158e03..90eff97 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCopy.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCopy.java
@@ -33,30 +33,29 @@ import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.fs.Path;
 
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
+import static org.apache.hadoop.fs.contract.ContractTestUtils.assertIsFile;
 
 /**
  * Test copy operation.
  */
-public class ITestAzureBlobFileSystemCopy extends DependencyInjectedTest {
+public class ITestAzureBlobFileSystemCopy extends AbstractAbfsIntegrationTest {
   public ITestAzureBlobFileSystemCopy() {
 super();
   }
 
   @Test
   public void testCopyFromLocalFileSystem() throws Exception {
-final AzureBlobFileSystem fs = this.getFileSystem();
+final AzureBlobFileSystem fs = getFileSystem();
 Path localFilePath = new Path(System.getProperty("test.build.data",
 "azure_test"));
-FileSystem localFs = FileSystem.get(new Configuration());
+FileSystem localFs = FileSystem.getLocal(new Configuration());
 localFs.delete(localFilePath, true);
 try {
   writeString(localFs, localFilePath, "Testing");
   Path dstPath = new Path("copiedFromLocal");
   assertTrue(FileUtil.copy(localFs, localFilePath, fs, dstPath, false,
   fs.getConf()));
-  assertTrue(fs.exists(dstPath));
+  assertIsFile(fs, dstPath);
   assertEquals("Testing", readString(fs, dstPath));
   fs.delete(dstPath, true);
 } finally {
@@ -65,36 +64,32 @@ public class ITestAzureBlobFileSystemCopy extends 
DependencyInjectedTest {
   }
 
   private String readString(FileSystem fs, Path testFile) throws IOException {
-FSDataInputStream inputStream = fs.open(testFile);
-String ret = readString(inputStream);
-inputStream.close();
-return ret;
+return readString(fs.open(testFile));
   }
 
   private String readString(FSDataInputStream inputStream) throws IOException {
-BufferedReader reader = new BufferedReader(new InputStreamReader(
-inputStream));
-final int bufferSize = 1024;
-char[] buffer = new char[bufferSize];
-int count = reader.read(buffer, 0, bufferSize);
-if (count > bufferSize) {
- 

[32/50] [abbrv] hadoop git commit: HADOOP-15694. ABFS: Allow OAuth credentials to not be tied to accounts. Contributed by Sean Mackrory.

2018-09-17 Thread tmarquardt
HADOOP-15694. ABFS: Allow OAuth credentials to not be tied to accounts.
Contributed by Sean Mackrory.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e5593cbd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e5593cbd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e5593cbd

Branch: refs/heads/HADOOP-15407
Commit: e5593cbd8323399359b3e8da46bd58e8364cbf22
Parents: 13c70e9
Author: Thomas Marquardt 
Authored: Wed Sep 12 22:51:41 2018 +
Committer: Thomas Marquardt 
Committed: Mon Sep 17 19:54:01 2018 +

--
 .../hadoop/fs/azurebfs/AbfsConfiguration.java   | 208 ++
 .../fs/azurebfs/AzureBlobFileSystemStore.java   |  69 ++---
 .../azurebfs/constants/ConfigurationKeys.java   |  44 +--
 .../oauth2/UserPasswordTokenProvider.java   |  10 -
 .../services/ShellDecryptionKeyProvider.java|  14 +-
 .../fs/azurebfs/services/SimpleKeyProvider.java |  18 +-
 .../hadoop-azure/src/site/markdown/abfs.md  |  10 +
 .../azurebfs/AbstractAbfsIntegrationTest.java   |  63 +++--
 .../fs/azurebfs/AbstractAbfsScaleTest.java  |   8 +-
 .../hadoop/fs/azurebfs/ITestAbfsClient.java |   6 +-
 .../fs/azurebfs/ITestAbfsReadWriteAndSeek.java  |   4 +-
 .../ITestAzureBlobFileSystemAppend.java |   3 +-
 .../ITestAzureBlobFileSystemBackCompat.java |   3 +-
 .../azurebfs/ITestAzureBlobFileSystemCopy.java  |   3 +-
 .../ITestAzureBlobFileSystemCreate.java |   3 +-
 .../ITestAzureBlobFileSystemDelete.java |   3 +-
 .../azurebfs/ITestAzureBlobFileSystemE2E.java   |   5 +-
 .../ITestAzureBlobFileSystemE2EScale.java   |   2 +-
 .../ITestAzureBlobFileSystemFileStatus.java |   2 +-
 .../ITestAzureBlobFileSystemFinalize.java   |   8 +-
 .../azurebfs/ITestAzureBlobFileSystemFlush.java |   2 +-
 .../ITestAzureBlobFileSystemInitAndCreate.java  |   2 +-
 .../ITestAzureBlobFileSystemListStatus.java |   3 +-
 .../azurebfs/ITestAzureBlobFileSystemMkDir.java |   3 +-
 .../azurebfs/ITestAzureBlobFileSystemOauth.java |  20 +-
 .../ITestAzureBlobFileSystemRename.java |   4 +-
 .../ITestAzureBlobFileSystemRenameUnicode.java  |   2 +-
 .../azurebfs/ITestFileSystemInitialization.java |   8 +-
 .../fs/azurebfs/ITestFileSystemProperties.java  |   2 +-
 .../azurebfs/ITestFileSystemRegistration.java   |  15 +-
 .../TestAbfsConfigurationFieldsValidation.java  |  24 +-
 .../fs/azurebfs/TestAccountConfiguration.java   | 273 +++
 .../constants/TestConfigurationKeys.java|   2 +-
 .../contract/ABFSContractTestBinding.java   |   8 +-
 .../ITestAbfsFileSystemContractAppend.java  |   2 +-
 .../ITestAbfsFileSystemContractConcat.java  |   2 +-
 .../ITestAbfsFileSystemContractCreate.java  |   2 +-
 .../ITestAbfsFileSystemContractDelete.java  |   2 +-
 ...TestAbfsFileSystemContractGetFileStatus.java |   2 +-
 .../ITestAbfsFileSystemContractMkdir.java   |   2 +-
 .../ITestAbfsFileSystemContractOpen.java|   2 +-
 .../ITestAbfsFileSystemContractRename.java  |   2 +-
 ...TestAbfsFileSystemContractRootDirectory.java |   2 +-
 .../ITestAbfsFileSystemContractSeek.java|   2 +-
 .../ITestAbfsFileSystemContractSetTimes.java|   2 +-
 .../fs/azurebfs/services/TestAbfsClient.java|  10 +-
 .../TestShellDecryptionKeyProvider.java |   1 +
 .../hadoop/fs/azurebfs/utils/AbfsTestUtils.java |   5 +
 48 files changed, 665 insertions(+), 227 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e5593cbd/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
index 518fef9..927a315 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
@@ -54,6 +54,7 @@ import org.apache.hadoop.fs.azurebfs.services.AuthType;
 import org.apache.hadoop.fs.azurebfs.services.KeyProvider;
 import org.apache.hadoop.fs.azurebfs.services.SimpleKeyProvider;
 import org.apache.hadoop.fs.azurebfs.utils.SSLSocketFactoryEx;
+import org.apache.hadoop.security.ProviderUtils;
 import org.apache.hadoop.util.ReflectionUtils;
 
 import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.*;
@@ -65,7 +66,8 @@ import static 
org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.*
 @InterfaceAudience.Private
 @InterfaceStability.Evolving
 public class AbfsConfiguration{
-  private final Configuration configuration;
+  private final Configuration rawConfig;
+  

[10/50] [abbrv] hadoop git commit: HDDS-399. Persist open pipeline information across SCM restart. Contributed by Mukul Kumar Singh.

2018-09-17 Thread tmarquardt
HDDS-399. Persist open pipeline information across SCM restart. Contributed by 
Mukul Kumar Singh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/84693669
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/84693669
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/84693669

Branch: refs/heads/HADOOP-15407
Commit: 846936698b2c8c50662e43534ac999df82066a8b
Parents: 9a265fa
Author: Nanda kumar 
Authored: Mon Sep 17 21:51:54 2018 +0530
Committer: Nanda kumar 
Committed: Mon Sep 17 21:51:54 2018 +0530

--
 .../scm/container/common/helpers/Pipeline.java  |  24 ++
 .../org/apache/hadoop/ozone/OzoneConsts.java|   2 +
 .../hdds/scm/container/ContainerMapping.java|  24 +-
 .../scm/container/ContainerStateManager.java|  25 +-
 .../scm/container/states/ContainerStateMap.java |  38 ---
 .../hdds/scm/pipelines/PipelineManager.java | 148 +--
 .../hdds/scm/pipelines/PipelineSelector.java| 249 +--
 .../scm/pipelines/PipelineStateManager.java | 136 ++
 .../scm/pipelines/ratis/RatisManagerImpl.java   |   8 +-
 .../standalone/StandaloneManagerImpl.java   |   8 +-
 .../container/TestContainerReportHandler.java   |   3 +-
 .../container/TestContainerStateManager.java|   4 +-
 .../hdds/scm/node/TestDeadNodeHandler.java  |   4 +-
 .../TestContainerStateManagerIntegration.java   |  10 +-
 .../hdds/scm/pipeline/TestNode2PipelineMap.java |  22 +-
 .../hdds/scm/pipeline/TestPipelineClose.java|  15 +-
 .../hdds/scm/pipeline/TestSCMRestart.java   | 101 
 .../apache/hadoop/ozone/MiniOzoneCluster.java   |   5 +-
 .../hadoop/ozone/MiniOzoneClusterImpl.java  |   8 +-
 19 files changed, 510 insertions(+), 324 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/84693669/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/Pipeline.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/Pipeline.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/Pipeline.java
index 6757262..ef148e5 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/Pipeline.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/Pipeline.java
@@ -86,6 +86,30 @@ public class Pipeline {
 datanodes = new TreeMap<>();
   }
 
+  @Override
+  public int hashCode() {
+return id.hashCode();
+  }
+
+  @Override
+  public boolean equals(Object o) {
+if (this == o) {
+  return true;
+}
+if (o == null || getClass() != o.getClass()) {
+  return false;
+}
+
+Pipeline that = (Pipeline) o;
+
+return id.equals(that.id)
+&& factor.equals(that.factor)
+&& type.equals(that.type)
+&& lifeCycleState.equals(that.lifeCycleState)
+&& leaderID.equals(that.leaderID);
+
+  }
+
   /**
* Gets pipeline object from protobuf.
*
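
An aside on the hashCode()/equals() pair added above: keying hashCode() on the id alone while equals() compares more fields still satisfies the contract, because equal pipelines necessarily share an id; pipelines with the same id but different state simply land in the same hash bucket. A standalone sketch (PipelineKeySketch and its fields are hypothetical stand-ins, not the real Pipeline class):

import java.util.HashSet;
import java.util.Set;

final class PipelineKeySketch {
  private final String id;     // stands in for the pipeline id
  private final String state;  // stands in for lifeCycleState, factor, etc.

  PipelineKeySketch(String id, String state) {
    this.id = id;
    this.state = state;
  }

  @Override
  public int hashCode() {
    return id.hashCode();      // narrower than equals(), but still consistent
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (o == null || getClass() != o.getClass()) {
      return false;
    }
    PipelineKeySketch that = (PipelineKeySketch) o;
    return id.equals(that.id) && state.equals(that.state);
  }

  public static void main(String[] args) {
    Set<PipelineKeySketch> set = new HashSet<>();
    set.add(new PipelineKeySketch("p1", "OPEN"));
    set.add(new PipelineKeySketch("p1", "CLOSED")); // same bucket, not equal
    System.out.println(set.size());                 // prints 2
  }
}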

http://git-wip-us.apache.org/repos/asf/hadoop/blob/84693669/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
index bf4508b..0a15ec8 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
@@ -90,7 +90,9 @@ public final class OzoneConsts {
* level DB names used by SCM and data nodes.
*/
   public static final String CONTAINER_DB_SUFFIX = "container.db";
+  public static final String PIPELINE_DB_SUFFIX = "pipeline.db";
   public static final String SCM_CONTAINER_DB = "scm-" + CONTAINER_DB_SUFFIX;
+  public static final String SCM_PIPELINE_DB = "scm-" + PIPELINE_DB_SUFFIX;
   public static final String DN_CONTAINER_DB = "-dn-"+ CONTAINER_DB_SUFFIX;
   public static final String DELETED_BLOCK_DB = "deletedBlock.db";
   public static final String OM_DB_NAME = "om.db";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/84693669/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
index 5678205..11cc9ee 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMappin

[41/50] [abbrv] hadoop git commit: HADOOP-15407. HADOOP-15540. Support Windows Azure Storage - Blob file system "ABFS" in Hadoop: Core Commit.

2018-09-17 Thread tmarquardt
HADOOP-15407. HADOOP-15540. Support Windows Azure Storage - Blob file system 
"ABFS" in Hadoop: Core Commit.

Contributed by Shane Mainali, Thomas Marquardt, Zichen Sun, Georgi Chalakov, 
Esfandiar Manii, Amit Singh, Dana Kaban, Da Zhou, Junhua Gu, Saher Ahwal, 
Saurabh Pant, James Baker, Shaoyu Zhang, Lawrence Chen, Kevin Chen and Steve 
Loughran


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f044deed
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f044deed
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f044deed

Branch: refs/heads/HADOOP-15407
Commit: f044deedbbfee0812316d587139cb828f27172e9
Parents: 3d89c3e
Author: Steve Loughran 
Authored: Fri Jun 15 18:14:13 2018 +0100
Committer: Thomas Marquardt 
Committed: Mon Sep 17 19:54:01 2018 +

--
 .gitignore  |   1 +
 .../src/main/resources/core-default.xml |  10 +
 .../conf/TestCommonConfigurationFields.java |   3 +
 hadoop-project/pom.xml  |  11 +
 hadoop-tools/hadoop-azure/pom.xml   |  61 +-
 .../src/config/checkstyle-suppressions.xml  |  47 ++
 .../org/apache/hadoop/fs/azurebfs/Abfs.java |  48 ++
 .../org/apache/hadoop/fs/azurebfs/Abfss.java|  48 ++
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java | 612 
 .../fs/azurebfs/SecureAzureBlobFileSystem.java  |  41 ++
 .../azurebfs/constants/AbfsHttpConstants.java   |  76 ++
 .../azurebfs/constants/ConfigurationKeys.java   |  57 ++
 .../constants/FileSystemConfigurations.java |  59 ++
 .../constants/FileSystemUriSchemes.java |  42 ++
 .../constants/HttpHeaderConfigurations.java |  57 ++
 .../fs/azurebfs/constants/HttpQueryParams.java  |  40 ++
 .../fs/azurebfs/constants/package-info.java |  22 +
 .../ConfigurationValidationAnnotations.java | 104 +++
 .../contracts/annotations/package-info.java |  22 +
 .../diagnostics/ConfigurationValidator.java |  37 +
 .../contracts/diagnostics/package-info.java |  22 +
 .../exceptions/AbfsRestOperationException.java  |  84 +++
 .../AzureBlobFileSystemException.java   |  56 ++
 .../ConfigurationPropertyNotFoundException.java |  32 +
 .../FileSystemOperationUnhandledException.java  |  33 +
 .../InvalidAbfsRestOperationException.java  |  40 ++
 .../InvalidConfigurationValueException.java |  37 +
 .../InvalidFileSystemPropertyException.java |  33 +
 .../InvalidUriAuthorityException.java   |  33 +
 .../exceptions/InvalidUriException.java |  33 +
 .../exceptions/ServiceResolutionException.java  |  36 +
 .../contracts/exceptions/TimeoutException.java  |  33 +
 .../contracts/exceptions/package-info.java  |  22 +
 .../fs/azurebfs/contracts/package-info.java |  22 +
 .../services/AbfsHttpClientFactory.java |  39 ++
 .../contracts/services/AbfsHttpService.java | 162 +
 .../contracts/services/AbfsServiceProvider.java |  40 ++
 .../services/AzureServiceErrorCode.java | 112 +++
 .../services/ConfigurationService.java  | 143 
 .../contracts/services/InjectableService.java   |  30 +
 .../services/ListResultEntrySchema.java | 160 +
 .../contracts/services/ListResultSchema.java|  58 ++
 .../contracts/services/ReadBufferStatus.java|  29 +
 .../contracts/services/TracingService.java  |  66 ++
 .../contracts/services/package-info.java|  22 +
 ...Base64StringConfigurationBasicValidator.java |  50 ++
 .../BooleanConfigurationBasicValidator.java |  50 ++
 .../ConfigurationBasicValidator.java|  67 ++
 .../IntegerConfigurationBasicValidator.java |  68 ++
 .../LongConfigurationBasicValidator.java|  65 ++
 .../StringConfigurationBasicValidator.java  |  45 ++
 .../fs/azurebfs/diagnostics/package-info.java   |  22 +
 .../org/apache/hadoop/fs/azurebfs/package.html  |  31 +
 .../hadoop/fs/azurebfs/services/AbfsClient.java | 402 +++
 .../services/AbfsHttpClientFactoryImpl.java | 116 
 .../fs/azurebfs/services/AbfsHttpHeader.java|  40 ++
 .../fs/azurebfs/services/AbfsHttpOperation.java | 430 
 .../azurebfs/services/AbfsHttpServiceImpl.java  | 693 +++
 .../fs/azurebfs/services/AbfsInputStream.java   | 382 ++
 .../fs/azurebfs/services/AbfsOutputStream.java  | 335 +
 .../fs/azurebfs/services/AbfsRestOperation.java | 178 +
 .../services/AbfsServiceInjectorImpl.java   |  81 +++
 .../services/AbfsServiceProviderImpl.java   |  96 +++
 .../azurebfs/services/AbfsUriQueryBuilder.java  |  58 ++
 .../services/ConfigurationServiceImpl.java  | 317 +
 .../services/ExponentialRetryPolicy.java| 141 
 .../azurebfs/services/LoggerSpanReceiver.java   |  74 ++
 .../hadoop/fs/azurebfs/services/ReadBuffer.java | 139 
 .../fs/azurebfs/services/ReadBufferManager.java | 391 

[43/50] [abbrv] hadoop git commit: HADOOP 15688. ABFS: InputStream wrapped in FSDataInputStream twice. Contributed by Sean Mackrory.

2018-09-17 Thread tmarquardt
HADOOP 15688. ABFS: InputStream wrapped in FSDataInputStream twice.
Contributed by Sean Mackrory.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6b6f8cc2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6b6f8cc2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6b6f8cc2

Branch: refs/heads/HADOOP-15407
Commit: 6b6f8cc2bedefc98028d875398ce022edaf77933
Parents: 9c1e4e8
Author: Thomas Marquardt 
Authored: Thu Aug 23 20:43:52 2018 +
Committer: Thomas Marquardt 
Committed: Mon Sep 17 19:54:01 2018 +

--
 .../fs/azurebfs/AzureBlobFileSystemStore.java   | 34 +---
 1 file changed, 16 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6b6f8cc2/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
index 58df914..fc60127 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
@@ -19,7 +19,6 @@ package org.apache.hadoop.fs.azurebfs;
 
 import java.io.File;
 import java.io.IOException;
-import java.io.InputStream;
 import java.io.OutputStream;
 import java.net.MalformedURLException;
 import java.net.URI;
@@ -50,8 +49,6 @@ import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FSDataInputStream;
-import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -251,11 +248,12 @@ public class AzureBlobFileSystemStore {
 isNamespaceEnabled ? getOctalNotation(permission) : null,
 isNamespaceEnabled ? getOctalNotation(umask) : null);
 
-final OutputStream outputStream;
-outputStream = new FSDataOutputStream(
-new AbfsOutputStream(client, AbfsHttpConstants.FORWARD_SLASH + 
getRelativePath(path), 0,
-abfsConfiguration.getWriteBufferSize(), 
abfsConfiguration.isFlushEnabled()), null);
-return outputStream;
+return new AbfsOutputStream(
+client,
+AbfsHttpConstants.FORWARD_SLASH + getRelativePath(path),
+0,
+abfsConfiguration.getWriteBufferSize(),
+abfsConfiguration.isFlushEnabled());
   }
 
   public void createDirectory(final Path path, final FsPermission permission, 
final FsPermission umask)
@@ -273,7 +271,7 @@ public class AzureBlobFileSystemStore {
 isNamespaceEnabled ? getOctalNotation(umask) : null);
   }
 
-  public InputStream openFileForRead(final Path path, final 
FileSystem.Statistics statistics)
+  public AbfsInputStream openFileForRead(final Path path, final 
FileSystem.Statistics statistics)
   throws AzureBlobFileSystemException {
 LOG.debug("openFileForRead filesystem: {} path: {}",
 client.getFileSystem(),
@@ -294,10 +292,9 @@ public class AzureBlobFileSystemStore {
 }
 
 // Add statistics for InputStream
-return new FSDataInputStream(
-new AbfsInputStream(client, statistics,
-AbfsHttpConstants.FORWARD_SLASH + getRelativePath(path), 
contentLength,
-abfsConfiguration.getReadBufferSize(), 
abfsConfiguration.getReadAheadQueueDepth(), eTag));
+return new AbfsInputStream(client, statistics,
+AbfsHttpConstants.FORWARD_SLASH + getRelativePath(path), 
contentLength,
+abfsConfiguration.getReadBufferSize(), 
abfsConfiguration.getReadAheadQueueDepth(), eTag);
   }
 
   public OutputStream openFileForWrite(final Path path, final boolean 
overwrite) throws
@@ -322,11 +319,12 @@ public class AzureBlobFileSystemStore {
 
 final long offset = overwrite ? 0 : contentLength;
 
-final OutputStream outputStream;
-outputStream = new FSDataOutputStream(
-new AbfsOutputStream(client, AbfsHttpConstants.FORWARD_SLASH + 
getRelativePath(path),
-offset, abfsConfiguration.getWriteBufferSize(), 
abfsConfiguration.isFlushEnabled()), null);
-return outputStream;
+return new AbfsOutputStream(
+client,
+AbfsHttpConstants.FORWARD_SLASH + getRelativePath(path),
+offset,
+abfsConfiguration.getWriteBufferSize(),
+abfsConfiguration.isFlushEnabled());
   }
 
   public void rename(fi
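
An editorial aside on the change above: the store now returns the raw AbfsInputStream/AbfsOutputStream so that the FileSystem layer applies the FSDataInputStream/FSDataOutputStream decoration exactly once. A pure-JDK analogy of the redundant double wrapping that was removed (names and values illustrative):

import java.io.BufferedReader;
import java.io.Reader;
import java.io.StringReader;

public class DoubleWrapSketch {
  public static void main(String[] args) throws Exception {
    Reader raw = new StringReader("abfs");
    BufferedReader once = new BufferedReader(raw);    // intended: decorate once
    BufferedReader twice = new BufferedReader(once);  // redundant second layer
    System.out.println((char) twice.read());          // still works, just wasteful
  }
}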

[08/50] [abbrv] hadoop git commit: YARN-8715. Make allocation tags in the placement spec optional for node-attributes. Contributed by Weiwei Yang.

2018-09-17 Thread tmarquardt
YARN-8715. Make allocation tags in the placement spec optional for 
node-attributes. Contributed by Weiwei Yang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/33d8327c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/33d8327c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/33d8327c

Branch: refs/heads/HADOOP-15407
Commit: 33d8327cffdc483b538aec3022fd8730b85babdb
Parents: 95231f1
Author: Sunil G 
Authored: Mon Sep 17 10:07:45 2018 +0530
Committer: Sunil G 
Committed: Mon Sep 17 10:07:45 2018 +0530

--
 .../constraint/PlacementConstraintParser.java   | 44 ++--
 .../resource/TestPlacementConstraintParser.java | 22 ++
 .../distributedshell/ApplicationMaster.java | 20 +++--
 3 files changed, 78 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/33d8327c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/constraint/PlacementConstraintParser.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/constraint/PlacementConstraintParser.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/constraint/PlacementConstraintParser.java
index 93fd706..de9419a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/constraint/PlacementConstraintParser.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/constraint/PlacementConstraintParser.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.yarn.util.constraint;
 
+import com.google.common.base.Strings;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.yarn.api.records.NodeAttributeOpCode;
@@ -589,6 +590,14 @@ public final class PlacementConstraintParser {
   this.num = number;
 }
 
+public static SourceTags emptySourceTags() {
+  return new SourceTags("", 0);
+}
+
+public boolean isEmpty() {
+  return Strings.isNullOrEmpty(tag) && num == 0;
+}
+
 public String getTag() {
   return this.tag;
 }
@@ -692,20 +701,47 @@ public final class PlacementConstraintParser {
   // foo=4,Pn
   String[] splitted = specStr.split(
   String.valueOf(EXPRESSION_VAL_DELIM), 2);
-  if (splitted.length != 2) {
+  final SourceTags st;
+  final String exprs;
+  if (splitted.length == 1) {
+// source tags not specified
+exprs = splitted[0];
+st = SourceTags.emptySourceTags();
+  } else if (splitted.length == 2) {
+exprs = splitted[1];
+String tagAlloc = splitted[0];
+st = SourceTags.parseFrom(tagAlloc);
+  } else {
 throw new PlacementConstraintParseException(
 "Unexpected placement constraint expression " + specStr);
   }
 
-  String tagAlloc = splitted[0];
-  SourceTags st = SourceTags.parseFrom(tagAlloc);
-  String exprs = splitted[1];
   AbstractConstraint constraint =
   PlacementConstraintParser.parseExpression(exprs);
 
   result.put(st, constraint.build());
 }
 
+// Validation
+Set<SourceTags> sourceTagSet = result.keySet();
+if (sourceTagSet.stream()
+.filter(sourceTags -> sourceTags.isEmpty())
+.findAny()
+.isPresent()) {
+  // Source tags, e.g. foo=3, are optional for a node-attribute constraint,
+  // but when source tags are absent the parser only accepts a single
+  // constraint expression, to avoid ambiguous semantics. This is because
+  // the DS AM requests the number of containers specified in the source
+  // tags and, when there are no source tags, overwrites it with the
+  // num_containers argument from the command line. If that is partially
+  // missing in the constraints, we don't know whether it ought to be
+  // overwritten or not.
+  if (result.size() != 1) {
+throw new PlacementConstraintParseException(
+"Source allocation tags is required for a multi placement"
++ " constraint expression.");
+  }
+}
 return result;
   }
 }
\ No newline at end of file
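
Stepping out of the diff: the split above distinguishes a spec that carries source tags ("foo=4,Pn") from a bare expression, which now falls back to empty source tags. A tiny standalone sketch of that branching (literals and class name are illustrative only):

public class SpecSplitSketch {
  private static final char EXPRESSION_VAL_DELIM = ',';

  public static void main(String[] args) {
    for (String spec : new String[] {"foo=4,Pn", "Pn"}) {
      // Split on the first delimiter only, mirroring the parser above.
      String[] splitted = spec.split(String.valueOf(EXPRESSION_VAL_DELIM), 2);
      if (splitted.length == 1) {
        System.out.println(spec + " -> no source tags, expression=" + splitted[0]);
      } else {
        System.out.println(spec + " -> tags=" + splitted[0]
            + ", expression=" + splitted[1]);
      }
    }
  }
}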

http://git-wip-us.apache.org/repos/asf/hadoop/blob/33d8327c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/api/resource/TestPlacementConstraintParser.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/api/resource/TestPlacementConstraintParser.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-a

[40/50] [abbrv] hadoop git commit: HADOOP-15407. HADOOP-15540. Support Windows Azure Storage - Blob file system "ABFS" in Hadoop: Core Commit.

2018-09-17 Thread tmarquardt
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f044deed/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/exceptions/FileSystemOperationUnhandledException.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/exceptions/FileSystemOperationUnhandledException.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/exceptions/FileSystemOperationUnhandledException.java
new file mode 100644
index 000..484c838
--- /dev/null
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/exceptions/FileSystemOperationUnhandledException.java
@@ -0,0 +1,33 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.contracts.exceptions;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+/**
+ * Thrown when an unhandled exception is occurred during a file system 
operation.
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Evolving
+public final class FileSystemOperationUnhandledException extends 
AzureBlobFileSystemException {
+  public FileSystemOperationUnhandledException(Exception innerException) {
+super("An unhandled file operation exception", innerException);
+  }
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f044deed/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/exceptions/InvalidAbfsRestOperationException.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/exceptions/InvalidAbfsRestOperationException.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/exceptions/InvalidAbfsRestOperationException.java
new file mode 100644
index 000..aba1d8c
--- /dev/null
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/exceptions/InvalidAbfsRestOperationException.java
@@ -0,0 +1,40 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.hadoop.fs.azurebfs.contracts.exceptions;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.fs.azurebfs.contracts.services.AzureServiceErrorCode;
+
+/**
+ * Exception to wrap invalid Azure service error responses.
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Evolving
+public class InvalidAbfsRestOperationException extends 
AbfsRestOperationException {
+  public InvalidAbfsRestOperationException(
+  final Exception innerException) {
+super(
+AzureServiceErrorCode.UNKNOWN.getStatusCode(),
+AzureServiceErrorCode.UNKNOWN.getErrorCode(),
+"InvalidAbfsRestOperationException",
+innerException);
+  }
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f044deed/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/exceptions/InvalidConfigurationValueException.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/exceptions/InvalidConfigurationValueException.java
 
b/hadoop-tools/hadoop-azure/src/m

[29/50] [abbrv] hadoop git commit: HADOOP-15661. ABFS: Add support for ACL. Contributed by Junhua Gu and Da Zhou.

2018-09-17 Thread tmarquardt
HADOOP-15661. ABFS: Add support for ACL.
Contributed by Junhua Gu and Da Zhou.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9c1e4e81
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9c1e4e81
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9c1e4e81

Branch: refs/heads/HADOOP-15407
Commit: 9c1e4e81399913f180131f4faa95604087c6d962
Parents: 9149b97
Author: Thomas Marquardt 
Authored: Wed Aug 22 18:31:47 2018 +
Committer: Thomas Marquardt 
Committed: Mon Sep 17 19:54:01 2018 +

--
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java |  199 +++-
 .../fs/azurebfs/AzureBlobFileSystemStore.java   |  351 +-
 .../azurebfs/constants/AbfsHttpConstants.java   |   15 +
 .../constants/HttpHeaderConfigurations.java |6 +
 .../InvalidAclOperationException.java   |   33 +
 .../services/AzureServiceErrorCode.java |1 +
 .../fs/azurebfs/services/AbfsAclHelper.java |  202 
 .../hadoop/fs/azurebfs/services/AbfsClient.java |  119 +-
 .../fs/azurebfs/services/AbfsPermission.java|  114 ++
 .../ITestAzureBlobFileSystemBackCompat.java |2 +
 .../ITestAzureBlobFileSystemFileStatus.java |   43 +-
 .../azurebfs/ITestAzureBlobFileSystemFlush.java |6 +
 .../ITestAzureBlobFileSystemPermission.java |  109 ++
 .../ITestAzureBlobFileSystemRandomRead.java |4 +-
 .../ITestAzureBlobFileSystemRename.java |   14 +
 .../azurebfs/ITestAzureBlobFilesystemAcl.java   | 1071 ++
 .../fs/azurebfs/ITestWasbAbfsCompatibility.java |   12 +
 .../fs/azurebfs/utils/AclTestHelpers.java   |  119 ++
 .../hadoop/fs/azurebfs/utils/Parallelized.java  |   60 +
 19 files changed, 2422 insertions(+), 58 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9c1e4e81/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
index 2cb517b..6bec7cb 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
@@ -26,6 +26,7 @@ import java.io.OutputStream;
 import java.net.HttpURLConnection;
 import java.net.URI;
 import java.net.URISyntaxException;
+import java.util.List;
 import java.util.ArrayList;
 import java.util.EnumSet;
 import java.util.concurrent.Callable;
@@ -60,6 +61,8 @@ import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.FileSystemOperationUnh
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.InvalidUriAuthorityException;
 import org.apache.hadoop.fs.azurebfs.contracts.exceptions.InvalidUriException;
 import org.apache.hadoop.fs.azurebfs.contracts.services.AzureServiceErrorCode;
+import org.apache.hadoop.fs.permission.AclEntry;
+import org.apache.hadoop.fs.permission.AclStatus;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.util.Progressable;
@@ -154,7 +157,8 @@ public class AzureBlobFileSystem extends FileSystem {
 blockSize);
 
 try {
-  OutputStream outputStream = abfsStore.createFile(makeQualified(f), 
overwrite);
+  OutputStream outputStream = abfsStore.createFile(makeQualified(f), 
overwrite,
+  permission == null ? FsPermission.getFileDefault() : permission, 
FsPermission.getUMask(getConf()));
   return new FSDataOutputStream(outputStream, statistics);
 } catch(AzureBlobFileSystemException ex) {
   checkException(f, ex);
@@ -253,7 +257,8 @@ public class AzureBlobFileSystem extends FileSystem {
   AzureServiceErrorCode.INVALID_RENAME_SOURCE_PATH,
   AzureServiceErrorCode.SOURCE_PATH_NOT_FOUND,
   
AzureServiceErrorCode.INVALID_SOURCE_OR_DESTINATION_RESOURCE_TYPE,
-  AzureServiceErrorCode.RENAME_DESTINATION_PARENT_PATH_NOT_FOUND);
+  AzureServiceErrorCode.RENAME_DESTINATION_PARENT_PATH_NOT_FOUND,
+  AzureServiceErrorCode.INTERNAL_OPERATION_ABORT);
   return false;
 }
 
@@ -308,7 +313,8 @@ public class AzureBlobFileSystem extends FileSystem {
 }
 
 try {
-  abfsStore.createDirectory(makeQualified(f));
+  abfsStore.createDirectory(makeQualified(f), permission == null ? 
FsPermission.getDirDefault() : permission,
+  FsPermission.getUMask(getConf()));
   return true;
 } catch (AzureBlobFileSystemException ex) {
   checkException(f, ex, AzureServiceErrorCode.PATH_ALREADY_EXISTS);
@@ -457,6 +463,188 @@ publi

[47/50] [abbrv] hadoop git commit: HADOOP-15446. ABFS: tune imports & javadocs; stabilise tests. Contributed by Steve Loughran and Da Zhou.

2018-09-17 Thread tmarquardt
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ce03a93f/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestWasbAbfsCompatibility.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestWasbAbfsCompatibility.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestWasbAbfsCompatibility.java
index 7010e74..a89c044 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestWasbAbfsCompatibility.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestWasbAbfsCompatibility.java
@@ -17,59 +17,62 @@
  */
 package org.apache.hadoop.fs.azurebfs;
 
-import org.apache.hadoop.fs.FileStatus;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.FSDataInputStream;
-import org.apache.hadoop.fs.FSDataOutputStream;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.azure.NativeAzureFileSystem;
+import java.io.BufferedReader;
+import java.io.InputStreamReader;
 
-import org.junit.Assert;
 import org.junit.Assume;
 import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
-import java.io.BufferedReader;
-import java.io.InputStreamReader;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azure.NativeAzureFileSystem;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
 
-import static junit.framework.TestCase.assertEquals;
-import static junit.framework.TestCase.assertFalse;
-import static junit.framework.TestCase.assertTrue;
+import static org.apache.hadoop.fs.contract.ContractTestUtils.assertDeleted;
+import static 
org.apache.hadoop.fs.contract.ContractTestUtils.assertIsDirectory;
+import static org.apache.hadoop.fs.contract.ContractTestUtils.assertMkdirs;
+import static org.apache.hadoop.fs.contract.ContractTestUtils.assertPathExists;
 
 /**
  * Test compatibility between ABFS client and WASB client.
  */
-public class ITestWasbAbfsCompatibility extends DependencyInjectedTest {
+public class ITestWasbAbfsCompatibility extends AbstractAbfsIntegrationTest {
   private static final String WASB_TEST_CONTEXT = "wasb test file";
   private static final String ABFS_TEST_CONTEXT = "abfs test file";
   private static final String TEST_CONTEXT = "THIS IS FOR TEST";
 
-  public ITestWasbAbfsCompatibility() throws Exception {
-super();
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ITestWasbAbfsCompatibility.class);
 
-Assume.assumeFalse(this.isEmulator());
+  public ITestWasbAbfsCompatibility() throws Exception {
+Assume.assumeFalse("Emulator is not supported", isEmulator());
   }
 
   @Test
   public void testListFileStatus() throws Exception {
 // create file using abfs
-AzureBlobFileSystem fs = this.getFileSystem();
-NativeAzureFileSystem wasb = this.getWasbFileSystem();
+AzureBlobFileSystem fs = getFileSystem();
+NativeAzureFileSystem wasb = getWasbFileSystem();
 
 Path path1 = new Path("/testfiles/~12/!008/3/abFsTestfile");
-FSDataOutputStream abfsStream = fs.create(path1, true);
-abfsStream.write(ABFS_TEST_CONTEXT.getBytes());
-abfsStream.flush();
-abfsStream.hsync();
-abfsStream.close();
+try(FSDataOutputStream abfsStream = fs.create(path1, true)) {
+  abfsStream.write(ABFS_TEST_CONTEXT.getBytes());
+  abfsStream.flush();
+  abfsStream.hsync();
+}
 
 // create file using wasb
 Path path2 = new Path("/testfiles/~12/!008/3/nativeFsTestfile");
-System.out.println(wasb.getUri());
-FSDataOutputStream nativeFsStream = wasb.create(path2, true);
-nativeFsStream.write(WASB_TEST_CONTEXT.getBytes());
-nativeFsStream.flush();
-nativeFsStream.hsync();
-nativeFsStream.close();
+LOG.info("{}", wasb.getUri());
+try(FSDataOutputStream nativeFsStream = wasb.create(path2, true)) {
+  nativeFsStream.write(WASB_TEST_CONTEXT.getBytes());
+  nativeFsStream.flush();
+  nativeFsStream.hsync();
+}
 // list file using abfs and wasb
 FileStatus[] abfsFileStatus = fs.listStatus(new 
Path("/testfiles/~12/!008/3/"));
 FileStatus[] nativeFsFileStatus = wasb.listStatus(new 
Path("/testfiles/~12/!008/3/"));
@@ -83,52 +86,34 @@ public class ITestWasbAbfsCompatibility extends 
DependencyInjectedTest {
 boolean[] createFileWithAbfs = new boolean[]{false, true, false, true};
 boolean[] readFileWithAbfs = new boolean[]{false, true, true, false};
 
-AzureBlobFileSystem abfs = this.getFileSystem();
-NativeAzureFileSystem wasb = this.getWasbFileSystem();
+AzureBlobFileSystem abfs = getFileSystem();
+NativeAzureFileSystem wasb = getWasbFileSystem();
 
-FileSystem fs;
-BufferedReader br = null;
 for (int i = 0; i< 4; i++) {
-  try {
-Path p
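
The test above exercises write-through-one-connector, list-through-the-other interoperability. Below is a hypothetical, self-contained sketch of that pattern; the account and container names are placeholders, and both URIs are assumed to point at the same storage container with credentials already configured.

    import java.net.URI;
    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AbfsWasbInteropSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem abfs = FileSystem.get(
            URI.create("abfs://container@account.dfs.core.windows.net/"), conf);
        FileSystem wasb = FileSystem.get(
            URI.create("wasb://container@account.blob.core.windows.net/"), conf);

        // Write through ABFS, syncing so the data is visible to other clients.
        Path file = new Path("/interop/sample.txt");
        try (FSDataOutputStream out = abfs.create(file, true)) {
          out.write("written through abfs".getBytes(StandardCharsets.UTF_8));
          out.hsync();
        }

        // The same blob should then be listable through the WASB client.
        for (FileStatus status : wasb.listStatus(new Path("/interop"))) {
          System.out.println(status.getPath() + " " + status.getLen());
        }
      }
    }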

[37/50] [abbrv] hadoop git commit: HADOOP-15407. HADOOP-15540. Support Windows Azure Storage - Blob file system "ABFS" in Hadoop: Core Commit.

2018-09-17 Thread tmarquardt
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f044deed/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/SharedKeyCredentials.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/SharedKeyCredentials.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/SharedKeyCredentials.java
new file mode 100644
index 000..dd59892
--- /dev/null
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/SharedKeyCredentials.java
@@ -0,0 +1,507 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import javax.crypto.Mac;
+import javax.crypto.spec.SecretKeySpec;
+import java.io.UnsupportedEncodingException;
+import java.net.HttpURLConnection;
+import java.net.URLDecoder;
+import java.text.DateFormat;
+import java.text.SimpleDateFormat;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Date;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Locale;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.TimeZone;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+
+import org.apache.commons.codec.binary.Base64;
+import org.apache.commons.codec.Charsets;
+/**
+ * Represents the shared key credentials used to access an Azure Storage
+ * account.
+ */
+public class SharedKeyCredentials {
+  private static final int EXPECTED_BLOB_QUEUE_CANONICALIZED_STRING_LENGTH = 
300;
+  private static final Pattern CRLF = Pattern.compile("\r\n", Pattern.LITERAL);
+  private static final String HMAC_SHA256 = "HmacSHA256";
+  private static final Base64 BASE_64 = new Base64();
+
+  /**
+   * Stores a reference to the RFC1123 date/time pattern.
+   */
+  private static final String RFC1123_PATTERN = "EEE, dd MMM  HH:mm:ss z";
+
+
+  private String accountName;
+  private byte[] accountKey;
+  private Mac hmacSha256;
+
+  public SharedKeyCredentials(final String accountName,
+  final String accountKey) {
+if (accountName == null || accountName.isEmpty()) {
+  throw new IllegalArgumentException("Invalid account name.");
+}
+if (accountKey == null || accountKey.isEmpty()) {
+  throw new IllegalArgumentException("Invalid account key.");
+}
+this.accountName = accountName;
+this.accountKey = BASE_64.decode(accountKey);
+initializeMac();
+  }
+
+  public void signRequest(HttpURLConnection connection, final long 
contentLength) throws UnsupportedEncodingException {
+
+connection.setRequestProperty(HttpHeaderConfigurations.X_MS_DATE, 
getGMTTime());
+
+final String stringToSign = canonicalize(connection, accountName, 
contentLength);
+
+final String computedBase64Signature = computeHmac256(stringToSign);
+
+connection.setRequestProperty(HttpHeaderConfigurations.AUTHORIZATION,
+String.format("%s %s:%s", "SharedKey", accountName, 
computedBase64Signature));
+  }
+
+  private String computeHmac256(final String stringToSign) {
+byte[] utf8Bytes = null;
+try {
+  utf8Bytes = stringToSign.getBytes(AbfsHttpConstants.UTF_8);
+} catch (final UnsupportedEncodingException e) {
+  throw new IllegalArgumentException(e);
+}
+byte[] hmac;
+synchronized (this) {
+  hmac = hmacSha256.doFinal(utf8Bytes);
+}
+return new String(BASE_64.encode(hmac), Charsets.UTF_8);
+  }
+
+  /**
+   * Add x-ms- prefixed headers in a fixed order.
+   *
+   * @param conn the HttpURLConnection for the operation
+   * @param canonicalizedString the canonicalized string to add the 
canonicalized headers to.
+   */
+  private static void addCanonicalizedHeaders(final HttpURLConnection conn, 
final StringBuilder canonicalizedString) {
+// Look for header names that start with
+// HeaderNames.PrefixForStorageHeader
+// Then sort them in case-insensitive manner.
+
+fi
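
The heart of the class above is an HMAC-SHA256 over a canonicalized request, Base64-encoded into a "SharedKey account:signature" Authorization header. Here is a standalone sketch of just that signing step; the account name, key, and string-to-sign below are placeholders, not a real canonicalization.

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class SharedKeySigningSketch {
      public static void main(String[] args) throws Exception {
        String accountName = "myaccount";                                   // placeholder
        byte[] accountKey = Base64.getDecoder().decode("c2VjcmV0LWtleQ=="); // placeholder key
        String stringToSign = "PUT\n\n\n0\n/myaccount/myfilesystem";        // placeholder canonicalized request

        // Same primitive the client uses: HMAC-SHA256 keyed with the account key.
        Mac hmacSha256 = Mac.getInstance("HmacSHA256");
        hmacSha256.init(new SecretKeySpec(accountKey, "HmacSHA256"));
        byte[] hmac = hmacSha256.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8));
        String signature = Base64.getEncoder().encodeToString(hmac);

        // Shape of the resulting Authorization header value.
        System.out.println(String.format("SharedKey %s:%s", accountName, signature));
      }
    }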

[42/50] [abbrv] hadoop git commit: HADOOP-15664. ABFS: Reduce test run time via parallelization and grouping. Contributed by Da Zhou.

2018-09-17 Thread tmarquardt
HADOOP-15664. ABFS: Reduce test run time via parallelization and grouping.
Contributed by Da Zhou.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4410eacb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4410eacb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4410eacb

Branch: refs/heads/HADOOP-15407
Commit: 4410eacba7862ec24173356fe3fd468fd79aeb8f
Parents: 81dc4a9
Author: Thomas Marquardt 
Authored: Sat Sep 1 20:39:34 2018 +
Committer: Thomas Marquardt 
Committed: Mon Sep 17 19:54:01 2018 +

--
 hadoop-tools/hadoop-azure/pom.xml   | 350 ++-
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java |   8 +-
 .../fs/azurebfs/services/AbfsOutputStream.java  |   6 +
 .../azure/ITestNativeFileSystemStatistics.java  |  99 ++
 .../fs/azure/NativeAzureFileSystemBaseTest.java |  80 +
 .../fs/azure/integration/AzureTestUtils.java|  53 ++-
 .../ITestAzureBlobFileSystemE2EScale.java   |  11 +-
 .../ITestAzureBlobFileSystemFileStatus.java |   3 +
 .../azurebfs/ITestAzureBlobFileSystemFlush.java | 167 +
 .../fs/azurebfs/ITestWasbAbfsCompatibility.java |   2 +-
 10 files changed, 631 insertions(+), 148 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4410eacb/hadoop-tools/hadoop-azure/pom.xml
--
diff --git a/hadoop-tools/hadoop-azure/pom.xml 
b/hadoop-tools/hadoop-azure/pom.xml
index 7152f638..42f4d05 100644
--- a/hadoop-tools/hadoop-azure/pom.xml
+++ b/hadoop-tools/hadoop-azure/pom.xml
@@ -253,6 +253,351 @@
 
   
 
+  parallel-tests-wasb
+  
+
+  parallel-tests-wasb
+
+  
+  
+
+  
+maven-antrun-plugin
+
+  
+create-parallel-tests-dirs
+test-compile
+
+  
+
+  
+
+
+  run
+
+  
+
+  
+  
+org.apache.maven.plugins
+maven-surefire-plugin
+
+  
+default-test
+
+  test
+
+
+  1
+  ${testsThreadCount}
+  false
+  ${maven-surefire-plugin.argLine} 
-DminiClusterDedicatedDirs=true
+  
${fs.azure.scale.test.timeout}
+  
+
${test.build.data}/${surefire.forkNumber}
+
${test.build.dir}/${surefire.forkNumber}
+
${hadoop.tmp.dir}/${surefire.forkNumber}
+
fork-${surefire.forkNumber}
+
${fs.azure.scale.test.enabled}
+
${fs.azure.scale.test.huge.filesize}
+
${fs.azure.scale.test.huge.partitionsize}
+
${fs.azure.scale.test.timeout}
+
${fs.azure.scale.test.list.performance.threads}
+
${fs.azure.scale.test.list.performance.files}
+  
+  
+**/azure/Test*.java
+**/azure/**/Test*.java
+  
+  
+
**/azure/**/TestRollingWindowAverage*.java
+  
+
+  
+  
+serialized-test-wasb
+
+  test
+
+
+  1
+  false
+  ${maven-surefire-plugin.argLine} 
-DminiClusterDedicatedDirs=true
+  
${fs.azure.scale.test.timeout}
+  
+
${test.build.data}/${surefire.forkNumber}
+
${test.build.dir}/${surefire.forkNumber}
+
${hadoop.tmp.dir}/${surefire.forkNumber}
+
fork-${surefire.forkNumber}
+
${fs.azure.scale.test.enabled}
+
${fs.azure.scale.test.huge.filesize}
+
${fs.azure.scale.te

[49/50] [abbrv] hadoop git commit: HADOOP-15446. ABFS: tune imports & javadocs; stabilise tests. Contributed by Steve Loughran and Da Zhou.

2018-09-17 Thread tmarquardt
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ce03a93f/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
index a78e7af..2b3ccc0 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
@@ -26,14 +26,17 @@ import java.util.ArrayList;
 import java.util.List;
 import java.util.Locale;
 
-import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AzureBlobFileSystemException;
-import org.apache.hadoop.fs.azurebfs.contracts.exceptions.InvalidUriException;
-import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
-import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
-import org.apache.hadoop.fs.azurebfs.constants.HttpQueryParams;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AzureBlobFileSystemException;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.InvalidUriException;
+
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.*;
+import static 
org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.*;
+import static org.apache.hadoop.fs.azurebfs.constants.HttpQueryParams.*;
+
 /**
  * AbfsClient
  */
@@ -53,7 +56,7 @@ public class AbfsClient {
 this.baseUrl = baseUrl;
 this.sharedKeyCredentials = sharedKeyCredentials;
 String baseUrlString = baseUrl.toString();
-this.filesystem = 
baseUrlString.substring(baseUrlString.lastIndexOf(AbfsHttpConstants.FORWARD_SLASH)
 + 1);
+this.filesystem = 
baseUrlString.substring(baseUrlString.lastIndexOf(FORWARD_SLASH) + 1);
 this.abfsConfiguration = abfsConfiguration;
 this.retryPolicy = exponentialRetryPolicy;
 this.userAgent = initializeUserAgent();
@@ -73,19 +76,19 @@ public class AbfsClient {
 
   List<AbfsHttpHeader> createDefaultHeaders() {
 final List<AbfsHttpHeader> requestHeaders = new 
ArrayList<AbfsHttpHeader>();
-requestHeaders.add(new 
AbfsHttpHeader(HttpHeaderConfigurations.X_MS_VERSION, xMsVersion));
-requestHeaders.add(new AbfsHttpHeader(HttpHeaderConfigurations.ACCEPT, 
AbfsHttpConstants.APPLICATION_JSON
-+ AbfsHttpConstants.COMMA + AbfsHttpConstants.SINGLE_WHITE_SPACE + 
AbfsHttpConstants.APPLICATION_OCTET_STREAM));
-requestHeaders.add(new 
AbfsHttpHeader(HttpHeaderConfigurations.ACCEPT_CHARSET,
-AbfsHttpConstants.UTF_8));
-requestHeaders.add(new 
AbfsHttpHeader(HttpHeaderConfigurations.CONTENT_TYPE, 
AbfsHttpConstants.EMPTY_STRING));
-requestHeaders.add(new AbfsHttpHeader(HttpHeaderConfigurations.USER_AGENT, 
userAgent));
+requestHeaders.add(new AbfsHttpHeader(X_MS_VERSION, xMsVersion));
+requestHeaders.add(new AbfsHttpHeader(ACCEPT, APPLICATION_JSON
++ COMMA + SINGLE_WHITE_SPACE + APPLICATION_OCTET_STREAM));
+requestHeaders.add(new AbfsHttpHeader(ACCEPT_CHARSET,
+UTF_8));
+requestHeaders.add(new AbfsHttpHeader(CONTENT_TYPE, EMPTY_STRING));
+requestHeaders.add(new AbfsHttpHeader(USER_AGENT, userAgent));
 return requestHeaders;
   }
 
   AbfsUriQueryBuilder createDefaultUriQueryBuilder() {
 final AbfsUriQueryBuilder abfsUriQueryBuilder = new AbfsUriQueryBuilder();
-abfsUriQueryBuilder.addQuery(HttpQueryParams.QUERY_PARAM_TIMEOUT, 
AbfsHttpConstants.DEFAULT_TIMEOUT);
+abfsUriQueryBuilder.addQuery(QUERY_PARAM_TIMEOUT, DEFAULT_TIMEOUT);
 return abfsUriQueryBuilder;
   }
 
@@ -93,12 +96,12 @@ public class AbfsClient {
 final List<AbfsHttpHeader> requestHeaders = createDefaultHeaders();
 
 final AbfsUriQueryBuilder abfsUriQueryBuilder = new AbfsUriQueryBuilder();
-abfsUriQueryBuilder.addQuery(HttpQueryParams.QUERY_PARAM_RESOURCE, 
AbfsHttpConstants.FILESYSTEM);
+abfsUriQueryBuilder.addQuery(QUERY_PARAM_RESOURCE, FILESYSTEM);
 
 final URL url = createRequestUrl(abfsUriQueryBuilder.toString());
 final AbfsRestOperation op = new AbfsRestOperation(
 this,
-AbfsHttpConstants.HTTP_METHOD_PUT,
+HTTP_METHOD_PUT,
 url,
 requestHeaders);
 op.execute();
@@ -109,19 +112,19 @@ public class AbfsClient {
 final List<AbfsHttpHeader> requestHeaders = createDefaultHeaders();
 // JDK7 does not support PATCH, so to workaround the issue we will use
 // PUT and specify the real method in the X-Http-Method-Override header.
-requestHeaders.add(new 
AbfsHttpHeader(HttpHeaderConfigurations.X_HTTP_METHOD_OVERRIDE,
-AbfsHttpConstants.HTTP_METHOD_PATCH));
+requestHeaders.add(new AbfsHttpHeader(X_HTTP_METHOD_OVERRIDE,
+HTTP_METHOD_PATCH));
 
-requestHeaders.add(new 
AbfsHttpHeader
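
As the comment in the hunk notes, HttpURLConnection on JDK7 cannot send PATCH, so the client issues PUT and names the real verb in X-Http-Method-Override. A minimal, hypothetical sketch of that workaround follows; the URL, query parameters, and the missing auth headers are placeholders.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class PatchOverrideSketch {
      public static void main(String[] args) throws Exception {
        URL url = new URL(
            "https://myaccount.dfs.core.windows.net/myfilesystem/file?action=append&position=0");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();

        conn.setRequestMethod("PUT");                               // verb actually sent on the wire
        conn.setRequestProperty("X-Http-Method-Override", "PATCH"); // verb the service should apply
        conn.setDoOutput(true);

        try (OutputStream out = conn.getOutputStream()) {
          out.write("payload".getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
      }
    }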

[31/50] [abbrv] hadoop git commit: HADOOP-15694. ABFS: Allow OAuth credentials to not be tied to accounts. Contributed by Sean Mackrory.

2018-09-17 Thread tmarquardt
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e5593cbd/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAccountConfiguration.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAccountConfiguration.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAccountConfiguration.java
new file mode 100644
index 000..425485c
--- /dev/null
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAccountConfiguration.java
@@ -0,0 +1,273 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.InvalidConfigurationValueException;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+
+import org.junit.Test;
+
+/**
+ * Tests correct precedence of various configurations that might be returned.
+ * Configuration can be specified with the account name as a suffix to the
+ * config key, or without one. Account-specific values should be returned
+ * whenever they exist. Account-agnostic values are returned if they do not.
+ * Default values are returned if neither exists.
+ *
+ * These tests are in 2 main groups: tests of methods that allow default values
+ * (such as get and getPasswordString) are of one form, while tests of methods
+ * that do not allow default values (all others) follow another form.
+ */
+public class TestAccountConfiguration {
+
+  @Test
+  public void testStringPrecedence()
+  throws IllegalAccessException, IOException, 
InvalidConfigurationValueException {
+AbfsConfiguration abfsConf;
+final Configuration conf = new Configuration();
+
+final String accountName1 = "account1";
+final String accountName2 = "account2";
+final String accountName3 = "account3";
+
+final String globalKey = "fs.azure.configuration";
+final String accountKey1 = globalKey + "." + accountName1;
+final String accountKey2 = globalKey + "." + accountName2;
+final String accountKey3 = globalKey + "." + accountName3;
+
+final String globalValue = "global";
+final String accountValue1 = "one";
+final String accountValue2 = "two";
+
+conf.set(accountKey1, accountValue1);
+conf.set(accountKey2, accountValue2);
+conf.set(globalKey, globalValue);
+
+abfsConf = new AbfsConfiguration(conf, accountName1);
+assertEquals("Wrong value returned when account-specific value was 
requested",
+abfsConf.get(accountKey1), accountValue1);
+assertEquals("Account-specific value was not returned when one existed",
+abfsConf.get(globalKey), accountValue1);
+
+abfsConf = new AbfsConfiguration(conf, accountName2);
+assertEquals("Wrong value returned when a different account-specific value 
was requested",
+abfsConf.get(accountKey1), accountValue1);
+assertEquals("Wrong value returned when account-specific value was 
requested",
+abfsConf.get(accountKey2), accountValue2);
+assertEquals("Account-agnostic value return even though account-specific 
value was set",
+abfsConf.get(globalKey), accountValue2);
+
+abfsConf = new AbfsConfiguration(conf, accountName3);
+assertNull("Account-specific value returned when none was set",
+abfsConf.get(accountKey3));
+assertEquals("Account-agnostic value not returned when no account-specific 
value was set",
+abfsConf.get(globalKey), globalValue);
+  }
+
+  @Test
+  public void testPasswordPrecedence()
+  throws IllegalAccessException, IOException, 
InvalidConfigurationValueException {
+AbfsConfiguration abfsConf;
+final Configuration conf = new Configuration();
+
+final String accountName1 = "account1";
+final String accountName2 = "account2";
+final String accountName3 = "account3";
+
+final String globalKey = "fs.azure.password";
+final String accountKey1 = globalKey + "." + accountName1;
+final String accountKey2 = globalKey + "." + accountName2;
+final String acc
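
To make the precedence concrete, here is a short sketch that mirrors the test's own pattern: an account-specific key (the global key plus an account suffix) wins for that account, and other accounts fall back to the account-agnostic value. The key name and accounts are placeholders, and AbfsConfiguration behaves as described by this patch.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;

    public class AccountConfigPrecedenceSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.azure.example.setting", "global-value");
        conf.set("fs.azure.example.setting.myaccount.dfs.core.windows.net", "account-value");

        // For the suffixed account, the account-specific value is returned...
        AbfsConfiguration bound =
            new AbfsConfiguration(conf, "myaccount.dfs.core.windows.net");
        System.out.println(bound.get("fs.azure.example.setting"));   // account-value

        // ...while an account with no suffixed key sees the global value.
        AbfsConfiguration other =
            new AbfsConfiguration(conf, "otheraccount.dfs.core.windows.net");
        System.out.println(other.get("fs.azure.example.setting"));   // global-value
      }
    }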

[38/50] [abbrv] hadoop git commit: HADOOP-15407. HADOOP-15540. Support Windows Azure Storage - Blob file system "ABFS" in Hadoop: Core Commit.

2018-09-17 Thread tmarquardt
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f044deed/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
new file mode 100644
index 000..de5c934
--- /dev/null
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
@@ -0,0 +1,335 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.concurrent.ConcurrentLinkedDeque;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ExecutorCompletionService;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.Callable;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+
+import com.google.common.base.Preconditions;
+
+import org.apache.hadoop.fs.FSExceptionMessages;
+import org.apache.hadoop.fs.Syncable;
+import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AzureBlobFileSystemException;
+
+/**
+ * The BlobFsOutputStream for Rest AbfsClient
+ */
+public class AbfsOutputStream extends OutputStream implements Syncable {
+  private final AbfsClient client;
+  private final String path;
+  private long position;
+  private boolean closed;
+  private volatile IOException lastError;
+
+  private long lastFlushOffset;
+  private long lastTotalAppendOffset = 0;
+
+  private final int bufferSize;
+  private byte[] buffer;
+  private int bufferIndex;
+  private final int maxConcurrentRequestCount;
+
+  private ConcurrentLinkedDeque<WriteOperation> writeOperations;
+  private final ThreadPoolExecutor threadExecutor;
+  private final ExecutorCompletionService<Void> completionService;
+
+  public AbfsOutputStream(
+  final AbfsClient client,
+  final String path,
+  final long position,
+  final int bufferSize) {
+this.client = client;
+this.path = path;
+this.position = position;
+this.closed = false;
+this.lastError = null;
+this.lastFlushOffset = 0;
+this.bufferSize = bufferSize;
+this.buffer = new byte[bufferSize];
+this.bufferIndex = 0;
+this.writeOperations = new ConcurrentLinkedDeque<>();
+
+this.maxConcurrentRequestCount = 4 * 
Runtime.getRuntime().availableProcessors();
+
+this.threadExecutor
+= new ThreadPoolExecutor(maxConcurrentRequestCount,
+maxConcurrentRequestCount,
+10L,
+TimeUnit.SECONDS,
+new LinkedBlockingQueue<>());
+this.completionService = new 
ExecutorCompletionService<>(this.threadExecutor);
+  }
+
+  /**
+   * Writes the specified byte to this output stream. The general contract for
+   * write is that one byte is written to the output stream. The byte to be
+   * written is the eight low-order bits of the argument b. The 24 high-order
+   * bits of b are ignored.
+   *
+   * @param byteVal the byteValue to write.
+   * @throws IOException if an I/O error occurs. In particular, an IOException 
may be
+   * thrown if the output stream has been closed.
+   */
+  @Override
+  public void write(final int byteVal) throws IOException {
+write(new byte[]{(byte) (byteVal & 0xFF)});
+  }
+
+  /**
+   * Writes length bytes from the specified byte array starting at off to
+   * this output stream.
+   *
+   * @param data   the byte array to write.
+   * @param off the start off in the data.
+   * @param length the number of bytes to write.
+   * @throws IOException if an I/O error occurs. In particular, an IOException 
may be
+   * thrown if the output stream has been closed.
+   */
+  @Override
+  public synchronized void write(final byte[] data, final int off, final int 
length)
+  throws IOException {
+if (this.lastError != null) {
+  throw this.lastError;
+}
+
+Preconditions.checkArgument(data != null, "null data");
+
+if (off < 0 || length < 0 || length >

[33/50] [abbrv] hadoop git commit: HADOOP-15728. ABFS: Add backward compatibility to handle Unsupported Operation for storage account with no namespace feature.

2018-09-17 Thread tmarquardt
HADOOP-15728. ABFS: Add backward compatibility to handle Unsupported Operation
for storage account with no namespace feature.

Contributed by Da Zhou.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6801b307
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6801b307
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6801b307

Branch: refs/heads/HADOOP-15407
Commit: 6801b3073317000d5a9c24764aa93918955c27a6
Parents: 347a52a
Author: Thomas Marquardt 
Authored: Fri Sep 7 03:45:35 2018 +
Committer: Thomas Marquardt 
Committed: Mon Sep 17 19:54:01 2018 +

--
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java |  44 +
 .../ITestAzureBlobFileSystemPermission.java |   4 +-
 .../azurebfs/ITestAzureBlobFilesystemAcl.java   | 185 ++-
 3 files changed, 228 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6801b307/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
index c0ecc35..7cbf4d7 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
@@ -497,6 +497,10 @@ public class AzureBlobFileSystem extends FileSystem {
   throws IOException {
 LOG.debug(
 "AzureBlobFileSystem.setOwner path: {}", path);
+if (!getIsNamespaceEnabeld()) {
+  super.setOwner(path, owner, group);
+  return;
+}
 
 if ((owner == null || owner.isEmpty()) && (group == null || 
group.isEmpty())) {
   throw new IllegalArgumentException("A valid owner or group must be 
specified.");
@@ -521,6 +525,10 @@ public class AzureBlobFileSystem extends FileSystem {
   public void setPermission(final Path path, final FsPermission permission)
   throws IOException {
 LOG.debug("AzureBlobFileSystem.setPermission path: {}", path);
+if (!getIsNamespaceEnabeld()) {
+  super.setPermission(path, permission);
+  return;
+}
 
 if (permission == null) {
   throw new IllegalArgumentException("The permission can't be null");
@@ -549,6 +557,12 @@ public class AzureBlobFileSystem extends FileSystem {
   throws IOException {
 LOG.debug("AzureBlobFileSystem.modifyAclEntries path: {}", 
path.toString());
 
+if (!getIsNamespaceEnabeld()) {
+  throw new UnsupportedOperationException(
+  "modifyAclEntries is only supported by storage accounts " +
+  "with the hierarchical namespace enabled.");
+}
+
 if (aclSpec == null || aclSpec.isEmpty()) {
   throw new IllegalArgumentException("The value of the aclSpec parameter 
is invalid.");
 }
@@ -574,6 +588,12 @@ public class AzureBlobFileSystem extends FileSystem {
   throws IOException {
 LOG.debug("AzureBlobFileSystem.removeAclEntries path: {}", path);
 
+if (!getIsNamespaceEnabeld()) {
+  throw new UnsupportedOperationException(
+  "removeAclEntries is only supported by storage accounts " +
+  "with the hierarchical namespace enabled.");
+}
+
 if (aclSpec == null || aclSpec.isEmpty()) {
   throw new IllegalArgumentException("The aclSpec argument is invalid.");
 }
@@ -595,6 +615,12 @@ public class AzureBlobFileSystem extends FileSystem {
   public void removeDefaultAcl(final Path path) throws IOException {
 LOG.debug("AzureBlobFileSystem.removeDefaultAcl path: {}", path);
 
+if (!getIsNamespaceEnabeld()) {
+  throw new UnsupportedOperationException(
+  "removeDefaultAcl is only supported by storage accounts" +
+  " with the hierarchical namespace enabled.");
+}
+
 try {
   abfsStore.removeDefaultAcl(makeQualified(path));
 } catch (AzureBlobFileSystemException ex) {
@@ -614,6 +640,12 @@ public class AzureBlobFileSystem extends FileSystem {
   public void removeAcl(final Path path) throws IOException {
 LOG.debug("AzureBlobFileSystem.removeAcl path: {}", path);
 
+if (!getIsNamespaceEnabeld()) {
+  throw new UnsupportedOperationException(
+  "removeAcl is only supported by storage accounts" +
+  " with the hierarchical namespace enabled.");
+}
+
 try {
   abfsStore.removeAcl(makeQualified(path));
 } catch (AzureBlobFileSystemException ex) {
@@ -636,6 +668,12 @@ public class AzureBlobFileSystem extends FileSystem {
   throws IOException {
 LOG.debug("AzureBlobFileSyst

[39/50] [abbrv] hadoop git commit: HADOOP-15407. HADOOP-15540. Support Windows Azure Storage - Blob file system "ABFS" in Hadoop: Core Commit.

2018-09-17 Thread tmarquardt
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f044deed/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
new file mode 100644
index 000..c17a5c1
--- /dev/null
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
@@ -0,0 +1,402 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.UnsupportedEncodingException;
+import java.net.MalformedURLException;
+import java.net.URL;
+import java.net.URLEncoder;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Locale;
+
+import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AzureBlobFileSystemException;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.InvalidUriException;
+import org.apache.hadoop.fs.azurebfs.contracts.services.ConfigurationService;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.constants.HttpQueryParams;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * AbfsClient
+ */
+public class AbfsClient {
+  public static final Logger LOG = LoggerFactory.getLogger(AbfsClient.class);
+  private final URL baseUrl;
+  private final SharedKeyCredentials sharedKeyCredentials;
+  private final String xMsVersion = "2018-03-28";
+  private final ExponentialRetryPolicy retryPolicy;
+  private final String filesystem;
+  private final ConfigurationService configurationService;
+  private final String userAgent;
+
+  public AbfsClient(final URL baseUrl, final SharedKeyCredentials 
sharedKeyCredentials,
+final ConfigurationService configurationService,
+final ExponentialRetryPolicy exponentialRetryPolicy) {
+this.baseUrl = baseUrl;
+this.sharedKeyCredentials = sharedKeyCredentials;
+String baseUrlString = baseUrl.toString();
+this.filesystem = 
baseUrlString.substring(baseUrlString.lastIndexOf(AbfsHttpConstants.FORWARD_SLASH)
 + 1);
+this.configurationService = configurationService;
+this.retryPolicy = exponentialRetryPolicy;
+this.userAgent = initializeUserAgent();
+  }
+
+  public String getFileSystem() {
+return filesystem;
+  }
+
+  ExponentialRetryPolicy getRetryPolicy() {
+return retryPolicy;
+  }
+
+  SharedKeyCredentials getSharedKeyCredentials() {
+return sharedKeyCredentials;
+  }
+
+  List<AbfsHttpHeader> createDefaultHeaders() {
+final List<AbfsHttpHeader> requestHeaders = new 
ArrayList<AbfsHttpHeader>();
+requestHeaders.add(new 
AbfsHttpHeader(HttpHeaderConfigurations.X_MS_VERSION, xMsVersion));
+requestHeaders.add(new AbfsHttpHeader(HttpHeaderConfigurations.ACCEPT, 
AbfsHttpConstants.APPLICATION_JSON
++ AbfsHttpConstants.COMMA + AbfsHttpConstants.SINGLE_WHITE_SPACE + 
AbfsHttpConstants.APPLICATION_OCTET_STREAM));
+requestHeaders.add(new 
AbfsHttpHeader(HttpHeaderConfigurations.ACCEPT_CHARSET,
+AbfsHttpConstants.UTF_8));
+requestHeaders.add(new 
AbfsHttpHeader(HttpHeaderConfigurations.CONTENT_TYPE, 
AbfsHttpConstants.EMPTY_STRING));
+requestHeaders.add(new AbfsHttpHeader(HttpHeaderConfigurations.USER_AGENT, 
userAgent));
+return requestHeaders;
+  }
+
+  AbfsUriQueryBuilder createDefaultUriQueryBuilder() {
+final AbfsUriQueryBuilder abfsUriQueryBuilder = new AbfsUriQueryBuilder();
+abfsUriQueryBuilder.addQuery(HttpQueryParams.QUERY_PARAM_TIMEOUT, 
AbfsHttpConstants.DEFAULT_TIMEOUT);
+return abfsUriQueryBuilder;
+  }
+
+  public AbfsRestOperation createFilesystem() throws 
AzureBlobFileSystemException {
+final List<AbfsHttpHeader> requestHeaders = createDefaultHeaders();
+
+final AbfsUriQueryBuilder abfsUriQueryBuilder = new AbfsUriQueryBuilder();
+abfsUriQueryBuilder.addQuery(HttpQueryParams.QUERY_PARAM_RESOURCE, 
AbfsHttpConstants.FILESYSTEM);
+
+final URL url = createRequestUrl(ab
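
At the HTTP level, the createFilesystem operation assembled above is a PUT on the filesystem URL with resource=filesystem in the query string, plus the default headers such as x-ms-version. A stripped-down, hypothetical sketch of that request follows, omitting the shared-key signing a real call requires; account and filesystem names are placeholders.

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class CreateFilesystemSketch {
      public static void main(String[] args) throws Exception {
        URL url = new URL(
            "https://myaccount.dfs.core.windows.net/myfilesystem?resource=filesystem");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();

        conn.setRequestMethod("PUT");
        conn.setRequestProperty("x-ms-version", "2018-03-28"); // version string used by this client
        // A real request is also signed (SharedKey) before being sent.

        System.out.println("HTTP " + conn.getResponseCode());
      }
    }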

[50/50] [abbrv] hadoop git commit: HADOOP-15446. ABFS: tune imports & javadocs; stabilise tests. Contributed by Steve Loughran and Da Zhou.

2018-09-17 Thread tmarquardt
HADOOP-15446. ABFS: tune imports & javadocs; stabilise tests.
Contributed by Steve Loughran and Da Zhou.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ce03a93f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ce03a93f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ce03a93f

Branch: refs/heads/HADOOP-15407
Commit: ce03a93f78c4d97ccb48a3906fcd77ad0ac756be
Parents: a271fd0
Author: Thomas Marquardt 
Authored: Wed Aug 8 18:52:12 2018 +
Committer: Thomas Marquardt 
Committed: Mon Sep 17 19:54:01 2018 +

--
 .../apache/hadoop/fs/RawLocalFileSystem.java|   2 +-
 .../src/main/resources/core-default.xml |  15 +
 .../src/site/markdown/filesystem/filesystem.md  |  11 +-
 .../fs/contract/AbstractContractAppendTest.java |  57 ++--
 .../fs/contract/AbstractContractConcatTest.java |  34 +--
 .../AbstractContractGetFileStatusTest.java  |  26 +-
 .../fs/contract/AbstractContractMkdirTest.java  |   8 +
 .../hadoop/fs/contract/AbstractFSContract.java  |   2 -
 .../hadoop/fs/contract/ContractTestUtils.java   |  19 +-
 .../org/apache/hadoop/fs/azurebfs/Abfs.java |   4 +-
 .../org/apache/hadoop/fs/azurebfs/Abfss.java|   4 +-
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java |  98 +++---
 .../fs/azurebfs/AzureBlobFileSystemStore.java   | 147 -
 .../fs/azurebfs/SecureAzureBlobFileSystem.java  |   4 +-
 .../azurebfs/constants/AbfsHttpConstants.java   |   2 +-
 .../constants/HttpHeaderConfigurations.java |   2 +-
 .../fs/azurebfs/constants/HttpQueryParams.java  |   2 +-
 .../ConfigurationValidationAnnotations.java |  14 +-
 .../diagnostics/ConfigurationValidator.java |   6 +-
 .../AzureBlobFileSystemException.java   |   4 +-
 .../exceptions/InvalidUriException.java |   4 +-
 ...Base64StringConfigurationBasicValidator.java |   2 +-
 .../BooleanConfigurationBasicValidator.java |   4 +-
 .../ConfigurationBasicValidator.java|   2 +-
 .../IntegerConfigurationBasicValidator.java |   2 +-
 .../LongConfigurationBasicValidator.java|   4 +-
 .../StringConfigurationBasicValidator.java  |   4 +-
 .../hadoop/fs/azurebfs/services/AbfsClient.java | 157 +-
 .../fs/azurebfs/services/AbfsHttpOperation.java |   6 +-
 .../fs/azurebfs/services/AbfsInputStream.java   |   5 +-
 .../fs/azurebfs/services/AbfsOutputStream.java  | 125 
 .../fs/azurebfs/services/AbfsRestOperation.java |  24 +-
 .../azurebfs/services/AbfsUriQueryBuilder.java  |   6 +-
 .../services/ExponentialRetryPolicy.java|   2 +-
 .../hadoop/fs/azurebfs/services/ReadBuffer.java |   4 +-
 .../fs/azurebfs/services/ReadBufferManager.java |  56 ++--
 .../fs/azurebfs/services/ReadBufferWorker.java  |   4 +-
 .../azurebfs/services/SharedKeyCredentials.java |  32 +-
 .../hadoop-azure/src/site/markdown/abfs.md  |  72 +
 .../src/site/markdown/testing_azure.md  |  76 +
 .../ITestAzureNativeContractAppend.java |  23 ++
 .../azurebfs/AbstractAbfsIntegrationTest.java   | 304 +++
 .../fs/azurebfs/AbstractAbfsScaleTest.java  |  53 
 .../fs/azurebfs/DependencyInjectedTest.java | 206 -
 .../ITestAzureBlobFileSystemAppend.java |  28 +-
 .../ITestAzureBlobFileSystemBackCompat.java |  16 +-
 .../azurebfs/ITestAzureBlobFileSystemCopy.java  |  45 ++-
 .../ITestAzureBlobFileSystemCreate.java |  54 ++--
 .../ITestAzureBlobFileSystemDelete.java |  79 +++--
 .../azurebfs/ITestAzureBlobFileSystemE2E.java   |  66 ++--
 .../ITestAzureBlobFileSystemE2EScale.java   |  80 ++---
 .../ITestAzureBlobFileSystemFileStatus.java |  45 ++-
 .../azurebfs/ITestAzureBlobFileSystemFlush.java | 209 +++--
 .../ITestAzureBlobFileSystemInitAndCreate.java  |  17 +-
 .../ITestAzureBlobFileSystemListStatus.java | 123 +---
 .../azurebfs/ITestAzureBlobFileSystemMkDir.java |  55 +---
 .../azurebfs/ITestAzureBlobFileSystemOpen.java  |  41 ---
 .../ITestAzureBlobFileSystemRandomRead.java |  48 +--
 .../ITestAzureBlobFileSystemRename.java | 129 
 .../ITestAzureBlobFileSystemRenameUnicode.java  |  98 ++
 .../azurebfs/ITestFileSystemInitialization.java |  47 ++-
 .../fs/azurebfs/ITestFileSystemProperties.java  |  47 ++-
 .../azurebfs/ITestFileSystemRegistration.java   |  78 +++--
 .../fs/azurebfs/ITestWasbAbfsCompatibility.java | 166 +-
 .../constants/TestConfigurationKeys.java|  11 +-
 .../contract/ABFSContractTestBinding.java   |  64 
 .../contract/AbfsFileSystemContract.java|  65 
 .../DependencyInjectedContractTest.java |  63 
 .../contract/ITestAbfsFileSystemContract.java   |  54 
 .../ITestAbfsFileSystemContractAppend.java  |  14 +-
 .../ITestAbfsFileSystemContractConcat.java  |  14 +-
 .../ITestAbfsFileSystemContractCreate.java  |  10 +-
 .../ITestAbfsFileS

[46/50] [abbrv] hadoop git commit: HADOOP-15745. ABFS: Add ABFS configuration to ConfigRedactor.

2018-09-17 Thread tmarquardt
HADOOP-15745. ABFS: Add ABFS configuration to ConfigRedactor.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9475fd90
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9475fd90
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9475fd90

Branch: refs/heads/HADOOP-15407
Commit: 9475fd902a37e94fb7687877d33aa7dfff92d9eb
Parents: 6801b30
Author: Sean Mackrory 
Authored: Wed Sep 12 07:14:31 2018 -0600
Committer: Thomas Marquardt 
Committed: Mon Sep 17 19:54:01 2018 +

--
 .../org/apache/hadoop/fs/CommonConfigurationKeysPublic.java   | 4 +++-
 .../hadoop-common/src/main/resources/core-default.xml | 4 +++-
 .../test/java/org/apache/hadoop/conf/TestConfigRedactor.java  | 7 +++
 3 files changed, 13 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9475fd90/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
index b101b3b..b92d325 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
@@ -886,7 +886,9 @@ public class CommonConfigurationKeysPublic {
   "fs.s3a.*.server-side-encryption.key",
   "fs.azure\\.account.key.*",
   "credential$",
-  "oauth.*token$",
+  "oauth.*secret",
+  "oauth.*password",
+  "oauth.*token",
   HADOOP_SECURITY_SENSITIVE_CONFIG_KEYS);
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9475fd90/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml 
b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
index 3fcdecb..f8eba04 100644
--- a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
+++ b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
@@ -603,7 +603,9 @@
   fs.s3a.*.server-side-encryption.key
   fs.azure.account.key.*
   credential$
-  oauth.*token$
+  oauth.*secret
+  oauth.*password
+  oauth.*token
   hadoop.security.sensitive-config-keys
   
   A comma-separated or multi-line list of regular expressions to

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9475fd90/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigRedactor.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigRedactor.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigRedactor.java
index 3133942..ca53fa7 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigRedactor.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigRedactor.java
@@ -55,6 +55,13 @@ public class TestConfigRedactor {
 "fs.s3a.server-side-encryption.key",
 "fs.s3a.bucket.engineering.server-side-encryption.key",
 "fs.azure.account.key.abcdefg.blob.core.windows.net",
+"fs.azure.account.key.abcdefg.dfs.core.windows.net",
+"fs.azure.account.oauth2.client.secret",
+"fs.azure.account.oauth2.client.secret.account.dfs.core.windows.net",
+"fs.azure.account.oauth2.user.password",
+"fs.azure.account.oauth2.user.password.account.dfs.core.windows.net",
+"fs.azure.account.oauth2.refresh.token",
+"fs.azure.account.oauth2.refresh.token.account.dfs.core.windows.net",
 "fs.adl.oauth2.refresh.token",
 "fs.adl.oauth2.credential",
 "dfs.adls.oauth2.refresh.token",





[06/50] [abbrv] hadoop git commit: HDDS-470. Ozone acceptance tests are failing. Contributed by Elek, Marton.

2018-09-17 Thread tmarquardt
HDDS-470. Ozone acceptance tests are failing. Contributed by Elek, Marton.

(cherry picked from commit dca8d0c2615d142bca55d367a0bc988ce9860368)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/07385f88
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/07385f88
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/07385f88

Branch: refs/heads/HADOOP-15407
Commit: 07385f886ed534aba527820c0bda4dcf410e05f6
Parents: 82fbbd5
Author: Arpit Agarwal 
Authored: Sun Sep 16 14:31:09 2018 -0700
Committer: Arpit Agarwal 
Committed: Sun Sep 16 14:31:29 2018 -0700

--
 .../test/acceptance/ozonefs/ozonesinglenode.robot | 18 +-
 1 file changed, 9 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/07385f88/hadoop-ozone/acceptance-test/src/test/acceptance/ozonefs/ozonesinglenode.robot
--
diff --git 
a/hadoop-ozone/acceptance-test/src/test/acceptance/ozonefs/ozonesinglenode.robot
 
b/hadoop-ozone/acceptance-test/src/test/acceptance/ozonefs/ozonesinglenode.robot
index b718bc9..15ad5bb 100644
--- 
a/hadoop-ozone/acceptance-test/src/test/acceptance/ozonefs/ozonesinglenode.robot
+++ 
b/hadoop-ozone/acceptance-test/src/test/acceptance/ozonefs/ozonesinglenode.robot
@@ -14,7 +14,7 @@
 # limitations under the License.
 
 *** Settings ***
-Documentation   Ozonefs Single Node Test
+Documentation   Ozone Single Node Test
 Library OperatingSystem
 Suite Setup Startup Ozone cluster with size  1
 Suite Teardown  Teardown Ozone cluster
@@ -27,23 +27,23 @@ ${PROJECTDIR}   ${CURDIR}/../../../../../..
 
 *** Test Cases ***
 Create volume and bucket
-Execute on  datanodeozone sh -createVolume 
http://ozoneManager/fstest -user bilbo -quota 100TB -root
-Execute on  datanodeozone sh -createBucket 
http://ozoneManager/fstest/bucket1
+Execute on  datanodeozone sh volume create 
http://ozoneManager/fstest --user bilbo --quota 100TB --root
+Execute on  datanodeozone sh bucket create 
http://ozoneManager/fstest/bucket1
 
 Check volume from ozonefs
 ${result} = Execute on  datanode  ozone fs -ls 
o3://bucket1.fstest/
 
 Create directory from ozonefs
 Execute on  datanode  ozone fs -mkdir 
-p o3://bucket1.fstest/testdir/deep
-${result} = Execute on  ozoneManager  ozone sh 
-listKey o3://ozoneManager/fstest/bucket1 | grep -v WARN | jq -r '.[].keyName'
+${result} = Execute on  ozoneManager  ozone sh key 
list o3://ozoneManager/fstest/bucket1 | grep -v WARN | jq -r '.[].keyName'
 Should contain${result}
 testdir/deep
 Test key handling
-Execute on  datanodeozone sh -putKey 
o3://ozoneManager/fstest/bucket1/key1 -file NOTICE.txt -replicationFactor 1
+Execute on  datanodeozone sh key put 
o3://ozoneManager/fstest/bucket1/key1 NOTICE.txt --replication ONE
 Execute on  datanoderm -f NOTICE.txt.1
-Execute on  datanodeozone sh -getKey 
o3://ozoneManager/fstest/bucket1/key1 -file NOTICE.txt.1
+Execute on  datanodeozone sh key get 
o3://ozoneManager/fstest/bucket1/key1 NOTICE.txt.1
 Execute on  datanodels -l NOTICE.txt.1
-${result} = Execute on  datanodeozone sh -infoKey 
o3://ozoneManager/fstest/bucket1/key1 | grep -Ev 
'Removed|WARN|DEBUG|ERROR|INFO|TRACE' | jq -r '. | select(.keyName=="key1")'
+${result} = Execute on  datanodeozone sh key info 
o3://ozoneManager/fstest/bucket1/key1 | grep -Ev 
'Removed|WARN|DEBUG|ERROR|INFO|TRACE' | jq -r '. | select(.keyName=="key1")'
 Should contain  ${result}   createdOn
-${result} = Execute on  datanodeozone sh -listKey 
o3://ozoneManager/fstest/bucket1 | grep -Ev 
'Removed|WARN|DEBUG|ERROR|INFO|TRACE' | jq -r '.[] | select(.keyName=="key1") | 
.keyName'
+${result} = Execute on  datanodeozone sh key list 
o3://ozoneManager/fstest/bucket1 | grep -Ev 
'Removed|WARN|DEBUG|ERROR|INFO|TRACE' | jq -r '.[] | select(.keyName=="key1") | 
.keyName'
 Should Be Equal ${result}   key1
-Execute on  datanodeozone sh -deleteKey 
o3://ozoneManager/fstest/bucket1/key1 -v
+Execute on  datanodeozone sh key delete 
o3://ozoneManager/fstest/bucket1/key1


-

[02/50] [abbrv] hadoop git commit: HDDS-449. Add a NULL check to protect DeadNodeHandler#onMessage. Contributed by LiXin Ge.

2018-09-17 Thread tmarquardt
HDDS-449. Add a NULL check to protect DeadNodeHandler#onMessage. Contributed by 
LiXin Ge.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a65c3ea9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a65c3ea9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a65c3ea9

Branch: refs/heads/HADOOP-15407
Commit: a65c3ea91cad7e8b453976bab2165ea4a3c6daf9
Parents: 985f3bf
Author: Márton Elek 
Authored: Sat Sep 15 13:35:00 2018 +0200
Committer: Márton Elek 
Committed: Sat Sep 15 13:35:21 2018 +0200

--
 .../java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java   | 5 +
 1 file changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a65c3ea9/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
index d694a10..7fda67d 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
@@ -57,6 +57,11 @@ public class DeadNodeHandler implements 
EventHandler<DatanodeDetails> {
   EventPublisher publisher) {
 Set<ContainerID> containers =
 node2ContainerMap.getContainers(datanodeDetails.getUuid());
+if (containers == null) {
+  LOG.info("There's no containers in dead datanode {}, no replica will be"
+  + " removed from the in-memory state.", datanodeDetails.getUuid());
+  return;
+}
 LOG.info(
 "Datanode {}  is dead. Removing replications from the in-memory 
state.",
 datanodeDetails.getUuid());




