[jira] [Commented] (HADOOP-15226) Über-JIRA: S3Guard Phase III: Hadoop 3.2 features

2019-09-17 Thread Zhankun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16932116#comment-16932116
 ] 

Zhankun Tang commented on HADOOP-15226:
---

[~ste...@apache.org], any plan to backport this to branch 3.1?

> Über-JIRA: S3Guard Phase III: Hadoop 3.2 features
> -
>
> Key: HADOOP-15226
> URL: https://issues.apache.org/jira/browse/HADOOP-15226
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Steve Loughran
>Priority: Major
> Fix For: 3.2.0
>
>
> S3Guard features/improvements/fixes for Hadoop 3.2



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16583) Minor fixes to S3 testing instructions

2019-09-17 Thread Siddharth Seth (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HADOOP-16583:

Status: Patch Available  (was: Open)

> Minor fixes to S3 testing instructions
> --
>
> Key: HADOOP-16583
> URL: https://issues.apache.org/jira/browse/HADOOP-16583
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/s3
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
>Priority: Minor
>
> testing.md has some instructions which don't work any longer, and needs an 
> update.
> Specifically - how to enable s3guard and switch between dynamodb and localdb 
> as the store.






[GitHub] [hadoop] sidseth opened a new pull request #1467: HADOOP-16583. Minor fixes to S3 testing instructions

2019-09-17 Thread GitBox
sidseth opened a new pull request #1467: HADOOP-16583. Minor fixes to S3 
testing instructions
URL: https://github.com/apache/hadoop/pull/1467
 
 
   Documentation update only. No tests required.
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[GitHub] [hadoop] sidseth commented on issue #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-09-17 Thread GitBox
sidseth commented on issue #1332: HADOOP-16445. Allow separate custom signing 
algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332#issuecomment-532538523
 
 
   Updated. Have left the timeouts on the test, since these tests don't need 
the default 10 minute timeout.





[GitHub] [hadoop] sidseth commented on issue #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-09-17 Thread GitBox
sidseth commented on issue #1332: HADOOP-16445. Allow separate custom signing 
algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332#issuecomment-532527888
 
 
   Removed the new method in S3AUtil, and introduced a SignerManager (more 
changes coming to this soon). Also incorporated one of the changes from the 
patch on HADOOP-16505 which uses the correct way to initialize signers.
   
   On the AwsConfigurationFactory - that makes sense. However, I don't think 
that should be in this patch. It's unrelated, and will end up moving more code 
from S3AUtils (including some public static methods). That's better handled in 
a separate refactoring-only patch.





[GitHub] [hadoop] sidseth commented on a change in pull request #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-09-17 Thread GitBox
sidseth commented on a change in pull request #1332: HADOOP-16445. Allow 
separate custom signing algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332#discussion_r325478839
 
 

 ##
 File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java
 ##
 @@ -617,4 +624,135 @@ public void testSecurityCredentialPropagationEndToEnd() throws Exception {
         "override,base");
   }
 
+  @Test(timeout = 10_000L)
+  public void testS3SpecificSignerOverride() throws IOException {
+    ClientConfiguration clientConfiguration = null;
+    Configuration config;
+
+    String signerOverride = "testSigner";
+    String s3SignerOverride = "testS3Signer";
+
+    // Default SIGNING_ALGORITHM, overridden for S3 only
+    config = new Configuration();
+    config.set(SIGNING_ALGORITHM_S3, s3SignerOverride);
+    clientConfiguration = S3AUtils.createAwsConfForS3(config, "dontcare");
+    Assert.assertEquals(s3SignerOverride,
+        clientConfiguration.getSignerOverride());
+    clientConfiguration = S3AUtils.createAwsConfForDdb(config, "dontcare");
+    Assert.assertNull(clientConfiguration.getSignerOverride());
+
+    // Configured base SIGNING_ALGORITHM, overridden for S3 only
+    config = new Configuration();
+    config.set(SIGNING_ALGORITHM, signerOverride);
+    config.set(SIGNING_ALGORITHM_S3, s3SignerOverride);
+    clientConfiguration = S3AUtils.createAwsConfForS3(config, "dontcare");
+    Assert.assertEquals(s3SignerOverride,
+        clientConfiguration.getSignerOverride());
+    clientConfiguration = S3AUtils.createAwsConfForDdb(config, "dontcare");
+    Assert
+        .assertEquals(signerOverride, clientConfiguration.getSignerOverride());
+  }
+
+  @Test(timeout = 10_000L)
+  public void testDdbSpecificSignerOverride() throws IOException {
+    ClientConfiguration clientConfiguration = null;
+    Configuration config;
+
+    String signerOverride = "testSigner";
+    String ddbSignerOverride = "testDdbSigner";
+
+    // Default SIGNING_ALGORITHM, overridden for S3
+    config = new Configuration();
+    config.set(SIGNING_ALGORITHM_DDB, ddbSignerOverride);
+    clientConfiguration = S3AUtils.createAwsConfForDdb(config, "dontcare");
+    Assert.assertEquals(ddbSignerOverride,
+        clientConfiguration.getSignerOverride());
+    clientConfiguration = S3AUtils.createAwsConfForS3(config, "dontcare");
+    Assert.assertNull(clientConfiguration.getSignerOverride());
+
+    // Configured base SIGNING_ALGORITHM, overridden for S3
+    config = new Configuration();
+    config.set(SIGNING_ALGORITHM, signerOverride);
+    config.set(SIGNING_ALGORITHM_DDB, ddbSignerOverride);
+    clientConfiguration = S3AUtils.createAwsConfForDdb(config, "dontcare");
+    Assert.assertEquals(ddbSignerOverride,
+        clientConfiguration.getSignerOverride());
+    clientConfiguration = S3AUtils.createAwsConfForS3(config, "dontcare");
+    Assert
+        .assertEquals(signerOverride, clientConfiguration.getSignerOverride());
+  }
+
+  // Expecting generic Exception.class to handle future implementation changes.
+  // For now, this is an NPE
+  @Test(timeout = 10_000L, expected = Exception.class)
+  public void testCustomSignerFailureIfNotRegistered() {
+    Signer s1 = SignerFactory.createSigner("testsigner1", null);
+  }
+
+  @Test(timeout = 10_000L)
+  public void testCustomSignerInitialization() {
+    Configuration config = new Configuration();
+    SignerForTest1.reset();
+    SignerForTest2.reset();
+    config.set(CUSTOM_SIGNERS, "testsigner1:" + SignerForTest1.class.getName());
+    initCustomSigners(config);
+    Signer s1 = SignerFactory.createSigner("testsigner1", null);
+    s1.sign(null, null);
+    Assert.assertEquals(true, SignerForTest1.initialized);
 
 Review comment:
   Fixed, here and elsewhere.





[GitHub] [hadoop] sidseth commented on a change in pull request #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-09-17 Thread GitBox
sidseth commented on a change in pull request #1332: HADOOP-16445. Allow 
separate custom signing algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332#discussion_r325478816
 
 

 ##
 File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java
 ##
 @@ -617,4 +624,135 @@ public void testSecurityCredentialPropagationEndToEnd() throws Exception {
         "override,base");
   }
 
+  @Test(timeout = 10_000L)
+  public void testS3SpecificSignerOverride() throws IOException {
+    ClientConfiguration clientConfiguration = null;
+    Configuration config;
+
+    String signerOverride = "testSigner";
+    String s3SignerOverride = "testS3Signer";
+
+    // Default SIGNING_ALGORITHM, overridden for S3 only
+    config = new Configuration();
+    config.set(SIGNING_ALGORITHM_S3, s3SignerOverride);
+    clientConfiguration = S3AUtils.createAwsConfForS3(config, "dontcare");
+    Assert.assertEquals(s3SignerOverride,
+        clientConfiguration.getSignerOverride());
+    clientConfiguration = S3AUtils.createAwsConfForDdb(config, "dontcare");
+    Assert.assertNull(clientConfiguration.getSignerOverride());
+
+    // Configured base SIGNING_ALGORITHM, overridden for S3 only
+    config = new Configuration();
+    config.set(SIGNING_ALGORITHM, signerOverride);
+    config.set(SIGNING_ALGORITHM_S3, s3SignerOverride);
+    clientConfiguration = S3AUtils.createAwsConfForS3(config, "dontcare");
+    Assert.assertEquals(s3SignerOverride,
+        clientConfiguration.getSignerOverride());
+    clientConfiguration = S3AUtils.createAwsConfForDdb(config, "dontcare");
+    Assert
+        .assertEquals(signerOverride, clientConfiguration.getSignerOverride());
+  }
+
+  @Test(timeout = 10_000L)
+  public void testDdbSpecificSignerOverride() throws IOException {
+    ClientConfiguration clientConfiguration = null;
+    Configuration config;
+
+    String signerOverride = "testSigner";
+    String ddbSignerOverride = "testDdbSigner";
+
+    // Default SIGNING_ALGORITHM, overridden for S3
+    config = new Configuration();
+    config.set(SIGNING_ALGORITHM_DDB, ddbSignerOverride);
+    clientConfiguration = S3AUtils.createAwsConfForDdb(config, "dontcare");
+    Assert.assertEquals(ddbSignerOverride,
+        clientConfiguration.getSignerOverride());
+    clientConfiguration = S3AUtils.createAwsConfForS3(config, "dontcare");
+    Assert.assertNull(clientConfiguration.getSignerOverride());
+
+    // Configured base SIGNING_ALGORITHM, overridden for S3
+    config = new Configuration();
+    config.set(SIGNING_ALGORITHM, signerOverride);
+    config.set(SIGNING_ALGORITHM_DDB, ddbSignerOverride);
+    clientConfiguration = S3AUtils.createAwsConfForDdb(config, "dontcare");
+    Assert.assertEquals(ddbSignerOverride,
+        clientConfiguration.getSignerOverride());
+    clientConfiguration = S3AUtils.createAwsConfForS3(config, "dontcare");
+    Assert
+        .assertEquals(signerOverride, clientConfiguration.getSignerOverride());
+  }
+
+  // Expecting generic Exception.class to handle future implementation changes.
+  // For now, this is an NPE
+  @Test(timeout = 10_000L, expected = Exception.class)
+  public void testCustomSignerFailureIfNotRegistered() {
+    Signer s1 = SignerFactory.createSigner("testsigner1", null);
 
 Review comment:
   Changed to use LambdaUtils. Not asserting any specific string, though, to
keep this generic.
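For context, the intercept idiom being referred to (Hadoop's `org.apache.hadoop.test.LambdaTestUtils.intercept()`) can be sketched with a self-contained stand-in; the helper below only mimics the assumed shape of that API and is not the real Hadoop class:

```java
import java.util.concurrent.Callable;

// Minimal stand-in for the intercept() idiom: run a callable and fail
// unless it throws the expected exception type. Only the type is
// checked, not a specific message string, keeping the test generic.
public class InterceptDemo {
    public static <E extends Throwable> E intercept(
            Class<E> clazz, Callable<?> eval) throws Exception {
        try {
            eval.call();
        } catch (Throwable t) {
            if (clazz.isInstance(t)) {
                return clazz.cast(t);   // expected failure: hand it back
            }
            throw new AssertionError("wrong exception type: " + t, t);
        }
        throw new AssertionError("expected " + clazz.getName()
                + " but nothing was thrown");
    }

    public static void main(String[] args) throws Exception {
        // Mirrors testCustomSignerFailureIfNotRegistered: creating an
        // unregistered signer is expected to fail with some exception.
        Exception e = intercept(Exception.class, () -> {
            throw new NullPointerException("signer not registered");
        });
        System.out.println("caught: " + e.getClass().getSimpleName());
    }
}
```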





[GitHub] [hadoop] sidseth commented on a change in pull request #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-09-17 Thread GitBox
sidseth commented on a change in pull request #1332: HADOOP-16445. Allow 
separate custom signing algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332#discussion_r325478104
 
 

 ##
 File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java
 ##
 @@ -617,4 +624,135 @@ public void testSecurityCredentialPropagationEndToEnd() throws Exception {
         "override,base");
   }
 
+  @Test(timeout = 10_000L)
+  public void testS3SpecificSignerOverride() throws IOException {
+    ClientConfiguration clientConfiguration = null;
+    Configuration config;
+
+    String signerOverride = "testSigner";
+    String s3SignerOverride = "testS3Signer";
+
+    // Default SIGNING_ALGORITHM, overridden for S3 only
+    config = new Configuration();
+    config.set(SIGNING_ALGORITHM_S3, s3SignerOverride);
+    clientConfiguration = S3AUtils.createAwsConfForS3(config, "dontcare");
+    Assert.assertEquals(s3SignerOverride,
+        clientConfiguration.getSignerOverride());
+    clientConfiguration = S3AUtils.createAwsConfForDdb(config, "dontcare");
+    Assert.assertNull(clientConfiguration.getSignerOverride());
 
 Review comment:
   Done as part of AssertJ





[GitHub] [hadoop] sidseth commented on a change in pull request #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-09-17 Thread GitBox
sidseth commented on a change in pull request #1332: HADOOP-16445. Allow 
separate custom signing algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332#discussion_r325478016
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java
 ##
 @@ -19,17 +19,24 @@
 package org.apache.hadoop.fs.s3a;
 
 import com.amazonaws.ClientConfiguration;
+import com.amazonaws.SignableRequest;
+import com.amazonaws.auth.AWSCredentials;
+import com.amazonaws.auth.Signer;
+import com.amazonaws.auth.SignerFactory;
 import com.amazonaws.services.s3.AmazonS3;
 import com.amazonaws.services.s3.S3ClientOptions;
 
+import java.io.IOException;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.commons.lang3.reflect.FieldUtils;
+import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.contract.ContractTestUtils;
 import org.apache.hadoop.fs.s3native.S3xLoginHelper;
 import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Assert;
 
 Review comment:
   Switched over to AssertJ





[GitHub] [hadoop] sidseth commented on a change in pull request #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-09-17 Thread GitBox
sidseth commented on a change in pull request #1332: HADOOP-16445. Allow 
separate custom signing algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332#discussion_r325478072
 
 

 ##
 File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java
 ##
 @@ -617,4 +624,135 @@ public void testSecurityCredentialPropagationEndToEnd() throws Exception {
         "override,base");
   }
 
+  @Test(timeout = 10_000L)
 
 Review comment:
   Moved to a separate unit test which doesn't need to extend any other class - 
so retaining the timeout.





[GitHub] [hadoop] sidseth commented on a change in pull request #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-09-17 Thread GitBox
sidseth commented on a change in pull request #1332: HADOOP-16445. Allow 
separate custom signing algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332#discussion_r325476992
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java
 ##
 @@ -19,17 +19,24 @@
 package org.apache.hadoop.fs.s3a;
 
 import com.amazonaws.ClientConfiguration;
+import com.amazonaws.SignableRequest;
+import com.amazonaws.auth.AWSCredentials;
+import com.amazonaws.auth.Signer;
+import com.amazonaws.auth.SignerFactory;
 import com.amazonaws.services.s3.AmazonS3;
 import com.amazonaws.services.s3.S3ClientOptions;
 
+import java.io.IOException;
 
 Review comment:
   Removed this altogether. Re-factored the tests out into a separate class.





[GitHub] [hadoop] sidseth commented on a change in pull request #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-09-17 Thread GitBox
sidseth commented on a change in pull request #1332: HADOOP-16445. Allow 
separate custom signing algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332#discussion_r325476850
 
 

 ##
 File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -300,6 +300,8 @@ public void initialize(URI name, Configuration originalConf)
     LOG.debug("Initializing S3AFileSystem for {}", bucket);
     // clone the configuration into one with propagated bucket options
     Configuration conf = propagateBucketOptions(originalConf, bucket);
+    // Initialize any custom signers
+    initCustomSigners(conf);
 
 Review comment:
   Fixed. Have moved this to just before bindAWSClient.





[GitHub] [hadoop] xiaoyuyao commented on issue #1369: HDDS-2020. Remove mTLS from Ozone GRPC. Contributed by Xiaoyu Yao.

2019-09-17 Thread GitBox
xiaoyuyao commented on issue #1369: HDDS-2020. Remove mTLS from Ozone GRPC. 
Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/1369#issuecomment-532502002
 
 
   /retest
   





[jira] [Created] (HADOOP-16584) S3A Test failures when S3Guard is not enabled

2019-09-17 Thread Siddharth Seth (Jira)
Siddharth Seth created HADOOP-16584:
---

 Summary: S3A Test failures when S3Guard is not enabled
 Key: HADOOP-16584
 URL: https://issues.apache.org/jira/browse/HADOOP-16584
 Project: Hadoop Common
  Issue Type: Task
  Components: fs/s3
 Environment: S
Reporter: Siddharth Seth


There are several S3 test failures when S3Guard is not enabled.
All of these tests pass once the tests are configured to use S3Guard.

{code}
ITestS3GuardTtl#testListingFilteredExpiredItems
[INFO] Running org.apache.hadoop.fs.s3a.ITestS3GuardTtl
[ERROR] Tests run: 10, Failures: 2, Errors: 0, Skipped: 4, Time elapsed: 
102.988 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3GuardTtl
[ERROR] 
testListingFilteredExpiredItems[0](org.apache.hadoop.fs.s3a.ITestS3GuardTtl)  
Time elapsed: 14.675 s  <<< FAILURE!
java.lang.AssertionError:
[Metastrore directory listing of 
s3a://sseth-dev-in/fork-0002/test/testListingFilteredExpiredItems]
Expecting actual not to be null
  at 
org.apache.hadoop.fs.s3a.ITestS3GuardTtl.getDirListingMetadata(ITestS3GuardTtl.java:367)
  at 
org.apache.hadoop.fs.s3a.ITestS3GuardTtl.testListingFilteredExpiredItems(ITestS3GuardTtl.java:335)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
  at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
  at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
  at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
  at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
  at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
  at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
  at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
  at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.lang.Thread.run(Thread.java:748)

[ERROR] 
testListingFilteredExpiredItems[1](org.apache.hadoop.fs.s3a.ITestS3GuardTtl)  
Time elapsed: 44.463 s  <<< FAILURE!
java.lang.AssertionError:
[Metastrore directory listing of 
s3a://sseth-dev-in/fork-0002/test/testListingFilteredExpiredItems]
Expecting actual not to be null
  at 
org.apache.hadoop.fs.s3a.ITestS3GuardTtl.getDirListingMetadata(ITestS3GuardTtl.java:367)
  at 
org.apache.hadoop.fs.s3a.ITestS3GuardTtl.testListingFilteredExpiredItems(ITestS3GuardTtl.java:335)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
  at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
  at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
  at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
  at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
  at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
  at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
  at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
  at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.lang.Thread.run(Thread.java:748)
{code}

Related to no metastore being used. The test failure happens in teardown with
an NPE, since the setup did not complete. This one is likely a simple fix with
some null checks in the teardown method.
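A hypothetical sketch of that null-check fix (class, field, and method names here are invented for illustration; the real code lives in ITestS3GuardTtl's teardown):

```java
// Hypothetical sketch: guard teardown against fields left null when
// setup() exits early (e.g. via a failed assumption), so teardown does
// not throw a secondary NPE that masks the real failure.
public class TeardownGuardDemo {
    public static class FakeStore {
        public boolean closed = false;
        public void close() { closed = true; }
    }

    public FakeStore metastore;   // stays null if setup never completed

    public void teardown() {
        if (metastore != null) {  // the null check suggested above
            metastore.close();
        }
    }

    public static void main(String[] args) {
        TeardownGuardDemo t = new TeardownGuardDemo();
        t.teardown();             // setup never ran; teardown is a no-op
        System.out.println("teardown ok with null metastore");
    }
}
```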
 ITestAuthoritativePath (6 failures all with the same pattern)
{code}
  [ERROR] Tests run: 6, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 8.142 
s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestAuthoritativePath
[ERROR] testPrefixVsDirectory(org.apache.hadoop.fs.s3a.ITestAuthoritativePath)  
Time elapsed: 6.821 s  <<< ERROR!
org.junit.AssumptionViolatedException: FS needs to have a metadatastore.
  at org.junit.Assume.assumeTrue(Assume.java:59)
  at 
org.apache.hadoop.fs.s3a.ITestAuthoritativePath.setup(ITestAuthoritativePath.java:63)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at 
sun.reflect.Nativ

[jira] [Created] (HADOOP-16583) Minor fixes to S3 testing instructions

2019-09-17 Thread Siddharth Seth (Jira)
Siddharth Seth created HADOOP-16583:
---

 Summary: Minor fixes to S3 testing instructions
 Key: HADOOP-16583
 URL: https://issues.apache.org/jira/browse/HADOOP-16583
 Project: Hadoop Common
  Issue Type: Task
  Components: fs/s3
Reporter: Siddharth Seth
Assignee: Siddharth Seth


testing.md has some instructions which don't work any longer, and needs an 
update.

Specifically - how to enable s3guard and switch between dynamodb and localdb as 
the store.
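For reference, store selection is driven by the `fs.s3a.metadatastore.impl` property. A minimal sketch, assuming the S3Guard class names of this era (verify against the current testing.md before relying on them):

```xml
<!-- Enable S3Guard with DynamoDB as the metadata store -->
<property>
  <name>fs.s3a.metadatastore.impl</name>
  <value>org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore</value>
</property>

<!-- Alternative: in-memory local store, intended for testing only -->
<!--
<property>
  <name>fs.s3a.metadatastore.impl</name>
  <value>org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore</value>
</property>
-->
```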






[GitHub] [hadoop] hadoop-yetus commented on issue #1399: HADOOP-16543: Cached DNS name resolution error

2019-09-17 Thread GitBox
hadoop-yetus commented on issue #1399: HADOOP-16543: Cached DNS name resolution 
error
URL: https://github.com/apache/hadoop/pull/1399#issuecomment-532475889
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 41 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1021 | trunk passed |
   | +1 | compile | 492 | trunk passed |
   | +1 | checkstyle | 84 | trunk passed |
   | +1 | mvnsite | 153 | trunk passed |
   | +1 | shadedclient | 945 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 141 | trunk passed |
   | 0 | spotbugs | 62 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 276 | trunk passed |
   | -0 | patch | 102 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | -1 | mvninstall | 18 | hadoop-yarn-common in the patch failed. |
   | -1 | mvninstall | 24 | hadoop-yarn-client in the patch failed. |
   | -1 | compile | 34 | hadoop-yarn in the patch failed. |
   | -1 | javac | 34 | hadoop-yarn in the patch failed. |
   | -0 | checkstyle | 24 | The patch fails to run checkstyle in hadoop-yarn |
   | -1 | mvnsite | 19 | hadoop-yarn-common in the patch failed. |
   | -1 | mvnsite | 25 | hadoop-yarn-client in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | -1 | shadedclient | 235 | patch has errors when building and testing our 
client artifacts. |
   | -1 | javadoc | 20 | hadoop-yarn-common in the patch failed. |
   | -1 | findbugs | 19 | hadoop-yarn-common in the patch failed. |
   | -1 | findbugs | 24 | hadoop-yarn-client in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 46 | hadoop-yarn-api in the patch passed. |
   | -1 | unit | 19 | hadoop-yarn-common in the patch failed. |
   | -1 | unit | 25 | hadoop-yarn-client in the patch failed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 3944 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1399 |
   | JIRA Issue | HADOOP-16543 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 1e8ab181f480 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3cf6e42 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/2/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/2/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/2/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/2/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/2/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1399/out/maven-patch-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/2/artifact/out/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/2/artifact/out/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/2/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/2/artifact/out/patch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/2/artifact/out/patch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
   | uni

[jira] [Commented] (HADOOP-16543) Cached DNS name resolution error

2019-09-17 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931945#comment-16931945
 ] 

Hadoop QA commented on HADOOP-16543:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m  
2s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
36s{color} | {color:green} trunk passed {color} |
| {color:orange}-0{color} | {color:orange} patch {color} | {color:orange}  1m 
42s{color} | {color:orange} Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 34s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} The patch fails to run checkstyle in hadoop-yarn 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
55s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-yarn-client in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
46s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{col

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1455: HDDS-2137 : OzoneUtils to verify resourceName using HddsClientUtils

2019-09-17 Thread GitBox
bharatviswa504 commented on a change in pull request #1455: HDDS-2137 : 
OzoneUtils to verify resourceName using HddsClientUtils
URL: https://github.com/apache/hadoop/pull/1455#discussion_r325443331
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/HddsClientUtils.java
 ##
 @@ -126,75 +126,63 @@ public static long formatDateTime(String date) throws 
ParseException {
 .toInstant().toEpochMilli();
   }
 
-
-
   /**
* verifies that bucket name / volume name is a valid DNS name.
*
* @param resName Bucket or volume Name to be validated
*
* @throws IllegalArgumentException
*/
-  public static void verifyResourceName(String resName)
-  throws IllegalArgumentException {
-
+  public static void verifyResourceName(String resName) throws 
IllegalArgumentException {
 if (resName == null) {
   throw new IllegalArgumentException("Bucket or Volume name is null");
 }
 
-if ((resName.length() < OzoneConsts.OZONE_MIN_BUCKET_NAME_LENGTH) ||
-(resName.length() > OzoneConsts.OZONE_MAX_BUCKET_NAME_LENGTH)) {
+if (resName.length() < OzoneConsts.OZONE_MIN_BUCKET_NAME_LENGTH ||
+resName.length() > OzoneConsts.OZONE_MAX_BUCKET_NAME_LENGTH) {
   throw new IllegalArgumentException(
-  "Bucket or Volume length is illegal, " +
-  "valid length is 3-63 characters");
+  "Bucket or Volume length is illegal, valid length is 3-63 
characters");
 }
 
-if ((resName.charAt(0) == '.') || (resName.charAt(0) == '-')) {
+if (resName.charAt(0) == '.' || resName.charAt(0) == '-') {
   throw new IllegalArgumentException(
   "Bucket or Volume name cannot start with a period or dash");
 }
 
 if ((resName.charAt(resName.length() - 1) == '.') ||
 
 Review comment:
   `if (resName.charAt(resName.length() - 1) == '-' || 
resName.charAt(resName.length() - 1) == '-') {`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1455: HDDS-2137 : OzoneUtils to verify resourceName using HddsClientUtils

2019-09-17 Thread GitBox
bharatviswa504 commented on a change in pull request #1455: HDDS-2137 : 
OzoneUtils to verify resourceName using HddsClientUtils
URL: https://github.com/apache/hadoop/pull/1455#discussion_r325443331
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/HddsClientUtils.java
 ##
 @@ -126,75 +126,63 @@ public static long formatDateTime(String date) throws 
ParseException {
 .toInstant().toEpochMilli();
   }
 
-
-
   /**
* verifies that bucket name / volume name is a valid DNS name.
*
* @param resName Bucket or volume Name to be validated
*
* @throws IllegalArgumentException
*/
-  public static void verifyResourceName(String resName)
-  throws IllegalArgumentException {
-
+  public static void verifyResourceName(String resName) throws 
IllegalArgumentException {
 if (resName == null) {
   throw new IllegalArgumentException("Bucket or Volume name is null");
 }
 
-if ((resName.length() < OzoneConsts.OZONE_MIN_BUCKET_NAME_LENGTH) ||
-(resName.length() > OzoneConsts.OZONE_MAX_BUCKET_NAME_LENGTH)) {
+if (resName.length() < OzoneConsts.OZONE_MIN_BUCKET_NAME_LENGTH ||
+resName.length() > OzoneConsts.OZONE_MAX_BUCKET_NAME_LENGTH) {
   throw new IllegalArgumentException(
-  "Bucket or Volume length is illegal, " +
-  "valid length is 3-63 characters");
+  "Bucket or Volume length is illegal, valid length is 3-63 
characters");
 }
 
-if ((resName.charAt(0) == '.') || (resName.charAt(0) == '-')) {
+if (resName.charAt(0) == '.' || resName.charAt(0) == '-') {
   throw new IllegalArgumentException(
   "Bucket or Volume name cannot start with a period or dash");
 }
 
 if ((resName.charAt(resName.length() - 1) == '.') ||
 
 Review comment:
   `if (resName.charAt(resName.length() - 1) == '-' || 
resName.charAt(resName.length() - 1) == '-') {`





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1455: HDDS-2137 : OzoneUtils to verify resourceName using HddsClientUtils

2019-09-17 Thread GitBox
bharatviswa504 commented on a change in pull request #1455: HDDS-2137 : 
OzoneUtils to verify resourceName using HddsClientUtils
URL: https://github.com/apache/hadoop/pull/1455#discussion_r325443145
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/HddsClientUtils.java
 ##
 @@ -126,75 +126,63 @@ public static long formatDateTime(String date) throws 
ParseException {
 .toInstant().toEpochMilli();
   }
 
-
-
   /**
* verifies that bucket name / volume name is a valid DNS name.
*
* @param resName Bucket or volume Name to be validated
*
* @throws IllegalArgumentException
*/
-  public static void verifyResourceName(String resName)
-  throws IllegalArgumentException {
-
+  public static void verifyResourceName(String resName) throws 
IllegalArgumentException {
 if (resName == null) {
   throw new IllegalArgumentException("Bucket or Volume name is null");
 }
 
-if ((resName.length() < OzoneConsts.OZONE_MIN_BUCKET_NAME_LENGTH) ||
-(resName.length() > OzoneConsts.OZONE_MAX_BUCKET_NAME_LENGTH)) {
+if (resName.length() < OzoneConsts.OZONE_MIN_BUCKET_NAME_LENGTH ||
+resName.length() > OzoneConsts.OZONE_MAX_BUCKET_NAME_LENGTH) {
   throw new IllegalArgumentException(
-  "Bucket or Volume length is illegal, " +
-  "valid length is 3-63 characters");
+  "Bucket or Volume length is illegal, valid length is 3-63 
characters");
 }
 
-if ((resName.charAt(0) == '.') || (resName.charAt(0) == '-')) {
+if (resName.charAt(0) == '.' || resName.charAt(0) == '-') {
   throw new IllegalArgumentException(
   "Bucket or Volume name cannot start with a period or dash");
 }
 
 if ((resName.charAt(resName.length() - 1) == '.') ||
 
 Review comment:
   I see the braces are removed everywhere, to be consistent can we remove from 
here too?
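
   A minimal, self-contained sketch of what the validation discussed in this thread could look like once the extra parentheses are dropped everywhere, including the trailing-character check. This is not the actual `HddsClientUtils` code: the class name `ResourceNameCheck` and the `isValid` helper are hypothetical, and the 3-63 length limits are inlined in place of the `OzoneConsts` constants quoted in the diff.

```java
// Hypothetical standalone sketch of the brace-free validation style
// suggested in the review above; not the real HddsClientUtils class.
public class ResourceNameCheck {
    static final int MIN_LEN = 3;   // stands in for OzoneConsts.OZONE_MIN_BUCKET_NAME_LENGTH
    static final int MAX_LEN = 63;  // stands in for OzoneConsts.OZONE_MAX_BUCKET_NAME_LENGTH

    public static void verifyResourceName(String resName) {
        if (resName == null) {
            throw new IllegalArgumentException("Bucket or Volume name is null");
        }
        if (resName.length() < MIN_LEN || resName.length() > MAX_LEN) {
            throw new IllegalArgumentException(
                "Bucket or Volume length is illegal, valid length is 3-63 characters");
        }
        if (resName.charAt(0) == '.' || resName.charAt(0) == '-') {
            throw new IllegalArgumentException(
                "Bucket or Volume name cannot start with a period or dash");
        }
        // Trailing-character check written without the redundant parentheses,
        // matching the style used for the leading-character check above.
        char last = resName.charAt(resName.length() - 1);
        if (last == '.' || last == '-') {
            throw new IllegalArgumentException(
                "Bucket or Volume name cannot end with a period or dash");
        }
    }

    // Convenience wrapper used only for the demo below.
    static boolean isValid(String name) {
        try {
            verifyResourceName(name);
            return true;
        } catch (IllegalArgumentException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValid("my-bucket"));  // true: valid name
        System.out.println(isValid("bad-"));       // false: ends with a dash
        System.out.println(isValid("ab"));         // false: shorter than 3 characters
    }
}
```

   Note this sketch only covers the checks visible in the quoted hunk; the real method performs further character-class validation not shown here.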





[GitHub] [hadoop] hadoop-yetus commented on issue #1466: HDDS-2144. MR job failing on secure Ozone cluster.

2019-09-17 Thread GitBox
hadoop-yetus commented on issue #1466: HDDS-2144. MR job failing on secure 
Ozone cluster.
URL: https://github.com/apache/hadoop/pull/1466#issuecomment-532469685
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 77 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 29 | hadoop-ozone in trunk failed. |
   | -1 | compile | 20 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 60 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 910 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 189 | trunk passed |
   | 0 | spotbugs | 218 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 26 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 35 | hadoop-ozone in the patch failed. |
   | -1 | compile | 24 | hadoop-ozone in the patch failed. |
   | -1 | javac | 24 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 60 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 784 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 158 | the patch passed |
   | -1 | findbugs | 23 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 294 | hadoop-hdds in the patch passed. |
   | -1 | unit | 25 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 3570 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1466/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1466 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux cbbfbcbc7c91 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3cf6e42 |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1466/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1466/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1466/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1466/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1466/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1466/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1466/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1466/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1466/1/testReport/ |
   | Max. process+thread count | 429 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common U: hadoop-ozone/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1466/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bharatviswa504 merged pull request #1456: HDDS-2139. Update BeanUtils and Jackson Databind dependency versions.

2019-09-17 Thread GitBox
bharatviswa504 merged pull request #1456: HDDS-2139. Update BeanUtils and 
Jackson Databind dependency versions.
URL: https://github.com/apache/hadoop/pull/1456
 
 
   





[GitHub] [hadoop] bharatviswa504 commented on issue #1456: HDDS-2139. Update BeanUtils and Jackson Databind dependency versions.

2019-09-17 Thread GitBox
bharatviswa504 commented on issue #1456: HDDS-2139. Update BeanUtils and 
Jackson Databind dependency versions.
URL: https://github.com/apache/hadoop/pull/1456#issuecomment-532468971
 
 
   +1 LGTM.
   I have committed this to trunk.





[GitHub] [hadoop] bharatviswa504 commented on issue #1460: HDDS-2136. OM block allocation metric not paired with its failures

2019-09-17 Thread GitBox
bharatviswa504 commented on issue #1460: HDDS-2136. OM block allocation metric 
not paired with its failures
URL: https://github.com/apache/hadoop/pull/1460#issuecomment-532466882
 
 
   Thank You @adoroszlai for the contribution.
   





[GitHub] [hadoop] bharatviswa504 merged pull request #1460: HDDS-2136. OM block allocation metric not paired with its failures

2019-09-17 Thread GitBox
bharatviswa504 merged pull request #1460: HDDS-2136. OM block allocation metric 
not paired with its failures
URL: https://github.com/apache/hadoop/pull/1460
 
 
   





[GitHub] [hadoop] bharatviswa504 commented on issue #1461: HDDS-2142. OM metrics mismatch (abort multipart request)

2019-09-17 Thread GitBox
bharatviswa504 commented on issue #1461: HDDS-2142. OM metrics mismatch (abort 
multipart request)
URL: https://github.com/apache/hadoop/pull/1461#issuecomment-532465789
 
 
   Thank You @adoroszlai for the contribution and @arp7 for the review.





[GitHub] [hadoop] bharatviswa504 merged pull request #1461: HDDS-2142. OM metrics mismatch (abort multipart request)

2019-09-17 Thread GitBox
bharatviswa504 merged pull request #1461: HDDS-2142. OM metrics mismatch (abort 
multipart request)
URL: https://github.com/apache/hadoop/pull/1461
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1465: HDDS-2143. Rename classes under package org.apache.hadoop.utils.

2019-09-17 Thread GitBox
hadoop-yetus commented on issue #1465: HDDS-2143. Rename classes under package 
org.apache.hadoop.utils.
URL: https://github.com/apache/hadoop/pull/1465#issuecomment-532463617
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 9 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 61 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 72 | Maven dependency ordering for branch |
   | -1 | mvninstall | 33 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 154 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 993 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 150 | trunk passed |
   | 0 | spotbugs | 175 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 23 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | -1 | mvninstall | 30 | hadoop-ozone in the patch failed. |
   | -1 | compile | 21 | hadoop-ozone in the patch failed. |
   | -1 | javac | 50 | hadoop-hdds generated 6 new + 21 unchanged - 6 fixed = 
27 total (was 27) |
   | -1 | javac | 21 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 57 | hadoop-hdds: The patch generated 19 new + 1708 
unchanged - 13 fixed = 1727 total (was 1721) |
   | -0 | checkstyle | 78 | hadoop-ozone: The patch generated 75 new + 2851 
unchanged - 75 fixed = 2926 total (was 2926) |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 4 line(s) that end in whitespace. Use 
git apply --whitespace=fix <patch_file>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 716 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 74 | hadoop-hdds generated 1 new + 15 unchanged - 1 fixed = 
16 total (was 16) |
   | -1 | javadoc | 50 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 22 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 257 | hadoop-hdds in the patch passed. |
   | -1 | unit | 24 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3554 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1465 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ce9423a26897 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3cf6e42 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/diff-compile-javac-hadoop-hdds.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/whitespace-eol.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/diff-javadoc-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/patch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1465/

[jira] [Commented] (HADOOP-16582) LocalFileSystem's mkdirs() does not work as expected under viewfs.

2019-09-17 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931928#comment-16931928
 ] 

Hadoop QA commented on HADOOP-16582:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 148 unchanged - 1 fixed = 150 total (was 149) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
46s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
52s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:39e82acc485 |
| JIRA Issue | HADOOP-16582 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12980550/HADOOP-16582.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d196a4e40bac 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f580a87 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/16531/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
|  Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16531/testReport/ |
| Max. process+thread count | 1599 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16531/console |
| Powered b

[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error

2019-09-17 Thread GitBox
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS 
name resolution error
URL: https://github.com/apache/hadoop/pull/1399#discussion_r325431403
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/AutoRefreshRMFailoverProxyProvider.java
 ##
 @@ -0,0 +1,72 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.client;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Map.Entry;
+
+import java.util.Set;
+import java.util.HashSet;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.ipc.RPC;
+import org.apache.hadoop.yarn.conf.HAUtil;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+
+@InterfaceAudience.Private
+@InterfaceStability.Unstable
+/**
+ * A subclass of {@link RMFailoverProxyProvider} which tries to
+ * resolve the proxy DNS in the event of failover.
+ * This provider doesn't support Federation.
+ */
+public class AutoRefreshRMFailoverProxyProvider
+extends ConfiguredRMFailoverProxyProvider {
+  private static final Logger LOG =
+LoggerFactory.getLogger(AutoRefreshRMFailoverProxyProvider.class);
+
+  @Override
+  public synchronized void performFailover(T currentProxy) {
+RPC.stopProxy(currentProxy);
+
+//clears out all keys that map to currentProxy
+Set rmIds = new HashSet<>();
+for (Entry entry : proxies.entrySet()) {
+if (entry.getValue().equals(currentProxy)) {
+String rmId = entry.getKey()
 
 Review comment:
   Indentation doesn't look right.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error

2019-09-17 Thread GitBox
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS 
name resolution error
URL: https://github.com/apache/hadoop/pull/1399#discussion_r325431153
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/AutoRefreshRMFailoverProxyProvider.java
 ##
 @@ -0,0 +1,72 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.client;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Map.Entry;
+
+import java.util.Set;
+import java.util.HashSet;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.ipc.RPC;
+import org.apache.hadoop.yarn.conf.HAUtil;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+
+@InterfaceAudience.Private
+@InterfaceStability.Unstable
+/**
 
 Review comment:
   Breakline





[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error

2019-09-17 Thread GitBox
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS 
name resolution error
URL: https://github.com/apache/hadoop/pull/1399#discussion_r325432005
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
 ##
 @@ -715,6 +715,14 @@
 
org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider
   
 
+  
+When HA is not enabled, the class to be used by Clients, AMs 
and
 
 Review comment:
   Is there any other place where we can document this proxy? Maybe one of the 
HA md files.





[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error

2019-09-17 Thread GitBox
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS 
name resolution error
URL: https://github.com/apache/hadoop/pull/1399#discussion_r325431479
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RMProxy.java
 ##
 @@ -57,7 +57,7 @@
 public class RMProxy {
 
   private static final Logger LOG =
-  LoggerFactory.getLogger(RMProxy.class);
+LoggerFactory.getLogger(RMProxy.class);
 
 Review comment:
   Avoid this change.





[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error

2019-09-17 Thread GitBox
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS 
name resolution error
URL: https://github.com/apache/hadoop/pull/1399#discussion_r325430402
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestNoHaRMFailoverProxyProvider.java
 ##
 @@ -0,0 +1,269 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.yarn.client;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.retry.FailoverProxyProvider;
+import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
+import org.apache.hadoop.yarn.api.records.NodeReport;
+import org.apache.hadoop.yarn.client.api.YarnClient;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.server.MiniYARNCluster;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.lang.reflect.InvocationHandler;
+import java.lang.reflect.Proxy;
+import java.net.InetSocketAddress;
+import java.util.List;
+
+import static org.mockito.Mockito.any;
+import static org.mockito.Mockito.eq;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+public class TestNoHaRMFailoverProxyProvider {
+private final int NODE_MANAGER_COUNT = 1;
 
 Review comment:
   Indentation (actually Yetus is already complaining).





[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error

2019-09-17 Thread GitBox
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS 
name resolution error
URL: https://github.com/apache/hadoop/pull/1399#discussion_r325430775
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestNoHaRMFailoverProxyProvider.java
 ##
 @@ -0,0 +1,269 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.yarn.client;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.retry.FailoverProxyProvider;
+import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
+import org.apache.hadoop.yarn.api.records.NodeReport;
+import org.apache.hadoop.yarn.client.api.YarnClient;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.server.MiniYARNCluster;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.lang.reflect.InvocationHandler;
+import java.lang.reflect.Proxy;
+import java.net.InetSocketAddress;
+import java.util.List;
+
+import static org.mockito.Mockito.any;
+import static org.mockito.Mockito.eq;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+public class TestNoHaRMFailoverProxyProvider {
+private final int NODE_MANAGER_COUNT = 1;
+private Configuration conf;
+
+@Before
+public void setUp() throws IOException, YarnException {
+conf = new YarnConfiguration();
+}
+
+@Test
+public void testRestartedRM() throws Exception {
+try {
+MiniYARNCluster cluster =
+new MiniYARNCluster(
+"testRestartedRMNegative", NODE_MANAGER_COUNT, 1, 1);
+YarnClient rmClient = YarnClient.createYarnClient();
+cluster.init(conf);
+cluster.start();
+final Configuration yarnConf = cluster.getConfig();
+rmClient = YarnClient.createYarnClient();
+rmClient.init(yarnConf);
+rmClient.start();
+List nodeReports = rmClient.getNodeReports();
+Assert.assertEquals(
+"The proxy didn't get expected number of node reports",
+NODE_MANAGER_COUNT, nodeReports.size());
+} finally {
+if (rmClient != null) {
+rmClient.stop();
+}
+cluster.stop();
+}
+}
+
+/**
+ * Tests the proxy generated by {@link 
AutoRefreshNoHARMFailoverProxyProvider}
+ * will connect to RM.
+ */
+@Test
+public void testConnectingToRM() throws Exception {
+conf.setClass(YarnConfiguration.CLIENT_FAILOVER_NO_HA_PROXY_PROVIDER,
+AutoRefreshNoHARMFailoverProxyProvider.class, 
RMFailoverProxyProvider.class);
+
+try {
+MiniYARNCluster cluster =
+new MiniYARNCluster(
+"testRestartedRMNegative", NODE_MANAGER_COUNT, 1, 1);
+YarnClient rmClient = null;
+cluster.init(conf);
+cluster.start();
+final Configuration yarnConf = cluster.getConfig();
+rmClient = YarnClient.createYarnClient();
+rmClient.init(yarnConf);
+rmClient.start();
+List nodeReports = rmClient.getNodeReports();
+Assert.assertEquals(
+"The proxy didn't get expected number of node reports",
+NODE_MANAGER_COUNT, nodeReports.size());
+} finally {
+if (rmClient != null) {
+rmClient.stop();
+}
+cluster.stop();
+}
+}
+
+@Test
+public void testDefaultFPPGetOneProxy() throws Exception {
+class TestProxy extends Proxy implements Closeable {
 
 Review comment:
   Give it a little more meaningful name than TestProxy.



[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error

2019-09-17 Thread GitBox
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS 
name resolution error
URL: https://github.com/apache/hadoop/pull/1399#discussion_r325430578
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestNoHaRMFailoverProxyProvider.java
 ##
 @@ -0,0 +1,269 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.yarn.client;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.retry.FailoverProxyProvider;
+import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
+import org.apache.hadoop.yarn.api.records.NodeReport;
+import org.apache.hadoop.yarn.client.api.YarnClient;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.server.MiniYARNCluster;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.lang.reflect.InvocationHandler;
+import java.lang.reflect.Proxy;
+import java.net.InetSocketAddress;
+import java.util.List;
+
+import static org.mockito.Mockito.any;
+import static org.mockito.Mockito.eq;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+public class TestNoHaRMFailoverProxyProvider {
+private final int NODE_MANAGER_COUNT = 1;
+private Configuration conf;
+
+@Before
+public void setUp() throws IOException, YarnException {
+conf = new YarnConfiguration();
+}
+
+@Test
+public void testRestartedRM() throws Exception {
+try {
+MiniYARNCluster cluster =
+new MiniYARNCluster(
+"testRestartedRMNegative", NODE_MANAGER_COUNT, 1, 1);
+YarnClient rmClient = YarnClient.createYarnClient();
+cluster.init(conf);
+cluster.start();
+final Configuration yarnConf = cluster.getConfig();
+rmClient = YarnClient.createYarnClient();
+rmClient.init(yarnConf);
+rmClient.start();
+List nodeReports = rmClient.getNodeReports();
+Assert.assertEquals(
 
 Review comment:
   Import static.
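   For context, a static import lets the call site drop the class prefix, i.e. `assertEquals(...)` instead of `Assert.assertEquals(...)`. A minimal self-contained sketch of the mechanism (using `java.lang.Math` rather than JUnit so it needs no test dependencies):

```java
// With the static import below, call sites use max(...) directly,
// with no Math. prefix -- the same style the reviewer asks for with
// Assert.assertEquals in the test class.
import static java.lang.Math.max;

public class StaticImportSketch {
    static int pick(int a, int b) {
        // No class prefix needed thanks to the static import above.
        return max(a, b);
    }

    public static void main(String[] args) {
        System.out.println(pick(3, 7)); // prints 7
    }
}
```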





[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error

2019-09-17 Thread GitBox
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS 
name resolution error
URL: https://github.com/apache/hadoop/pull/1399#discussion_r325431302
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/AutoRefreshRMFailoverProxyProvider.java
 ##
 @@ -0,0 +1,72 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.client;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Map.Entry;
+
+import java.util.Set;
+import java.util.HashSet;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.ipc.RPC;
+import org.apache.hadoop.yarn.conf.HAUtil;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+
+@InterfaceAudience.Private
+@InterfaceStability.Unstable
+/**
+ * A subclass of {@link RMFailoverProxyProvider} which tries to
+ * resolve the proxy DNS in the event of failover.
+ * This provider doesn't support Federation.
 
 Review comment:
   Mention this one supports HA and point out where.





[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error

2019-09-17 Thread GitBox
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS 
name resolution error
URL: https://github.com/apache/hadoop/pull/1399#discussion_r325431355
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/AutoRefreshRMFailoverProxyProvider.java
 ##
 @@ -0,0 +1,72 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.client;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Map.Entry;
+
+import java.util.Set;
+import java.util.HashSet;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.ipc.RPC;
+import org.apache.hadoop.yarn.conf.HAUtil;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+
+@InterfaceAudience.Private
+@InterfaceStability.Unstable
+/**
+ * A subclass of {@link RMFailoverProxyProvider} which tries to
+ * resolve the proxy DNS in the event of failover.
+ * This provider doesn't support Federation.
+ */
+public class AutoRefreshRMFailoverProxyProvider
+extends ConfiguredRMFailoverProxyProvider {
+  private static final Logger LOG =
+LoggerFactory.getLogger(AutoRefreshRMFailoverProxyProvider.class);
+
+  @Override
+  public synchronized void performFailover(T currentProxy) {
+RPC.stopProxy(currentProxy);
+
+//clears out all keys that map to currentProxy
+Set rmIds = new HashSet<>();
+for (Entry entry : proxies.entrySet()) {
+if (entry.getValue().equals(currentProxy)) {
 
 Review comment:
   Extract getValue().
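   As a hedged illustration of this suggestion (the class and names below are hypothetical stand-ins, not from the patch), extracting `entry.getValue()` into a local variable inside the loop could look like:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ExtractGetValueSketch {
    // Collects the keys whose value equals the given proxy, mirroring the
    // performFailover() loop; "proxies" here stands in for the provider's map.
    public static Set<String> keysMappingTo(Map<String, Object> proxies,
                                            Object current) {
        Set<String> rmIds = new HashSet<>();
        for (Map.Entry<String, Object> entry : proxies.entrySet()) {
            // Extracted once, as suggested, instead of repeated getValue() calls.
            Object proxy = entry.getValue();
            if (proxy.equals(current)) {
                rmIds.add(entry.getKey());
            }
        }
        return rmIds;
    }

    public static void main(String[] args) {
        Map<String, Object> proxies = new HashMap<>();
        Object p = new Object();
        proxies.put("rm1", p);
        proxies.put("rm2", new Object());
        System.out.println(keysMappingTo(proxies, p)); // prints [rm1]
    }
}
```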





[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error

2019-09-17 Thread GitBox
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS 
name resolution error
URL: https://github.com/apache/hadoop/pull/1399#discussion_r325430339
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 ##
 @@ -924,6 +924,10 @@ public static boolean isAclEnabled(Configuration conf) {
   CLIENT_FAILOVER_PREFIX + "proxy-provider";
   public static final String DEFAULT_CLIENT_FAILOVER_PROXY_PROVIDER =
   "org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider";
+  public static final String CLIENT_FAILOVER_NO_HA_PROXY_PROVIDER =
+CLIENT_FAILOVER_PREFIX + "no-ha-proxy-provider";
 
 Review comment:
   Indentation.





[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error

2019-09-17 Thread GitBox
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS 
name resolution error
URL: https://github.com/apache/hadoop/pull/1399#discussion_r325431120
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/AutoRefreshNoHARMFailoverProxyProvider.java
 ##
 @@ -0,0 +1,78 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.client;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.ipc.RPC;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+
+/**
+ * A subclass of {@link RMFailoverProxyProvider} which tries to
+ * resolve the proxy DNS in the event of failover.
+ * This provider doesn't support HA or Federation.
+ */
+public class AutoRefreshNoHARMFailoverProxyProvider
+extends DefaultNoHARMFailoverProxyProvider {
+
+  private static final Logger LOG =
+LoggerFactory.getLogger(AutoRefreshNoHARMFailoverProxyProvider.class);
+  protected RMProxy rmProxy;
+  protected YarnConfiguration conf;
+
+  @Override
+  public void init(Configuration configuration, RMProxy rmProxy,
+  Class protocol) {
+this.rmProxy = rmProxy;
+this.protocol = protocol;
+this.conf = new YarnConfiguration(configuration);
+  }
+
+  @Override
+  public synchronized ProxyInfo getProxy() {
+if (proxy == null) {
+  proxy = getProxyInternal();
+}
+return new ProxyInfo(proxy, null);
+  }
+
+  protected T getProxyInternal() {
+try {
+  final InetSocketAddress rmAddress = rmProxy.getRMAddress(conf, protocol);
+  return rmProxy.getProxy(conf, protocol, rmAddress);
+} catch (IOException ioe) {
+  LOG.error("Unable to create proxy to the ResourceManager ", ioe);
 
 Review comment:
   You can remove the last space.
   In addition, probably enough to show ioe.getMessage()
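   A minimal sketch of what the suggested change could look like, using `java.util.logging` as a stand-in for SLF4J so the example is self-contained (the message text mirrors the patch; the class name is hypothetical):

```java
import java.io.IOException;
import java.util.logging.Logger;

public class LogMessageOnlySketch {
    private static final Logger LOG =
        Logger.getLogger(LogMessageOnlySketch.class.getName());

    // Per the review suggestion: include only ioe.getMessage() in the log
    // line instead of passing the whole exception (stack trace) along.
    static String render(IOException ioe) {
        return "Unable to create proxy to the ResourceManager: "
            + ioe.getMessage();
    }

    public static void main(String[] args) {
        IOException ioe = new IOException("connection refused");
        LOG.severe(render(ioe));
    }
}
```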





[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error

2019-09-17 Thread GitBox
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS 
name resolution error
URL: https://github.com/apache/hadoop/pull/1399#discussion_r325430846
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestNoHaRMFailoverProxyProvider.java
 ##
 @@ -0,0 +1,269 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.yarn.client;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.retry.FailoverProxyProvider;
+import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
+import org.apache.hadoop.yarn.api.records.NodeReport;
+import org.apache.hadoop.yarn.client.api.YarnClient;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.server.MiniYARNCluster;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.lang.reflect.InvocationHandler;
+import java.lang.reflect.Proxy;
+import java.net.InetSocketAddress;
+import java.util.List;
+
+import static org.mockito.Mockito.any;
+import static org.mockito.Mockito.eq;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+public class TestNoHaRMFailoverProxyProvider {
+private final int NODE_MANAGER_COUNT = 1;
+private Configuration conf;
+
+@Before
+public void setUp() throws IOException, YarnException {
+conf = new YarnConfiguration();
+}
+
+@Test
+public void testRestartedRM() throws Exception {
+MiniYARNCluster cluster =
+new MiniYARNCluster(
+"testRestartedRMNegative", NODE_MANAGER_COUNT, 1, 1);
+YarnClient rmClient = null;
+try {
+cluster.init(conf);
+cluster.start();
+final Configuration yarnConf = cluster.getConfig();
+rmClient = YarnClient.createYarnClient();
+rmClient.init(yarnConf);
+rmClient.start();
+List<NodeReport> nodeReports = rmClient.getNodeReports();
+Assert.assertEquals(
+"The proxy didn't get expected number of node reports",
+NODE_MANAGER_COUNT, nodeReports.size());
+} finally {
+if (rmClient != null) {
+rmClient.stop();
+}
+cluster.stop();
+}
+}
+
+/**
+ * Tests the proxy generated by {@link AutoRefreshNoHARMFailoverProxyProvider}
+ * will connect to RM.
+ */
+@Test
+public void testConnectingToRM() throws Exception {
+conf.setClass(YarnConfiguration.CLIENT_FAILOVER_NO_HA_PROXY_PROVIDER,
+AutoRefreshNoHARMFailoverProxyProvider.class,
+RMFailoverProxyProvider.class);
+
+MiniYARNCluster cluster =
+new MiniYARNCluster(
+"testRestartedRMNegative", NODE_MANAGER_COUNT, 1, 1);
+YarnClient rmClient = null;
+try {
+cluster.init(conf);
+cluster.start();
+final Configuration yarnConf = cluster.getConfig();
+rmClient = YarnClient.createYarnClient();
+rmClient.init(yarnConf);
+rmClient.start();
+List<NodeReport> nodeReports = rmClient.getNodeReports();
+Assert.assertEquals(
+"The proxy didn't get expected number of node reports",
+NODE_MANAGER_COUNT, nodeReports.size());
+} finally {
+if (rmClient != null) {
+rmClient.stop();
+}
+cluster.stop();
+}
+}
+
+@Test
+public void testDefaultFPPGetOneProxy() throws Exception {
 
 Review comment:
   It would be good to have javadocs for each test to describe the high level 
goals.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error

2019-09-17 Thread GitBox
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS 
name resolution error
URL: https://github.com/apache/hadoop/pull/1399#discussion_r325430661
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestNoHaRMFailoverProxyProvider.java
 ##
 @@ -0,0 +1,269 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.hadoop.yarn.client;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.retry.FailoverProxyProvider;
+import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
+import org.apache.hadoop.yarn.api.records.NodeReport;
+import org.apache.hadoop.yarn.client.api.YarnClient;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.server.MiniYARNCluster;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.lang.reflect.InvocationHandler;
+import java.lang.reflect.Proxy;
+import java.net.InetSocketAddress;
+import java.util.List;
+
+import static org.mockito.Mockito.any;
+import static org.mockito.Mockito.eq;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+public class TestNoHaRMFailoverProxyProvider {
+private final int NODE_MANAGER_COUNT = 1;
+private Configuration conf;
+
+@Before
+public void setUp() throws IOException, YarnException {
+conf = new YarnConfiguration();
+}
+
+@Test
+public void testRestartedRM() throws Exception {
+MiniYARNCluster cluster =
+new MiniYARNCluster(
+"testRestartedRMNegative", NODE_MANAGER_COUNT, 1, 1);
+YarnClient rmClient = null;
+try {
+cluster.init(conf);
+cluster.start();
+final Configuration yarnConf = cluster.getConfig();
+rmClient = YarnClient.createYarnClient();
+rmClient.init(yarnConf);
+rmClient.start();
+List<NodeReport> nodeReports = rmClient.getNodeReports();
+Assert.assertEquals(
+"The proxy didn't get expected number of node reports",
+NODE_MANAGER_COUNT, nodeReports.size());
+} finally {
+if (rmClient != null) {
+rmClient.stop();
+}
+cluster.stop();
+}
+}
+
+/**
+ * Tests the proxy generated by {@link AutoRefreshNoHARMFailoverProxyProvider}
+ * will connect to RM.
+ */
+@Test
+public void testConnectingToRM() throws Exception {
+conf.setClass(YarnConfiguration.CLIENT_FAILOVER_NO_HA_PROXY_PROVIDER,
+AutoRefreshNoHARMFailoverProxyProvider.class,
+RMFailoverProxyProvider.class);
+
+try {
+MiniYARNCluster cluster =
 
 Review comment:
   Once you fix the indentation, this should fit in one line.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error

2019-09-17 Thread GitBox
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS 
name resolution error
URL: https://github.com/apache/hadoop/pull/1399#discussion_r325431912
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RMProxy.java
 ##
 @@ -154,19 +151,44 @@ public T run() {
   });
   }
 
+
+  /**
+   * Helper method to create non-HA RMFailoverProxyProvider.
+   */
+  private <T> RMFailoverProxyProvider<T> createNonHaRMFailoverProxyProvider(
+  Configuration conf, Class<T> protocol) {
+String defaultProviderClassName =
+YarnConfiguration.DEFAULT_CLIENT_FAILOVER_NO_HA_PROXY_PROVIDER;
+Class<? extends RMFailoverProxyProvider<T>> defaultProviderClass;
+try {
+  defaultProviderClass = (Class<? extends RMFailoverProxyProvider<T>>)
+  Class.forName(defaultProviderClassName);
+} catch (Exception e) {
+  throw new YarnRuntimeException("Invalid default failover provider class "
+  + defaultProviderClassName, e);
+}
+
+RMFailoverProxyProvider<T> provider = ReflectionUtils.newInstance(
+conf.getClass(YarnConfiguration.CLIENT_FAILOVER_NO_HA_PROXY_PROVIDER,
+defaultProviderClass, RMFailoverProxyProvider.class), conf);
+provider.init(conf, (RMProxy<T>) this, protocol);
+return provider;
+  }
+
   /**
    * Helper method to create FailoverProxyProvider.
    */
   private <T> RMFailoverProxyProvider<T> createRMFailoverProxyProvider(
   Configuration conf, Class<T> protocol) {
+String defaultProviderClassName =
 
 Review comment:
   I don't get it, aren't you just extracting the constant here and having the 
same behavior as before?
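For readers following this thread: the pattern under discussion — look up a provider class name in configuration, fall back to a default class name, and instantiate it reflectively — can be sketched outside of YARN like this. All class and method names below are illustrative; this is not the actual Hadoop `RMProxy`/`ReflectionUtils` API.

```java
// Self-contained sketch of config-driven, reflective instantiation with a
// default fallback. NOT the Hadoop implementation; names are illustrative.
public class ConfigDrivenFactory {

    public interface Provider {
        String name();
    }

    public static class DefaultProvider implements Provider {
        @Override
        public String name() { return "default"; }
    }

    // configuredClassName may be null, meaning "fall back to the default".
    public static Provider create(String configuredClassName,
            String defaultClassName) {
        String className =
            (configuredClassName != null) ? configuredClassName : defaultClassName;
        try {
            // asSubclass gives a checked Class<? extends Provider> without an
            // unchecked cast; forName does the actual lookup by name.
            Class<? extends Provider> cls =
                Class.forName(className).asSubclass(Provider.class);
            return cls.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            // Analogous to the YarnRuntimeException thrown in the patch.
            throw new IllegalArgumentException(
                "Invalid provider class " + className, e);
        }
    }

    public static void main(String[] args) {
        // No class configured, so the default class name is used.
        Provider p = create(null, DefaultProvider.class.getName());
        System.out.println(p.name()); // prints "default"
    }
}
```

Extracting the default class name into a local (as the patch does) changes nothing for the existing HA path by itself; it only matters once the non-HA path reuses the same lookup-with-default machinery.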





[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error

2019-09-17 Thread GitBox
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS 
name resolution error
URL: https://github.com/apache/hadoop/pull/1399#discussion_r325430924
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestNoHaRMFailoverProxyProvider.java
 ##
 @@ -0,0 +1,269 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.hadoop.yarn.client;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.retry.FailoverProxyProvider;
+import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
+import org.apache.hadoop.yarn.api.records.NodeReport;
+import org.apache.hadoop.yarn.client.api.YarnClient;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.server.MiniYARNCluster;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.lang.reflect.InvocationHandler;
+import java.lang.reflect.Proxy;
+import java.net.InetSocketAddress;
+import java.util.List;
+
+import static org.mockito.Mockito.any;
+import static org.mockito.Mockito.eq;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+public class TestNoHaRMFailoverProxyProvider {
 
 Review comment:
   High level javadoc.





[GitHub] [hadoop] goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error

2019-09-17 Thread GitBox
goiri commented on a change in pull request #1399: HADOOP-16543: Cached DNS 
name resolution error
URL: https://github.com/apache/hadoop/pull/1399#discussion_r325431541
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RMProxy.java
 ##
 @@ -125,16 +125,13 @@ public InetSocketAddress getRMAddress(
   private static <T> T newProxyInstance(final YarnConfiguration conf,
   final Class<T> protocol, RMProxy<T> instance, RetryPolicy retryPolicy)
   throws IOException {
+RMFailoverProxyProvider<T> provider;
 if (HAUtil.isHAEnabled(conf) || HAUtil.isFederationEnabled(conf)) {
-  RMFailoverProxyProvider<T> provider =
-  instance.createRMFailoverProxyProvider(conf, protocol);
-  return (T) RetryProxy.create(protocol, provider, retryPolicy);
+  provider = instance.createRMFailoverProxyProvider(conf, protocol);
 } else {
-  InetSocketAddress rmAddress = instance.getRMAddress(conf, protocol);
-  LOG.info("Connecting to ResourceManager at " + rmAddress);
-  T proxy = instance.getProxy(conf, protocol, rmAddress);
-  return (T) RetryProxy.create(protocol, proxy, retryPolicy);
+  provider = instance.createNonHaRMFailoverProxyProvider(conf, protocol);
 
 Review comment:
   Thoughts?





[GitHub] [hadoop] bharatviswa504 opened a new pull request #1466: HDDS-2144. MR job failing on secure Ozone cluster.

2019-09-17 Thread GitBox
bharatviswa504 opened a new pull request #1466: HDDS-2144. MR job failing on 
secure Ozone cluster.
URL: https://github.com/apache/hadoop/pull/1466
 
 
   
   





[GitHub] [hadoop] RogPodge commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error

2019-09-17 Thread GitBox
RogPodge commented on a change in pull request #1399: HADOOP-16543: Cached DNS 
name resolution error
URL: https://github.com/apache/hadoop/pull/1399#discussion_r325429276
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RMProxy.java
 ##
 @@ -154,19 +151,44 @@ public T run() {
   });
   }
 
+
+  /**
+   * Helper method to create non-HA RMFailoverProxyProvider.
+   */
+  private <T> RMFailoverProxyProvider<T> createNonHaRMFailoverProxyProvider(
+  Configuration conf, Class<T> protocol) {
+String defaultProviderClassName =
+YarnConfiguration.DEFAULT_CLIENT_FAILOVER_NO_HA_PROXY_PROVIDER;
+Class<? extends RMFailoverProxyProvider<T>> defaultProviderClass;
+try {
+  defaultProviderClass = (Class<? extends RMFailoverProxyProvider<T>>)
+  Class.forName(defaultProviderClassName);
+} catch (Exception e) {
+  throw new YarnRuntimeException("Invalid default failover provider class "
+  + defaultProviderClassName, e);
+}
+
+RMFailoverProxyProvider<T> provider = ReflectionUtils.newInstance(
+conf.getClass(YarnConfiguration.CLIENT_FAILOVER_NO_HA_PROXY_PROVIDER,
+defaultProviderClass, RMFailoverProxyProvider.class), conf);
+provider.init(conf, (RMProxy<T>) this, protocol);
+return provider;
+  }
+
   /**
    * Helper method to create FailoverProxyProvider.
    */
   private <T> RMFailoverProxyProvider<T> createRMFailoverProxyProvider(
   Configuration conf, Class<T> protocol) {
+String defaultProviderClassName =
 
 Review comment:
We need this change so that the AutoRefreshNoHARMFailoverProxyProvider works.
Otherwise, it will try to get the IP address once and then fail if there is
ever a disconnect from the RM later.
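The failure mode described above can be illustrated with a minimal, self-contained sketch. These are NOT the actual Hadoop classes — names and behavior are simplified assumptions for illustration: constructing a new `InetSocketAddress` forces a fresh DNS lookup, so a provider that rebuilds the address on each failover recovers from an RM that came back on a different IP, while one that caches the resolved address keeps the stale IP forever.

```java
import java.net.InetSocketAddress;

// Illustrative sketch only -- NOT the Hadoop classes. Contrasts a provider
// that caches the resolved RM address with one that re-resolves the
// hostname every time a failover is performed.
public class AddressRefreshSketch {

    /** Resolves the address once in the constructor and never again. */
    public static class CachingProvider {
        private final InetSocketAddress cached;

        public CachingProvider(String host, int port) {
            // DNS lookup happens here exactly once; a later IP change
            // for 'host' is never observed.
            this.cached = new InetSocketAddress(host, port);
        }

        public InetSocketAddress current() {
            return cached;
        }
    }

    /** Rebuilds (and therefore re-resolves) the address on every failover. */
    public static class RefreshingProvider {
        private final String host;
        private final int port;
        private InetSocketAddress current;

        public RefreshingProvider(String host, int port) {
            this.host = host;
            this.port = port;
            this.current = new InetSocketAddress(host, port);
        }

        public void performFailover() {
            // A new InetSocketAddress triggers a fresh DNS lookup, so an RM
            // that moved to a different IP is found again.
            this.current = new InetSocketAddress(host, port);
        }

        public InetSocketAddress current() {
            return current;
        }
    }

    public static void main(String[] args) {
        RefreshingProvider p = new RefreshingProvider("localhost", 8032);
        InetSocketAddress before = p.current();
        p.performFailover();
        System.out.println(before.getPort() == p.current().getPort()); // true
        System.out.println(before == p.current());                     // false
    }
}
```

In this sketch the refreshing provider plays the role that AutoRefreshNoHARMFailoverProxyProvider is meant to play in the patch, and the caching provider shows the pre-patch behavior being fixed.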





[GitHub] [hadoop] hadoop-yetus commented on issue #1464: HDDS-730. Ozone fs cli prints hadoop fs in usage.

2019-09-17 Thread GitBox
hadoop-yetus commented on issue #1464: HDDS-730. Ozone fs cli prints hadoop fs 
in usage.
URL: https://github.com/apache/hadoop/pull/1464#issuecomment-532437413
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for branch |
   | -1 | mvninstall | 31 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 53 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 794 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | trunk passed |
   | 0 | spotbugs | 173 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 32 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | -1 | mvninstall | 35 | hadoop-ozone in the patch failed. |
   | -1 | compile | 27 | hadoop-ozone in the patch failed. |
   | -1 | javac | 27 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 32 | hadoop-ozone: The patch generated 30 new + 0 
unchanged - 0 fixed = 30 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 33 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 669 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | the patch passed |
   | -1 | findbugs | 29 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 256 | hadoop-hdds in the patch failed. |
   | -1 | unit | 30 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 3310 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1464/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1464 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs 
compile javac javadoc mvninstall shadedclient findbugs checkstyle |
   | uname | Linux 67d77efce9db 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f580a87 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1464/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1464/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1464/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1464/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1464/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1464/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1464/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1464/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1464/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1464/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1464/1/testReport/ |
   | Max. process+thread count | 449 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozonefs U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1464/1/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   

[GitHub] [hadoop] hadoop-yetus commented on issue #1452: HDDS-2121. Create a shaded ozone filesystem (client) jar

2019-09-17 Thread GitBox
hadoop-yetus commented on issue #1452: HDDS-2121. Create a shaded ozone 
filesystem (client) jar
URL: https://github.com/apache/hadoop/pull/1452#issuecomment-532436493
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 2872 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 38 | hadoop-ozone in trunk failed. |
   | -1 | compile | 24 | hadoop-ozone in trunk failed. |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1210 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 168 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 30 | hadoop-ozone in the patch failed. |
   | -1 | compile | 22 | hadoop-ozone in the patch failed. |
   | -1 | javac | 22 | hadoop-ozone in the patch failed. |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 727 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 152 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 270 | hadoop-hdds in the patch passed. |
   | -1 | unit | 25 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 5807 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1452 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 629d1f742a64 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f580a87 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/5/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/5/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/5/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/5/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/5/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/5/testReport/ |
   | Max. process+thread count | 467 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozonefs-lib-current U: 
hadoop-ozone/ozonefs-lib-current |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/5/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] anuengineer closed pull request #1436: HDFS-14846: libhdfs tests are failing on trunk due to jni usage bugs

2019-09-17 Thread GitBox
anuengineer closed pull request #1436: HDFS-14846: libhdfs tests are failing on 
trunk due to jni usage bugs
URL: https://github.com/apache/hadoop/pull/1436
 
 
   





[GitHub] [hadoop] anuengineer commented on issue #1436: HDFS-14846: libhdfs tests are failing on trunk due to jni usage bugs

2019-09-17 Thread GitBox
anuengineer commented on issue #1436: HDFS-14846: libhdfs tests are failing on 
trunk due to jni usage bugs
URL: https://github.com/apache/hadoop/pull/1436#issuecomment-532435385
 
 
   Thank you for the contribution. I have committed this patch to the trunk 
branch.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1277: HDDS-1054. List Multipart uploads in a bucket

2019-09-17 Thread GitBox
bharatviswa504 commented on a change in pull request #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r325421790
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java
 ##
 @@ -822,6 +822,7 @@ private MultipartInfoInitiateResponse 
initiateMultiPartUpload(
 .setBucketName(keyArgs.getBucketName())
 .setKeyName(keyArgs.getKeyName())
 .setType(keyArgs.getType())
+.setFactor(keyArgs.getFactor())
 
 Review comment:
Minor: we don't need this, as it is already done at line 827.





[GitHub] [hadoop] bharatviswa504 opened a new pull request #1465: HDDS-2143. Rename classes under package org.apache.hadoop.utils.

2019-09-17 Thread GitBox
bharatviswa504 opened a new pull request #1465: HDDS-2143. Rename classes under 
package org.apache.hadoop.utils.
URL: https://github.com/apache/hadoop/pull/1465
 
 
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1452: HDDS-2121. Create a shaded ozone filesystem (client) jar

2019-09-17 Thread GitBox
hadoop-yetus commented on issue #1452: HDDS-2121. Create a shaded ozone 
filesystem (client) jar
URL: https://github.com/apache/hadoop/pull/1452#issuecomment-532426055
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 51 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 34 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-ozone in trunk failed. |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1150 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 35 | hadoop-ozone in the patch failed. |
   | -1 | compile | 24 | hadoop-ozone in the patch failed. |
   | -1 | javac | 24 | hadoop-ozone in the patch failed. |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 799 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 250 | hadoop-hdds in the patch failed. |
   | -1 | unit | 29 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 35 | The patch does not generate ASF License warnings. |
   | | | 3069 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1452 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 48f13474670c 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / eefe9bc |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/4/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/4/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/4/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/4/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/4/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/4/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/4/testReport/ |
   | Max. process+thread count | 553 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozonefs-lib-current U: 
hadoop-ozone/ozonefs-lib-current |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1452/4/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1452: HDDS-2121. Create a shaded ozone filesystem (client) jar

2019-09-17 Thread GitBox
bharatviswa504 commented on a change in pull request #1452: HDDS-2121. Create a 
shaded ozone filesystem (client) jar
URL: https://github.com/apache/hadoop/pull/1452#discussion_r325400148
 
 

 ##
 File path: hadoop-ozone/ozonefs-lib-current/pom.xml
 ##
 @@ -83,6 +63,78 @@
   true
 
   
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-shade-plugin</artifactId>
+        <executions>
+          <execution>
+            <phase>package</phase>
+            <goals>
+              <goal>shade</goal>
+            </goals>
+            <configuration>
+              <artifactSet>
+                <excludes>
+                  <exclude>classworlds:classworlds</exclude>
 
 Review comment:
   Do we need this?





[jira] [Updated] (HADOOP-16582) LocalFileSystem's mkdirs() does not work as expected under viewfs.

2019-09-17 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-16582:

Status: Patch Available  (was: Open)

> LocalFileSystem's mkdirs() does not work as expected under viewfs.
> --
>
> Key: HADOOP-16582
> URL: https://issues.apache.org/jira/browse/HADOOP-16582
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16582.patch
>
>
> When {{mkdirs(Path)}} is called against {{LocalFileSystem}}, the 
> implementation in {{RawLocalFileSystem}} is called and the directory 
> permission is determined by the umask.  However, if it is under 
> {{ViewFileSystem}}, the default implementation in {{FileSystem}} is called 
> and this causes explicit {{chmod()}} to 0777.
> The {{mkdirs(Path)}} method needs to be overridden in
> - ViewFileSystem to avoid calling the default implementation
> - ChRootedFileSystem for proper resolution of the viewfs mount table
> - FilterFileSystem to avoid calling the default implementation
> Only then will the same method in the target ({{LocalFileSystem}} in this 
> case) be called.  HDFS does not suffer from the same flaw, since it applies 
> umask in all cases, regardless of which version of {{mkdirs()}} was called.






[jira] [Updated] (HADOOP-16582) LocalFileSystem's mkdirs() does not work as expected under viewfs.

2019-09-17 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-16582:

Attachment: HADOOP-16582.patch

> LocalFileSystem's mkdirs() does not work as expected under viewfs.
> --
>
> Key: HADOOP-16582
> URL: https://issues.apache.org/jira/browse/HADOOP-16582
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16582.patch
>
>
> When {{mkdirs(Path)}} is called against {{LocalFileSystem}}, the 
> implementation in {{RawLocalFileSystem}} is called and the directory 
> permission is determined by the umask.  However, if it is under 
> {{ViewFileSystem}}, the default implementation in {{FileSystem}} is called 
> and this causes explicit {{chmod()}} to 0777.
> The {{mkdirs(Path)}} method needs to be overridden in
> - ViewFileSystem to avoid calling the default implementation
> - ChRootedFileSystem for proper resolution of the viewfs mount table
> - FilterFileSystem to avoid calling the default implementation
> Only then will the same method in the target ({{LocalFileSystem}} in this 
> case) be called.  HDFS does not suffer from the same flaw, since it applies 
> umask in all cases, regardless of which version of {{mkdirs()}} was called.






[jira] [Created] (HADOOP-16582) LocalFileSystem's mkdirs() does not work as expected under viewfs.

2019-09-17 Thread Kihwal Lee (Jira)
Kihwal Lee created HADOOP-16582:
---

 Summary: LocalFileSystem's mkdirs() does not work as expected 
under viewfs.
 Key: HADOOP-16582
 URL: https://issues.apache.org/jira/browse/HADOOP-16582
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee


When {{mkdirs(Path)}} is called against {{LocalFileSystem}}, the implementation 
in {{RawLocalFileSystem}} is called and the directory permission is determined 
by the umask.  However, if it is under {{ViewFileSystem}}, the default 
implementation in {{FileSystem}} is called and this causes explicit {{chmod()}} 
to 0777.

The {{mkdirs(Path)}} method needs to be overridden in
- ViewFileSystem to avoid calling the default implementation
- ChRootedFileSystem for proper resolution of the viewfs mount table
- FilterFileSystem to avoid calling the default implementation

Only then will the same method in the target ({{LocalFileSystem}} in this case) 
be called.  HDFS does not suffer from the same flaw, since it applies umask in 
all cases, regardless of which version of {{mkdirs()}} was called.
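The overload-dispatch flaw described above can be sketched with plain classes. This is an illustrative model, not Hadoop's actual FileSystem API: the class and method names mirror the real ones, but the permission strings merely stand in for the real FsPermission handling.

```java
// Illustrative model of the overload-dispatch flaw (not the real Hadoop API):
// FileSystem's single-argument mkdirs() default forces an explicit permission,
// while RawLocalFileSystem's own single-argument overload defers to the umask.
// A filter that fails to override mkdirs(path) falls into the base default.
abstract class FileSystem {
    // Default overload: applies an explicit 0777-style permission.
    public String mkdirs(String path) {
        return mkdirs(path, "0777 (explicit chmod)");
    }
    public abstract String mkdirs(String path, String permission);
}

class RawLocalFileSystem extends FileSystem {
    @Override
    public String mkdirs(String path) {
        return "umask-derived permission"; // lets the process umask decide
    }
    @Override
    public String mkdirs(String path, String permission) {
        return permission;
    }
}

// Buggy filter: no mkdirs(path) override, so the base default runs first and
// the target never gets a chance to apply the umask.
class BrokenFilterFileSystem extends FileSystem {
    private final FileSystem target;
    BrokenFilterFileSystem(FileSystem target) { this.target = target; }
    @Override
    public String mkdirs(String path, String permission) {
        return target.mkdirs(path, permission);
    }
}

// Fixed filter: overrides both overloads and delegates each one directly,
// which is what the patch does for ViewFileSystem, ChRootedFileSystem and
// FilterFileSystem.
class FilterFileSystem extends FileSystem {
    private final FileSystem target;
    FilterFileSystem(FileSystem target) { this.target = target; }
    @Override
    public String mkdirs(String path) {
        return target.mkdirs(path);
    }
    @Override
    public String mkdirs(String path, String permission) {
        return target.mkdirs(path, permission);
    }
}

public class MkdirsDispatchDemo {
    public static void main(String[] args) {
        FileSystem raw = new RawLocalFileSystem();
        // Broken filter ends up in FileSystem's default: "0777 (explicit chmod)".
        System.out.println(new BrokenFilterFileSystem(raw).mkdirs("/tmp/d"));
        // Fixed filter reaches the target: "umask-derived permission".
        System.out.println(new FilterFileSystem(raw).mkdirs("/tmp/d"));
    }
}
```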







[jira] [Assigned] (HADOOP-16582) LocalFileSystem's mkdirs() does not work as expected under viewfs.

2019-09-17 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reassigned HADOOP-16582:
---

Assignee: Kihwal Lee

> LocalFileSystem's mkdirs() does not work as expected under viewfs.
> --
>
> Key: HADOOP-16582
> URL: https://issues.apache.org/jira/browse/HADOOP-16582
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
>
> When {{mkdirs(Path)}} is called against {{LocalFileSystem}}, the 
> implementation in {{RawLocalFileSystem}} is called and the directory 
> permission is determined by the umask.  However, if it is under 
> {{ViewFileSystem}}, the default implementation in {{FileSystem}} is called 
> and this causes explicit {{chmod()}} to 0777.
> The {{mkdirs(Path)}} method needs to be overridden in
> - ViewFileSystem to avoid calling the default implementation
> - ChRootedFileSystem for proper resolution of the viewfs mount table
> - FilterFileSystem to avoid calling the default implementation
> Only then will the same method in the target ({{LocalFileSystem}} in this 
> case) be called.  HDFS does not suffer from the same flaw, since it applies 
> umask in all cases, regardless of which version of {{mkdirs()}} was called.






[jira] [Updated] (HADOOP-16581) ValueQueue does not trigger an async refill when number of values falls below watermark

2019-09-17 Thread Yuval Degani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuval Degani updated HADOOP-16581:
--
Status: Patch Available  (was: In Progress)

> ValueQueue does not trigger an async refill when number of values falls below 
> watermark
> ---
>
> Key: HADOOP-16581
> URL: https://issues.apache.org/jira/browse/HADOOP-16581
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 3.2.0, 2.7.4
>Reporter: Yuval Degani
>Assignee: Yuval Degani
>Priority: Major
> Fix For: 3.2.1
>
>
> The ValueQueue facility was designed to cache EDEKs for KMS KeyProviders so 
> that EDEKs could be served quickly, while the cache is replenished in a 
> background thread.
> The existing code for triggering an asynchronous refill is only triggered 
> when a key queue becomes empty, rather than when it falls below the 
> configured watermark.
> This is a relatively minor fix in the main code, however, most of the tests 
> require some changes as they verify the previous unintended behavior.






[GitHub] [hadoop] cxorm opened a new pull request #1464: HDDS-730. Ozone fs cli prints hadoop fs in usage.

2019-09-17 Thread GitBox
cxorm opened a new pull request #1464: HDDS-730. Ozone fs cli prints hadoop fs 
in usage.
URL: https://github.com/apache/hadoop/pull/1464
 
 
   Create an OzoneFsShell that extends Hadoop's FsShell
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   





[jira] [Work started] (HADOOP-16581) ValueQueue does not trigger an async refill when number of values falls below watermark

2019-09-17 Thread Yuval Degani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-16581 started by Yuval Degani.
-
> ValueQueue does not trigger an async refill when number of values falls below 
> watermark
> ---
>
> Key: HADOOP-16581
> URL: https://issues.apache.org/jira/browse/HADOOP-16581
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.7.4, 3.2.0
>Reporter: Yuval Degani
>Assignee: Yuval Degani
>Priority: Major
> Fix For: 3.2.1
>
>
> The ValueQueue facility was designed to cache EDEKs for KMS KeyProviders so 
> that EDEKs could be served quickly, while the cache is replenished in a 
> background thread.
> The existing code for triggering an asynchronous refill is only triggered 
> when a key queue becomes empty, rather than when it falls below the 
> configured watermark.
> This is a relatively minor fix in the main code, however, most of the tests 
> require some changes as they verify the previous unintended behavior.






[GitHub] [hadoop] yuvaldeg opened a new pull request #1463: HADOOP-16581. Revise ValueQueue to correctly replenish queues that go…

2019-09-17 Thread GitBox
yuvaldeg opened a new pull request #1463: HADOOP-16581. Revise ValueQueue to 
correctly replenish queues that go…
URL: https://github.com/apache/hadoop/pull/1463
 
 
   … below the watermark
   
   In the existing implementation, the ValueQueue::getAtMost() method will only 
trigger a refill on a key queue if it has gone empty, instead of triggering a 
refill when it has gone below the watermark. Revised the test suite to 
correctly verify this behavior.





[GitHub] [hadoop] hadoop-yetus commented on issue #1402: HADOOP-16547. make sure that s3guard prune sets up the FS

2019-09-17 Thread GitBox
hadoop-yetus commented on issue #1402: HADOOP-16547. make sure that s3guard 
prune sets up the FS
URL: https://github.com/apache/hadoop/pull/1402#issuecomment-532391837
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 50 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 1 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1344 | trunk passed |
   | +1 | compile | 36 | trunk passed |
   | +1 | checkstyle | 27 | trunk passed |
   | +1 | mvnsite | 40 | trunk passed |
   | +1 | shadedclient | 851 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 28 | trunk passed |
   | 0 | spotbugs | 68 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 66 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 37 | the patch passed |
   | +1 | compile | 31 | the patch passed |
   | +1 | javac | 31 | the patch passed |
   | +1 | checkstyle | 23 | the patch passed |
   | +1 | mvnsite | 38 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 935 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 26 | the patch passed |
   | +1 | findbugs | 72 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 84 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 3836 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1402/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1402 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux faf14cf15b01 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / eefe9bc |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1402/5/testReport/ |
   | Max. process+thread count | 440 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1402/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Created] (HADOOP-16581) ValueQueue does not trigger an async refill when number of values falls below watermark

2019-09-17 Thread Yuval Degani (Jira)
Yuval Degani created HADOOP-16581:
-

 Summary: ValueQueue does not trigger an async refill when number 
of values falls below watermark
 Key: HADOOP-16581
 URL: https://issues.apache.org/jira/browse/HADOOP-16581
 Project: Hadoop Common
  Issue Type: Bug
  Components: common, kms
Affects Versions: 3.2.0, 2.7.4
Reporter: Yuval Degani
Assignee: Yuval Degani
 Fix For: 3.2.1


The ValueQueue facility was designed to cache EDEKs for KMS KeyProviders so 
that EDEKs could be served quickly, while the cache is replenished in a 
background thread.

The existing code for triggering an asynchronous refill is only triggered when 
a key queue becomes empty, rather than when it falls below the configured 
watermark.

This is a relatively minor fix in the main code, however, most of the tests 
require some changes as they verify the previous unintended behavior.
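The refill condition described above can be sketched as follows. The class and method names here are illustrative, not Hadoop's actual ValueQueue API; `refillScheduled` stands in for submitting a task to the real background refiller thread.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative sketch of the watermark fix (names are not Hadoop's real
// ValueQueue API): a background refill should be scheduled as soon as the
// cached EDEK count drops below lowWatermark * capacity, not only at empty.
class KeyQueue {
    private final Queue<String> values = new ArrayDeque<>();
    private final int capacity;
    private final float lowWatermark; // fraction of capacity, e.g. 0.3f
    private boolean refillScheduled = false;

    KeyQueue(int capacity, float lowWatermark) {
        this.capacity = capacity;
        this.lowWatermark = lowWatermark;
    }

    // Stands in for the synchronous/background top-up of cached EDEKs.
    void fill() {
        while (values.size() < capacity) {
            values.add("edek-" + values.size());
        }
        refillScheduled = false;
    }

    // Previous (unintended) behavior: refill only once the queue is empty.
    String getBuggy() {
        String v = values.poll();
        if (values.isEmpty()) {
            refillScheduled = true; // too late: subsequent callers stall
        }
        return v;
    }

    // Intended behavior: trigger the async refill once below the watermark.
    String getFixed() {
        String v = values.poll();
        if (values.size() < capacity * lowWatermark) {
            refillScheduled = true; // background thread tops the queue up
        }
        return v;
    }

    boolean isRefillScheduled() { return refillScheduled; }
}

public class WatermarkDemo {
    public static void main(String[] args) {
        KeyQueue fixed = new KeyQueue(10, 0.3f);
        fixed.fill();
        for (int i = 0; i < 8; i++) fixed.getFixed(); // 2 left, below 3
        System.out.println(fixed.isRefillScheduled()); // true

        KeyQueue buggy = new KeyQueue(10, 0.3f);
        buggy.fill();
        for (int i = 0; i < 8; i++) buggy.getBuggy(); // 2 left, but not empty
        System.out.println(buggy.isRefillScheduled()); // false
    }
}
```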






[GitHub] [hadoop] hadoop-yetus commented on issue #1369: HDDS-2020. Remove mTLS from Ozone GRPC. Contributed by Xiaoyu Yao.

2019-09-17 Thread GitBox
hadoop-yetus commented on issue #1369: HDDS-2020. Remove mTLS from Ozone GRPC. 
Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/1369#issuecomment-532385397
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 13 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | -1 | mvninstall | 30 | hadoop-ozone in trunk failed. |
   | -1 | compile | 20 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 97 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 836 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | trunk passed |
   | 0 | spotbugs | 175 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 26 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | -1 | mvninstall | 31 | hadoop-ozone in the patch failed. |
   | -1 | compile | 24 | hadoop-ozone in the patch failed. |
   | -1 | cc | 24 | hadoop-ozone in the patch failed. |
   | -1 | javac | 24 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 47 | hadoop-hdds: The patch generated 64 new + 908 
unchanged - 46 fixed = 972 total (was 954) |
   | -0 | checkstyle | 50 | hadoop-ozone: The patch generated 77 new + 973 
unchanged - 64 fixed = 1050 total (was 1037) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 680 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 87 | hadoop-ozone generated 3 new + 252 unchanged - 2 fixed 
= 255 total (was 254) |
   | -1 | findbugs | 27 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 229 | hadoop-hdds in the patch passed. |
   | -1 | unit | 28 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | |  | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1369 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml cc |
   | uname | Linux 30df383d112f 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / eefe9bc |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/9/testReport/ |
   | Max. process+thread count | 495 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client hadoop-hdds/common 
hadoop-hdds/container-service hadoop-hdds/server-scm hadoop-ozone/client 
hadoop-ozone/common hadoop-ozone/integr

[GitHub] [hadoop] hadoop-yetus commented on issue #1420: HDDS-2032. Ozone client should retry writes in case of any ratis/stateMachine exceptions.

2019-09-17 Thread GitBox
hadoop-yetus commented on issue #1420: HDDS-2032. Ozone client should retry 
writes in case of any ratis/stateMachine exceptions.
URL: https://github.com/apache/hadoop/pull/1420#issuecomment-532362125
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 78 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for branch |
   | -1 | mvninstall | 28 | hadoop-ozone in trunk failed. |
   | -1 | compile | 19 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 51 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 943 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | trunk passed |
   | 0 | spotbugs | 173 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 23 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | -1 | mvninstall | 30 | hadoop-ozone in the patch failed. |
   | -1 | compile | 21 | hadoop-ozone in the patch failed. |
   | -1 | javac | 21 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 25 | hadoop-hdds: The patch generated 2 new + 40 
unchanged - 3 fixed = 42 total (was 43) |
   | -0 | checkstyle | 27 | hadoop-ozone: The patch generated 2 new + 144 
unchanged - 2 fixed = 146 total (was 146) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 728 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 66 | hadoop-hdds in the patch passed. |
   | +1 | javadoc | 83 | hadoop-ozone generated 0 new + 253 unchanged - 2 fixed 
= 253 total (was 255) |
   | -1 | findbugs | 23 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 263 | hadoop-hdds in the patch passed. |
   | -1 | unit | 25 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 3399 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1420/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1420 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ea6521a1b3ea 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / eefe9bc |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1420/3/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1420/3/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1420/3/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1420/3/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1420/3/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1420/3/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1420/3/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1420/3/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1420/3/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1420/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1420/3/testReport/ |
   | Max. process+thread count | 438 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client hadoop-ozone/client U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1420/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
 

[GitHub] [hadoop] ashvina commented on a change in pull request #1446: YARN-9834. Allow using a pool of local users to run Yarn Secure Conta…

2019-09-17 Thread GitBox
ashvina commented on a change in pull request #1446: YARN-9834. Allow using a 
pool of local users to run Yarn Secure Conta…
URL: https://github.com/apache/hadoop/pull/1446#discussion_r325337854
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
 ##
 @@ -212,6 +213,15 @@ public ResourceLocalizationService(Dispatcher dispatcher,
 this.delService = delService;
 this.dirsHandler = dirsHandler;
 
+this.disablePrivateVis = UserGroupInformation.isSecurityEnabled() &&
+context.getConf().getBoolean(
+YarnConfiguration.NM_SECURE_MODE_USE_POOL_USER,
+YarnConfiguration.DEFAULT_NM_SECURE_MODE_USE_POOL_USER);
+if (this.disablePrivateVis) {
+  LOG.info("When " + YarnConfiguration.NM_SECURE_MODE_USE_POOL_USER +
 
 Review comment:
   All resources can either be `public` or `application`. It seems this 
constraint is added since a `local-user` may get assigned to different 
`real-users`. Can you comment on the cases where a real user application may 
require `private` resources and `local-user-pooling` is enabled?





[GitHub] [hadoop] ashvina commented on a change in pull request #1446: YARN-9834. Allow using a pool of local users to run Yarn Secure Conta…

2019-09-17 Thread GitBox
ashvina commented on a change in pull request #1446: YARN-9834. Allow using a 
pool of local users to run Yarn Secure Conta…
URL: https://github.com/apache/hadoop/pull/1446#discussion_r325328431
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
 ##
 @@ -1611,7 +1625,7 @@ private void cleanUpFilesPerUserDir(FileContext lfs, 
DeletionService del,
 String owner = status.getOwner();
 List pathList = new ArrayList<>();
 pathList.add(status.getPath());
-FileDeletionTask deletionTask = new FileDeletionTask(del, owner, null,
+FileDeletionTask deletionTask = new FileDeletionTask(del, null, null,
 
 Review comment:
   Is this change intentional? I don't understand why the information about 
the user deleting the file is no longer required.





[GitHub] [hadoop] vinayakumarb commented on a change in pull request #1432: HADOOP-16557. [pb-upgrade] Upgrade protobuf.version to 3.7.1

2019-09-17 Thread GitBox
vinayakumarb commented on a change in pull request #1432: HADOOP-16557. 
[pb-upgrade] Upgrade protobuf.version to 3.7.1
URL: https://github.com/apache/hadoop/pull/1432#discussion_r325325239
 
 

 ##
 File path: hadoop-client-modules/hadoop-client-runtime/pom.xml
 ##
 @@ -229,6 +229,13 @@
 update*
   
 
+
+  com.google.protobuf:protobuf-java
+  
+google/protobuf/*.proto
+google/protobuf/**/*.proto
+  
+
 
 Review comment:
   Proto files are excluded from shading.
   The 2.5.0 protobuf jar didn't include .proto files; the 3.7.1 jar does, so the shaded-jar verifier was complaining about unwanted content in the shaded jars.





[GitHub] [hadoop] vinayakumarb commented on a change in pull request #1432: HADOOP-16557. [pb-upgrade] Upgrade protobuf.version to 3.7.1

2019-09-17 Thread GitBox
vinayakumarb commented on a change in pull request #1432: HADOOP-16557. 
[pb-upgrade] Upgrade protobuf.version to 3.7.1
URL: https://github.com/apache/hadoop/pull/1432#discussion_r325324952
 
 

 ##
 File path: hadoop-project/pom.xml
 ##
 @@ -1918,6 +1918,9 @@
   
 false
   
+  
+/opt/protobuf-3.7/bin/protoc
+  
 
 Review comment:
   This is kept temporarily to make Jenkins run for this PR alone.
   This PR alone needs both versions of protoc: 2.5.0 before the patch and 3.7.1 after.
   The Dockerfile will be updated in a subsequent PR once this PR goes in.





[GitHub] [hadoop] vinayakumarb commented on a change in pull request #1432: HADOOP-16557. [pb-upgrade] Upgrade protobuf.version to 3.7.1

2019-09-17 Thread GitBox
vinayakumarb commented on a change in pull request #1432: HADOOP-16557. 
[pb-upgrade] Upgrade protobuf.version to 3.7.1
URL: https://github.com/apache/hadoop/pull/1432#discussion_r325324073
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/BlockListAsLongs.java
 ##
 @@ -276,12 +276,12 @@ public void add(Replica replica) {
   try {
 // zig-zag to reduce size of legacy blocks
 cos.writeSInt64NoTag(replica.getBlockId());
-cos.writeRawVarint64(replica.getBytesOnDisk());
-cos.writeRawVarint64(replica.getGenerationStamp());
+cos.writeUInt64NoTag(replica.getBytesOnDisk());
+cos.writeUInt64NoTag(replica.getGenerationStamp());
 ReplicaState state = replica.getState();
 // although state is not a 64-bit value, using a long varint to
 // allow for future use of the upper bits
-cos.writeRawVarint64(state.getValue());
+cos.writeUInt64NoTag(state.getValue());
 
 Review comment:
   Hopefully Nope.
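   For context, here is a minimal sketch in plain Python (assumed helper names, not Hadoop or protobuf-java code) of the two encodings under discussion: the unsigned varint that `writeUInt64NoTag` emits, and the zig-zag mapping behind the `writeSInt64NoTag` call kept above.

```python
# Illustrative sketch only: protobuf's unsigned varint and zig-zag
# encodings, in plain Python. Function names are assumptions for this
# example, not Hadoop or protobuf-java APIs.

def encode_varint64(value: int) -> bytes:
    """Unsigned LEB128-style varint, as written by writeUInt64NoTag."""
    value &= (1 << 64) - 1  # treat as an unsigned 64-bit quantity
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # high bit set: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def zigzag64(value: int) -> int:
    """Zig-zag mapping used for signed varints: small |v| -> short varint."""
    return (value << 1) ^ (value >> 63)
```

   The zig-zag step is why the block ID is written with the `SInt64` variant while the non-negative fields can use the plain unsigned varint.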





[GitHub] [hadoop] vinayakumarb commented on a change in pull request #1432: HADOOP-16557. [pb-upgrade] Upgrade protobuf.version to 3.7.1

2019-09-17 Thread GitBox
vinayakumarb commented on a change in pull request #1432: HADOOP-16557. 
[pb-upgrade] Upgrade protobuf.version to 3.7.1
URL: https://github.com/apache/hadoop/pull/1432#discussion_r325323930
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
 ##
 @@ -3274,18 +3274,18 @@ private void setupResponse(RpcCall call,
 cos.writeRawByte((byte)((length >>> 16) & 0xFF));
 cos.writeRawByte((byte)((length >>>  8) & 0xFF));
 cos.writeRawByte((byte)((length >>>  0) & 0xFF));
-cos.writeRawVarint32(header.getSerializedSize());
+cos.writeUInt32NoTag(header.getSerializedSize());
 header.writeTo(cos);
 if (payload != null) {
-  cos.writeRawVarint32(payload.getSerializedSize());
+  cos.writeUInt32NoTag(payload.getSerializedSize());
 
 Review comment:
   Ditto :) All variations of the *\*RawVarint\*()* APIs are deprecated and replaced with the *\*UInt\*NoTag()* APIs.
   The deprecated APIs internally call the *\*UInt\*NoTag()* APIs themselves.
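   A hedged sketch of the framing pattern in the snippet above: a 4-byte big-endian total length, then each message preceded by its varint-encoded size (what `writeUInt32NoTag(getSerializedSize())` writes). The `frame_response` name and exact layout details are assumptions for illustration, not Hadoop's implementation.

```python
# Illustrative sketch of varint-length-prefixed RPC response framing.
# Plain Python; names are assumed for this example.
import struct
from typing import Optional

def encode_varint32(value: int) -> bytes:
    """Unsigned varint, as written by CodedOutputStream.writeUInt32NoTag."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # high bit set: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def frame_response(header: bytes, payload: Optional[bytes]) -> bytes:
    """Prefix header (and optional payload) with varint sizes, then
    prepend a 4-byte big-endian total length for the framed body."""
    body = encode_varint32(len(header)) + header
    if payload is not None:
        body += encode_varint32(len(payload)) + payload
    return struct.pack(">I", len(body)) + body
```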





[GitHub] [hadoop] vinayakumarb commented on a change in pull request #1432: HADOOP-16557. [pb-upgrade] Upgrade protobuf.version to 3.7.1

2019-09-17 Thread GitBox
vinayakumarb commented on a change in pull request #1432: HADOOP-16557. 
[pb-upgrade] Upgrade protobuf.version to 3.7.1
URL: https://github.com/apache/hadoop/pull/1432#discussion_r325322423
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcWritable.java
 ##
 @@ -106,7 +106,7 @@ Message getMessage() {
 @Override
 void writeTo(ResponseBuffer out) throws IOException {
   int length = message.getSerializedSize();
-  length += CodedOutputStream.computeRawVarint32Size(length);
+  length += CodedOutputStream.computeUInt32SizeNoTag(length);
 
 Review comment:
   `computeRawVarint32Size()` is deprecated and internally calls `computeUInt32SizeNoTag()` itself, so `computeUInt32SizeNoTag()` is used directly to avoid javac warnings.
   I don't think this is an incompatible change.
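   As a rough illustration in plain Python (not the protobuf-java API), the size computation just counts how many 7-bit groups the varint encoding of a value needs, which is why adding it to the message's serialized size gives the total framed length:

```python
# Sketch of what a varint-size helper computes; the name is assumed
# for this example, analogous to computeUInt32SizeNoTag.

def varint32_size(value: int) -> int:
    """Bytes needed to varint-encode an unsigned 32-bit value (1..5)."""
    value &= 0xFFFFFFFF  # treat as unsigned 32-bit
    size = 1
    while value >= 0x80:  # each varint byte carries 7 payload bits
        value >>= 7
        size += 1
    return size

# e.g. a 100-byte message gets a 1-byte prefix: 100 + varint32_size(100) == 101
```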





[GitHub] [hadoop] vinayakumarb commented on a change in pull request #1432: HADOOP-16557. [pb-upgrade] Upgrade protobuf.version to 3.7.1

2019-09-17 Thread GitBox
vinayakumarb commented on a change in pull request #1432: HADOOP-16557. 
[pb-upgrade] Upgrade protobuf.version to 3.7.1
URL: https://github.com/apache/hadoop/pull/1432#discussion_r325320650
 
 

 ##
 File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
 ##
 @@ -55,6 +55,7 @@
   org.apache.hadoop
   hadoop-annotations
 
+
 
 Review comment:
   No, this is not about that. This is about javac warnings in the generated Java code, mostly due to the (default) "proto2" syntax in the proto files.
   
   I have handled the deprecated APIs in non-generated code.





[jira] [Assigned] (HADOOP-16578) ABFS: fileSystemExists() should not call container level apis

2019-09-17 Thread Sneha Vijayarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Vijayarajan reassigned HADOOP-16578:
--

Assignee: Sneha Vijayarajan

> ABFS: fileSystemExists() should not call container level apis
> -
>
> Key: HADOOP-16578
> URL: https://issues.apache.org/jira/browse/HADOOP-16578
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Da Zhou
>Assignee: Sneha Vijayarajan
>Priority: Major
> Fix For: 3.3.0
>
>
> The ABFS driver should not use the container-level API "Get Container 
> Properties", as there is no concept of a container in HDFS, and this caused 
> some RBAC check issues.
> Fix: use getFileStatus() to check if the container exists.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)




[GitHub] [hadoop] steveloughran commented on issue #1402: HADOOP-16547. make sure that s3guard prune sets up the FS

2019-09-17 Thread GitBox
steveloughran commented on issue #1402: HADOOP-16547. make sure that s3guard 
prune sets up the FS
URL: https://github.com/apache/hadoop/pull/1402#issuecomment-532329287
 
 
   I'm adding the test with the caveat that, because only certain configs (those 
without fs.s3a.ddb.region) could trigger it, you can't be confident it is valid 
even if there were a regression. Though if it fails for someone, we can be happy 
that they have found that regression.





[jira] [Commented] (HADOOP-16580) Disable retry of FailoverOnNetworkExceptionRetry in case of AccessControlException

2019-09-17 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931695#comment-16931695
 ] 

Hadoop QA commented on HADOOP-16580:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 116 unchanged - 0 fixed = 118 total (was 116) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
53s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:39e82acc485 |
| JIRA Issue | HADOOP-16580 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12980520/HADOOP-16580.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f0b810c684f0 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c474e24 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16530/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16530/testReport/ |
| Max. process+thread count | 1424 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16530/console |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

[GitHub] [hadoop] hadoop-yetus commented on issue #1454: HADOOP-16565. Region must be provided when requesting session credentials or SdkClientException will be thrown

2019-09-17 Thread GitBox
hadoop-yetus commented on issue #1454: HADOOP-16565. Region must be provided 
when requesting session credentials or SdkClientException will be thrown
URL: https://github.com/apache/hadoop/pull/1454#issuecomment-532324214
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 55 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1116 | trunk passed |
   | +1 | compile | 32 | trunk passed |
   | +1 | checkstyle | 26 | trunk passed |
   | +1 | mvnsite | 43 | trunk passed |
   | +1 | shadedclient | 755 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 28 | trunk passed |
   | 0 | spotbugs | 61 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 59 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 37 | the patch passed |
   | +1 | compile | 31 | the patch passed |
   | +1 | javac | 31 | the patch passed |
   | -0 | checkstyle | 20 | hadoop-tools/hadoop-aws: The patch generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 33 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 752 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 25 | the patch passed |
   | +1 | findbugs | 67 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 82 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 35 | The patch does not generate ASF License warnings. |
   | | | 3293 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1454/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1454 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 1e29d30b247d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c474e24 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1454/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1454/2/testReport/ |
   | Max. process+thread count | 412 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1454/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16547) s3guard prune command doesn't get AWS auth chain from FS

2019-09-17 Thread Aaron Fabbri (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931680#comment-16931680
 ] 

Aaron Fabbri commented on HADOOP-16547:
---

Current patch: +1, LGTM after you address [~gabor.bota]'s comment about adding a 
test. 

 

> s3guard prune command doesn't get AWS auth chain from FS
> 
>
> Key: HADOOP-16547
> URL: https://issues.apache.org/jira/browse/HADOOP-16547
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> s3guard prune command doesn't get AWS auth chain from any FS, so it just 
> drives the DDB store from the conf settings. If S3A is set up to use 
> Delegation tokens then the DTs/custom AWS auth sequence is not picked up, so 
> you get an auth failure.
> Fix:
> # instantiate the FS before calling initMetadataStore
> # review other commands to make sure problem isn't replicated






[GitHub] [hadoop] hadoop-yetus commented on issue #1462: HDDS-2141. Missing total number of operations

2019-09-17 Thread GitBox
hadoop-yetus commented on issue #1462: HDDS-2141. Missing total number of 
operations
URL: https://github.com/apache/hadoop/pull/1462#issuecomment-532316424
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 164 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 35 | hadoop-ozone in trunk failed. |
   | -1 | compile | 25 | hadoop-ozone in trunk failed. |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1088 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 206 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 41 | hadoop-ozone in the patch failed. |
   | -1 | jshint | 106 | The patch generated 1393 new + 2737 unchanged - 0 
fixed = 4130 total (was 2737) |
   | -1 | compile | 26 | hadoop-ozone in the patch failed. |
   | -1 | javac | 26 | hadoop-ozone in the patch failed. |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 802 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 265 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 227 | hadoop-hdds in the patch failed. |
   | -1 | unit | 108 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 114 | The patch does not generate ASF License warnings. |
   | | | 3663 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.lock.TestLockManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1462/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1462 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient jshint |
   | uname | Linux 03d06927b330 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c474e24 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1462/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1462/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1462/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | jshint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1462/1/artifact/out/diff-patch-jshint.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1462/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1462/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1462/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1462/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1462/1/testReport/ |
   | Max. process+thread count | 306 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1462/1/console |
   | versions | git=2.7.4 maven=3.3.9 jshint=2.10.2 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] arp7 commented on issue #1434: HDDS-2120. Remove hadoop classes from ozonefs-current jar

2019-09-17 Thread GitBox
arp7 commented on issue #1434: HDDS-2120. Remove hadoop classes from 
ozonefs-current jar
URL: https://github.com/apache/hadoop/pull/1434#issuecomment-532316448
 
 
   It looks like this failed compilation in Jenkins... does the Jenkins job 
need to be updated to use the separated pom compilation command?





[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #1462: HDDS-2141. Missing total number of operations

2019-09-17 Thread GitBox
hadoop-yetus commented on a change in pull request #1462: HDDS-2141. Missing 
total number of operations
URL: https://github.com/apache/hadoop/pull/1462#discussion_r325288674
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/resources/webapps/ozoneManager/ozoneManager.js
 ##
 @@ -87,6 +88,7 @@
 if (name == "Ops") {
 groupedMetrics.nums[type].ops = 
metrics[key]
 } else {
+groupedMetrics.nums[type].total += 
metrics[key]
 
 Review comment:
   jshint:84:W033:Missing semicolon.
   





[jira] [Updated] (HADOOP-16423) S3Guard fsck: Check metadata consistency from S3 to metadatastore (log)

2019-09-17 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16423:

Summary: S3Guard fsck: Check metadata consistency from S3 to metadatastore 
(log)  (was: S3Guarld fsck: Check metadata consistency from S3 to metadatastore 
(log))

> S3Guard fsck: Check metadata consistency from S3 to metadatastore (log)
> ---
>
> Key: HADOOP-16423
> URL: https://issues.apache.org/jira/browse/HADOOP-16423
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> This part is only for logging the inconsistencies.
> This issue only covers the part when the walk is being done in the S3 and 
> compares all metadata to the MS.
> There will be no part where the walk is being done in the MS and compare it 
> to the S3. 






[GitHub] [hadoop] saintstack commented on a change in pull request #1432: HADOOP-16557. [pb-upgrade] Upgrade protobuf.version to 3.7.1

2019-09-17 Thread GitBox
saintstack commented on a change in pull request #1432: HADOOP-16557. 
[pb-upgrade] Upgrade protobuf.version to 3.7.1
URL: https://github.com/apache/hadoop/pull/1432#discussion_r325276884
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/BlockListAsLongs.java
 ##
 @@ -276,12 +276,12 @@ public void add(Replica replica) {
   try {
 // zig-zag to reduce size of legacy blocks
 cos.writeSInt64NoTag(replica.getBlockId());
-cos.writeRawVarint64(replica.getBytesOnDisk());
-cos.writeRawVarint64(replica.getGenerationStamp());
+cos.writeUInt64NoTag(replica.getBytesOnDisk());
+cos.writeUInt64NoTag(replica.getGenerationStamp());
 ReplicaState state = replica.getState();
 // although state is not a 64-bit value, using a long varint to
 // allow for future use of the upper bits
-cos.writeRawVarint64(state.getValue());
+cos.writeUInt64NoTag(state.getValue());
 
 Review comment:
   Incompat change?





[GitHub] [hadoop] saintstack commented on a change in pull request #1432: HADOOP-16557. [pb-upgrade] Upgrade protobuf.version to 3.7.1

2019-09-17 Thread GitBox
saintstack commented on a change in pull request #1432: HADOOP-16557. 
[pb-upgrade] Upgrade protobuf.version to 3.7.1
URL: https://github.com/apache/hadoop/pull/1432#discussion_r325276503
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
 ##
 @@ -3274,18 +3274,18 @@ private void setupResponse(RpcCall call,
 cos.writeRawByte((byte)((length >>> 16) & 0xFF));
 cos.writeRawByte((byte)((length >>>  8) & 0xFF));
 cos.writeRawByte((byte)((length >>>  0) & 0xFF));
-cos.writeRawVarint32(header.getSerializedSize());
+cos.writeUInt32NoTag(header.getSerializedSize());
 header.writeTo(cos);
 if (payload != null) {
-  cos.writeRawVarint32(payload.getSerializedSize());
+  cos.writeUInt32NoTag(payload.getSerializedSize());
 
 Review comment:
   Ditto





[GitHub] [hadoop] saintstack commented on a change in pull request #1432: HADOOP-16557. [pb-upgrade] Upgrade protobuf.version to 3.7.1

2019-09-17 Thread GitBox
saintstack commented on a change in pull request #1432: HADOOP-16557. 
[pb-upgrade] Upgrade protobuf.version to 3.7.1
URL: https://github.com/apache/hadoop/pull/1432#discussion_r325277198
 
 

 ##
 File path: hadoop-project/pom.xml
 ##
 @@ -1918,6 +1918,9 @@
   
 false
   
+  
+/opt/protobuf-3.7/bin/protoc
+  
 
 Review comment:
   Yeah, this should go away.





[GitHub] [hadoop] saintstack commented on a change in pull request #1432: HADOOP-16557. [pb-upgrade] Upgrade protobuf.version to 3.7.1

2019-09-17 Thread GitBox
saintstack commented on a change in pull request #1432: HADOOP-16557. 
[pb-upgrade] Upgrade protobuf.version to 3.7.1
URL: https://github.com/apache/hadoop/pull/1432#discussion_r325280742
 
 

 ##
 File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
 ##
 @@ -55,6 +55,7 @@
   org.apache.hadoop
   hadoop-annotations
 
+
 
 Review comment:
   Is the complaint about warning messages because proto files are in proto2 
format? Does this help? https://issues.apache.org/jira/browse/HBASE-18866





[GitHub] [hadoop] saintstack commented on a change in pull request #1432: HADOOP-16557. [pb-upgrade] Upgrade protobuf.version to 3.7.1

2019-09-17 Thread GitBox
saintstack commented on a change in pull request #1432: HADOOP-16557. 
[pb-upgrade] Upgrade protobuf.version to 3.7.1
URL: https://github.com/apache/hadoop/pull/1432#discussion_r325276027
 
 

 ##
 File path: hadoop-client-modules/hadoop-client-runtime/pom.xml
 ##
 @@ -229,6 +229,13 @@
 update*
   
 
+
+  com.google.protobuf:protobuf-java
+  
+google/protobuf/*.proto
+google/protobuf/**/*.proto
+  
+
 
 Review comment:
   Are these classes shaded, or are they excluded from shading?





[GitHub] [hadoop] saintstack commented on a change in pull request #1432: HADOOP-16557. [pb-upgrade] Upgrade protobuf.version to 3.7.1

2019-09-17 Thread GitBox
saintstack commented on a change in pull request #1432: HADOOP-16557. 
[pb-upgrade] Upgrade protobuf.version to 3.7.1
URL: https://github.com/apache/hadoop/pull/1432#discussion_r325276424
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcWritable.java
 ##
 @@ -106,7 +106,7 @@ Message getMessage() {
 @Override
 void writeTo(ResponseBuffer out) throws IOException {
   int length = message.getSerializedSize();
-  length += CodedOutputStream.computeRawVarint32Size(length);
+  length += CodedOutputStream.computeUInt32SizeNoTag(length);
 
 Review comment:
   Change from varint to fixed 32-bit? Does this break compatibility?
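
For context on the question above: in protobuf-java 3.x, `computeRawVarint32Size`/`writeRawVarint32` were renamed to the `UInt32...NoTag` variants; both compute and emit the same unsigned varint, so the bytes on the wire should be unchanged. A standalone re-implementation of that encoding (a sketch, not the protobuf API itself):

```java
import java.io.ByteArrayOutputStream;

public class VarintDemo {
    // Size of a value encoded as an unsigned varint, mirroring what
    // CodedOutputStream.computeUInt32SizeNoTag (and the older
    // computeRawVarint32Size) compute: one byte per 7 bits of payload.
    static int varintSize(int value) {
        int size = 1;
        while ((value & ~0x7F) != 0) {
            size++;
            value >>>= 7;
        }
        return size;
    }

    // Unsigned LEB128 encoding: low 7 bits per byte, continuation bit
    // set on every byte except the last. This is the varint wire format.
    static byte[] encodeVarint(int value) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((value & ~0x7F) != 0) {
            out.write((value & 0x7F) | 0x80);
            value >>>= 7;
        }
        out.write(value);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // 300 = 0b10_0101100 -> two bytes: 0xAC 0x02
        byte[] b = encodeVarint(300);
        System.out.println(varintSize(300));                         // 2
        System.out.println(String.format("%02X %02X", b[0], b[1]));  // AC 02
        System.out.println(varintSize(127));                         // 1
    }
}
```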





[GitHub] [hadoop] saintstack commented on a change in pull request #1432: HADOOP-16557. [pb-upgrade] Upgrade protobuf.version to 3.7.1

2019-09-17 Thread GitBox
saintstack commented on a change in pull request #1432: HADOOP-16557. 
[pb-upgrade] Upgrade protobuf.version to 3.7.1
URL: https://github.com/apache/hadoop/pull/1432#discussion_r325277082
 
 

 ##
 File path: hadoop-project/pom.xml
 ##
 @@ -84,7 +84,7 @@
 
 
 
-2.5.0
+3.7.1
 
 Review comment:
   Hurray!





[GitHub] [hadoop] arp7 commented on issue #1365: HDDS-1949. Missing or error-prone test cleanup

2019-09-17 Thread GitBox
arp7 commented on issue #1365: HDDS-1949. Missing or error-prone test cleanup
URL: https://github.com/apache/hadoop/pull/1365#issuecomment-532308400
 
 
   Hi @adoroszlai, I am +1 on the patch. Can you please resolve the conflicts 
so we can get a new test run?





[GitHub] [hadoop] hadoop-yetus commented on issue #1277: HDDS-1054. List Multipart uploads in a bucket

2019-09-17 Thread GitBox
hadoop-yetus commented on issue #1277: HDDS-1054. List Multipart uploads in a 
bucket
URL: https://github.com/apache/hadoop/pull/1277#issuecomment-532305637
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 48 | Maven dependency ordering for branch |
   | -1 | mvninstall | 33 | hadoop-ozone in trunk failed. |
   | -1 | compile | 23 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 124 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1003 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   | 0 | spotbugs | 184 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 26 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for patch |
   | -1 | mvninstall | 35 | hadoop-ozone in the patch failed. |
   | -1 | compile | 27 | hadoop-ozone in the patch failed. |
   | -1 | cc | 27 | hadoop-ozone in the patch failed. |
   | -1 | javac | 27 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 31 | hadoop-hdds: The patch generated 5 new + 9 
unchanged - 1 fixed = 14 total (was 10) |
   | -0 | checkstyle | 97 | hadoop-ozone: The patch generated 448 new + 2401 
unchanged - 92 fixed = 2849 total (was 2493) |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 765 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 91 | hadoop-ozone generated 9 new + 249 unchanged - 7 fixed 
= 258 total (was 256) |
   | -1 | findbugs | 28 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 247 | hadoop-hdds in the patch failed. |
   | -1 | unit | 31 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 35 | The patch does not generate ASF License warnings. |
   | | | 3706 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1277 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 4392a4c17837 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c474e24 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/whitespace-eol.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apach

[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #1277: HDDS-1054. List Multipart uploads in a bucket

2019-09-17 Thread GitBox
hadoop-yetus commented on a change in pull request #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r325276989
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmMultipartUploadListParts.java
 ##
 @@ -39,11 +45,15 @@
  // A list can be truncated if the number of parts exceeds the limit
  // returned in the MaxParts element.
   private boolean truncated;
+
   private final List partInfoList = new ArrayList<>();
 
   public OmMultipartUploadListParts(HddsProtos.ReplicationType type,
+  HddsProtos.ReplicationFactor factor,
   int nextMarker, boolean truncate) {
 this.replicationType = type;
+this.replicationFactor = factor;
+
 
 Review comment:
   whitespace:end of line
   





[GitHub] [hadoop] cxorm closed pull request #1459: HDDS-730. Ozone fs cli prints hadoop fs in usage.

2019-09-17 Thread GitBox
cxorm closed pull request #1459: HDDS-730. Ozone fs cli prints hadoop fs in 
usage.
URL: https://github.com/apache/hadoop/pull/1459
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1461: HDDS-2142. OM metrics mismatch (abort multipart request)

2019-09-17 Thread GitBox
hadoop-yetus commented on issue #1461: HDDS-2142. OM metrics mismatch (abort 
multipart request)
URL: https://github.com/apache/hadoop/pull/1461#issuecomment-532297169
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 160 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 33 | hadoop-ozone in trunk failed. |
   | -1 | compile | 23 | hadoop-ozone in trunk failed. |
   | -0 | checkstyle | 38 | The patch fails to run checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1041 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 90 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 182 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 30 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 35 | hadoop-ozone in the patch failed. |
   | -1 | compile | 26 | hadoop-ozone in the patch failed. |
   | -1 | javac | 26 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 39 | The patch fails to run checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 808 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 97 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 26 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 344 | hadoop-hdds in the patch failed. |
   | -1 | unit | 31 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 3879 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1461 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ec2052e77e5a 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c474e24 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1461/out/maven-branch-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1461/out/maven-patch-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out/patch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1461/1/testReport/ |
   | Max. process+thread count | 468 (vs. ulimit of 5500) |
   | m

[GitHub] [hadoop] bshashikant opened a new pull request #1420: HDDS-2032. Ozone client should retry writes in case of any ratis/stateMachine exceptions.

2019-09-17 Thread GitBox
bshashikant opened a new pull request #1420: HDDS-2032. Ozone client should 
retry writes in case of any ratis/stateMachine exceptions.
URL: https://github.com/apache/hadoop/pull/1420
 
 
   





[GitHub] [hadoop] bshashikant closed pull request #1420: HDDS-2032. Ozone client should retry writes in case of any ratis/stateMachine exceptions.

2019-09-17 Thread GitBox
bshashikant closed pull request #1420: HDDS-2032. Ozone client should retry 
writes in case of any ratis/stateMachine exceptions.
URL: https://github.com/apache/hadoop/pull/1420
 
 
   





[GitHub] [hadoop] mukul1987 commented on issue #1420: HDDS-2032. Ozone client should retry writes in case of any ratis/stateMachine exceptions.

2019-09-17 Thread GitBox
mukul1987 commented on issue #1420: HDDS-2032. Ozone client should retry writes 
in case of any ratis/stateMachine exceptions.
URL: https://github.com/apache/hadoop/pull/1420#issuecomment-532278299
 
 
   Thanks for working on this @bshashikant; there are some conflicts with this 
patch. Can you please rebase?





[GitHub] [hadoop] bgaborg commented on issue #1454: HADOOP-16565. Region must be provided when requesting session credentials or SdkClientException will be thrown

2019-09-17 Thread GitBox
bgaborg commented on issue #1454: HADOOP-16565. Region must be provided when 
requesting session credentials or SdkClientException will be thrown
URL: https://github.com/apache/hadoop/pull/1454#issuecomment-532275224
 
 
   Tested against eu-west-1; no errors besides what we already had (testMR).
   Thanks for +1ing this @steveloughran; rebased to trunk and committing.




