[jira] [Commented] (HADOOP-16026) Replace incorrect use of system property user.name

2019-01-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736849#comment-16736849
 ] 

Hadoop QA commented on HADOOP-16026:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 22 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
23m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 40s{color} | {color:orange} root: The patch generated 9 new + 928 unchanged 
- 10 fixed = 937 total (was 938) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 24s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
4s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
49s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
34s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
43s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
49s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:b

[jira] [Commented] (HADOOP-15938) [JDK 11] hadoop-annotations build fails with 'Failed to check signatures'

2019-01-07 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736841#comment-16736841
 ] 

Akira Ajisaka commented on HADOOP-15938:


Using HEAD (compiled locally with the version set to 1.18-SNAPSHOT) or upgrading 
the plugin dependency to ASM 7.0 did not help:
{noformat}
[ERROR] Failed to execute goal 
org.codehaus.mojo:animal-sniffer-maven-plugin:1.18-SNAPSHOT:check 
(signature-check) on project hadoop-annotations: Execution signature-check of 
goal org.codehaus.mojo:animal-sniffer-maven-plugin:1.18-SNAPSHOT:check failed: 
This feature requires ASM7 -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.codehaus.mojo:animal-sniffer-maven-plugin:1.18-SNAPSHOT:check 
(signature-check) on project hadoop-annotations: Execution signature-check of 
goal org.codehaus.mojo:animal-sniffer-maven-plugin:1.18-SNAPSHOT:check failed: 
This feature requires ASM7
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:213)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:154)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:146)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:81)
at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build
 (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute 
(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:954)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:192)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke 
(NativeMethodAccessorImpl.java:62)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke 
(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:566)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced 
(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch 
(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode 
(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main 
(Launcher.java:356)
Caused by: org.apache.maven.plugin.PluginExecutionException: Execution 
signature-check of goal 
org.codehaus.mojo:animal-sniffer-maven-plugin:1.18-SNAPSHOT:check failed: This 
feature requires ASM7
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo 
(DefaultBuildPluginManager.java:148)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:208)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:154)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:146)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:81)
at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build
 (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute 
(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:954)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:192)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke 
(NativeMethodAccessorImpl.java:62)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke 
(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:566)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced 
(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch 
(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode 
(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main 
(Launcher.java:356)
Caused by: java.lang.UnsupportedOperati

[jira] [Commented] (HADOOP-15938) [JDK 11] hadoop-annotations build fails with 'Failed to check signatures'

2019-01-07 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736833#comment-16736833
 ] 

Akira Ajisaka commented on HADOOP-15938:


I could reproduce the error with {{mvn install -DskipTests -Djavac.version=11}}.
After applying your patch, the command still fails:
{noformat}
[ERROR] Failed to execute goal 
org.codehaus.mojo:animal-sniffer-maven-plugin:1.17:check (signature-check) on 
project hadoop-annotations: Execution signature-check of goal 
org.codehaus.mojo:animal-sniffer-maven-plugin:1.17:check failed.: 
UnsupportedOperationException -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.codehaus.mojo:animal-sniffer-maven-plugin:1.17:check (signature-check) on 
project hadoop-annotations: Execution signature-check of goal 
org.codehaus.mojo:animal-sniffer-maven-plugin:1.17:check failed.
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:213)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:154)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:146)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:81)
at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build
 (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute 
(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:954)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:192)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke 
(NativeMethodAccessorImpl.java:62)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke 
(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:566)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced 
(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch 
(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode 
(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main 
(Launcher.java:356)
Caused by: org.apache.maven.plugin.PluginExecutionException: Execution 
signature-check of goal 
org.codehaus.mojo:animal-sniffer-maven-plugin:1.17:check failed.
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo 
(DefaultBuildPluginManager.java:148)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:208)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:154)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:146)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:81)
at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build
 (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute 
(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:954)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:192)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke 
(NativeMethodAccessorImpl.java:62)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke 
(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:566)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced 
(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch 
(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode 
(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main 
(Launcher.java:356)
Caused by: java.lang.UnsupportedOperationException
at org.objectweb.asm.ClassVisitor.visitNestMemberExperimental 
(ClassVisitor.

[jira] [Commented] (HADOOP-15941) [JDK 11] Compilation failure: package com.sun.jndi.ldap is not visible

2019-01-07 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736801#comment-16736801
 ] 

Akira Ajisaka commented on HADOOP-15941:


In my environment, {{mvn install -DskipTests}} succeeded and {{mvn 
javadoc:javadoc}} failed.

I would prefer adding the option "--add-exports 
java.naming/com.sun.jndi.ldap=ALL-UNNAMED" when running javadoc via 
maven-javadoc-plugin rather than changing a public variable. Changing the 
source code only to make javadoc pass seems like overkill.

It also seems strange that compilation does not require the --add-exports 
option while javadoc does; ideally javadoc would not require the option either.
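
For illustration only, one possible way such an option could be wired in (a 
sketch assuming maven-javadoc-plugin 3.x and its {{additionalOptions}} 
parameter; not the actual change being proposed here):
{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <configuration>
    <additionalOptions>
      <!-- Let the javadoc tool see com.sun.jndi.ldap on JDK 11 -->
      <additionalOption>--add-exports java.naming/com.sun.jndi.ldap=ALL-UNNAMED</additionalOption>
    </additionalOptions>
  </configuration>
</plugin>
{code}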

> [JDK 11] Compilation failure: package com.sun.jndi.ldap is not visible
> --
>
> Key: HADOOP-15941
> URL: https://issues.apache.org/jira/browse/HADOOP-15941
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Uma Maheswara Rao G
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15941.1.patch
>
>
> With JDK 11: Compilation failed because package com.sun.jndi.ldap is not 
> visible.
>  
> {noformat}
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-compiler-plugin:3.8.0:compile 
> (default-compile) on project hadoop-common: Compilation failure
> /C:/Users/umgangum/Work/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java:[545,23]
>  package com.sun.jndi.ldap is not visible
>  (package com.sun.jndi.ldap is declared in module java.naming, which does not 
> export it){noformat}
>  
>  






[jira] [Commented] (HADOOP-14556) S3A to support Delegation Tokens

2019-01-07 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736777#comment-16736777
 ] 

Hudson commented on HADOOP-14556:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15741 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15741/])
Revert "HADOOP-14556. S3A to support Delegation Tokens." (aajisaka: rev 
7f783970364930cc461d1a73833bc58cdd10553e)
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionSSECBlockOutputStream.java
* (delete) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/NoAwsCredentialsException.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/RoleTestUtils.java
* (delete) 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/delegation_token_architecture.md
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/SimpleAWSCredentialsProvider.java
* (delete) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/ITestRoleDelegationTokens.java
* (delete) 
hadoop-tools/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.security.token.DtFetcher
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageStatistics.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestSSEConfiguration.java
* (delete) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/AbstractSessionCredentialsProvider.java
* (delete) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/EncryptionSecrets.java
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
* (delete) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/Csvout.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/AbstractITCommitMRJob.java
* (delete) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/SessionTokenBinding.java
* (delete) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/ITestSessionDelegationInFileystem.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/TemporaryAWSCredentialsProvider.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/staging/TestStagingCommitter.java
* (edit) hadoop-tools/hadoop-aws/pom.xml
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/DurationInfo.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/RoleModel.java
* (delete) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/FullCredentialsTokenIdentifier.java
* (delete) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/ILoadTestSessionCredentials.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java
* (edit) 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/assumed_roles.md
* (delete) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/RoleTokenIdentifier.java
* (delete) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/mapreduce/MockJob.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumedRoleCommitOperations.java
* (delete) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/MarshalledCredentials.java
* (edit) hadoop-project/pom.xml
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java
* (delete) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/DelegationTokenIOException.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/staging/TestStagingPartitionedJobCommit.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DirListingMetadata.java
* (delete) 
hadoop-tools/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/yarn/ITestS3AMiniYarnCluster.java
* (edit) 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpOptions.java
* (delete) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/FullCredentialsTokenBinding.java
* (edit) 
hadoop-too

[jira] [Reopened] (HADOOP-14556) S3A to support Delegation Tokens

2019-01-07 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HADOOP-14556:


Reverted.

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556-015.patch, HADOOP-14556-016.patch, HADOOP-14556-017.patch, 
> HADOOP-14556-018a.patch, HADOOP-14556-019.patch, HADOOP-14556-020.patch, 
> HADOOP-14556-021.patch, HADOOP-14556-022.patch, HADOOP-14556-023.patch, 
> HADOOP-14556-024.patch, HADOOP-14556-025.patch, HADOOP-14556-026.patch, 
> HADOOP-14556-027.patch, HADOOP-14556.oath-002.patch, HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.






[jira] [Updated] (HADOOP-14556) S3A to support Delegation Tokens

2019-01-07 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14556:
---
Fix Version/s: (was: 3.3.0)

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556-015.patch, HADOOP-14556-016.patch, HADOOP-14556-017.patch, 
> HADOOP-14556-018a.patch, HADOOP-14556-019.patch, HADOOP-14556-020.patch, 
> HADOOP-14556-021.patch, HADOOP-14556-022.patch, HADOOP-14556-023.patch, 
> HADOOP-14556-024.patch, HADOOP-14556-025.patch, HADOOP-14556-026.patch, 
> HADOOP-14556-027.patch, HADOOP-14556.oath-002.patch, HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.






[jira] [Commented] (HADOOP-14556) S3A to support Delegation Tokens

2019-01-07 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736752#comment-16736752
 ] 

Akira Ajisaka commented on HADOOP-14556:


In addition to Kai's comment, this commit caused javadoc errors.
{noformat:title=AbstractS3ATokenIdentifier.java}
 * Kind => class, which is then looked up to deserialize token
{noformat}
{noformat:title=AbstractS3ATokenIdentifier.java}
   * catch & downgrade. RuntimeExceptions (e.g. Preconditions checks) are
{noformat}
{noformat:title=AbstractDelegationTokenBinding.java}
   * This is logged during after service start & binding:
{noformat}
{{&}} and {{>}} must be escaped in javadoc.
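
For reference, a hedged illustration (not the actual patch) of how such 
comment lines could be written with HTML entities so javadoc accepts them:
{code:java}
/**
 * Kind =&gt; class, which is then looked up to deserialize the token.
 * This is logged during/after service start &amp; binding.
 */
public class EscapedJavadocExample {
  // The class body is irrelevant; only the escaped entities above matter.
}
{code}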

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556-015.patch, HADOOP-14556-016.patch, HADOOP-14556-017.patch, 
> HADOOP-14556-018a.patch, HADOOP-14556-019.patch, HADOOP-14556-020.patch, 
> HADOOP-14556-021.patch, HADOOP-14556-022.patch, HADOOP-14556-023.patch, 
> HADOOP-14556-024.patch, HADOOP-14556-025.patch, HADOOP-14556-026.patch, 
> HADOOP-14556-027.patch, HADOOP-14556.oath-002.patch, HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.






[jira] [Updated] (HADOOP-16031) TestSecureLogins#testValidKerberosName fails

2019-01-07 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HADOOP-16031:

Fix Version/s: (was: 3.2.1)
   3.2.0

> TestSecureLogins#testValidKerberosName fails
> 
>
> Key: HADOOP-16031
> URL: https://issues.apache.org/jira/browse/HADOOP-16031
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.2.0, 3.3.0, 3.1.3
>
> Attachments: HADOOP-16031.01.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 2.724 s <<< FAILURE! - in org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] 
> testValidKerberosName(org.apache.hadoop.registry.secure.TestSecureLogins)  
> Time elapsed: 0.01 s  <<< ERROR!
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to zookeeper/localhost
>   at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:429)
>   at 
> org.apache.hadoop.registry.secure.TestSecureLogins.testValidKerberosName(TestSecureLogins.java:203)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
> {noformat}






[jira] [Updated] (HADOOP-15996) Plugin interface to support more complex usernames in Hadoop

2019-01-07 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HADOOP-15996:

Fix Version/s: (was: 3.2.1)
   3.2.0

> Plugin interface to support more complex usernames in Hadoop
> 
>
> Key: HADOOP-15996
> URL: https://issues.apache.org/jira/browse/HADOOP-15996
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Eric Yang
>Assignee: Bolke de Bruin
>Priority: Major
> Fix For: 3.2.0, 3.3.0, 3.1.2
>
> Attachments: 0001-HADOOP-15996-Make-auth-to-local-configurable.patch, 
> 0001-Make-allowing-or-configurable.patch, 
> 0001-Simple-trial-of-using-krb5.conf-for-auth_to_local-ru.patch, 
> 0002-HADOOP-15996-Make-auth-to-local-configurable.patch, 
> 0003-HADOOP-15996-Make-auth-to-local-configurable.patch, 
> 0004-HADOOP-15996-Make-auth-to-local-configurable.patch, 
> 0005-HADOOP-15996-Make-auth-to-local-configurable.patch, 
> HADOOP-15996.0005.patch, HADOOP-15996.0006.patch, HADOOP-15996.0007.patch, 
> HADOOP-15996.0008.patch, HADOOP-15996.0009.patch, HADOOP-15996.0010.patch, 
> HADOOP-15996.0011.patch, HADOOP-15996.0012.patch
>
>
> Hadoop does not allow support of @ character in username in recent security 
> mailing list vote to revert HADOOP-12751.  Hadoop auth_to_local rule must 
> match to authorize user to login to Hadoop cluster.  This design does not 
> work well in multi-realm environment where identical username between two 
> realms do not map to the same user.  There is also possibility that lossy 
> regex can incorrectly map users.  In the interest of supporting multi-realms, 
> it maybe preferred to pass principal name without rewrite to uniquely 
> distinguish users.  This jira is to revisit if Hadoop can support full 
> principal names without rewrite and provide a plugin to override Hadoop's 
> default implementation of auth_to_local for multi-realm use case.






[jira] [Updated] (HADOOP-16026) Replace incorrect use of system property user.name

2019-01-07 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HADOOP-16026:
---
Attachment: HADOOP-16026.01.patch
Status: Patch Available  (was: Open)

[~jojochuang] Attached patch 01 for your review. When I made changes to files 
in the common package, I had to make related changes in other modules to keep 
the compilation and build clean.
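
For context, a minimal sketch of the kind of substitution this jira describes, 
assuming the replacement is Hadoop's {{UserGroupInformation}}; the helper class 
below is hypothetical and not taken from the patch:
{code:java}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

/** Hypothetical helper illustrating the substitution; not part of the patch. */
public final class EffectiveUserExample {
  private EffectiveUserExample() {}

  static String effectiveUser() throws IOException {
    // In a Kerberized cluster the JVM property reflects only the OS login,
    // not the authenticated principal:
    //   String user = System.getProperty("user.name");   // incorrect here
    // The UGI short name tracks the actual Hadoop identity instead.
    return UserGroupInformation.getCurrentUser().getShortUserName();
  }
}
{code}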

> Replace incorrect use of system property user.name
> --
>
> Key: HADOOP-16026
> URL: https://issues.apache.org/jira/browse/HADOOP-16026
> Project: Hadoop Common
>  Issue Type: Improvement
> Environment: Kerberized
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HADOOP-16026.01.patch
>
>
> This jira has been created to track the suggested changes for Hadoop Common 
> as identified in HDFS-14176
> Following occurrence need to be corrected:
> Common/FileSystem L2233
> Common/AbstractFileSystem L451
> Common/KMSWebApp L91
> Common/SFTPConnectionPool L146
> Common/SshFenceByTcpPort L239






[jira] [Commented] (HADOOP-14556) S3A to support Delegation Tokens

2019-01-07 Thread Kai Xie (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736638#comment-16736638
 ] 

Kai Xie commented on HADOOP-14556:
--

Hi [~ste...@apache.org]

It seems the commit contains the distcp patch from HADOOP-16018.

Could you help revert that? Thanks.

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556-015.patch, HADOOP-14556-016.patch, HADOOP-14556-017.patch, 
> HADOOP-14556-018a.patch, HADOOP-14556-019.patch, HADOOP-14556-020.patch, 
> HADOOP-14556-021.patch, HADOOP-14556-022.patch, HADOOP-14556-023.patch, 
> HADOOP-14556-024.patch, HADOOP-14556-025.patch, HADOOP-14556-026.patch, 
> HADOOP-14556-027.patch, HADOOP-14556.oath-002.patch, HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.






[jira] [Updated] (HADOOP-15996) Plugin interface to support more complex usernames in Hadoop

2019-01-07 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HADOOP-15996:

Fix Version/s: 3.2.1

> Plugin interface to support more complex usernames in Hadoop
> 
>
> Key: HADOOP-15996
> URL: https://issues.apache.org/jira/browse/HADOOP-15996
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Eric Yang
>Assignee: Bolke de Bruin
>Priority: Major
> Fix For: 3.3.0, 3.1.2, 3.2.1
>
> Attachments: 0001-HADOOP-15996-Make-auth-to-local-configurable.patch, 
> 0001-Make-allowing-or-configurable.patch, 
> 0001-Simple-trial-of-using-krb5.conf-for-auth_to_local-ru.patch, 
> 0002-HADOOP-15996-Make-auth-to-local-configurable.patch, 
> 0003-HADOOP-15996-Make-auth-to-local-configurable.patch, 
> 0004-HADOOP-15996-Make-auth-to-local-configurable.patch, 
> 0005-HADOOP-15996-Make-auth-to-local-configurable.patch, 
> HADOOP-15996.0005.patch, HADOOP-15996.0006.patch, HADOOP-15996.0007.patch, 
> HADOOP-15996.0008.patch, HADOOP-15996.0009.patch, HADOOP-15996.0010.patch, 
> HADOOP-15996.0011.patch, HADOOP-15996.0012.patch
>
>
> Hadoop does not allow support of @ character in username in recent security 
> mailing list vote to revert HADOOP-12751.  Hadoop auth_to_local rule must 
> match to authorize user to login to Hadoop cluster.  This design does not 
> work well in multi-realm environment where identical username between two 
> realms do not map to the same user.  There is also possibility that lossy 
> regex can incorrectly map users.  In the interest of supporting multi-realms, 
> it maybe preferred to pass principal name without rewrite to uniquely 
> distinguish users.  This jira is to revisit if Hadoop can support full 
> principal names without rewrite and provide a plugin to override Hadoop's 
> default implementation of auth_to_local for multi-realm use case.






[jira] [Updated] (HADOOP-15996) Plugin interface to support more complex usernames in Hadoop

2019-01-07 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HADOOP-15996:

Fix Version/s: (was: 3.2.0)

> Plugin interface to support more complex usernames in Hadoop
> 
>
> Key: HADOOP-15996
> URL: https://issues.apache.org/jira/browse/HADOOP-15996
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Eric Yang
>Assignee: Bolke de Bruin
>Priority: Major
> Fix For: 3.3.0, 3.1.2
>
> Attachments: 0001-HADOOP-15996-Make-auth-to-local-configurable.patch, 
> 0001-Make-allowing-or-configurable.patch, 
> 0001-Simple-trial-of-using-krb5.conf-for-auth_to_local-ru.patch, 
> 0002-HADOOP-15996-Make-auth-to-local-configurable.patch, 
> 0003-HADOOP-15996-Make-auth-to-local-configurable.patch, 
> 0004-HADOOP-15996-Make-auth-to-local-configurable.patch, 
> 0005-HADOOP-15996-Make-auth-to-local-configurable.patch, 
> HADOOP-15996.0005.patch, HADOOP-15996.0006.patch, HADOOP-15996.0007.patch, 
> HADOOP-15996.0008.patch, HADOOP-15996.0009.patch, HADOOP-15996.0010.patch, 
> HADOOP-15996.0011.patch, HADOOP-15996.0012.patch
>
>
> Hadoop does not allow support of @ character in username in recent security 
> mailing list vote to revert HADOOP-12751.  Hadoop auth_to_local rule must 
> match to authorize user to login to Hadoop cluster.  This design does not 
> work well in multi-realm environment where identical username between two 
> realms do not map to the same user.  There is also possibility that lossy 
> regex can incorrectly map users.  In the interest of supporting multi-realms, 
> it maybe preferred to pass principal name without rewrite to uniquely 
> distinguish users.  This jira is to revisit if Hadoop can support full 
> principal names without rewrite and provide a plugin to override Hadoop's 
> default implementation of auth_to_local for multi-realm use case.






[jira] [Commented] (HADOOP-16031) TestSecureLogins#testValidKerberosName fails

2019-01-07 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736484#comment-16736484
 ] 

Akira Ajisaka commented on HADOOP-16031:


Thank you, [~eyang].

> TestSecureLogins#testValidKerberosName fails
> 
>
> Key: HADOOP-16031
> URL: https://issues.apache.org/jira/browse/HADOOP-16031
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16031.01.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 2.724 s <<< FAILURE! - in org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] 
> testValidKerberosName(org.apache.hadoop.registry.secure.TestSecureLogins)  
> Time elapsed: 0.01 s  <<< ERROR!
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to zookeeper/localhost
>   at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:429)
>   at 
> org.apache.hadoop.registry.secure.TestSecureLogins.testValidKerberosName(TestSecureLogins.java:203)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
> {noformat}






[jira] [Updated] (HADOOP-16026) Replace incorrect use of system property user.name

2019-01-07 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HADOOP-16026:
---
Description: 
This jira has been created to track the suggested changes for Hadoop Common as 
identified in HDFS-14176

Following occurrence need to be corrected:
Common/FileSystem L2233
Common/AbstractFileSystem L451
Common/KMSWebApp L91
Common/SFTPConnectionPool L146
Common/SshFenceByTcpPort L239

  was:
This jira has been created to track the suggested changes for Hadoop Common as 
identified in HDFS-14176

Following occurrence need to be corrected:
Common/PseudoAuthenticator L85
Common/FileSystem L2233
Common/AbstractFileSystem L451
Common/KMSWebApp L91
Common/SFTPConnectionPool L146
Common/SshFenceByTcpPort L239


> Replace incorrect use of system property user.name
> --
>
> Key: HADOOP-16026
> URL: https://issues.apache.org/jira/browse/HADOOP-16026
> Project: Hadoop Common
>  Issue Type: Improvement
> Environment: Kerberized
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> This jira has been created to track the suggested changes for Hadoop Common 
> as identified in HDFS-14176
> Following occurrence need to be corrected:
> Common/FileSystem L2233
> Common/AbstractFileSystem L451
> Common/KMSWebApp L91
> Common/SFTPConnectionPool L146
> Common/SshFenceByTcpPort L239






[jira] [Commented] (HADOOP-15938) [JDK 11] hadoop-annotations build fails with 'Failed to check signatures'

2019-01-07 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736359#comment-16736359
 ] 

Dinesh Chitlangia commented on HADOOP-15938:


[~ajisakaa] Could you review this, please? Thank you!

> [JDK 11] hadoop-annotations build fails with 'Failed to check signatures'
> -
>
> Key: HADOOP-15938
> URL: https://issues.apache.org/jira/browse/HADOOP-15938
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 0.3.2
> Environment: openjdk version "11" 2018-09-25
>Reporter: Devaraj K
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HADOOP-15938.001.patch
>
>
> {code:xml}
> [INFO] Checking unresolved references to 
> org.codehaus.mojo.signature:java18:1.0
> [ERROR] Bad class file 
> /hadoop/hadoop-common-project/hadoop-annotations/target/classes/org/apache/hadoop/classification/InterfaceAudience.class
> {code}
> {code:xml}
> [ERROR] Failed to execute goal 
> org.codehaus.mojo:animal-sniffer-maven-plugin:1.16:check (signature-check) on 
> project hadoop-annotations: Failed to check signatures: Bad class file 
> /hadoop/hadoop-common-project/hadoop-annotations/target/classes/org/apache/hadoop/classification/InterfaceAudience.class:
>  IllegalArgumentException -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.codehaus.mojo:animal-sniffer-maven-plugin:1.16:check 
> (signature-check) on project hadoop-annotations: Failed to check signatures
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
> (MojoExecutor.java:213)
> {code}
> {code:xml}
> Caused by: java.io.IOException: Bad class file 
> /hadoop/hadoop-common-project/hadoop-annotations/target/classes/org/apache/hadoop/classification/InterfaceAudience.class
> at org.codehaus.mojo.animal_sniffer.ClassListBuilder.process 
> (ClassListBuilder.java:91)
> {code}
> {code:xml}
> Caused by: java.lang.IllegalArgumentException
> at org.objectweb.asm.ClassReader. (Unknown Source)
> at org.objectweb.asm.ClassReader. (Unknown Source)
> at org.objectweb.asm.ClassReader. (Unknown Source)
> at org.codehaus.mojo.animal_sniffer.ClassListBuilder.process 
> (ClassListBuilder.java:69)
> {code}






[jira] [Commented] (HADOOP-15954) ABFS: Enable owner and group conversion for MSI and login user using OAuth

2019-01-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736330#comment-16736330
 ] 

Hadoop QA commented on HADOOP-15954:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} hadoop-tools/hadoop-azure: The patch generated 0 new 
+ 2 unchanged - 1 fixed = 2 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
9s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15954 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12954042/HADOOP-15954-007.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 452bc201ae5b 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 06279ec |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15743/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15743/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.or

[jira] [Commented] (HADOOP-16023) Support system /etc/krb5.conf for auth_to_local rules

2019-01-07 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736328#comment-16736328
 ] 

Eric Yang commented on HADOOP-16023:


[~bolke] Some Apache projects use dual-licensed libraries (e.g. HBase uses 
jruby). Most of the time this is not an issue; however, the com.sun.jna 
package name is an area of concern. The [Oracle and Google 
lawsuit|https://en.wikipedia.org/wiki/Oracle_America,_Inc._v._Google,_Inc.] is 
a related example in which Oracle might seek damages for non-fair use of Java 
API packages. There is a small risk in using the jna library.

> Support system /etc/krb5.conf for auth_to_local rules
> -
>
> Key: HADOOP-16023
> URL: https://issues.apache.org/jira/browse/HADOOP-16023
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bolke de Bruin
>Assignee: Bolke de Bruin
>Priority: Major
>  Labels: security
>
> Hadoop has long maintained its own configuration for Kerberos' auth_to_local 
> rules. To the user this is counter intuitive and increases the complexity of 
> maintaining a secure system as the normal way of configuring these 
> auth_to_local rules is done in the site wide krb5.conf usually /etc/krb5.conf.
> With HADOOP-15996 there is now support for configuring how Hadoop should 
> evaluate auth_to_local rules. A "system" mechanism should be added. 
> It should be investigated how to properly parse krb5.conf. JDK seems to be 
> lacking as it is unable to obtain auth_to_local rules due to a bug in its 
> parser. Apache Kerby has an implementation that could be used. A native (C) 
> version is also a possibility. 






[jira] [Commented] (HADOOP-15954) ABFS: Enable owner and group conversion for MSI and login user using OAuth

2019-01-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736308#comment-16736308
 ] 

Hadoop QA commented on HADOOP-15954:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} hadoop-tools/hadoop-azure: The patch generated 0 new 
+ 2 unchanged - 1 fixed = 2 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
13s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15954 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12954038/HADOOP-15954-007.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5ef785ddb5a8 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 06279ec |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15742/testReport/ |
| Max. process+thread count | 294 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15742/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org

[jira] [Commented] (HADOOP-15986) Allowing files to be moved between encryption zones having the same encryption key

2019-01-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736242#comment-16736242
 ] 

Hadoop QA commented on HADOOP-15986:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}162m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.cli.TestCryptoAdminCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15986 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12954024/HADOOP-15986.000.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 9f49c20e7545 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d715233 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15741/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15741/testReport/ |
|

[jira] [Commented] (HADOOP-15954) ABFS: Enable owner and group conversion for MSI and login user using OAuth

2019-01-07 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736247#comment-16736247
 ] 

Da Zhou commented on HADOOP-15954:
--

Thanks for the review.
Attaching 007 patch:
- Removed the unnecessary replacement for $superuser
- updated the DefaultSPIdentityTransformer so it is enabled only when 
isSecurityEnabled is true.
- replaced the log.error() with log.warn() in 
DefaultSPIdentityTransformer.initialize() when fetching name/group from 
userGroupInformation.
-  I see that "Locale.ENGLISH" is used in KerberosName.java, so I updated 
toLowerCase() to toLowerCase(Locale.ENGLISH).
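
As an aside, a tiny illustration (not part of the patch) of why the explicit 
locale matters:
{code:java}
import java.util.Locale;

public class LowerCaseLocaleDemo {
  public static void main(String[] args) {
    // Under a Turkish default locale, "I".toLowerCase() yields a dotless 'i'
    // (U+0131), which would break case-insensitive matching of principal names.
    String principal = "HDFS/HOST@EXAMPLE.COM";
    String localeDependent = principal.toLowerCase();       // varies with default locale
    String stable = principal.toLowerCase(Locale.ENGLISH);  // always "hdfs/host@example.com"
    System.out.println(localeDependent + " vs " + stable);
  }
}
{code}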

All tests passed:
non-XNS account, ShareKey: 
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
Tests run: 316, Failures: 0, Errors: 0, Skipped: 205
Tests run: 165, Failures: 0, Errors: 0, Skipped: 15

XNS account, sharedKey:
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
Tests run: 316, Failures: 0, Errors: 0, Skipped: 19
Tests run: 165, Failures: 0, Errors: 0, Skipped: 15

XNS account, OAuth:
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
Tests run: 316, Failures: 0, Errors: 0, Skipped: 20
Tests run: 165, Failures: 0, Errors: 0, Skipped: 21

> ABFS: Enable owner and group conversion for MSI and login user using OAuth
> --
>
> Key: HADOOP-15954
> URL: https://issues.apache.org/jira/browse/HADOOP-15954
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: junhua gu
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15954-001.patch, HADOOP-15954-002.patch, 
> HADOOP-15954-003.patch, HADOOP-15954-004.patch, HADOOP-15954-005.patch, 
> HADOOP-15954-006.patch, HADOOP-15954-007.patch
>
>
> Add support for overwriting owner and group in set/get operations to be the 
> service principal id when OAuth is used. Add support for the UPN short name 
> format.
>  
> Add a standard transformer for SharedKey / Service. 
> Add an interface that provides an extensible model for customizing the 
> acquisition of the Identity Transformer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16022) Increase Compression Buffer Sizes - Remove Magic Numbers

2019-01-07 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736256#comment-16736256
 ] 

BELUGA BEHR commented on HADOOP-16022:
--

[~ste...@apache.org] Thanks Steve for the interest.

I looked at the test failures and found the entire setup a bit wonky.

In particular... 
[Here|https://github.com/apache/hadoop/blob/7b57f2f71fbaa5af4897309597cca70a95b04edd/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/file/tfile/TFile.java#L659]

 
{code:java|title=TFile.java}
void finishDataBlock(boolean bForceFinish) throws IOException {
...
// exceeded the size limit, do the compression and finish the block
if (bForceFinish || blkAppender.getCompressedSize() >= sizeMinBlock) {
...

{code}
As I understand it:

The general flow of this code is that a bunch of small records are serialized 
into bytes and written out to a stream. After a certain threshold of bytes from 
the stream have been compressed, the stream is stopped, flushed, and written 
out as a single block. I believe the current logic is a bit flawed because, as 
we can see here, the block size is based on the size of the compressed bytes and 
not the total number of bytes written into the stream.

What is happening here is that as the bytes are written to the stream, they are 
first buffered (into the {{BufferedInputStream}} I touched) before being passed 
to the compressor. The bytes only make it to the compressor once the buffer has 
filled and been forced to flush.

So, in the current implementation, 4K bytes are written to the 
{{BufferedInputStream}}, the buffer is flushed, the bytes compressed, the 
compressed size reported by {{getCompressedSize()}}, and then flushed out as a 
block. When I changed the buffer to 8K, twice the amount of data is buffered 
before compression and written to each block. This is confusing to say the 
least: the number of blocks written out is dependent on the arbitrary size of 
the {{BufferedInputStream}} returned by the {{Compression}} class, which makes 
the behavior hard to test. The person crafting the unit test must know how big 
this internal, non-configurable write buffer is in order to write an effective 
test. Also, if we use the default JDK buffer size (as recommended), these tests 
may fail depending on the JDK implementation. I think it is better to change the 
code to form blocks based on the number of raw bytes written into the stream, 
not the number of bytes in its compressed form. In this way, writing {{n}} bytes 
will always yield {{y}} blocks, no matter how big the write buffer is.
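
A rough sketch of the direction I have in mind, assuming the block appender 
exposes (or could expose) a raw-byte counter alongside the compressed one; the 
method name below is illustrative, not the actual TFile internals:
{code:java|title=TFile.java (sketch)}
void finishDataBlock(boolean bForceFinish) throws IOException {
...
// proposed: compare raw (uncompressed) bytes appended so far, not the
// compressed size, so block boundaries do not depend on the write buffer
if (bForceFinish || blkAppender.getRawSize() >= sizeMinBlock) {
...
{code}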

 

Thoughts?

> Increase Compression Buffer Sizes - Remove Magic Numbers
> 
>
> Key: HADOOP-16022
> URL: https://issues.apache.org/jira/browse/HADOOP-16022
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.10.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-16022.1.patch
>
>
> {code:java|title=Compression.java}
> // data input buffer size to absorb small reads from application.
> private static final int DATA_IBUF_SIZE = 1 * 1024;
> // data output buffer size to absorb small writes from application.
> private static final int DATA_OBUF_SIZE = 4 * 1024;
> {code}
> These hard-coded buffer sizes exist in the Compression code.  Instead, use 
> the JVM default sizes, which these days are usually 8K.
>  
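
A minimal sketch of the suggested direction (assuming plain java.io buffered 
streams; this is not the attached patch): dropping the explicit size argument 
falls back to the JDK default of 8192 bytes.
{code:java}
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

class BufferSizeSketch {
  // Instead of hard-coding 1K / 4K buffers, let the JDK pick its default (8 KiB).
  static OutputStream wrapOutput(OutputStream raw) {
    return new BufferedOutputStream(raw);   // default buffer size: 8192 bytes
  }

  static InputStream wrapInput(InputStream raw) {
    return new BufferedInputStream(raw);    // default buffer size: 8192 bytes
  }
}
{code}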



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15954) ABFS: Enable owner and group conversion for MSI and login user using OAuth

2019-01-07 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15954:
-
Attachment: HADOOP-15954-007.patch

> ABFS: Enable owner and group conversion for MSI and login user using OAuth
> --
>
> Key: HADOOP-15954
> URL: https://issues.apache.org/jira/browse/HADOOP-15954
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: junhua gu
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15954-001.patch, HADOOP-15954-002.patch, 
> HADOOP-15954-003.patch, HADOOP-15954-004.patch, HADOOP-15954-005.patch, 
> HADOOP-15954-006.patch, HADOOP-15954-007.patch
>
>
> Add support for overwriting owner and group in set/get operations to be the 
> service principal id when OAuth is used. Add support for the UPN short name 
> format.
>  
> Add a standard transformer for SharedKey / Service. 
> Add an interface that provides an extensible model for customizing the 
> acquisition of the Identity Transformer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15229) Add FileSystem builder-based openFile() API to match createFile() + S3 Select

2019-01-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736240#comment-16736240
 ] 

Hadoop QA commented on HADOOP-15229:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 24 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
14s{color} | {color:green} root generated 0 new + 1487 unchanged - 2 fixed = 
1487 total (was 1489) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 38s{color} | {color:orange} root: The patch generated 16 new + 1094 
unchanged - 3 fixed = 1110 total (was 1097) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 55s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
48s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 20s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
37s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 22s{color} 
| {color:red} hadoop-streaming in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
36s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}228m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason ||

[jira] [Updated] (HADOOP-15954) ABFS: Enable owner and group conversion for MSI and login user using OAuth

2019-01-07 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15954:
-
Attachment: (was: HADOOP-15954-007.patch)

> ABFS: Enable owner and group conversion for MSI and login user using OAuth
> --
>
> Key: HADOOP-15954
> URL: https://issues.apache.org/jira/browse/HADOOP-15954
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: junhua gu
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15954-001.patch, HADOOP-15954-002.patch, 
> HADOOP-15954-003.patch, HADOOP-15954-004.patch, HADOOP-15954-005.patch, 
> HADOOP-15954-006.patch
>
>
> Add support for overwriting owner and group in set/get operations to be the 
> service principal id when OAuth is used. Add support for the UPN short name 
> format.
>  
> Add a standard transformer for SharedKey / Service. 
> Add an interface that provides an extensible model for customizing the 
> acquisition of the Identity Transformer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15954) ABFS: Enable owner and group conversion for MSI and login user using OAuth

2019-01-07 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15954:
-
Attachment: HADOOP-15954-007.patch

> ABFS: Enable owner and group conversion for MSI and login user using OAuth
> --
>
> Key: HADOOP-15954
> URL: https://issues.apache.org/jira/browse/HADOOP-15954
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: junhua gu
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15954-001.patch, HADOOP-15954-002.patch, 
> HADOOP-15954-003.patch, HADOOP-15954-004.patch, HADOOP-15954-005.patch, 
> HADOOP-15954-006.patch, HADOOP-15954-007.patch
>
>
> Add support for overwriting owner and group in set/get operations to be the 
> service principal id when OAuth is used. Add support for the UPN short name 
> format.
>  
> Add a standard transformer for SharedKey / Service. 
> Add an interface that provides an extensible model for customizing the 
> acquisition of the Identity Transformer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16031) TestSecureLogins#testValidKerberosName fails

2019-01-07 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736175#comment-16736175
 ] 

Hudson commented on HADOOP-16031:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #15729 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15729/])
HADOOP-16031.  Fixed TestSecureLogins unit test.  Contributed by Akira (eyang: 
rev 802932ca0b10348634b5b5d7f8e7868ce93322f2)
* (edit) 
hadoop-common-project/hadoop-registry/src/test/java/org/apache/hadoop/registry/secure/TestSecureLogins.java


> TestSecureLogins#testValidKerberosName fails
> 
>
> Key: HADOOP-16031
> URL: https://issues.apache.org/jira/browse/HADOOP-16031
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16031.01.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 2.724 s <<< FAILURE! - in org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] 
> testValidKerberosName(org.apache.hadoop.registry.secure.TestSecureLogins)  
> Time elapsed: 0.01 s  <<< ERROR!
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to zookeeper/localhost
>   at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:429)
>   at 
> org.apache.hadoop.registry.secure.TestSecureLogins.testValidKerberosName(TestSecureLogins.java:203)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-15959) revert HADOOP-12751

2019-01-07 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang resolved HADOOP-15959.

Resolution: Fixed

The failed registry DNS unit test has been addressed in HADOOP-16031.  Hence, 
closing this as resolved again.

> revert HADOOP-12751
> ---
>
> Key: HADOOP-15959
> URL: https://issues.apache.org/jira/browse/HADOOP-15959
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.2.0, 3.1.1, 2.9.2, 3.0.3, 2.7.7, 2.8.5
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.2.0, 2.7.8, 3.0.4, 3.1.2, 2.8.6, 2.9.3
>
> Attachments: HADOOP-15959-001.patch, HADOOP-15959-branch-2-002.patch, 
> HADOOP-15959-branch-2.7-003.patch
>
>
> HADOOP-12751 doesn't quite work right. Revert.
> (this patch is so jenkins can do the test runs)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16031) TestSecureLogins#testValidKerberosName fails

2019-01-07 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736148#comment-16736148
 ] 

Eric Yang commented on HADOOP-16031:


[~ajisakaa] Thank you for the patch.  This has been committed to trunk, 
branch-3.2 and branch-3.1.

> TestSecureLogins#testValidKerberosName fails
> 
>
> Key: HADOOP-16031
> URL: https://issues.apache.org/jira/browse/HADOOP-16031
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16031.01.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 2.724 s <<< FAILURE! - in org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] 
> testValidKerberosName(org.apache.hadoop.registry.secure.TestSecureLogins)  
> Time elapsed: 0.01 s  <<< ERROR!
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to zookeeper/localhost
>   at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:429)
>   at 
> org.apache.hadoop.registry.secure.TestSecureLogins.testValidKerberosName(TestSecureLogins.java:203)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16031) TestSecureLogins#testValidKerberosName fails

2019-01-07 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16031:
---
   Resolution: Fixed
Fix Version/s: 3.2.1
   3.1.2
   3.3.0
   Status: Resolved  (was: Patch Available)

> TestSecureLogins#testValidKerberosName fails
> 
>
> Key: HADOOP-16031
> URL: https://issues.apache.org/jira/browse/HADOOP-16031
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-16031.01.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 2.724 s <<< FAILURE! - in org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] 
> testValidKerberosName(org.apache.hadoop.registry.secure.TestSecureLogins)  
> Time elapsed: 0.01 s  <<< ERROR!
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to zookeeper/localhost
>   at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:429)
>   at 
> org.apache.hadoop.registry.secure.TestSecureLogins.testValidKerberosName(TestSecureLogins.java:203)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16031) TestSecureLogins#testValidKerberosName fails

2019-01-07 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16031:
---
Fix Version/s: (was: 3.1.2)
   3.1.3

> TestSecureLogins#testValidKerberosName fails
> 
>
> Key: HADOOP-16031
> URL: https://issues.apache.org/jira/browse/HADOOP-16031
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16031.01.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 2.724 s <<< FAILURE! - in org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] 
> testValidKerberosName(org.apache.hadoop.registry.secure.TestSecureLogins)  
> Time elapsed: 0.01 s  <<< ERROR!
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to zookeeper/localhost
>   at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:429)
>   at 
> org.apache.hadoop.registry.secure.TestSecureLogins.testValidKerberosName(TestSecureLogins.java:203)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16031) TestSecureLogins#testValidKerberosName fails

2019-01-07 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736135#comment-16736135
 ] 

Eric Yang commented on HADOOP-16031:


+1 looks good to me.  Committing shortly.

> TestSecureLogins#testValidKerberosName fails
> 
>
> Key: HADOOP-16031
> URL: https://issues.apache.org/jira/browse/HADOOP-16031
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16031.01.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 2.724 s <<< FAILURE! - in org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] 
> testValidKerberosName(org.apache.hadoop.registry.secure.TestSecureLogins)  
> Time elapsed: 0.01 s  <<< ERROR!
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to zookeeper/localhost
>   at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:429)
>   at 
> org.apache.hadoop.registry.secure.TestSecureLogins.testValidKerberosName(TestSecureLogins.java:203)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14556) S3A to support Delegation Tokens

2019-01-07 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HADOOP-14556:
--
Fix Version/s: 3.3.0

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556-015.patch, HADOOP-14556-016.patch, HADOOP-14556-017.patch, 
> HADOOP-14556-018a.patch, HADOOP-14556-019.patch, HADOOP-14556-020.patch, 
> HADOOP-14556-021.patch, HADOOP-14556-022.patch, HADOOP-14556-023.patch, 
> HADOOP-14556-024.patch, HADOOP-14556-025.patch, HADOOP-14556-026.patch, 
> HADOOP-14556-027.patch, HADOOP-14556.oath-002.patch, HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.
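
For illustration, the client-side request described above goes through the 
generic FileSystem API; a minimal sketch (bucket name and renewer are made up):
{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.token.Token;

public class S3ADelegationTokenDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // An authenticated client asks the filesystem for a delegation token.
    FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf);
    Token<?> token = fs.getDelegationToken("yarn");   // renewer name is illustrative
    System.out.println("issued token: " + (token == null ? "none" : token.getKind()));
  }
}
{code}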



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16031) TestSecureLogins#testValidKerberosName fails

2019-01-07 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-16031:

Target Version/s: 3.3.0, 3.2.1, 3.1.3  (was: 3.3.0, 3.1.2, 3.2.1)

> TestSecureLogins#testValidKerberosName fails
> 
>
> Key: HADOOP-16031
> URL: https://issues.apache.org/jira/browse/HADOOP-16031
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16031.01.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 2.724 s <<< FAILURE! - in org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] 
> testValidKerberosName(org.apache.hadoop.registry.secure.TestSecureLogins)  
> Time elapsed: 0.01 s  <<< ERROR!
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to zookeeper/localhost
>   at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:429)
>   at 
> org.apache.hadoop.registry.secure.TestSecureLogins.testValidKerberosName(TestSecureLogins.java:203)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16016) TestSSLFactory#testServerWeakCiphers sporadically fails in precommit builds

2019-01-07 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-16016:

Target Version/s: 3.0.4, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3  (was: 3.0.4, 
3.3.0, 3.1.2, 2.8.6, 3.2.1, 2.9.3)

> TestSSLFactory#testServerWeakCiphers sporadically fails in precommit builds
> ---
>
> Key: HADOOP-16016
> URL: https://issues.apache.org/jira/browse/HADOOP-16016
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
> Environment: Java 1.8.0_191 or upper
>Reporter: Jason Lowe
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16016-002.patch, HADOOP-16016.01.patch, 
> HADOOP-16016.03.patch
>
>
> I have seen a couple of precommit builds across JIRAs fail in 
> TestSSLFactory#testServerWeakCiphers with the error:
> {noformat}
> [ERROR]   TestSSLFactory.testServerWeakCiphers:240 Expected to find 'no 
> cipher suites in common' but got unexpected 
> exception:javax.net.ssl.SSLHandshakeException: No appropriate protocol 
> (protocol is disabled or cipher suites are inappropriate)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15922) DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL

2019-01-07 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-15922:

Target Version/s: 3.3.0, 3.2.1, 3.1.3  (was: 3.3.0, 3.1.2, 3.2.1)

> DelegationTokenAuthenticationFilter get wrong doAsUser since it does not 
> decode URL
> ---
>
> Key: HADOOP-15922
> URL: https://issues.apache.org/jira/browse/HADOOP-15922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15922.001.patch, HADOOP-15922.002.patch, 
> HADOOP-15922.003.patch, HADOOP-15922.004.patch, HADOOP-15922.005.patch, 
> HADOOP-15922.006.patch, HADOOP-15922.007.patch
>
>
> DelegationTokenAuthenticationFilter gets the wrong doAsUser when the proxy 
> user from the client is a complete Kerberos name (e.g. user/hostn...@realm.com, 
> which is actually acceptable), because DelegationTokenAuthenticationFilter does 
> not decode the DOAS parameter in the URL, which is encoded by {{URLEncoder}} at 
> the client.
> Taking KMS as an example:
> a. KMSClientProvider creates a connection to the KMS server using 
> DelegationTokenAuthenticatedURL#openConnection.
> b. If KMSClientProvider acts for a doAs user, it will put {{doas}} with the 
> URL-encoded user as one parameter of the HTTP request. 
> {code:java}
> // proxyuser
> if (doAs != null) {
>   extraParams.put(DO_AS, URLEncoder.encode(doAs, "UTF-8"));
> }
> {code}
> c. When the KMS server receives the request, it does not decode the proxy user.
> As a result, the KMS server will get the wrong proxy user if the proxy user is 
> a complete Kerberos name or includes special characters. Authentication and 
> authorization exceptions then follow.
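
A small illustration of the symmetric server-side decode implied above (an 
assumption for clarity, not the committed fix; the parameter value is made up):
{code:java}
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

public class DoAsDecodeDemo {
  // Decode the doAs query parameter the same way the client encoded it.
  static String decodeDoAs(String doAsParam) throws UnsupportedEncodingException {
    return doAsParam == null ? null : URLDecoder.decode(doAsParam, "UTF-8");
  }

  public static void main(String[] args) throws Exception {
    // "user%2Fhostname%40REALM.COM" -> "user/hostname@REALM.COM"
    System.out.println(decodeDoAs("user%2Fhostname%40REALM.COM"));
  }
}
{code}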



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15986) Allowing files to be moved between encryption zones having the same encryption key

2019-01-07 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736043#comment-16736043
 ] 

Adam Antal commented on HADOOP-15986:
-

Added a test case (it should fail in the Jenkins run), which aims to test this 
feature. (Also modified existing ones.)

> Allowing files to be moved between encryption zones having the same 
> encryption key
> --
>
> Key: HADOOP-15986
> URL: https://issues.apache.org/jira/browse/HADOOP-15986
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Attachments: HADOOP-15986.000.patch
>
>
> Currently HDFS blocks you from moving files from one encryption zone to 
> another. On the surface this is fine, but we also allow multiple encryption 
> zones to use the same encryption zone key. If we allow multiple zones to use 
> the same zone key, we should also allow files to be moved between the zones. 
> I believe either we should not allow the same key to be used for multiple 
> encryption zones, or we should allow moving between zones when the key is 
> the same. The latter is the most user-friendly and allows for different HDFS 
> directory structures.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15986) Allowing files to be moved between encryption zones having the same encryption key

2019-01-07 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated HADOOP-15986:

Attachment: HADOOP-15986.000.patch

> Allowing files to be moved between encryption zones having the same 
> encryption key
> --
>
> Key: HADOOP-15986
> URL: https://issues.apache.org/jira/browse/HADOOP-15986
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Attachments: HADOOP-15986.000.patch
>
>
> Currently HDFS blocks you from moving files from one encryption zone to 
> another. On the surface this is fine, but we also allow multiple encryption 
> zones to use the same encryption zone key. If we allow multiple zones to use 
> the same zone key, we should also allow files to be moved between the zones. 
> I believe either we should not allow the same key to be used for multiple 
> encryption zones, or we should allow moving between zones when the key is 
> the same. The latter is the most user-friendly and allows for different HDFS 
> directory structures.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15986) Allowing files to be moved between encryption zones having the same encryption key

2019-01-07 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated HADOOP-15986:

Status: Patch Available  (was: Open)

> Allowing files to be moved between encryption zones having the same 
> encryption key
> --
>
> Key: HADOOP-15986
> URL: https://issues.apache.org/jira/browse/HADOOP-15986
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Attachments: HADOOP-15986.000.patch
>
>
> Currently HDFS blocks you from moving files from one encryption zone to 
> another. On the surface this is fine, but we also allow multiple encryption 
> zones to use the same encryption zone key. If we allow multiple zones to use 
> the same zone key, we should also allow files to be moved between the zones. 
> I believe either we should not allow the same key to be used for multiple 
> encryption zones, or we should allow moving between zones when the key is 
> the same. The latter is the most user-friendly and allows for different HDFS 
> directory structures.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15229) Add FileSystem builder-based openFile() API to match createFile() + S3 Select

2019-01-07 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15229:

Affects Version/s: (was: 3.0.0)
   3.2.0
   Status: Patch Available  (was: Open)

HADOOP-15229 patch 018:  rebase to  trunk; checkstyle

+ add _COUNT to numeric row counts used in tests; add an EVEN_ROWS_COUNT (size 
same as the odd one) for tests selecting even values.
+ fix spelling/type of InternalSelectConstants

Tested the new select ITests against S3 Ireland; full test run in progress.

> Add FileSystem builder-based openFile() API to match createFile() + S3 Select
> -
>
> Key: HADOOP-15229
> URL: https://issues.apache.org/jira/browse/HADOOP-15229
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15229-001.patch, HADOOP-15229-002.patch, 
> HADOOP-15229-003.patch, HADOOP-15229-004.patch, HADOOP-15229-004.patch, 
> HADOOP-15229-005.patch, HADOOP-15229-006.patch, HADOOP-15229-007.patch, 
> HADOOP-15229-009.patch, HADOOP-15229-010.patch, HADOOP-15229-011.patch, 
> HADOOP-15229-012.patch, HADOOP-15229-013.patch, HADOOP-15229-014.patch, 
> HADOOP-15229-015.patch, HADOOP-15229-016.patch, HADOOP-15229-017.patch, 
> HADOOP-15229-018.patch
>
>
> Replicate HDFS-1170 and HADOOP-14365 with an API to open files.
> A key requirement of this is not HDFS, it's to put in the fadvise policy for 
> working with object stores, where getting the decision to do a full GET and 
> TCP abort on seek vs smaller GETs is fundamentally different: the wrong 
> option can cost you minutes. S3A and Azure both have adaptive policies now 
> (first backward seek), but they still don't do it that well.
> Columnar formats (ORC, Parquet) should be able to say "fs.input.fadvise" 
> "random" as an option when they open files; I can imagine other options too.
> The Builder model of [~eddyxu] is the one to mimic, method for method. 
> Ideally with as much code reuse as possible
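
For context, a rough sketch of how a builder-based open might look from caller 
code; the method and option names are assumptions drawn from this discussion, 
not the final committed API:
{code:java}
// Sketch only: open a file with a read-policy hint via a builder,
// mirroring the createFile() builder style.
FSDataInputStream in = fs.openFile(path)
    .opt("fs.input.fadvise", "random")   // hint for object-store reads
    .build()                             // assumed to return a future
    .get();
{code}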



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15229) Add FileSystem builder-based openFile() API to match createFile() + S3 Select

2019-01-07 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15229:

Attachment: HADOOP-15229-018.patch

> Add FileSystem builder-based openFile() API to match createFile() + S3 Select
> -
>
> Key: HADOOP-15229
> URL: https://issues.apache.org/jira/browse/HADOOP-15229
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15229-001.patch, HADOOP-15229-002.patch, 
> HADOOP-15229-003.patch, HADOOP-15229-004.patch, HADOOP-15229-004.patch, 
> HADOOP-15229-005.patch, HADOOP-15229-006.patch, HADOOP-15229-007.patch, 
> HADOOP-15229-009.patch, HADOOP-15229-010.patch, HADOOP-15229-011.patch, 
> HADOOP-15229-012.patch, HADOOP-15229-013.patch, HADOOP-15229-014.patch, 
> HADOOP-15229-015.patch, HADOOP-15229-016.patch, HADOOP-15229-017.patch, 
> HADOOP-15229-018.patch
>
>
> Replicate HDFS-1170 and HADOOP-14365 with an API to open files.
> A key requirement of this is not HDFS, it's to put in the fadvise policy for 
> working with object stores, where getting the decision to do a full GET and 
> TCP abort on seek vs smaller GETs is fundamentally different: the wrong 
> option can cost you minutes. S3A and Azure both have adaptive policies now 
> (first backward seek), but they still don't do it that well.
> Columnar formats (ORC, Parquet) should be able to say "fs.input.fadvise" 
> "random" as an option when they open files; I can imagine other options too.
> The Builder model of [~eddyxu] is the one to mimic, method for method. 
> Ideally with as much code reuse as possible



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15229) Add FileSystem builder-based openFile() API to match createFile() + S3 Select

2019-01-07 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15229:

Status: Open  (was: Patch Available)

> Add FileSystem builder-based openFile() API to match createFile() + S3 Select
> -
>
> Key: HADOOP-15229
> URL: https://issues.apache.org/jira/browse/HADOOP-15229
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15229-001.patch, HADOOP-15229-002.patch, 
> HADOOP-15229-003.patch, HADOOP-15229-004.patch, HADOOP-15229-004.patch, 
> HADOOP-15229-005.patch, HADOOP-15229-006.patch, HADOOP-15229-007.patch, 
> HADOOP-15229-009.patch, HADOOP-15229-010.patch, HADOOP-15229-011.patch, 
> HADOOP-15229-012.patch, HADOOP-15229-013.patch, HADOOP-15229-014.patch, 
> HADOOP-15229-015.patch, HADOOP-15229-016.patch, HADOOP-15229-017.patch
>
>
> Replicate HDFS-1170 and HADOOP-14365 with an API to open files.
> A key requirement of this is not HDFS, it's to put in the fadvise policy for 
> working with object stores, where getting the decision to do a full GET and 
> TCP abort on seek vs smaller GETs is fundamentally different: the wrong 
> option can cost you minutes. S3A and Azure both have adaptive policies now 
> (first backward seek), but they still don't do it that well.
> Columnar formats (ORC, Parquet) should be able to say "fs.input.fadvise" 
> "random" as an option when they open files; I can imagine other options too.
> The Builder model of [~eddyxu] is the one to mimic, method for method. 
> Ideally with as much code reuse as possible



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15937) [JDK 11] Update maven-shade-plugin.version to 3.2.1

2019-01-07 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735909#comment-16735909
 ] 

Dinesh Chitlangia commented on HADOOP-15937:


[~ajisakaa] thank you for review and commit!

> [JDK 11] Update maven-shade-plugin.version to 3.2.1
> ---
>
> Key: HADOOP-15937
> URL: https://issues.apache.org/jira/browse/HADOOP-15937
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
> Environment: openjdk version "11" 2018-09-25
>Reporter: Devaraj K
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15937.01.patch
>
>
> Build fails with the below error,
> {code:xml}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-shade-plugin:3.2.0:shade (default) on project 
> hadoop-yarn-csi: Error creating shaded jar: Problem shading JAR 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/hadoop-yarn-csi-3.3.0-SNAPSHOT.jar
>  entry csi/v0/Csi$GetPluginInfoRequestOrBuilder.class: 
> org.apache.maven.plugin.MojoExecutionException: Error in ASM processing class 
> csi/v0/Csi$GetPluginInfoRequestOrBuilder.class: UnsupportedOperationException 
> -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-shade-plugin:3.2.0:shade (default) on 
> project hadoop-yarn-csi: Error creating shaded jar: Problem shading JAR 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/hadoop-yarn-csi-3.3.0-SNAPSHOT.jar
>  entry csi/v0/Csi$GetPluginInfoRequestOrBuilder.class: 
> org.apache.maven.plugin.MojoExecutionException: Error in ASM processing class 
> csi/v0/Csi$GetPluginInfoRequestOrBuilder.class
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
> (MojoExecutor.java:213)
> ...
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :hadoop-yarn-csi
> {code}
> Updating maven-shade-plugin.version to 3.2.1 fixes the issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16018) DistCp won't reassemble chunks when blocks per chunk > 0

2019-01-07 Thread Kai Xie (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735893#comment-16735893
 ] 

Kai Xie commented on HADOOP-16018:
--

Hi [~ste...@apache.org]

It seems the patch is merged with HADOOP-14556's commit somehow, is it normal?

> DistCp won't reassemble chunks when blocks per chunk > 0
> 
>
> Key: HADOOP-16018
> URL: https://issues.apache.org/jira/browse/HADOOP-16018
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.2.0, 2.9.2
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Attachments: HADOOP-16018-002.patch, HADOOP-16018.01.patch
>
>
> I was investigating why hadoop-distcp-2.9.2 won't reassemble chunks of the 
> same file when blocks per chunk has been set > 0.
> In the CopyCommitter::commitJob, this logic can prevent chunks from 
> reassembling if blocks per chunk is equal to 0:
> {code:java}
> if (blocksPerChunk > 0) {
>   concatFileChunks(conf);
> }
> {code}
> Then in CopyCommitter's ctor, blocksPerChunk is initialised from the config:
> {code:java}
> blocksPerChunk = context.getConfiguration().getInt(
> DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
> {code}
>  
> But here the config key DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel() 
> always returns an empty string, because the switch is constructed without a 
> config label:
> {code:java}
> BLOCKS_PER_CHUNK("",
> new Option("blocksperchunk", true, "If set to a positive value, files"
> + "with more blocks than this value will be split into chunks of "
> + "<blocksperchunk> blocks to be transferred in parallel, and "
> + "reassembled on the destination. By default, <blocksperchunk> is "
> + "0 and the files will be transmitted in their entirety without "
> + "splitting. This switch is only applicable when the source file "
> + "system implements getBlockLocations method and the target file "
> + "system implements concat method"))
> {code}
> As a result blocksPerChunk falls back to the default value of 0, which 
> prevents the chunks from being reassembled (see the short sketch below).
>  
>  
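
For readers following along, here is a minimal sketch of the fallback the report 
describes, using only org.apache.hadoop.conf.Configuration. The non-empty key 
used for contrast is hypothetical and is not a real DistCp property.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class EmptyConfigLabelFallback {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);

    // A switch with a real config label can store and retrieve its value.
    conf.setInt("example.distcp.blocks.per.chunk", 5);
    int withLabel = conf.getInt("example.distcp.blocks.per.chunk", 0);  // 5

    // With an empty config label there is nothing to look up, so getInt()
    // silently returns the default of 0, and CopyCommitter#commitJob then
    // skips concatFileChunks() entirely.
    int withoutLabel = conf.getInt("", 0);                              // 0

    System.out.println("withLabel=" + withLabel
        + ", withoutLabel=" + withoutLabel);
  }
}
{code}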



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14556) S3A to support Delegation Tokens

2019-01-07 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735856#comment-16735856
 ] 

Hudson commented on HADOOP-14556:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15725 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15725/])
HADOOP-14556. S3A to support Delegation Tokens. (stevel: rev 
d7152332b32a575c3a92e3f4c44b95e58462528d)
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/mapreduce/MockJob.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java
* (delete) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionSSES3BlockOutputStream.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/SimpleAWSCredentialsProvider.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/RoleTokenBinding.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/RolePolicies.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/Csvout.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/NoAuthWithAWSException.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/IAMInstanceCredentialsProvider.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/MarshalledCredentialProvider.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/STSClientFactory.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/MockS3ClientFactory.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3ATestBase.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractGetFileStatusTest.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/MarshalledCredentials.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/RoleTokenIdentifier.java
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/AWSPolicyProvider.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/S3ADelegationTokens.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/AbstractDelegationTokenBinding.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/LambdaTestUtils.java
* (add) 
hadoop-tools/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.security.token.DtFetcher
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/NanoTimerStats.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/NoAwsCredentialsException.java
* (delete) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionSSECBlockOutputStream.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/AbstractDelegationIT.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/AbstractDTService.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/RoleTestUtils.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Invoker.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Statistic.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DirListingMetadata.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/ITestRoleDelegationInFileystem.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/AbstractS3ATokenIdentifier.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/ITestSessionDelegationInFileystem.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/S3

[GitHub] pgoron opened a new pull request #458: YARN-8246 winutils - fix failure to retrieve disk and network perf co…

2019-01-07 Thread GitBox
pgoron opened a new pull request #458: YARN-8246 winutils - fix failure to 
retrieve disk and network perf co…
URL: https://github.com/apache/hadoop/pull/458
 
 
   …unters on localized Windows installation.
   
   PdhAddCounter expects the performance counter path to be in the same language 
as the Windows installation.
   With the current code, the calls to PdhAddCounter fail with error 
0xcbb8 (PDH_CSTATUS_NO_OBJECT)
   on non-English Windows installations.
   
   The solution is to use the PdhAddEnglishCounter function instead:
   
https://docs.microsoft.com/en-us/windows/desktop/api/pdh/nf-pdh-pdhaddenglishcounterw
   
   "This function provides a language-neutral way to add performance counters 
to the query"


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x

2019-01-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735844#comment-16735844
 ] 

Hadoop QA commented on HADOOP-14178:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 289 new or modified 
test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  5m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
30m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-hdfs-project/hadoop-hdfs-native-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-mapreduce-project 
hadoop-client-modules/hadoop-client-minicluster . hadoop-ozone/integration-test 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
33s{color} | {color:red} framework in trunk failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
24s{color} | {color:red} server-scm in trunk failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
35s{color} | {color:red} ozonefs in trunk failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
13s{color} | {color:red} framework in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
13s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
12s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
13s{color} | {color:red} ozonefs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 18m  2s{color} 
| {color:red} root generated 7 new + 1490 unchanged - 0 fixed = 1497 total (was 
1490) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
5m 59s{color} | {color:orange} root: The patch generated 1 new + 7071 unchanged 
- 83 fixed = 7072 total (was 7154) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 15m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
57s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-hdfs-project/hadoop-hdfs-native-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
hadoop-mapreduce-project/hadoop-mapreduce-

[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x

2019-01-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735837#comment-16735837
 ] 

Hadoop QA commented on HADOOP-14178:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 289 new or modified 
test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 14m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
31m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-hdfs-project/hadoop-hdfs-native-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-mapreduce-project 
hadoop-client-modules/hadoop-client-minicluster . {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
40s{color} | {color:red} framework in trunk failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} server-scm in trunk failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
14s{color} | {color:red} framework in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
14s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 15m 36s{color} 
| {color:red} root generated 7 new + 1490 unchanged - 0 fixed = 1497 total (was 
1490) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
4m 56s{color} | {color:orange} root: The patch generated 1 new + 7071 unchanged 
- 83 fixed = 7072 total (was 7154) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
55s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-hdfs-project/hadoop-hdfs-native-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-mapreduce-project 
hadoop-client-modules/hadoop-client-minicluster . {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
16s{color} | {color:red} framework in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
14s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
2

[jira] [Updated] (HADOOP-14556) S3A to support Delegation Tokens

2019-01-07 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14556:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

thanks, committed

For the curious: Larry and I have been using this internally, and it all seems 
to work. With the hardening of token loading and the removal of even transitive 
dependencies on aws-sdk in the token identifier's fields, I believe I've 
removed both the risk of classloading problems and the consequences. 

And it's really slick to be able to submit distcp jobs into a cluster which 
doesn't have the permissions to read or decrypt the data you are working with. 
More downstream testing will of course be needed. 

For anyone new to this JIRA: *this does not work with Hive*. Spark, yes; Hive, no.





> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556-015.patch, HADOOP-14556-016.patch, HADOOP-14556-017.patch, 
> HADOOP-14556-018a.patch, HADOOP-14556-019.patch, HADOOP-14556-020.patch, 
> HADOOP-14556-021.patch, HADOOP-14556-022.patch, HADOOP-14556-023.patch, 
> HADOOP-14556-024.patch, HADOOP-14556-025.patch, HADOOP-14556-026.patch, 
> HADOOP-14556-027.patch, HADOOP-14556.oath-002.patch, HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.
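
For readers new to the feature, here is a minimal sketch of the client side of 
the first bullet above, using only the generic FileSystem API. The bucket URI 
and renewer name are illustrative, and the S3A delegation-token binding is 
assumed to be configured as described in the s3a documentation.

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.token.Token;

public class S3ADelegationTokenClient {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf);

    // Ask the filesystem for a delegation token; the S3A binding marshals the
    // short-lived session credentials inside it so it can travel with a job.
    Token<?> token = fs.getDelegationToken("yarn");

    System.out.println(token == null
        ? "no token issued"
        : "issued token of kind " + token.getKind());
  }
}
{code}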



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16032) Distcp It should clear sub directory ACL before applying new ACL on it.

2019-01-07 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735799#comment-16735799
 ] 

Ranith Sardar commented on HADOOP-16032:


Hi [~ste...@apache.org], I have updated the affected version.

> Distcp It should clear sub directory ACL before applying new ACL on it.
> ---
>
> Key: HADOOP-16032
> URL: https://issues.apache.org/jira/browse/HADOOP-16032
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>
> Distcp preserve can't update the ACL info properly when source dir has access 
>  ACL and dest dir has default ACL. It will only modify the basic ACL part. 
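
One possible shape of the behaviour the title asks for, sketched against the 
public FileSystem ACL API only; this is not the actual patch, and the helper 
method and its arguments are illustrative.

{code:java}
import java.util.List;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;

public final class AclPreserveSketch {
  private AclPreserveSketch() {
  }

  /** Clear the target's extended ACL entries before applying the source's. */
  public static void replaceAcl(FileSystem targetFs, Path target,
      List<AclEntry> sourceAcl) throws Exception {
    targetFs.removeAcl(target);         // drop existing access + default entries
    targetFs.setAcl(target, sourceAcl); // then apply the ACL copied from source
  }
}
{code}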



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16032) Distcp It should clear sub directory ACL before applying new ACL on it.

2019-01-07 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-16032:
---
Affects Version/s: 3.1.1

> Distcp It should clear sub directory ACL before applying new ACL on it.
> ---
>
> Key: HADOOP-16032
> URL: https://issues.apache.org/jira/browse/HADOOP-16032
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>
> Distcp preserve can't update the ACL info properly when source dir has access 
>  ACL and dest dir has default ACL. It will only modify the basic ACL part. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16032) Distcp It should clear sub directory ACL before applying new ACL on it.

2019-01-07 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16032:

Component/s: tools/distcp

> Distcp It should clear sub directory ACL before applying new ACL on it.
> ---
>
> Key: HADOOP-16032
> URL: https://issues.apache.org/jira/browse/HADOOP-16032
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>
> Distcp preserve can't update the ACL info properly when source dir has access 
>  ACL and dest dir has default ACL. It will only modify the basic ACL part. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16032) Distcp It should clear sub directory ACL before applying new ACL on it.

2019-01-07 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16032:

Summary: Distcp It should clear sub directory ACL before applying new ACL 
on it.  (was: It should clear sub directory ACL before applying new ACL on it.)

> Distcp It should clear sub directory ACL before applying new ACL on it.
> ---
>
> Key: HADOOP-16032
> URL: https://issues.apache.org/jira/browse/HADOOP-16032
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>
> Distcp preserve can't update the ACL info properly when source dir has access 
>  ACL and dest dir has default ACL. It will only modify the basic ACL part. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16032) Distcp It should clear sub directory ACL before applying new ACL on it.

2019-01-07 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735721#comment-16735721
 ] 

Steve Loughran commented on HADOOP-16032:
-

Can you update the "affects versions" field with the version you are seeing 
this with, and ideally test with the latest 2.x or 3.x release? Thanks.

> Distcp It should clear sub directory ACL before applying new ACL on it.
> ---
>
> Key: HADOOP-16032
> URL: https://issues.apache.org/jira/browse/HADOOP-16032
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>
> Distcp preserve can't update the ACL info properly when source dir has access 
>  ACL and dest dir has default ACL. It will only modify the basic ACL part. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16030) AliyunOSS: bring fixes back from HADOOP-15671

2019-01-07 Thread wujinhu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735706#comment-16735706
 ] 

wujinhu commented on HADOOP-16030:
--

Many thanks, [~cheersyang] :)

> AliyunOSS: bring fixes back from HADOOP-15671
> -
>
> Key: HADOOP-16030
> URL: https://issues.apache.org/jira/browse/HADOOP-16030
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3, 3.3.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Blocker
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.1.2, 3.2.1, 2.9.3
>
> Attachments: HADOOP-16030.001.patch, HADOOP-16030.002.patch
>
>
> https://issues.apache.org/jira/browse/HADOOP-15992 reverted 2 patches, 
> however, those two patches contain bug fixes, we should bring them back



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16027) [DOC] Effective use of FS instances during S3A integration tests

2019-01-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735655#comment-16735655
 ] 

Hadoop QA commented on HADOOP-16027:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
31m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16027 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12953970/HADOOP-16027.002.patch
 |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux 8fd065f3b047 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5db7c49 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 410 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15739/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> [DOC] Effective use of FS instances during S3A integration tests
> 
>
> Key: HADOOP-16027
> URL: https://issues.apache.org/jira/browse/HADOOP-16027
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-16027.001.patch, HADOOP-16027.002.patch
>
>
> While fixing HADOOP-15819 we found that a closed fs got into the static fs 
> cache during testing, which caused other tests to fail when the tests were 
> running sequentially.
> We should document some best practices in the testing section on the s3 docs 
> with the following:
> {panel}
> Tests using FileSystems are fastest if they can recycle the existing FS 
> instance from the same JVM. If you do that, you MUST NOT close or do unique 
> configuration on them. If you want a guarantee of 100% isolation or an 
> instance with unique config, create a new instance
> which you MUST close in the teardown to avoid leakage of resources.
> Do not add FileSystem instances (with e.g 
> org.apache.hadoop.fs.FileSystem#addFileSystemForTesting) to the cache that 
> will be modified or closed during the test runs. This can cause other tests 
> to fail when using the same modified or closed FS instance. For more details 
> see HADOOP-15819.
> {panel}
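
As a companion to the proposed text, here is a minimal sketch of the 
"unique config" case under JUnit 4; the bucket URI and the tweaked option are 
illustrative only.

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.junit.After;
import org.junit.Test;

public class ITestUniqueConfigExample {
  private FileSystem privateFs;   // created by this test, so owned by this test

  @Test
  public void testWithUniqueConfiguration() throws Exception {
    Configuration conf = new Configuration();
    conf.setInt("fs.s3a.connection.maximum", 4);   // the "unique" setting
    // newInstance() bypasses the shared FileSystem cache, so closing this
    // instance later cannot break other tests that rely on FileSystem.get().
    privateFs = FileSystem.newInstance(new URI("s3a://example-bucket/"), conf);
    // ... exercise privateFs here ...
  }

  @After
  public void teardown() throws Exception {
    if (privateFs != null) {
      privateFs.close();   // MUST close instances this test created itself
    }
  }
}
{code}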



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16030) AliyunOSS: bring fixes back from HADOOP-15671

2019-01-07 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735649#comment-16735649
 ] 

Weiwei Yang commented on HADOOP-16030:
--

I just committed this to trunk, cherry-picked to branch-2, branch-2.9, 
branch-3.0, branch-3.1 and branch-3.2. Thanks for the contribution [~wujinhu].

> AliyunOSS: bring fixes back from HADOOP-15671
> -
>
> Key: HADOOP-16030
> URL: https://issues.apache.org/jira/browse/HADOOP-16030
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3, 3.3.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Blocker
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.1.2, 3.2.1, 2.9.3
>
> Attachments: HADOOP-16030.001.patch, HADOOP-16030.002.patch
>
>
> https://issues.apache.org/jira/browse/HADOOP-15992 reverted 2 patches, 
> however, those two patches contain bug fixes, we should bring them back



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16030) AliyunOSS: bring fixes back from HADOOP-15671

2019-01-07 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-16030:
-
Fix Version/s: 2.9.3

> AliyunOSS: bring fixes back from HADOOP-15671
> -
>
> Key: HADOOP-16030
> URL: https://issues.apache.org/jira/browse/HADOOP-16030
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3, 3.3.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Blocker
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.1.2, 3.2.1, 2.9.3
>
> Attachments: HADOOP-16030.001.patch, HADOOP-16030.002.patch
>
>
> https://issues.apache.org/jira/browse/HADOOP-15992 reverted 2 patches, 
> however, those two patches contain bug fixes, we should bring them back



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15937) [JDK 11] Update maven-shade-plugin.version to 3.2.1

2019-01-07 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735646#comment-16735646
 ] 

Hudson commented on HADOOP-15937:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15720 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15720/])
HADOOP-15937. [JDK 11] Update maven-shade-plugin.version to 3.2.1. (aajisaka: 
rev 32d5caa0a8502f7c0b92f4494b1cfa0993d26700)
* (edit) hadoop-project/pom.xml


> [JDK 11] Update maven-shade-plugin.version to 3.2.1
> ---
>
> Key: HADOOP-15937
> URL: https://issues.apache.org/jira/browse/HADOOP-15937
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
> Environment: openjdk version "11" 2018-09-25
>Reporter: Devaraj K
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15937.01.patch
>
>
> Build fails with the below error,
> {code:xml}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-shade-plugin:3.2.0:shade (default) on project 
> hadoop-yarn-csi: Error creating shaded jar: Problem shading JAR 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/hadoop-yarn-csi-3.3.0-SNAPSHOT.jar
>  entry csi/v0/Csi$GetPluginInfoRequestOrBuilder.class: 
> org.apache.maven.plugin.MojoExecutionException: Error in ASM processing class 
> csi/v0/Csi$GetPluginInfoRequestOrBuilder.class: UnsupportedOperationException 
> -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-shade-plugin:3.2.0:shade (default) on 
> project hadoop-yarn-csi: Error creating shaded jar: Problem shading JAR 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/hadoop-yarn-csi-3.3.0-SNAPSHOT.jar
>  entry csi/v0/Csi$GetPluginInfoRequestOrBuilder.class: 
> org.apache.maven.plugin.MojoExecutionException: Error in ASM processing class 
> csi/v0/Csi$GetPluginInfoRequestOrBuilder.class
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
> (MojoExecutor.java:213)
> ...
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :hadoop-yarn-csi
> {code}
> Updating maven-shade-plugin.version to 3.2.1 fixes the issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15937) [JDK 11] Update maven-shade-plugin.version to 3.2.1

2019-01-07 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15937:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thank you, [~dineshchitlangia]!

> [JDK 11] Update maven-shade-plugin.version to 3.2.1
> ---
>
> Key: HADOOP-15937
> URL: https://issues.apache.org/jira/browse/HADOOP-15937
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
> Environment: openjdk version "11" 2018-09-25
>Reporter: Devaraj K
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15937.01.patch
>
>
> Build fails with the below error,
> {code:xml}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-shade-plugin:3.2.0:shade (default) on project 
> hadoop-yarn-csi: Error creating shaded jar: Problem shading JAR 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/hadoop-yarn-csi-3.3.0-SNAPSHOT.jar
>  entry csi/v0/Csi$GetPluginInfoRequestOrBuilder.class: 
> org.apache.maven.plugin.MojoExecutionException: Error in ASM processing class 
> csi/v0/Csi$GetPluginInfoRequestOrBuilder.class: UnsupportedOperationException 
> -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-shade-plugin:3.2.0:shade (default) on 
> project hadoop-yarn-csi: Error creating shaded jar: Problem shading JAR 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/hadoop-yarn-csi-3.3.0-SNAPSHOT.jar
>  entry csi/v0/Csi$GetPluginInfoRequestOrBuilder.class: 
> org.apache.maven.plugin.MojoExecutionException: Error in ASM processing class 
> csi/v0/Csi$GetPluginInfoRequestOrBuilder.class
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
> (MojoExecutor.java:213)
> ...
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :hadoop-yarn-csi
> {code}
> Updating maven-shade-plugin.version to 3.2.1 fixes the issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15937) [JDK 11] Update maven-shade-plugin.version to 3.2.1

2019-01-07 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735635#comment-16735635
 ] 

Akira Ajisaka commented on HADOOP-15937:


+1

> [JDK 11] Update maven-shade-plugin.version to 3.2.1
> ---
>
> Key: HADOOP-15937
> URL: https://issues.apache.org/jira/browse/HADOOP-15937
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
> Environment: openjdk version "11" 2018-09-25
>Reporter: Devaraj K
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HADOOP-15937.01.patch
>
>
> Build fails with the below error,
> {code:xml}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-shade-plugin:3.2.0:shade (default) on project 
> hadoop-yarn-csi: Error creating shaded jar: Problem shading JAR 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/hadoop-yarn-csi-3.3.0-SNAPSHOT.jar
>  entry csi/v0/Csi$GetPluginInfoRequestOrBuilder.class: 
> org.apache.maven.plugin.MojoExecutionException: Error in ASM processing class 
> csi/v0/Csi$GetPluginInfoRequestOrBuilder.class: UnsupportedOperationException 
> -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-shade-plugin:3.2.0:shade (default) on 
> project hadoop-yarn-csi: Error creating shaded jar: Problem shading JAR 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/hadoop-yarn-csi-3.3.0-SNAPSHOT.jar
>  entry csi/v0/Csi$GetPluginInfoRequestOrBuilder.class: 
> org.apache.maven.plugin.MojoExecutionException: Error in ASM processing class 
> csi/v0/Csi$GetPluginInfoRequestOrBuilder.class
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
> (MojoExecutor.java:213)
> ...
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :hadoop-yarn-csi
> {code}
> Updating maven-shade-plugin.version to 3.2.1 fixes the issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16030) AliyunOSS: bring fixes back from HADOOP-15671

2019-01-07 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-16030:
-
Fix Version/s: 2.10.0

> AliyunOSS: bring fixes back from HADOOP-15671
> -
>
> Key: HADOOP-16030
> URL: https://issues.apache.org/jira/browse/HADOOP-16030
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3, 3.3.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Blocker
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-16030.001.patch, HADOOP-16030.002.patch
>
>
> https://issues.apache.org/jira/browse/HADOOP-15992 reverted 2 patches, 
> however, those two patches contain bug fixes, we should bring them back



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16027) [DOC] Effective use of FS instances during S3A integration tests

2019-01-07 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-16027:

Attachment: HADOOP-16027.002.patch

> [DOC] Effective use of FS instances during S3A integration tests
> 
>
> Key: HADOOP-16027
> URL: https://issues.apache.org/jira/browse/HADOOP-16027
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-16027.001.patch, HADOOP-16027.002.patch
>
>
> While fixing HADOOP-15819 we found that a closed fs got into the static fs 
> cache during testing, which caused other tests to fail when the tests were 
> running sequentially.
> We should document some best practices in the testing section on the s3 docs 
> with the following:
> {panel}
> Tests using FileSystems are fastest if they can recycle the existing FS 
> instance from the same JVM. If you do that, you MUST NOT close or do unique 
> configuration on them. If you want a guarantee of 100% isolation or an 
> instance with unique config, create a new instance
> which you MUST close in the teardown to avoid leakage of resources.
> Do not add FileSystem instances (with e.g 
> org.apache.hadoop.fs.FileSystem#addFileSystemForTesting) to the cache that 
> will be modified or closed during the test runs. This can cause other tests 
> to fail when using the same modified or closed FS instance. For more details 
> see HADOOP-15819.
> {panel}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16027) [DOC] Effective use of FS instances during S3A integration tests

2019-01-07 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735623#comment-16735623
 ] 

Gabor Bota commented on HADOOP-16027:
-

Thanks [~adam.antal], uploaded v2 patch with this.

> [DOC] Effective use of FS instances during S3A integration tests
> 
>
> Key: HADOOP-16027
> URL: https://issues.apache.org/jira/browse/HADOOP-16027
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-16027.001.patch, HADOOP-16027.002.patch
>
>
> While fixing HADOOP-15819 we found that a closed fs got into the static fs 
> cache during testing, which caused other tests to fail when the tests were 
> running sequentially.
> We should document some best practices in the testing section on the s3 docs 
> with the following:
> {panel}
> Tests using FileSystems are fastest if they can recycle the existing FS 
> instance from the same JVM. If you do that, you MUST NOT close or do unique 
> configuration on them. If you want a guarantee of 100% isolation or an 
> instance with unique config, create a new instance
> which you MUST close in the teardown to avoid leakage of resources.
> Do not add FileSystem instances (with e.g 
> org.apache.hadoop.fs.FileSystem#addFileSystemForTesting) to the cache that 
> will be modified or closed during the test runs. This can cause other tests 
> to fail when using the same modified or closed FS instance. For more details 
> see HADOOP-15819.
> {panel}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16030) AliyunOSS: bring fixes back from HADOOP-15671

2019-01-07 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-16030:
-
Fix Version/s: 3.0.4

> AliyunOSS: bring fixes back from HADOOP-15671
> -
>
> Key: HADOOP-16030
> URL: https://issues.apache.org/jira/browse/HADOOP-16030
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3, 3.3.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Blocker
> Fix For: 3.0.4, 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-16030.001.patch, HADOOP-16030.002.patch
>
>
> https://issues.apache.org/jira/browse/HADOOP-15992 reverted 2 patches, 
> however, those two patches contain bug fixes, we should bring them back



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x

2019-01-07 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735607#comment-16735607
 ] 

Akira Ajisaka commented on HADOOP-14178:


For those who want to review this patch, it mainly does two things (a short 
illustrative sketch follows below):
* Replace Matchers.xxx with ArgumentMatchers.xxx to fix deprecation warnings
* Use ArgumentMatchers.any() instead of any(xxx.class)/anyYYY() when the tests 
want to match null
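
A short illustrative sketch of both points, assuming Mockito 2.x and JUnit 4; 
the mocked type and the test name are made up for the example.

{code:java}
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.isNull;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

import java.util.List;
import org.junit.Test;

public class MatcherMigrationExample {
  @Test
  @SuppressWarnings("unchecked")
  public void matchersAfterTheUpgrade() {
    List<String> list = mock(List.class);
    list.add("x");
    list.add(null);

    // Mockito 1.x tests often used Matchers.any(String.class); in 2.x the
    // typed any(Class) no longer matches null, so the untyped
    // ArgumentMatchers.any() (or isNull()) is used where null must match.
    verify(list, times(2)).add(any());   // matches both calls, including null
    verify(list).add(isNull());          // matches exactly the null call
  }
}
{code}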

> Move Mockito up to version 2.x
> --
>
> Key: HADOOP-14178
> URL: https://issues.apache.org/jira/browse/HADOOP-14178
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-14178.001.patch, HADOOP-14178.002.patch, 
> HADOOP-14178.003.patch, HADOOP-14178.004.patch, HADOOP-14178.005-wip.patch, 
> HADOOP-14178.005-wip2.patch, HADOOP-14178.005-wip3.patch, 
> HADOOP-14178.005-wip4.patch, HADOOP-14178.005-wip5.patch, 
> HADOOP-14178.005-wip6.patch, HADOOP-14178.005.patch, HADOOP-14178.006.patch, 
> HADOOP-14178.007.patch, HADOOP-14178.008.patch, HADOOP-14178.009.patch, 
> HADOOP-14178.010.patch, HADOOP-14178.011.patch, HADOOP-14178.012.patch, 
> HADOOP-14178.013.patch, HADOOP-14178.014.patch, HADOOP-14178.015.patch, 
> HADOOP-14178.016.patch, HADOOP-14178.017.patch, HADOOP-14178.018.patch, 
> HADOOP-14178.019.patch, HADOOP-14178.020.patch, HADOOP-14178.021.patch, 
> HADOOP-14178.022.patch, HADOOP-14178.023.patch, HADOOP-14178.024.patch, 
> HADOOP-14178.025.patch, HADOOP-14178.026.patch, HADOOP-14178.027.patch, 
> HADOOP-14178.028.patch, HADOOP-14178.029.patch
>
>
> I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 
> since the switch to maven in 2011. 
> Mockito is now at version 2.1, [with lots of Java 8 
> support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. 
> That' s not just defining actions as closures, but in supporting Optional 
> types, mocking methods in interfaces, etc. 
> It's only used for testing, and, *provided there aren't regressions*, cost of 
> upgrade is low. The good news: test tools usually come with good test 
> coverage. The bad: mockito does go deep into java bytecodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14178) Move Mockito up to version 2.x

2019-01-07 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14178:
---
Attachment: HADOOP-14178.028.patch

> Move Mockito up to version 2.x
> --
>
> Key: HADOOP-14178
> URL: https://issues.apache.org/jira/browse/HADOOP-14178
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-14178.001.patch, HADOOP-14178.002.patch, 
> HADOOP-14178.003.patch, HADOOP-14178.004.patch, HADOOP-14178.005-wip.patch, 
> HADOOP-14178.005-wip2.patch, HADOOP-14178.005-wip3.patch, 
> HADOOP-14178.005-wip4.patch, HADOOP-14178.005-wip5.patch, 
> HADOOP-14178.005-wip6.patch, HADOOP-14178.005.patch, HADOOP-14178.006.patch, 
> HADOOP-14178.007.patch, HADOOP-14178.008.patch, HADOOP-14178.009.patch, 
> HADOOP-14178.010.patch, HADOOP-14178.011.patch, HADOOP-14178.012.patch, 
> HADOOP-14178.013.patch, HADOOP-14178.014.patch, HADOOP-14178.015.patch, 
> HADOOP-14178.016.patch, HADOOP-14178.017.patch, HADOOP-14178.018.patch, 
> HADOOP-14178.019.patch, HADOOP-14178.020.patch, HADOOP-14178.021.patch, 
> HADOOP-14178.022.patch, HADOOP-14178.023.patch, HADOOP-14178.024.patch, 
> HADOOP-14178.025.patch, HADOOP-14178.026.patch, HADOOP-14178.027.patch, 
> HADOOP-14178.028.patch
>
>
> I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 
> since the switch to maven in 2011. 
> Mockito is now at version 2.1, [with lots of Java 8 
> support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. 
> That' s not just defining actions as closures, but in supporting Optional 
> types, mocking methods in interfaces, etc. 
> It's only used for testing, and, *provided there aren't regressions*, cost of 
> upgrade is low. The good news: test tools usually come with good test 
> coverage. The bad: mockito does go deep into java bytecodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x

2019-01-07 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735598#comment-16735598
 ] 

Akira Ajisaka commented on HADOOP-14178:


029 patch: 
* Fixed TestDevicePluginAdapter, TestContainerSchedulerRecovery, TestAmFilter, 
TestReservationSystem, TestYarnClientImpl, TestYarnChild
* Use Mockito 1.x (mockito-all) in these framework tests. Let's upgrade them 
later.

I ran all the tests and fixed all the failures related to the 027 patch. Now 
this patch is ready for review and commit.

> Move Mockito up to version 2.x
> --
>
> Key: HADOOP-14178
> URL: https://issues.apache.org/jira/browse/HADOOP-14178
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-14178.001.patch, HADOOP-14178.002.patch, 
> HADOOP-14178.003.patch, HADOOP-14178.004.patch, HADOOP-14178.005-wip.patch, 
> HADOOP-14178.005-wip2.patch, HADOOP-14178.005-wip3.patch, 
> HADOOP-14178.005-wip4.patch, HADOOP-14178.005-wip5.patch, 
> HADOOP-14178.005-wip6.patch, HADOOP-14178.005.patch, HADOOP-14178.006.patch, 
> HADOOP-14178.007.patch, HADOOP-14178.008.patch, HADOOP-14178.009.patch, 
> HADOOP-14178.010.patch, HADOOP-14178.011.patch, HADOOP-14178.012.patch, 
> HADOOP-14178.013.patch, HADOOP-14178.014.patch, HADOOP-14178.015.patch, 
> HADOOP-14178.016.patch, HADOOP-14178.017.patch, HADOOP-14178.018.patch, 
> HADOOP-14178.019.patch, HADOOP-14178.020.patch, HADOOP-14178.021.patch, 
> HADOOP-14178.022.patch, HADOOP-14178.023.patch, HADOOP-14178.024.patch, 
> HADOOP-14178.025.patch, HADOOP-14178.026.patch, HADOOP-14178.027.patch, 
> HADOOP-14178.028.patch, HADOOP-14178.029.patch
>
>
> I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 
> since the switch to maven in 2011. 
> Mockito is now at version 2.1, [with lots of Java 8 
> support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. 
> That' s not just defining actions as closures, but in supporting Optional 
> types, mocking methods in interfaces, etc. 
> It's only used for testing, and, *provided there aren't regressions*, cost of 
> upgrade is low. The good news: test tools usually come with good test 
> coverage. The bad: mockito does go deep into java bytecodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x

2019-01-07 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735594#comment-16735594
 ] 

Akira Ajisaka commented on HADOOP-14178:


{noformat}
[ERROR] 'dependencies.dependency.version' for org.mockito:mockito-core:jar is 
missing. @ line 36, column 17
{noformat}
These framework tests use Hadoop 3.2.1-SNAPSHOT, and the mockito-core 
version is not specified there; that is why this error occurs.

> Move Mockito up to version 2.x
> --
>
> Key: HADOOP-14178
> URL: https://issues.apache.org/jira/browse/HADOOP-14178
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-14178.001.patch, HADOOP-14178.002.patch, 
> HADOOP-14178.003.patch, HADOOP-14178.004.patch, HADOOP-14178.005-wip.patch, 
> HADOOP-14178.005-wip2.patch, HADOOP-14178.005-wip3.patch, 
> HADOOP-14178.005-wip4.patch, HADOOP-14178.005-wip5.patch, 
> HADOOP-14178.005-wip6.patch, HADOOP-14178.005.patch, HADOOP-14178.006.patch, 
> HADOOP-14178.007.patch, HADOOP-14178.008.patch, HADOOP-14178.009.patch, 
> HADOOP-14178.010.patch, HADOOP-14178.011.patch, HADOOP-14178.012.patch, 
> HADOOP-14178.013.patch, HADOOP-14178.014.patch, HADOOP-14178.015.patch, 
> HADOOP-14178.016.patch, HADOOP-14178.017.patch, HADOOP-14178.018.patch, 
> HADOOP-14178.019.patch, HADOOP-14178.020.patch, HADOOP-14178.021.patch, 
> HADOOP-14178.022.patch, HADOOP-14178.023.patch, HADOOP-14178.024.patch, 
> HADOOP-14178.025.patch, HADOOP-14178.026.patch, HADOOP-14178.027.patch, 
> HADOOP-14178.028.patch
>
>
> I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 
> since the switch to Maven in 2011. 
> Mockito is now at version 2.1, [with lots of Java 8 
> support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. 
> That's not just defining actions as closures, but also support for Optional 
> types, mocking methods in interfaces, etc. 
> It's only used for testing, and, *provided there aren't regressions*, the cost 
> of upgrading is low. The good news: test tools usually come with good test 
> coverage. The bad: Mockito goes deep into Java bytecode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Updated] (HADOOP-14178) Move Mockito up to version 2.x

2019-01-07 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14178:
---
Attachment: HADOOP-14178.029.patch

> Move Mockito up to version 2.x
> --
>
> Key: HADOOP-14178
> URL: https://issues.apache.org/jira/browse/HADOOP-14178
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-14178.001.patch, HADOOP-14178.002.patch, 
> HADOOP-14178.003.patch, HADOOP-14178.004.patch, HADOOP-14178.005-wip.patch, 
> HADOOP-14178.005-wip2.patch, HADOOP-14178.005-wip3.patch, 
> HADOOP-14178.005-wip4.patch, HADOOP-14178.005-wip5.patch, 
> HADOOP-14178.005-wip6.patch, HADOOP-14178.005.patch, HADOOP-14178.006.patch, 
> HADOOP-14178.007.patch, HADOOP-14178.008.patch, HADOOP-14178.009.patch, 
> HADOOP-14178.010.patch, HADOOP-14178.011.patch, HADOOP-14178.012.patch, 
> HADOOP-14178.013.patch, HADOOP-14178.014.patch, HADOOP-14178.015.patch, 
> HADOOP-14178.016.patch, HADOOP-14178.017.patch, HADOOP-14178.018.patch, 
> HADOOP-14178.019.patch, HADOOP-14178.020.patch, HADOOP-14178.021.patch, 
> HADOOP-14178.022.patch, HADOOP-14178.023.patch, HADOOP-14178.024.patch, 
> HADOOP-14178.025.patch, HADOOP-14178.026.patch, HADOOP-14178.027.patch, 
> HADOOP-14178.028.patch, HADOOP-14178.029.patch
>
>
> I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 
> since the switch to Maven in 2011. 
> Mockito is now at version 2.1, [with lots of Java 8 
> support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. 
> That's not just defining actions as closures, but also support for Optional 
> types, mocking methods in interfaces, etc. 
> It's only used for testing, and, *provided there aren't regressions*, the cost 
> of upgrading is low. The good news: test tools usually come with good test 
> coverage. The bad: Mockito goes deep into Java bytecode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Updated] (HADOOP-16030) AliyunOSS: bring fixes back from HADOOP-15671

2019-01-07 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-16030:
-
Fix Version/s: 3.1.2

> AliyunOSS: bring fixes back from HADOOP-15671
> -
>
> Key: HADOOP-16030
> URL: https://issues.apache.org/jira/browse/HADOOP-16030
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3, 3.3.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Blocker
> Fix For: 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-16030.001.patch, HADOOP-16030.002.patch
>
>
> https://issues.apache.org/jira/browse/HADOOP-15992 reverted two patches; 
> however, those two patches contain bug fixes, so we should bring them back.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[GitHub] elek opened a new pull request #457: HDDS-965. Ozone: checkstyle improvements and code quality scripts.

2019-01-07 Thread GitBox
elek opened a new pull request #457: HDDS-965. Ozone: checkstyle improvements 
and code quality scripts.
URL: https://github.com/apache/hadoop/pull/457
 
 
   Testing GitHub PR capabilities.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Commented] (HADOOP-16030) AliyunOSS: bring fixes back from HADOOP-15671

2019-01-07 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735560#comment-16735560
 ] 

Hudson commented on HADOOP-16030:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15718 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15718/])
HADOOP-16030. AliyunOSS: bring fixes back from HADOOP-15671. Contributed (wwei: 
rev f87b3b11c46704dcdb63089dd971e2a5ba1deaac)
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunCredentials.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java


> AliyunOSS: bring fixes back from HADOOP-15671
> -
>
> Key: HADOOP-16030
> URL: https://issues.apache.org/jira/browse/HADOOP-16030
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3, 3.3.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Blocker
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HADOOP-16030.001.patch, HADOOP-16030.002.patch
>
>
> https://issues.apache.org/jira/browse/HADOOP-15992 reverted two patches; 
> however, those two patches contain bug fixes, so we should bring them back.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (HADOOP-15671) AliyunOSS: Support Assume Roles in AliyunOSS

2019-01-07 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735561#comment-16735561
 ] 

Hudson commented on HADOOP-15671:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15718 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15718/])
HADOOP-16030. AliyunOSS: bring fixes back from HADOOP-15671. Contributed (wwei: 
rev f87b3b11c46704dcdb63089dd971e2a5ba1deaac)
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunCredentials.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java


> AliyunOSS: Support Assume Roles in AliyunOSS
> 
>
> Key: HADOOP-15671
> URL: https://issues.apache.org/jira/browse/HADOOP-15671
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.1.0, 2.10.0, 2.9.1, 3.2.0, 3.0.3
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.1.2
>
> Attachments: HADOOP-15671.001.patch, HADOOP-15671.002.patch, 
> HADOOP-15671.003.patch, HADOOP-15671.004.patch, HADOOP-15671.005.patch, 
> HADOOP-15671.006.patch
>
>
> We will add an assume-role function to Aliyun OSS.
> For details about assume role and STS tokens, see the link below:
> [https://www.alibabacloud.com/help/doc-detail/31935.html?spm=a2c5t.11065259.1996646101.searchclickresult.1fad155aKOUvJZ]
>  
> Major Changes:
>  # Stabilise the constructor of CredentialsProvider so that other developers 
> can provide their own implementations (see the sketch below).
>  # Add assume-role support to the hadoop-aliyun module.
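To illustrate the first item, here is a hedged sketch of what a user-supplied credentials provider could look like once the constructor is stabilised. The Aliyun SDK's CredentialsProvider and DefaultCredentials types are real, but the (URI, Configuration) constructor shape and the configuration key names are assumptions, not taken from the patches on this issue:

{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;

import com.aliyun.oss.common.auth.Credentials;
import com.aliyun.oss.common.auth.CredentialsProvider;
import com.aliyun.oss.common.auth.DefaultCredentials;

/** Hypothetical custom provider, e.g. backed by an STS assume-role call. */
public class MyStsCredentialsProvider implements CredentialsProvider {

  private volatile Credentials credentials;

  // Assumed constructor shape; the point of "stabilise the constructor" is
  // that third-party providers can rely on one well-known signature.
  public MyStsCredentialsProvider(URI uri, Configuration conf) {
    // A real provider would call STS AssumeRole here; this sketch just reads
    // (assumed) configuration keys to build a token-bearing credential.
    this.credentials = new DefaultCredentials(
        conf.get("fs.oss.accessKeyId"),
        conf.get("fs.oss.accessKeySecret"),
        conf.get("fs.oss.securityToken"));
  }

  @Override
  public void setCredentials(Credentials creds) {
    this.credentials = creds;
  }

  @Override
  public Credentials getCredentials() {
    return credentials;
  }
}
{code}

Wiring it in would then be a single configuration setting, e.g. conf.set("fs.oss.credentials.provider", MyStsCredentialsProvider.class.getName()), again assuming that key name.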



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Updated] (HADOOP-16030) AliyunOSS: bring fixes back from HADOOP-15671

2019-01-07 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-16030:
-
Fix Version/s: 3.2.1

> AliyunOSS: bring fixes back from HADOOP-15671
> -
>
> Key: HADOOP-16030
> URL: https://issues.apache.org/jira/browse/HADOOP-16030
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3, 3.3.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Blocker
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HADOOP-16030.001.patch, HADOOP-16030.002.patch
>
>
> https://issues.apache.org/jira/browse/HADOOP-15992 reverted two patches; 
> however, those two patches contain bug fixes, so we should bring them back.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (HADOOP-15937) [JDK 11] Update maven-shade-plugin.version to 3.2.1

2019-01-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735554#comment-16735554
 ] 

Hadoop QA commented on HADOOP-15937:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
32m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15937 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12953949/HADOOP-15937.01.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux b4a489ff847b 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d3321fb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15736/testReport/ |
| Max. process+thread count | 337 (vs. ulimit of 1) |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15736/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> [JDK 11] Update maven-shade-plugin.version to 3.2.1
> ---
>
> Key: HADOOP-15937
> URL: https://issues.apache.org/jira/browse/HADOOP-15937
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
> Environment: openjdk version "11" 2018-09-25
>Reporter: Devaraj K
>Assignee: Dinesh Chitlangia
>Priority: Ma

[jira] [Updated] (HADOOP-16030) AliyunOSS: bring fixes back from HADOOP-15671

2019-01-07 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-16030:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> AliyunOSS: bring fixes back from HADOOP-15671
> -
>
> Key: HADOOP-16030
> URL: https://issues.apache.org/jira/browse/HADOOP-16030
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3, 3.3.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: HADOOP-16030.001.patch, HADOOP-16030.002.patch
>
>
> https://issues.apache.org/jira/browse/HADOOP-15992 reverted two patches; 
> however, those two patches contain bug fixes, so we should bring them back.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
