[jira] [Commented] (HADOOP-13890) TestWebDelegationToken and TestKMS fails in trunk

2016-12-12 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15744484#comment-15744484
 ] 

John Zhuge commented on HADOOP-13890:
-

+1 LGTM (non-binding).  All hadoop-kms and hadoop-httpfs tests passed.

> TestWebDelegationToken and TestKMS fails in trunk
> -
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch, 
> HADOOP-13890.02.patch, HADOOP-13890.03.patch
>
>
> TestWebDelegationToken, TestKMS, TestTrashWithSecureEncryptionZones and 
> TestSecureEncryptionZoneWithKMS started failing in trunk because the SPNEGO 
> principal used in these tests is incomplete: HTTP/localhost, assuming the 
> default realm will be applied at authentication time. This ticket is opened 
> to fix these unit tests with a complete HTTP principal.
> {noformat}
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid SPNEGO sequence, status code: 403
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.readToken(KerberosAuthenticator.java:371)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.access$300(KerberosAuthenticator.java:53)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:317)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:287)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:782)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:779)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$4.run(TestWebDelegationToken.java:715)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:712)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:778)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
>  {noformat}
>  *Jenkins URL* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/
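
For illustration, a minimal sketch of the "complete HTTP principal" the
description calls for, in a MiniKdc-based test (the work directory and keytab
names here are hypothetical, and this is not the attached patch):
{code}
import java.io.File;
import java.util.Properties;
import org.apache.hadoop.minikdc.MiniKdc;

// Sketch only: build a complete SPNEGO principal, service/host@REALM,
// instead of the bare "HTTP/localhost".
public class SpnegoPrincipalSketch {
  public static void main(String[] args) throws Exception {
    File workDir = new File("target/minikdc");      // hypothetical work dir
    workDir.mkdirs();
    Properties conf = MiniKdc.createConf();
    MiniKdc kdc = new MiniKdc(conf, workDir);
    kdc.start();
    File keytab = new File(workDir, "http.keytab"); // hypothetical keytab
    kdc.createPrincipal(keytab, "HTTP/localhost");  // MiniKdc appends its realm
    // The complete principal the tests should configure:
    String spnegoPrincipal = "HTTP/localhost@" + kdc.getRealm();
    System.out.println(spnegoPrincipal);            // e.g. HTTP/localhost@EXAMPLE.COM
    kdc.stop();
  }
}
{code}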






[jira] [Commented] (HADOOP-13898) should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty on branch2

2016-12-12 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1573#comment-1573
 ] 

Fei Hui commented on HADOOP-13898:
--

cc [~aw] 

> should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty on branch2
> --
>
> Key: HADOOP-13898
> URL: https://issues.apache.org/jira/browse/HADOOP-13898
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-13898-branch-2.001.patch, 
> HADOOP-13898-branch-2.002.patch
>
>
> In mapred-env, HADOOP_JOB_HISTORYSERVER_HEAPSIZE is set to 1000 by default; 
> that is incorrect.
> We should set it to 1000 by default only if it's empty, because if you run 
> 'HADOOP_JOB_HISTORYSERVER_HEAPSIZE=512 
> $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver', 
> HADOOP_JOB_HISTORYSERVER_HEAPSIZE will be set to 1000 rather than 512.
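
For illustration, the conditional default described above would look roughly
like this in mapred-env.sh (a sketch of the idea, not the attached patch):
{code}
# Sketch only: apply the 1000 MB default only when the variable is empty,
# so a user-supplied HADOOP_JOB_HISTORYSERVER_HEAPSIZE=512 survives.
if [ -z "$HADOOP_JOB_HISTORYSERVER_HEAPSIZE" ]; then
  export HADOOP_JOB_HISTORYSERVER_HEAPSIZE=1000
fi
{code}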






[jira] [Updated] (HADOOP-13875) HttpServer2 should support more SSL configuration properties

2016-12-12 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13875:

Hadoop Flags:   (was: Incompatible change)

> HttpServer2 should support more SSL configuration properties
> 
>
> Key: HADOOP-13875
> URL: https://issues.apache.org/jira/browse/HADOOP-13875
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> Support more SSL configuration properties:
> - enabled.protocols
> - includeProtocols/excludeProtocols/includeCipherSuites/excludeCipherSuites






[jira] [Commented] (HADOOP-13898) should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty on branch2

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15744244#comment-15744244
 ] 

Hadoop QA commented on HADOOP-13898:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
38s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 8s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
8s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
20s{color} | {color:green} hadoop-mapreduce-project in the patch passed with 
JDK v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-13898 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842932/HADOOP-13898-branch-2.002.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 4ee463ff7b82 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 282a562 |
| shellcheck | v0.4.5 |
| JDK v1.7.0_121  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11260/testReport/ |
| modules | C: hadoop-mapreduce-project U: hadoop-mapreduce-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11260/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty on branch2
> --
>
> Key: HADOOP-13898
> URL: https://issues.apache.org/jira/browse/HADOOP-13898
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-13898-branch-2.001.patch, 
> HADOOP-13898-branch-2.002.patch
>
>
> In mapred-env, HADOOP_JOB_HISTORYSERVER_HEAPSIZE is set to 1000 by default; 
> that is incorrect.
> We should set it to 1000 by default only if it's empty, because if you run 
> 'HADOOP_JOB_HISTORYSERVER_HEAPSIZE=512 
> $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver', 
> HADOOP_JOB_HISTORYSERVER_HEAPSIZE will be set to 1000 rather than 512.






[jira] [Updated] (HADOOP-13898) should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty on branch2

2016-12-12 Thread Fei Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-13898:
-
Attachment: HADOOP-13898-branch-2.002.patch

> should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty on branch2
> --
>
> Key: HADOOP-13898
> URL: https://issues.apache.org/jira/browse/HADOOP-13898
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-13898-branch-2.001.patch, 
> HADOOP-13898-branch-2.002.patch
>
>
> In mapred-env, HADOOP_JOB_HISTORYSERVER_HEAPSIZE is set to 1000 by default; 
> that is incorrect.
> We should set it to 1000 by default only if it's empty, because if you run 
> 'HADOOP_JOB_HISTORYSERVER_HEAPSIZE=512 
> $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver', 
> HADOOP_JOB_HISTORYSERVER_HEAPSIZE will be set to 1000 rather than 512.






[jira] [Commented] (HADOOP-13455) S3Guard: Write end user documentation.

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15744216#comment-15744216
 ] 

Hadoop QA commented on HADOOP-13455:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
57s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
45s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
46s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
20s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
55s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 39s{color} 
| {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13455 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842927/HADOOP-13455-HADOOP-13345.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 7303a170ec16 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / d354cd1 |
| Default Java | 1.8.0_111 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11259/artifact/patchprocess/patch-unit-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11259/testReport/ |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . 
|
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11259/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3Guard: Write end user documentation.
> --
>
> Key: HADOOP-13455
> URL: https://issues.apache.org/jira/browse/HADOOP-13455
> 

[jira] [Commented] (HADOOP-13898) should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty on branch2

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15744153#comment-15744153
 ] 

Hadoop QA commented on HADOOP-13898:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m  
7s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
50s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m  
8s{color} | {color:red} The patch generated 1 new + 519 unchanged - 0 fixed = 
520 total (was 519) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
10s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
36s{color} | {color:green} hadoop-mapreduce-project in the patch passed with 
JDK v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-13898 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842919/HADOOP-13898-branch-2.001.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux bdaedcf107ec 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 282a562 |
| shellcheck | v0.4.5 |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11258/artifact/patchprocess/diff-patch-shellcheck.txt
 |
| JDK v1.7.0_121  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11258/testReport/ |
| modules | C: hadoop-mapreduce-project U: hadoop-mapreduce-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11258/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty on branch2
> --
>
> Key: HADOOP-13898
> URL: https://issues.apache.org/jira/browse/HADOOP-13898
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-13898-branch-2.001.patch
>
>
> In mapred-env, HADOOP_JOB_HISTORYSERVER_HEAPSIZE is set to 1000 by default; 
> that is incorrect.
> We should set it to 1000 by default only if it's empty, because if you run 
> 'HADOOP_JOB_HISTORYSERVER_HEAPSIZE=512 
> $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver', 
> HADOOP_JOB_HISTORYSERVER_HEAPSIZE will be set to 1000 rather than 512.






[jira] [Commented] (HADOOP-13455) S3Guard: Write end user documentation.

2016-12-12 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15744117#comment-15744117
 ] 

Aaron Fabbri commented on HADOOP-13455:
---

Notice I also added a missing DynamoDB parameter to core-default.xml

> S3Guard: Write end user documentation.
> --
>
> Key: HADOOP-13455
> URL: https://issues.apache.org/jira/browse/HADOOP-13455
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Chris Nauroth
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13455-HADOOP-13345.001.patch
>
>
> Write end user documentation that describes S3Guard architecture, 
> configuration and usage.






[jira] [Comment Edited] (HADOOP-13455) S3Guard: Write end user documentation.

2016-12-12 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15744117#comment-15744117
 ] 

Aaron Fabbri edited comment on HADOOP-13455 at 12/13/16 4:27 AM:
-

[~liuml07] Notice I also added a missing DynamoDB parameter to core-default.xml


was (Author: fabbri):
Notice I also added a missing DynamoDB parameter to core-default.xml

> S3Guard: Write end user documentation.
> --
>
> Key: HADOOP-13455
> URL: https://issues.apache.org/jira/browse/HADOOP-13455
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Chris Nauroth
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13455-HADOOP-13345.001.patch
>
>
> Write end user documentation that describes S3Guard architecture, 
> configuration and usage.






[jira] [Updated] (HADOOP-13455) S3Guard: Write end user documentation.

2016-12-12 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13455:
--
Affects Version/s: HADOOP-13345
   Status: Patch Available  (was: In Progress)

> S3Guard: Write end user documentation.
> --
>
> Key: HADOOP-13455
> URL: https://issues.apache.org/jira/browse/HADOOP-13455
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Chris Nauroth
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13455-HADOOP-13345.001.patch
>
>
> Write end user documentation that describes S3Guard architecture, 
> configuration and usage.






[jira] [Updated] (HADOOP-13455) S3Guard: Write end user documentation.

2016-12-12 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13455:
--
Attachment: HADOOP-13455-HADOOP-13345.001.patch

[~steve_l], ask and you shall receive. I took a stab and attached a v1 patch 
here.

> S3Guard: Write end user documentation.
> --
>
> Key: HADOOP-13455
> URL: https://issues.apache.org/jira/browse/HADOOP-13455
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Chris Nauroth
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13455-HADOOP-13345.001.patch
>
>
> Write end user documentation that describes S3Guard architecture, 
> configuration and usage.






[jira] [Work started] (HADOOP-13455) S3Guard: Write end user documentation.

2016-12-12 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13455 started by Aaron Fabbri.
-
> S3Guard: Write end user documentation.
> --
>
> Key: HADOOP-13455
> URL: https://issues.apache.org/jira/browse/HADOOP-13455
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Chris Nauroth
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13455-HADOOP-13345.001.patch
>
>
> Write end user documentation that describes S3Guard architecture, 
> configuration and usage.






[jira] [Updated] (HADOOP-13898) should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty on branch2

2016-12-12 Thread Fei Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-13898:
-
Status: Patch Available  (was: Open)

> should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty on branch2
> --
>
> Key: HADOOP-13898
> URL: https://issues.apache.org/jira/browse/HADOOP-13898
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-13898-branch-2.001.patch
>
>
> In mapred-env, HADOOP_JOB_HISTORYSERVER_HEAPSIZE is set to 1000 by default; 
> that is incorrect.
> We should set it to 1000 by default only if it's empty, because if you run 
> 'HADOOP_JOB_HISTORYSERVER_HEAPSIZE=512 
> $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver', 
> HADOOP_JOB_HISTORYSERVER_HEAPSIZE will be set to 1000 rather than 512.






[jira] [Assigned] (HADOOP-13898) should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty on branch2

2016-12-12 Thread Fei Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui reassigned HADOOP-13898:


Assignee: Fei Hui

> should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty on branch2
> --
>
> Key: HADOOP-13898
> URL: https://issues.apache.org/jira/browse/HADOOP-13898
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-13898-branch-2.001.patch
>
>
> In mapred-env, HADOOP_JOB_HISTORYSERVER_HEAPSIZE is set to 1000 by default; 
> that is incorrect.
> We should set it to 1000 by default only if it's empty, because if you run 
> 'HADOOP_JOB_HISTORYSERVER_HEAPSIZE=512 
> $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver', 
> HADOOP_JOB_HISTORYSERVER_HEAPSIZE will be set to 1000 rather than 512.






[jira] [Commented] (HADOOP-13890) TestWebDelegationToken and TestKMS fails in trunk

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15744010#comment-15744010
 ] 

Hadoop QA commented on HADOOP-13890:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-common-project/hadoop-auth: The patch 
generated 0 new + 18 unchanged - 1 fixed = 18 total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
33s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13890 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842917/HADOOP-13890.03.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9a16cbc0bbf6 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 754f15b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11257/testReport/ |
| modules | C: hadoop-common-project/hadoop-auth U: 
hadoop-common-project/hadoop-auth |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11257/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestWebDelegationToken and TestKMS fails in trunk
> -
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch, 
> HADOOP-13890.02.patch, HADOOP-13890.03.patch
>
>
> TestWebDelegationToken, 

[jira] [Updated] (HADOOP-13898) should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty on branch2

2016-12-12 Thread Fei Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-13898:
-
Attachment: HADOOP-13898-branch-2.001.patch

Patch uploaded.

> should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty on branch2
> --
>
> Key: HADOOP-13898
> URL: https://issues.apache.org/jira/browse/HADOOP-13898
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.9.0
>Reporter: Fei Hui
> Attachments: HADOOP-13898-branch-2.001.patch
>
>
> In mapred-env, HADOOP_JOB_HISTORYSERVER_HEAPSIZE is set to 1000 by default; 
> that is incorrect.
> We should set it to 1000 by default only if it's empty, because if you run 
> 'HADOOP_JOB_HISTORYSERVER_HEAPSIZE=512 
> $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver', 
> HADOOP_JOB_HISTORYSERVER_HEAPSIZE will be set to 1000 rather than 512.






[jira] [Updated] (HADOOP-13898) should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty on branch2

2016-12-12 Thread Fei Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-13898:
-
Summary: should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty on 
branch2  (was: should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty)

> should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty on branch2
> --
>
> Key: HADOOP-13898
> URL: https://issues.apache.org/jira/browse/HADOOP-13898
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>
> In mapred-env, HADOOP_JOB_HISTORYSERVER_HEAPSIZE is set to 1000 by default; 
> that is incorrect.
> We should set it to 1000 by default only if it's empty, because if you run 
> 'HADOOP_JOB_HISTORYSERVER_HEAPSIZE=512 
> $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver', 
> HADOOP_JOB_HISTORYSERVER_HEAPSIZE will be set to 1000 rather than 512.






[jira] [Created] (HADOOP-13898) should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's empty

2016-12-12 Thread Fei Hui (JIRA)
Fei Hui created HADOOP-13898:


 Summary: should set HADOOP_JOB_HISTORYSERVER_HEAPSIZE only if it's 
empty
 Key: HADOOP-13898
 URL: https://issues.apache.org/jira/browse/HADOOP-13898
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.9.0
Reporter: Fei Hui


In mapred-env, HADOOP_JOB_HISTORYSERVER_HEAPSIZE is set to 1000 by default; 
that is incorrect.
We should set it to 1000 by default only if it's empty, because if you run 
'HADOOP_JOB_HISTORYSERVER_HEAPSIZE=512 
$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver', 
HADOOP_JOB_HISTORYSERVER_HEAPSIZE will be set to 1000 rather than 512.






[jira] [Updated] (HADOOP-13890) TestWebDelegationToken and TestKMS fails in trunk

2016-12-12 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13890:

Attachment: HADOOP-13890.03.patch

> TestWebDelegationToken and TestKMS fails in trunk
> -
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch, 
> HADOOP-13890.02.patch, HADOOP-13890.03.patch
>
>
> TestWebDelegationToken, TestKMS, TestTrashWithSecureEncryptionZones and 
> TestSecureEncryptionZoneWithKMS started failing in trunk because the SPNEGO 
> principal used in these tests is incomplete: HTTP/localhost, assuming the 
> default realm will be applied at authentication time. This ticket is opened 
> to fix these unit tests with a complete HTTP principal.
> {noformat}
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid SPNEGO sequence, status code: 403
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.readToken(KerberosAuthenticator.java:371)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.access$300(KerberosAuthenticator.java:53)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:317)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:287)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:782)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:779)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$4.run(TestWebDelegationToken.java:715)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:712)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:778)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
>  {noformat}
>  *Jenkins URL* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/






[jira] [Commented] (HADOOP-13891) KerberosName#KerberosName cannot parse principle without realm

2016-12-12 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743917#comment-15743917
 ] 

Yuanbo Liu commented on HADOOP-13891:
-

[~xyao] I have gone through your patch in HADOOP-13890, and it seems better to 
address KerberosName's issue in that JIRA.
I will mark this JIRA as resolved shortly if you don't mind. 
Looking forward to your patch in HADOOP-13890, since the test failures occur 
quite often.

> KerberosName#KerberosName cannot parse principle without realm
> --
>
> Key: HADOOP-13891
> URL: https://issues.apache.org/jira/browse/HADOOP-13891
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Xiaoyu Yao
> Attachments: testKerberosName.patch
>
>
> Given a principal string like "HTTP/localhost", the returned KerberosName 
> object contains a null hostname and a null realm name. The service name is 
> incorrectly parsed as the whole string "HTTP/localhost".
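
For illustration, a minimal sketch of the reported behavior with hadoop-auth's
KerberosName (the commented values are what this description reports, not
verified output):
{code}
import org.apache.hadoop.security.authentication.util.KerberosName;

// Sketch only: parse a principal that has no realm component.
public class KerberosNameParseSketch {
  public static void main(String[] args) {
    KerberosName kn = new KerberosName("HTTP/localhost");
    System.out.println(kn.getServiceName()); // reported: "HTTP/localhost"
    System.out.println(kn.getHostName());    // reported: null
    System.out.println(kn.getRealm());       // reported: null
  }
}
{code}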






[jira] [Commented] (HADOOP-13890) TestWebDelegationToken and TestKMS fails in trunk

2016-12-12 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743907#comment-15743907
 ] 

Xiaoyu Yao commented on HADOOP-13890:
-

In the patch v2, we remove the check for realm but keep the check for host, as 
required by [RFC-4559|https://tools.ietf.org/html/rfc4559]:
{code}
   When the Kerberos Version 5 GSSAPI mechanism [RFC4121] is being used,
   the HTTP server will be using a principal name of the form of
   "HTTP/hostname".
{code}

In other words, a valid UPN (User Principal Name) without a hostname, such as 
h...@example.com, is invalid as an HTTP SPNEGO SPN (Service Principal Name). 
The RFC does not mention any requirement on the realm, but based on many 
articles on multi-realm deployment, it is recommended to configure 
HTTP/FQDN@REALM to avoid ambiguity and authentication problems in multi-realm 
use cases.
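
A minimal sketch of such a validation, assuming KerberosName's accessors from 
hadoop-auth (this is not the actual patch code):
{code}
import org.apache.hadoop.security.authentication.util.KerberosName;

// Sketch only: require the host component mandated by RFC 4559, but place no
// requirement on the realm. Note that, per HADOOP-13891, KerberosName only
// splits out the host when a realm is present, so "HTTP/localhost@EXAMPLE.COM"
// passes while the bare "HTTP/localhost" does not.
public class SpnegoSpnCheckSketch {
  static void checkSpnegoPrincipal(String principal) {
    KerberosName kn = new KerberosName(principal);
    if (!"HTTP".equals(kn.getServiceName()) || kn.getHostName() == null) {
      throw new IllegalArgumentException(
          "Invalid SPNEGO principal, expected HTTP/<hostname>[@REALM]: "
          + principal);
    }
  }
}
{code}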

> TestWebDelegationToken and TestKMS fails in trunk
> -
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch, 
> HADOOP-13890.02.patch
>
>
> TestWebDelegationToken, TestKMS, TestTrashWithSecureEncryptionZones and 
> TestSecureEncryptionZoneWithKMS started failing in trunk because the SPNEGO 
> principal used in these tests is incomplete: HTTP/localhost, assuming the 
> default realm will be applied at authentication time. This ticket is opened 
> to fix these unit tests with a complete HTTP principal.
> {noformat}
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid SPNEGO sequence, status code: 403
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.readToken(KerberosAuthenticator.java:371)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.access$300(KerberosAuthenticator.java:53)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:317)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:287)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:782)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:779)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$4.run(TestWebDelegationToken.java:715)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:712)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:778)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
>  {noformat}
>  *Jenkins URL* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/




[jira] [Commented] (HADOOP-13890) TestWebDelegationToken and TestKMS fails in trunk

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743893#comment-15743893
 ] 

Hadoop QA commented on HADOOP-13890:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} hadoop-common-project/hadoop-auth: The patch 
generated 1 new + 18 unchanged - 1 fixed = 19 total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
33s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13890 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842911/HADOOP-13890.02.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 95763bec7229 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c6a3923 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11256/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-auth.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11256/testReport/ |
| modules | C: hadoop-common-project/hadoop-auth U: 
hadoop-common-project/hadoop-auth |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11256/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestWebDelegationToken and TestKMS fails in trunk
> -
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu 

[jira] [Commented] (HADOOP-13890) TestWebDelegationToken and TestKMS fails in trunk

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743850#comment-15743850
 ] 

Hadoop QA commented on HADOOP-13890:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} hadoop-common-project/hadoop-auth: The patch 
generated 1 new + 18 unchanged - 1 fixed = 19 total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
32s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13890 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842911/HADOOP-13890.02.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6c2be5edf51e 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c6a3923 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11255/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-auth.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11255/testReport/ |
| modules | C: hadoop-common-project/hadoop-auth U: 
hadoop-common-project/hadoop-auth |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11255/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestWebDelegationToken and TestKMS fails in trunk
> -
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao

[jira] [Commented] (HADOOP-13890) TestWebDelegationToken and TestKMS fails in trunk

2016-12-12 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743828#comment-15743828
 ] 

Xiaoyu Yao commented on HADOOP-13890:
-

I guess these changes (hadoop-auth) are unlikely to trigger the hadoop-common and 
hadoop-hdfs tests on Jenkins.

> TestWebDelegationToken and TestKMS fails in trunk
> -
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch, 
> HADOOP-13890.02.patch
>
>
> TestWebDelegationToken, TestKMS, TestTrashWithSecureEncryptionZones and 
> TestSecureEncryptionZoneWithKMS started failing in trunk because the SPNEGO 
> principal used in these tests is incomplete: HTTP/localhost, assuming the 
> default realm will be applied at authentication time. This ticket is opened 
> to fix these unit tests with a complete HTTP principal.
> {noformat}
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid SPNEGO sequence, status code: 403
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.readToken(KerberosAuthenticator.java:371)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.access$300(KerberosAuthenticator.java:53)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:317)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:287)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:782)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:779)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$4.run(TestWebDelegationToken.java:715)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:712)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:778)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
>  {noformat}
>  *Jenkins URL* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13890) TestWebDelegationToken and TestKMS fails in trunk

2016-12-12 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13890:

Attachment: HADOOP-13890.02.patch

Posted a patch that fixes the KerberosName parsing and relaxes the check in 
KerberosAuthenticationHandler that requires a realm, without modifying the failing 
unit tests. This way, we won't break compatibility for use cases that, like the 
failing unit tests, use an SPN in the form HTTP/host and assume the local realm. 

I've tested the patch locally against the failing tests and all of them passed. 
Please review, thanks!
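
For illustration only, a minimal sketch of the kind of relaxed check described 
above; the helper name and its placement are hypothetical, not the actual patch:

{code}
// Hypothetical sketch: accept SPNs both with an explicit realm
// ("HTTP/localhost@EXAMPLE.COM") and without one ("HTTP/localhost",
// where the default realm is applied at authentication time).
private static boolean isAcceptableSpn(String principal) {
  if (principal == null || principal.isEmpty()) {
    return false;
  }
  String serviceAndHost = principal;
  int at = principal.indexOf('@');
  if (at >= 0) {
    if (at == principal.length() - 1) {
      return false;           // trailing '@' with an empty realm
    }
    serviceAndHost = principal.substring(0, at);
  }
  String[] parts = serviceAndHost.split("/");
  return parts.length == 2    // service/host, e.g. HTTP/localhost
      && !parts[0].isEmpty() && !parts[1].isEmpty();
}
{code}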


> TestWebDelegationToken and TestKMS fails in trunk
> -
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch, 
> HADOOP-13890.02.patch
>
>
> TestWebDelegationToken, TestKMS, TestTrashWithSecureEncryptionZones and 
> TestSecureEncryptionZoneWithKMS started failing in trunk because the SPNEGO 
> principal used in these tests is incomplete: HTTP/localhost, assuming the 
> default realm will be applied at authentication time. This ticket is opened 
> to fix these unit tests with a complete HTTP principal.
> {noformat}
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid SPNEGO sequence, status code: 403
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.readToken(KerberosAuthenticator.java:371)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.access$300(KerberosAuthenticator.java:53)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:317)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:287)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:782)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:779)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$4.run(TestWebDelegationToken.java:715)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:712)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:778)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
>  {noformat}
>  *Jenkins URL* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13890) TestWebDelegationToken and TestKMS fails in trunk

2016-12-12 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13890:

Summary: TestWebDelegationToken and TestKMS fails in trunk  (was: Unit 
tests should use SPNEGO principal with realm)

> TestWebDelegationToken and TestKMS fails in trunk
> -
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch
>
>
> TestWebDelegationToken, TestKMS, TestTrashWithSecureEncryptionZones and 
> TestSecureEncryptionZoneWithKMS started failing in trunk because the SPNEGO 
> principal used in these tests is incomplete: HTTP/localhost, assuming the 
> default realm will be applied at authentication time. This ticket is opened 
> to fix these unit tests with a complete HTTP principal.
> {noformat}
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid SPNEGO sequence, status code: 403
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.readToken(KerberosAuthenticator.java:371)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.access$300(KerberosAuthenticator.java:53)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:317)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:287)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:782)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:779)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$4.run(TestWebDelegationToken.java:715)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:712)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:778)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
>  {noformat}
>  *Jenkins URL* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13831) Correct check for error code to detect Azure Storage Throttling and provide retries

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743731#comment-15743731
 ] 

Hadoop QA commented on HADOOP-13831:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
21s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13831 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842905/HADOOP-13831.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c3a20d2fc120 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c6a3923 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11254/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11254/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Correct check for error code to detect Azure Storage Throttling and provide 
> retries
> ---
>
> Key: HADOOP-13831
> URL: https://issues.apache.org/jira/browse/HADOOP-13831
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.7.3
>Reporter: Gaurav Kanade
> Attachments: HADOOP-13831.001.patch
>
>
>  Azure Storage 

[jira] [Commented] (HADOOP-13831) Correct check for error code to detect Azure Storage Throttling and provide retries

2016-12-12 Thread Gaurav Kanade (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743651#comment-15743651
 ] 

Gaurav Kanade commented on HADOOP-13831:


[~ste...@apache.org] can someone assign this issue to me? Thanks.

> Correct check for error code to detect Azure Storage Throttling and provide 
> retries
> ---
>
> Key: HADOOP-13831
> URL: https://issues.apache.org/jira/browse/HADOOP-13831
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.7.3
>Reporter: Gaurav Kanade
> Attachments: HADOOP-13831.001.patch
>
>
>  Azure Storage throttling affects HBase operations such as archiving old 
> WALs and others. In such cases the storage driver needs to detect and handle 
> the exception. We put in this logic to do the retries, however the condition 
> to check for the exception is not always met due to inconsistency in the 
> manner in which the error code is passed back. Instead, the retry logic 
> should check for the HTTP status code (503), which is a more reliable and 
> consistent check.
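
As an illustration of the proposed check, a sketch that retries on the HTTP status 
code rather than on the error-code string; the interface, retry bounds and backoff 
here are assumptions, not the attached patch:

{code}
import com.microsoft.azure.storage.StorageException;

/** Hypothetical sketch of throttling-aware retries for a WASB storage operation. */
final class ThrottlingRetry {
  private static final int HTTP_SERVER_BUSY = 503;
  private static final int MAX_ATTEMPTS = 5;

  interface StorageOp {
    void run() throws StorageException;
  }

  static void withRetries(StorageOp op)
      throws StorageException, InterruptedException {
    for (int attempt = 1; ; attempt++) {
      try {
        op.run();
        return;
      } catch (StorageException e) {
        // Key the retry on the HTTP status code (503), which is returned
        // consistently, instead of on the error-code string, which is not.
        if (e.getHttpStatusCode() != HTTP_SERVER_BUSY
            || attempt >= MAX_ATTEMPTS) {
          throw e;
        }
        Thread.sleep(1000L * attempt); // simple linear backoff
      }
    }
  }
}
{code}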



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13831) Correct check for error code to detect Azure Storage Throttling and provide retries

2016-12-12 Thread Gaurav Kanade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Kanade updated HADOOP-13831:
---
Status: Patch Available  (was: Open)

> Correct check for error code to detect Azure Storage Throttling and provide 
> retries
> ---
>
> Key: HADOOP-13831
> URL: https://issues.apache.org/jira/browse/HADOOP-13831
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.7.3
>Reporter: Gaurav Kanade
> Attachments: HADOOP-13831.001.patch
>
>
>  Azure Storage throttling affects HBase operations such as archiving old 
> WALs and others. In such cases the storage driver needs to detect and handle 
> the exception. We put in this logic to do the retries, however the condition 
> to check for the exception is not always met due to inconsistency in the 
> manner in which the error code is passed back. Instead, the retry logic 
> should check for the HTTP status code (503), which is a more reliable and 
> consistent check.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13831) Correct check for error code to detect Azure Storage Throttling and provide retries

2016-12-12 Thread Gaurav Kanade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Kanade updated HADOOP-13831:
---
Attachment: HADOOP-13831.001.patch

> Correct check for error code to detect Azure Storage Throttling and provide 
> retries
> ---
>
> Key: HADOOP-13831
> URL: https://issues.apache.org/jira/browse/HADOOP-13831
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.7.3
>Reporter: Gaurav Kanade
> Attachments: HADOOP-13831.001.patch
>
>
>  Azure Storage throttling affects HBase operations such as archiving old 
> WALs and others. In such cases the storage driver needs to detect and handle 
> the exception. We put in this logic to do the retries, however the condition 
> to check for the exception is not always met due to inconsistency in the 
> manner in which the error code is passed back. Instead, the retry logic 
> should check for the HTTP status code (503), which is a more reliable and 
> consistent check.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13897) TestAdlFileContextMainOperationsLive#testGetFileContext1 fails consistently

2016-12-12 Thread Tony Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743637#comment-15743637
 ] 

Tony Wu commented on HADOOP-13897:
--

It appears the cause is that the following test 
({{FileContextMainOperationsBaseTest#testGetFileContext1}}) is using the 
default configuration rather than the ADLS test-specific config 
{{hadoop-azure-datalake/src/test/resources/contract-test-options.xml}}.
{code}
  @Test
  /*
   * Test method
   *  org.apache.hadoop.fs.FileContext.getFileContext(AbstractFileSystem)
   */
  public void testGetFileContext1() throws IOException {
    final Path rootPath = getTestRootPath(fc, "test");
    AbstractFileSystem asf = fc.getDefaultFileSystem();
    // create FileContext using the protected #getFileContext(1) method:
    FileContext fc2 = FileContext.getFileContext(asf); // this uses the default config
    // Now just check that this context can do something reasonable:
    final Path path = new Path(rootPath, "zoo");
    FSDataOutputStream out = fc2.create(path, EnumSet.of(CREATE),
        Options.CreateOpts.createParent());
    out.close();
    Path pathResolved = fc2.resolvePath(path);
    assertEquals(pathResolved.toUri().getPath(), path.toUri().getPath());
  }
{code}

The default config does not have {{dfs.adls.oauth2.access.token.provider.type}} 
defined.
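
If that is the root cause, one possible direction (a sketch, not a tested fix) is 
to hand the test's own configuration to the two-argument factory method, so the 
OAuth2 provider setting from contract-test-options.xml is visible; {{testConf}} 
below is a stand-in for however the live test loads its config:

{code}
// Sketch: use the Configuration-aware overload instead of the protected
// one-argument method, which falls back to `new Configuration()`.
// Assumes the imports already present in the base test class.
Configuration testConf = new Configuration();
testConf.addResource("contract-test-options.xml"); // ADLS live-test settings
AbstractFileSystem asf = fc.getDefaultFileSystem();
FileContext fc2 = FileContext.getFileContext(asf, testConf);
{code}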

> TestAdlFileContextMainOperationsLive#testGetFileContext1 fails consistently
> ---
>
> Key: HADOOP-13897
> URL: https://issues.apache.org/jira/browse/HADOOP-13897
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha2
>Reporter: Tony Wu
>
> {{TestAdlFileContextMainOperationsLive#testGetFileContext1}} (this is a live 
> test against Azure Data Lake Store) fails consistently with the following 
> error:
> {noformat}
> ---
>  T E S T S
> ---
> Running org.apache.hadoop.fs.adl.live.TestAdlFileContextMainOperationsLive
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 11.55 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.fs.adl.live.TestAdlFileContextMainOperationsLive
> testGetFileContext1(org.apache.hadoop.fs.adl.live.TestAdlFileContextMainOperationsLive)
>   Time elapsed: 11.229 sec  <<< ERROR!
> java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
>   at 
> org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:136)
>   at 
> org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:165)
>   at 
> org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:250)
>   at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:331)
>   at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:328)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1857)
>   at 
> org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:328)
>   at org.apache.hadoop.fs.FileContext.getFSofPath(FileContext.java:320)
>   at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:85)
>   at org.apache.hadoop.fs.FileContext.create(FileContext.java:685)
>   at 
> org.apache.hadoop.fs.FileContextMainOperationsBaseTest.testGetFileContext1(FileContextMainOperationsBaseTest.java:1350)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at 

[jira] [Created] (HADOOP-13897) TestAdlFileContextMainOperationsLive#testGetFileContext1 fails consistently

2016-12-12 Thread Tony Wu (JIRA)
Tony Wu created HADOOP-13897:


 Summary: TestAdlFileContextMainOperationsLive#testGetFileContext1 
fails consistently
 Key: HADOOP-13897
 URL: https://issues.apache.org/jira/browse/HADOOP-13897
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure
Affects Versions: 3.0.0-alpha2
Reporter: Tony Wu


{{TestAdlFileContextMainOperationsLive#testGetFileContext1}} (this is a live 
test against Azure Data Lake Store) fails consistently with the following error:
{noformat}
---
 T E S T S
---
Running org.apache.hadoop.fs.adl.live.TestAdlFileContextMainOperationsLive
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 11.55 sec <<< 
FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlFileContextMainOperationsLive
testGetFileContext1(org.apache.hadoop.fs.adl.live.TestAdlFileContextMainOperationsLive)
  Time elapsed: 11.229 sec  <<< ERROR!
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at 
org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:136)
at 
org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:165)
at 
org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:250)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:331)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:328)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1857)
at 
org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:328)
at org.apache.hadoop.fs.FileContext.getFSofPath(FileContext.java:320)
at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:85)
at org.apache.hadoop.fs.FileContext.create(FileContext.java:685)
at 
org.apache.hadoop.fs.FileContextMainOperationsBaseTest.testGetFileContext1(FileContextMainOperationsBaseTest.java:1350)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:254)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:149)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: java.lang.reflect.InvocationTargetException: null
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 

[jira] [Commented] (HADOOP-13336) S3A to support per-bucket configuration

2016-12-12 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743591#comment-15743591
 ] 

Aaron Fabbri commented on HADOOP-13336:
---

Just noticed you suggest the same thing about having a ".bucket." 
prefix, so disregard that comment (it was just missing from the landsat example).


> S3A to support per-bucket configuration
> ---
>
> Key: HADOOP-13336
> URL: https://issues.apache.org/jira/browse/HADOOP-13336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> S3a now supports different regions, by way of declaring the endpoint, but you 
> can't do things like read in one region, write back in another (e.g. a distcp 
> backup), because only one region can be specified in a configuration.
> If s3a supported region declaration in the URL, e.g. s3a://b1.frankfurt 
> s3a://b2.seol , then this would be possible. 
> Swift does this with a full filesystem binding/config: endpoints, username, 
> etc, in the XML file. Would we need to do that much? It'd be simpler 
> initially to use a domain suffix of a URL to set the region of a bucket from 
> the domain and have the aws library sort the details out itself, maybe with 
> some config options for working with non-AWS infra.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13336) S3A to support per-bucket configuration

2016-12-12 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743567#comment-15743567
 ] 

Aaron Fabbri commented on HADOOP-13336:
---

Great summary [~steve_l]. I think being backward-compatible with existing 
configs and URIs in production is important.  These all seem reasonable, but 
URI compatibility seems to point to option A for me (if we want to keep it 
simple).  The annoying thing is that these are hard to change if we decide we 
want a different option. Which option are you leaning towards?  

{quote}
*Option A* per-bucket config.
Lets you define everything for a bucket.
Examples
s3a://olap2/data/2017 : s3a URL s3a://olap2/data/2017, with config set 
fs.s3a.bucket.olap2 in configuration
s3a://landsat : s3a URL s3a://landsat, with config set fs.s3a.landsat for 
anonymous credentials and no dynamo
{quote}

To avoid key space conflicts I'd suggest a prefix of 
fs.s3a.bucket.<bucketname> instead of fs.s3a.<bucketname>. Just in case 
someone has an S3 bucket named "endpoint", they'd use 
{{fs.s3a.bucket.endpoint.*}} instead of conflicting with {{fs.s3a.endpoint}}, 
etc.

This option seems pretty straightforward. It should be backward compatible, as it 
requires no changes to URIs, and existing default or "all bucket" config keys 
continue to work the same. For grabbing config values in S3A, we'd call some 
per-bucket Configuration wrapper that looks for the 
fs.s3a.bucket.<bucketname>.* key first and, if it is not set, returns whatever is 
in the non-bucket-specific config (see the sketch below).
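
A minimal sketch of that per-bucket lookup, assuming the fs.s3a.bucket.<bucketname> 
prefix convention; the class and method names here are hypothetical, not part of S3A:

{code}
import org.apache.hadoop.conf.Configuration;

/** Hypothetical helper: bucket-specific key wins, else fall back to the base key. */
final class PerBucketConfig {
  static String get(Configuration conf, String bucket, String key, String defVal) {
    // e.g. key "fs.s3a.endpoint" + bucket "landsat" -> "fs.s3a.bucket.landsat.endpoint"
    String bucketKey =
        key.replaceFirst("^fs\\.s3a\\.", "fs.s3a.bucket." + bucket + ".");
    String v = conf.get(bucketKey);
    return (v != null) ? v : conf.get(key, defVal);
  }
}
{code}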

{quote}
*Option B* config via domain name in URL
This is what swift does: you define a domain, with the domain defining 
everything.
s3a://olap2.dynamo/data/2017 with config set fs.s3a.binding.dynamo
s3a://landsat.anon with config set fs.s3a.binding.anon for anonymous 
credentials and no dynamo
{quote}

As you mention, my desire for URI backward-compatibility implies we need an 
additional way to map a bucket to a domain, e.g. 
{{fs.s3a.domain.bucket.my-bucket=my-domain}}.  Seems a bit too complex. This 
buys us the ability to share a config over some set of buckets. 

Also, does this break folks who use FQDN bucket names?

{quote}
*Option C* Config via user:pass property in URL
This is a bit like Azure, where the FQDN defines the binding, and the username 
defines the bucket. Here I'm proposing the ability to define a new user which 
declares the binding info.
Examples
s3a://dynamo@olap2/data/2017 : s3a URL s3a://olap2/data/2017, with config set 
fs.s3a.binding.dynamo
s3a://anon@landsat : s3a URL s3a://landsat, with config set fs.s3a.binding.anon 
for anonymous credentials.
{quote}

Seems reasonable but the need to change URIs is unfortunate.




> S3A to support per-bucket configuration
> ---
>
> Key: HADOOP-13336
> URL: https://issues.apache.org/jira/browse/HADOOP-13336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> S3a now supports different regions, by way of declaring the endpoint, but you 
> can't do things like read in one region, write back in another (e.g. a distcp 
> backup), because only one region can be specified in a configuration.
> If s3a supported region declaration in the URL, e.g. s3a://b1.frankfurt 
> s3a://b2.seol , then this would be possible. 
> Swift does this with a full filesystem binding/config: endpoints, username, 
> etc, in the XML file. Would we need to do that much? It'd be simpler 
> initially to use a domain suffix of a URL to set the region of a bucket from 
> the domain and have the aws library sort the details out itself, maybe with 
> some config options for working with non-AWS infra.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13890) Unit tests should use SPNEGO principal with realm

2016-12-12 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743554#comment-15743554
 ] 

Xiaoyu Yao commented on HADOOP-13890:
-

Thanks [~daryn] for the comments. Yes, HADOOP-13565 enforces a check that 
SPNEGO SPNs have three parts: service, host and realm (HTTP/host@REALM). This is 
incompatible with the behavior before HADOOP-13565, which allowed principals 
like HTTP/host, assuming the default realm at authentication time.

Since HTTP/host is a legitimate use case, as you commented on HADOOP-13891, we 
can loosen the check added by HADOOP-13565 to allow it in 
KerberosAuthenticationHandler without modifying the unit tests.

> Unit tests should use SPNEGO principal with realm
> -
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch
>
>
> TestWebDelegationToken, TestKMS, TestTrashWithSecureEncryptionZones and 
> TestSecureEncryptionZoneWithKMS started failing in trunk because the SPNEGO 
> principal used in these tests is incomplete: HTTP/localhost, assuming the 
> default realm will be applied at authentication time. This ticket is opened 
> to fix these unit tests with a complete HTTP principal.
> {noformat}
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid SPNEGO sequence, status code: 403
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.readToken(KerberosAuthenticator.java:371)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.access$300(KerberosAuthenticator.java:53)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:317)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:287)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:782)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:779)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$4.run(TestWebDelegationToken.java:715)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:712)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:778)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
>  {noformat}
>  *Jenkins URL* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13863) Hadoop - Azure: Add a new SAS key mode for WASB.

2016-12-12 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743543#comment-15743543
 ] 

Mingliang Liu commented on HADOOP-13863:


Good, thanks [~dchickabasapa] for updating the patch. I'll review this code 
this week (or before the holiday) if there are no reviews from others.

> Hadoop - Azure: Add a new SAS key mode for WASB.
> 
>
> Key: HADOOP-13863
> URL: https://issues.apache.org/jira/browse/HADOOP-13863
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure, fs/azure
>Affects Versions: 2.8.0
>Reporter: Dushyanth
>Assignee: Dushyanth
> Attachments: HADOOP-13863.001.patch, HADOOP-13863.002.patch, WASB-SAS 
> Key Mode-Design Proposal.pdf
>
>
> The current implementation of WASB only supports Azure storage keys and SAS keys 
> being provided via org.apache.hadoop.conf.Configuration, which results in 
> these secrets residing in the same address space as the WASB process and 
> providing complete access to the Azure storage account and its containers. 
> Added to the fact that WASB does not inherently support ACLs, WASB in its 
> current implementation cannot be securely used in environments like a secure 
> Hadoop cluster. This JIRA is created to add a new mode in WASB, which 
> operates on Azure Storage SAS keys, which can provide fine-grained, timed 
> access to containers and blobs, providing a segue into supporting WASB in a 
> secure Hadoop cluster.
> More details about the issue and the proposal are provided in the design 
> proposal document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13565) KerberosAuthenticationHandler#authenticate should not rebuild SPN based on client request

2016-12-12 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743539#comment-15743539
 ] 

Daryn Sharp commented on HADOOP-13565:
--

I've been told this patch broke our testing pipelines.  I don't have details, 
but perhaps this patch should be considered for reverting until we are sure 
what the problem(s) are.

I'll look at this patch tomorrow. 

> KerberosAuthenticationHandler#authenticate should not rebuild SPN based on 
> client request
> -
>
> Key: HADOOP-13565
> URL: https://issues.apache.org/jira/browse/HADOOP-13565
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13565.00.patch, HADOOP-13565.01.patch, 
> HADOOP-13565.02.patch, HADOOP-13565.03.patch
>
>
> In KerberosAuthenticationHandler#authenticate, we use the canonicalized server 
> name derived from the HTTP request to build the server SPN and authenticate 
> the client. This can be problematic if the HTTP client/server are running from 
> a non-local Kerberos realm that the local realm has trust with (e.g., NN UI).
> For example, 
> the server is running its HTTP endpoint using an SPN from the client realm:
> hadoop.http.authentication.kerberos.principal
> HTTP/_HOST@TEST.COM
> When the client sends a request to the namenode at http://NN1.example.com:50070 
> from client.test@test.com, the client talks to the KDC first and gets a service 
> ticket HTTP/NN1.example.com@TEST.COM to authenticate with the server via SPNEGO 
> negotiation. 
> The authentication will end up with either a "no valid credential" error or a 
> checksum failure, depending on the HTTP client's name resolution or the HTTP 
> Host field from the request header provided by the browser. 
> The root cause is that {{KerberosUtil.getServicePrincipal("HTTP", serverName)}} 
> will always return an SPN with the local realm (HTTP/nn.example@example.com) no 
> matter whether the server login SPN is from that domain or not. 
> The proposed fix is to use the default server login principal instead (by 
> passing null as the 1st parameter to gssManager.createCredential()). 
> This way we avoid dependency on HTTP client behavior (Host header or name 
> resolution like CNAME) or assumptions about the local realm. 
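
A sketch of that direction (illustrative only, not the committed patch; the SPNEGO 
OID and the surrounding wiring are assumptions):

{code}
import org.ietf.jgss.GSSCredential;
import org.ietf.jgss.GSSException;
import org.ietf.jgss.GSSManager;
import org.ietf.jgss.Oid;

final class AcceptorCredsSketch {
  static GSSCredential serverCredential() throws GSSException {
    GSSManager gssManager = GSSManager.getInstance();
    Oid spnegoOid = new Oid("1.3.6.1.5.5.2"); // SPNEGO mechanism OID
    // Passing null as the name lets GSS use the server's own login
    // principal, instead of an SPN rebuilt from the client's request
    // (Host header / name resolution).
    return gssManager.createCredential(
        null,                                 // default acceptor principal
        GSSCredential.INDEFINITE_LIFETIME,
        spnegoOid,
        GSSCredential.ACCEPT_ONLY);
  }
}
{code}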



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set

2016-12-12 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13852:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

> hadoop build to allow hadoop version property to be explicitly set
> --
>
> Key: HADOOP-13852
> URL: https://issues.apache.org/jira/browse/HADOOP-13852
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13582-003.patch, HADOOP-13852-001.patch, 
> HADOOP-13852-002.patch, HADOOP-13852-003.patch
>
>
> Hive (and, transitively, Spark) won't start on Hadoop 3.x, as the shim layer 
> rejects Hadoop v3. As a workaround pending a Hive fix, allow the build to 
> have the Hadoop version (currently set to pom.version) overridden 
> manually.
> This will not affect the version names of artifacts, merely the declared Hadoop 
> version visible in {{VersionInfo.getVersion()}}.
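
For the record, usage would presumably look like the following, assuming the Maven 
property the patch introduces is named {{declared.hadoop.version}} (the property 
name is inferred here; check the committed pom):

{noformat}
mvn clean install -DskipTests -Ddeclared.hadoop.version=2.11
{noformat}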



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set

2016-12-12 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13852:
---
Attachment: HADOOP-13852-003.patch

The latest patch was committed in trunk. I think Steve attached the wrong patch 
file here? Re-attaching the file for the record.

Patch looks good to me (though I'm running late); I manually ran the previously 
failing unit tests as well and they pass.



> hadoop build to allow hadoop version property to be explicitly set
> --
>
> Key: HADOOP-13852
> URL: https://issues.apache.org/jira/browse/HADOOP-13852
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13582-003.patch, HADOOP-13852-001.patch, 
> HADOOP-13852-002.patch, HADOOP-13852-003.patch
>
>
> Hive (and, transitively, Spark) won't start on Hadoop 3.x, as the shim layer 
> rejects Hadoop v3. As a workaround pending a Hive fix, allow the build to 
> have the Hadoop version (currently set to pom.version) overridden 
> manually.
> This will not affect the version names of artifacts, merely the declared Hadoop 
> version visible in {{VersionInfo.getVersion()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set

2016-12-12 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13852:
---
Attachment: (was: HADOOP-13852.003.patch)

> hadoop build to allow hadoop version property to be explicitly set
> --
>
> Key: HADOOP-13852
> URL: https://issues.apache.org/jira/browse/HADOOP-13852
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13582-003.patch, HADOOP-13852-001.patch, 
> HADOOP-13852-002.patch
>
>
> Hive (and, transitively, Spark) won't start on Hadoop 3.x, as the shim layer 
> rejects Hadoop v3. As a workaround pending a Hive fix, allow the build to 
> have the Hadoop version (currently set to pom.version) overridden 
> manually.
> This will not affect the version names of artifacts, merely the declared Hadoop 
> version visible in {{VersionInfo.getVersion()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13891) KerberosName#KerberosName cannot parse principal without realm

2016-12-12 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743511#comment-15743511
 ] 

Daryn Sharp commented on HADOOP-13891:
--

Yes, it's valid, no realm means default realm.

> KerberosName#KerberosName cannot parse principal without realm
> --
>
> Key: HADOOP-13891
> URL: https://issues.apache.org/jira/browse/HADOOP-13891
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Xiaoyu Yao
> Attachments: testKerberosName.patch
>
>
> Given a principal string like "HTTP/localhost", the returned KerberosName 
> object contains a null hostname and null realm name. The service name is 
> incorrectly parsed as the whole string "HTTP/localhost".
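
A small sketch reproducing the reported behavior with hadoop-auth's KerberosName; 
the expected output is taken from this report:

{code}
import org.apache.hadoop.security.authentication.util.KerberosName;

public class KerberosNameRepro {
  public static void main(String[] args) {
    KerberosName kn = new KerberosName("HTTP/localhost");
    // Per this report: the service name comes back as the whole string
    // "HTTP/localhost", while the host name and realm are null.
    System.out.println("service = " + kn.getServiceName());
    System.out.println("host    = " + kn.getHostName());
    System.out.println("realm   = " + kn.getRealm());
  }
}
{code}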



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set

2016-12-12 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13852:
---
Attachment: HADOOP-13852.003.patch

> hadoop build to allow hadoop version property to be explicitly set
> --
>
> Key: HADOOP-13852
> URL: https://issues.apache.org/jira/browse/HADOOP-13852
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13582-003.patch, HADOOP-13852-001.patch, 
> HADOOP-13852-002.patch, HADOOP-13852.003.patch
>
>
> Hive (and, transitively, Spark) won't start on Hadoop 3.x, as the shim layer 
> rejects Hadoop v3. As a workaround pending a Hive fix, allow the build to 
> have the Hadoop version (currently set to pom.version) overridden 
> manually.
> This will not affect the version names of artifacts, merely the declared Hadoop 
> version visible in {{VersionInfo.getVersion()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-12-12 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743497#comment-15743497
 ] 

Aaron Fabbri commented on HADOOP-13449:
---

Thanks [~steve_l].  Those tests that make assertions about internal operation 
counts have been useful.  I have disabled many of them with a check against 
{{S3AFileSystem#isMetadataStoreConfigured()}} or 
{{S3ATestUtils#isMetadataStoreAuthoritative()}}.  They are a bit brittle to 
change, but they do end up catching issues, so I think they are ultimately useful.
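
Sketched guard pattern around one of those assertions, using the helper named 
above (its exact signature lives on the HADOOP-13345 branch; the counter plumbing 
here is hypothetical):

{code}
// Only assert raw S3 operation counts when no MetadataStore can absorb them.
if (!getFileSystem().isMetadataStoreConfigured()) {
  assertEquals("unexpected number of S3 GET requests",
      expectedGetCount, getRequestCounter.getCount());
}
{code}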

> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-13449-HADOOP-13345.000.patch, 
> HADOOP-13449-HADOOP-13345.001.patch, HADOOP-13449-HADOOP-13345.002.patch, 
> HADOOP-13449-HADOOP-13345.003.patch, HADOOP-13449-HADOOP-13345.004.patch, 
> HADOOP-13449-HADOOP-13345.005.patch, HADOOP-13449-HADOOP-13345.006.patch, 
> HADOOP-13449-HADOOP-13345.007.patch, HADOOP-13449-HADOOP-13345.008.patch, 
> HADOOP-13449-HADOOP-13345.009.patch, HADOOP-13449-HADOOP-13345.010.patch, 
> HADOOP-13449-HADOOP-13345.011.patch, HADOOP-13449-HADOOP-13345.012.patch, 
> HADOOP-13449-HADOOP-13345.013.patch
>
>
> Provide an implementation of the metadata store backed by DynamoDB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13890) Unit tests should use SPNEGO principal with realm

2016-12-12 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743494#comment-15743494
 ] 

Daryn Sharp commented on HADOOP-13890:
--

This appears to be an attempt to hide an incompatibility introduced by 
HADOOP-13565.  Generally, when a legitimate test breaks, the solution isn't to 
alter the test to conform to new incompatible behavior.


> Unit tests should use SPNEGO principal with realm
> -
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch
>
>
> TestWebDelegationToken, TestKMS, TestTrashWithSecureEncryptionZones and 
> TestSecureEncryptionZoneWithKMS started failing in trunk because the SPNEGO 
> principal used in these tests is incomplete: HTTP/localhost, assuming the 
> default realm will be applied at authentication time. This ticket is opened 
> to fix these unit tests with a complete HTTP principal.
> {noformat}
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid SPNEGO sequence, status code: 403
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.readToken(KerberosAuthenticator.java:371)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.access$300(KerberosAuthenticator.java:53)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:317)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:287)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:782)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:779)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$4.run(TestWebDelegationToken.java:715)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:712)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:778)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
>  {noformat}
>  *Jenkins URL* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/
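
As a minimal sketch of what the fix amounts to (the realm value below is a 
stand-in; the tests take the real one from the MiniKDC configuration):

{code}
// Illustrative only, not the patch itself: the failing tests used the
// realm-less principal; the fix spells the realm out explicitly.
public class SpnegoPrincipalSketch {
  public static void main(String[] args) {
    String incomplete = "HTTP/localhost";             // relies on default_realm
    String complete = "HTTP/localhost@EXAMPLE.COM";   // realm made explicit
    System.out.println(incomplete + " -> " + complete);
  }
}
{code}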






[jira] [Commented] (HADOOP-13871) ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance awful

2016-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743490#comment-15743490
 ] 

Hudson commented on HADOOP-13871:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10987 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10987/])
HADOOP-13871. (liuml07: rev c6a39232456fa0c98b2b9b6dbeaec762294ca01e)
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3AInputStreamPerformance.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
* (add) 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/S3AScaleTestBase.java


> ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks 
> performance awful
> -
>
> Key: HADOOP-13871
> URL: https://issues.apache.org/jira/browse/HADOOP-13871
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha2
> Environment: landsat bucket from the UK
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13871-001.patch, HADOOP-13871-002.patch, 
> HADOOP-13871-003.patch, 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance-output.txt
>
>
> The ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks takes 
> 15s on branch-2, but is now taking minutes.
> This is a regression, and it's surfacing on some internal branches too, even 
> where the code hasn't changed. -It does not happen on branch-2, which has a 
> later SDK.-






[jira] [Commented] (HADOOP-13826) S3A Deadlock in multipart copy due to thread pool limits.

2016-12-12 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743481#comment-15743481
 ] 

Sean Mackrory commented on HADOOP-13826:


[~Thomas Demoor] Thanks. The more I think about it, the more I prefer the .003 
patch. I played around with my original approach (distinct resource pools for 
different types of tasks) today. I restructured it so tasks were segregated 
immediately upon being passed from TransferManager, instead of having layers of 
shared queues on top of them. Even then, control tasks were able to saturate 
their pool and deadlock it. The unbounded pool you suggested fixed that 
problem, but wanting to avoid unbounded pools is my main concern with the .003 
patch anyway.

It's also still a bit kludgy trying to separate tasks based on internal Amazon 
APIs (which I griped about this morning: 
https://github.com/aws/aws-sdk-java/issues/939), and, to a lesser extent, 
S3AFastOutputStream still causes tasks to be submitted to the executor wrapped 
in other types of Callable.
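
As a toy illustration of the failure mode the TransferManager javadocs warn 
about (quoted in the issue description below), a bounded pool deadlocks as 
soon as a control task waits on a subtask queued behind it. This is 
demonstrative Java only, not s3a code:

{code}
import java.util.concurrent.*;

// Demonstration: one worker thread plus a bounded queue. The control task
// occupies the only thread and blocks on a subtask that can never run.
public class BoundedPoolDeadlockDemo {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = new ThreadPoolExecutor(1, 1, 0L,
        TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>(1));
    Future<?> control = pool.submit(() -> {
      Future<?> subtask = pool.submit(() -> { });  // queued behind us
      try {
        subtask.get();                             // waits forever: deadlock
      } catch (Exception e) {
        throw new RuntimeException(e);
      }
    });
    control.get();                                 // never returns
  }
}
{code}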

> S3A Deadlock in multipart copy due to thread pool limits.
> -
>
> Key: HADOOP-13826
> URL: https://issues.apache.org/jira/browse/HADOOP-13826
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HADOOP-13826.001.patch, HADOOP-13826.002.patch, 
> HADOOP-13826.003.patch
>
>
> In testing HIVE-15093 we have encountered deadlocks in the s3a connector. The 
> TransferManager javadocs 
> (http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html)
>  explain how this is possible:
> {quote}It is not recommended to use a single threaded executor or a thread 
> pool with a bounded work queue as control tasks may submit subtasks that 
> can't complete until all sub tasks complete. Using an incorrectly configured 
> thread pool may cause a deadlock (I.E. the work queue is filled with control 
> tasks that can't finish until subtasks complete but subtasks can't execute 
> because the queue is filled).{quote}






[jira] [Updated] (HADOOP-13871) ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance awful

2016-12-12 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13871:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Ran the integration tests against US Standard.

Thanks [~ste...@apache.org] for the great analysis and patch contribution. I 
have committed this to the branches from {{trunk}} through {{branch-2.8}}. For 
the commit, I fixed the whitespace warnings and also addressed the javadoc of 
the {{getS3AInputStream()}} method, per [~mackrorysd]'s comment. Thanks for 
reviewing this.

> ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks 
> performance awful
> -
>
> Key: HADOOP-13871
> URL: https://issues.apache.org/jira/browse/HADOOP-13871
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha2
> Environment: landsat bucket from the UK
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13871-001.patch, HADOOP-13871-002.patch, 
> HADOOP-13871-003.patch, 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance-output.txt
>
>
> The ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks takes 
> 15s on branch-2, but is now taking minutes.
> This is a regression, and it's surfacing on some internal branches too, even 
> where the code hasn't changed. -It does not happen on branch-2, which has a 
> later SDK.-






[jira] [Commented] (HADOOP-13890) Unit tests should use SPNEGO principal with realm

2016-12-12 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743464#comment-15743464
 ] 

Xiaoyu Yao commented on HADOOP-13890:
-

I plan to commit the patch by EOD today to fix the Jenkins issues unless 
[~brahmareddy] or other folks on the watchlist have additional comments. 

> Unit tests should use SPNEGO principal with realm
> -
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch
>
>
> TestWebDelegationToken, TestKMS, TestTrashWithSecureEncryptionZones and 
> TestSecureEncryptionZoneWithKMS started failing in trunk because the SPNEGO 
> principal used in these tests is incomplete: HTTP/localhost, which assumes 
> the default realm will be applied at authentication time. This ticket is 
> opened to fix these unit tests with the complete HTTP principal.
> {noformat}
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid SPNEGO sequence, status code: 403
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.readToken(KerberosAuthenticator.java:371)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.access$300(KerberosAuthenticator.java:53)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:317)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:287)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:782)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:779)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$4.run(TestWebDelegationToken.java:715)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:712)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:778)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
>  {noformat}
>  *Jenkins URL* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/






[jira] [Commented] (HADOOP-13871) ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance awful

2016-12-12 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743456#comment-15743456
 ] 

Mingliang Liu commented on HADOOP-13871:


+1

> ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks 
> performance awful
> -
>
> Key: HADOOP-13871
> URL: https://issues.apache.org/jira/browse/HADOOP-13871
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha2
> Environment: landsat bucket from the UK
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13871-001.patch, HADOOP-13871-002.patch, 
> HADOOP-13871-003.patch, 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance-output.txt
>
>
> The ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks takes 
> 15s on branch-2, but is now taking minutes.
> This is a regression, and it's surfacing on some internal branches too, even 
> where the code hasn't changed. -It does not happen on branch-2, which has a 
> later SDK.-






[jira] [Updated] (HADOOP-13871) ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance awful

2016-12-12 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13871:
---
Summary: 
ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance 
awful  (was: 
ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks performance 
on branch-2.8 awful)

> ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks 
> performance awful
> -
>
> Key: HADOOP-13871
> URL: https://issues.apache.org/jira/browse/HADOOP-13871
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha2
> Environment: landsat bucket from the UK
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13871-001.patch, HADOOP-13871-002.patch, 
> HADOOP-13871-003.patch, 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance-output.txt
>
>
> The ITestS3AInputStreamPerformance.testTimeToOpenAndReadWholeFileBlocks takes 
> 15s on branch-2, but is now taking minutes.
> This is a regression, and it's surfacing on some internal branches too, even 
> where the code hasn't changed. -It does not happen on branch-2, which has a 
> later SDK.-






[jira] [Commented] (HADOOP-13875) HttpServer2 should support more SSL configuration properties

2016-12-12 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743426#comment-15743426
 ] 

Wei-Chiu Chuang commented on HADOOP-13875:
--

IMHO, this is not an incompatible change, as long as the enabled/disabled 
protocols/ciphersuites are configurable.

> HttpServer2 should support more SSL configuration properties
> 
>
> Key: HADOOP-13875
> URL: https://issues.apache.org/jira/browse/HADOOP-13875
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> Support more SSL configuration properties:
> - enabled.protocols
> - includeProtocols/excludeProtocols/includeCipherSuites/excludeCipherSuites
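
For context, a sketch of what these properties could map to on the Jetty 9 
side (illustrative; not the eventual HttpServer2 patch, and the protocol and 
cipher values are examples only):

{code}
import org.eclipse.jetty.util.ssl.SslContextFactory;

// Illustrative sketch: Jetty 9's SslContextFactory already exposes
// include/exclude knobs for protocols and cipher suites.
public class SslKnobsSketch {
  public static SslContextFactory newFactory() {
    SslContextFactory factory = new SslContextFactory();
    factory.setIncludeProtocols("TLSv1.2");               // cf. enabled.protocols
    factory.setExcludeCipherSuites(".*RC4.*", ".*DES.*"); // cf. excludeCipherSuites
    return factory;
  }
}
{code}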






[jira] [Commented] (HADOOP-13826) S3A Deadlock in multipart copy due to thread pool limits.

2016-12-12 Thread Thomas Demoor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743421#comment-15743421
 ] 

Thomas Demoor commented on HADOOP-13826:


[~mackrorysd] your last patch seems simple, but I think it might do the trick. 
I like simple solutions.

{{putObject()}} uses the (unbounded) TransferManager, while 
{{putObjectDirect()}} and {{uploadPart()}} use the bounded thread pool, so I 
think the (potentially) memory-intensive parts are nicely isolated and under 
control.

My only slight concern is that now both pools can have {{MAX_THREADS}} active. 
From my reading of the code, both thread pools cannot be doing large object 
PUTs at the same time (an instance of s3a uses either the block-based uploads 
or the regular S3AOutputStream, never both at the same time). What is possible 
is that during a large block-based upload, which is saturating the bounded 
executor, another client thread might {{rename}} a directory, invoking a lot of 
parallel copies and hence saturating the TransferManager. But copies are not 
data-intensive (see below), so I assume this is manageable.

I like [~ste...@apache.org]'s ideas for further separating out the different 
types of operations, but have one remark: for me, COPY is not similar to PUT. 
COPY is completely server-side and is thus generally much less 
resource-intensive and much quicker than PUT (the smaller your bandwidth to S3, 
the bigger the difference becomes).
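
As a rough sketch of the two-pool layout under discussion (names and sizes are 
assumptions, not the .003 patch):

{code}
import java.util.concurrent.*;

// Sketch only: a bounded pool for data-carrying uploads next to an
// effectively unbounded pool for server-side COPY tasks, which hold no
// payload. (The real code uses BlockingThreadPoolExecutorService, which
// blocks rather than rejects when the queue fills.)
public class TwoPoolSketch {
  public static void main(String[] args) {
    int maxThreads = 10;   // stand-in for fs.s3a.threads.max
    int maxQueued = 5;     // stand-in for fs.s3a.max.total.tasks
    ExecutorService uploadPool = new ThreadPoolExecutor(maxThreads, maxThreads,
        60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(maxQueued));
    ExecutorService copyPool = Executors.newCachedThreadPool();
    uploadPool.shutdown();
    copyPool.shutdown();
  }
}
{code}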

> S3A Deadlock in multipart copy due to thread pool limits.
> -
>
> Key: HADOOP-13826
> URL: https://issues.apache.org/jira/browse/HADOOP-13826
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HADOOP-13826.001.patch, HADOOP-13826.002.patch, 
> HADOOP-13826.003.patch
>
>
> In testing HIVE-15093 we have encountered deadlocks in the s3a connector. The 
> TransferManager javadocs 
> (http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html)
>  explain how this is possible:
> {quote}It is not recommended to use a single threaded executor or a thread 
> pool with a bounded work queue as control tasks may submit subtasks that 
> can't complete until all sub tasks complete. Using an incorrectly configured 
> thread pool may cause a deadlock (I.E. the work queue is filled with control 
> tasks that can't finish until subtasks complete but subtasks can't execute 
> because the queue is filled).{quote}






[jira] [Commented] (HADOOP-13600) S3a rename() to copy files in a directory in parallel

2016-12-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743379#comment-15743379
 ] 

ASF GitHub Bot commented on HADOOP-13600:
-

Github user thodemoor commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/157#discussion_r92054836
  
--- Diff: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java ---
@@ -207,11 +218,17 @@ public StorageStatistics provide() {
           MAX_TOTAL_TASKS, DEFAULT_MAX_TOTAL_TASKS, 1);
       long keepAliveTime = longOption(conf, KEEPALIVE_TIME,
           DEFAULT_KEEPALIVE_TIME, 0);
-      threadPoolExecutor = BlockingThreadPoolExecutorService.newInstance(
+      uploadThreadPoolExecutor = BlockingThreadPoolExecutorService.newInstance(
           maxThreads,
           maxThreads + totalTasks,
           keepAliveTime, TimeUnit.SECONDS,
-          "s3a-transfer-shared");
+          "s3a-upload-shared");
+
+      copyThreadPoolExecutor = BlockingThreadPoolExecutorService.newInstance(
+          maxThreads,
--- End diff --

COPY is server-side (no data transfer) and is thus generally much less 
resource-intensive and much quicker than PUT (the smaller your bandwidth to S3, 
the bigger the difference becomes). So I think the `maxThreads` for the copy 
thread pool could be (much) higher than for the upload thread pool, and it 
should thus be configurable separately.


> S3a rename() to copy files in a directory in parallel
> -
>
> Key: HADOOP-13600
> URL: https://issues.apache.org/jira/browse/HADOOP-13600
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Sahil Takiar
>
> Currently a directory rename does a one-by-one copy, making the request 
> O(files * data). If the copy operations were launched in parallel, the 
> duration of the copy may be reducible to the duration of the longest copy. 
> For a directory with many files, this will be significant.
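
A toy sketch of the idea (illustrative interfaces, not the eventual patch):

{code}
import java.util.*;
import java.util.concurrent.*;

// Toy sketch: fire the per-file server-side COPY requests concurrently so
// the rename costs roughly the longest single copy, not the sum.
public class ParallelRenameSketch {
  interface CopyOp { void copy(String srcKey, String dstKey); }

  static void renameDir(List<String> names, String srcDir, String dstDir,
      CopyOp s3, ExecutorService pool) throws InterruptedException {
    List<Callable<Void>> copies = new ArrayList<>();
    for (String name : names) {
      copies.add(() -> {
        s3.copy(srcDir + name, dstDir + name);  // server-side, no data pulled
        return null;
      });
    }
    pool.invokeAll(copies);  // waits for all; duration ~ longest copy
  }
}
{code}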






[jira] [Commented] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743322#comment-15743322
 ] 

Hadoop QA commented on HADOOP-13709:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 54s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.security.token.delegation.web.TestWebDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13709 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842874/HADOOP-13709.009.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 817fea1be363 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f66f618 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11253/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11253/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11253/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> ---
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Improvement
> Affects Versions: 2.2.0

[jira] [Updated] (HADOOP-13896) disribution tarball is missing almost all of the hadoop jars

2016-12-12 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13896:
--
Description: 
From what I can tell, all of the hdfs and common jars from their respective 
lib dirs are missing, excluding hadoop-hdfs-client and hadoop-hdfs-nfs. But 
there are likely more.

Steps to reproduce:

1. ./start-build-env.sh
2. mvn install -Pdist,src -Dtar -DskipTests -Pnative
3. ls hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT/share/hadoop/common
hadoop-nfs-3.0.0-alpha2-SNAPSHOT.jar  jdiff  lib  templates  webapps
4. ls hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT/share/hadoop/hdfs
hadoop-hdfs-nfs-3.0.0-alpha2-SNAPSHOT.jar  jdiff  lib  templates  webapps


  was:
From what I can tell, all of the hdfs and common jars from their respective 
lib dirs are missing, excluding hadoop-hdfs-client and hadoop-hdfs-nfs. But 
there are likely more.

Steps to reproduce:

1. ./start-build-env.sh
2. mvn install -Pdist,src -Dtar -DskipTests -Pnative
3. tar tvzf ./hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT.tar.gz | grep 
hadoop-hdfs-3.0.0-alpha2-SNAPSHOT.jar
4. tar tvzf ./hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT.tar.gz | grep 
hadoop-common-3.0.0-alpha2-SNAPSHOT.jar


> disribution tarball is missing almost all of the hadoop jars
> 
>
> Key: HADOOP-13896
> URL: https://issues.apache.org/jira/browse/HADOOP-13896
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> From what I can tell, all of the hdfs and common jars from their respective 
> lib dirs are missing, excluding hadoop-hdfs-client and hadoop-hdfs-nfs. But 
> there are likely more.
> Steps to reproduce:
> 1. ./start-build-env.sh
> 2. mvn install -Pdist,src -Dtar -DskipTests -Pnative
> 3. ls hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT/share/hadoop/common
> hadoop-nfs-3.0.0-alpha2-SNAPSHOT.jar  jdiff  lib  templates  webapps
> 4. ls hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT/share/hadoop/hdfs
> hadoop-hdfs-nfs-3.0.0-alpha2-SNAPSHOT.jar  jdiff  lib  templates  webapps






[jira] [Updated] (HADOOP-13896) disribution tarball is missing almost all of the hadoop jars

2016-12-12 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13896:
--
Description: 
From what I can tell, all of the hdfs and common jars from their respective 
lib dirs are missing, excluding hadoop-hdfs-client and hadoop-hdfs-nfs. But 
there are likely more.

Steps to reproduce:

1. ./start-build-env.sh
2. mvn install -Pdist,src -Dtar -DskipTests -Pnative
3. tar tvzf ./hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT.tar.gz | grep 
hadoop-hdfs-3.0.0-alpha2-SNAPSHOT.jar
4. tar tvzf ./hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT.tar.gz | grep 
hadoop-common-3.0.0-alpha2-SNAPSHOT.jar

  was:
From what I can tell, all of the hdfs and common jars from their respective 
lib dirs are missing, excluding hadoop-hdfs-client and hadoop-hdfs-nfs. But 
there are likely more.

Steps to reproduce:

1. ./start-build-env.sh
2. mvn install -Pdist,src -Dtar -DskipTests -Pnative
3. tar tvzf ./hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT.tar.gz | grep 
hadoop-hdfs-3.0.0-alpha2-snapshot.jar
4. tar tvzf ./hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT.tar.gz | grep 
hadoop-common-3.0.0-alpha2-snapshot.jar


> disribution tarball is missing almost all of the hadoop jars
> 
>
> Key: HADOOP-13896
> URL: https://issues.apache.org/jira/browse/HADOOP-13896
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> From what I can tell, all of the hdfs and common jars from their respective 
> lib dirs are missing, excluding hadoop-hdfs-client and hadoop-hdfs-nfs. But 
> there are likely more.
> Steps to reproduce:
> 1. ./start-build-env.sh
> 2. mvn install -Pdist,src -Dtar -DskipTests -Pnative
> 3. tar tvzf ./hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT.tar.gz | grep 
> hadoop-hdfs-3.0.0-alpha2-SNAPSHOT.jar
> 4. tar tvzf ./hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT.tar.gz | grep 
> hadoop-common-3.0.0-alpha2-SNAPSHOT.jar






[jira] [Updated] (HADOOP-13896) disribution tarball is missing almost all of the hadoop jars

2016-12-12 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13896:
--
Summary: disribution tarball is missing almost all of the hadoop jars  
(was: disribution tarball is mising almost all of the hadoop jars)

> disribution tarball is missing almost all of the hadoop jars
> 
>
> Key: HADOOP-13896
> URL: https://issues.apache.org/jira/browse/HADOOP-13896
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> From what I can tell, all of the hdfs and common jars from their respective 
> lib dirs are missing, excluding hadoop-hdfs-client and hadoop-hdfs-nfs. But 
> there are likely more.
> Steps to reproduce:
> 1. ./start-build-env.sh
> 2. mvn install -Pdist,src -Dtar -DskipTests -Pnative
> 3. tar tvzf ./hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT.tar.gz | grep 
> hadoop-hdfs-3.0.0-alpha2-snapshot.jar
> 4. tar tvzf ./hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT.tar.gz | grep 
> hadoop-common-3.0.0-alpha2-snapshot.jar






[jira] [Commented] (HADOOP-13896) disribution tarball is mising almost all of the hadoop jars

2016-12-12 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743261#comment-15743261
 ] 

Allen Wittenauer commented on HADOOP-13896:
---

FYI: I have not confirmed if this happens in any other branches.  

> disribution tarball is mising almost all of the hadoop jars
> ---
>
> Key: HADOOP-13896
> URL: https://issues.apache.org/jira/browse/HADOOP-13896
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> From what I can tell, all of the hdfs and common jars from their respective 
> lib dirs are missing, excluding hadoop-hdfs-client and hadoop-hdfs-nfs. But 
> there are likely more.
> Steps to reproduce:
> 1. ./start-build-env.sh
> 2. mvn install -Pdist,src -Dtar -DskipTests -Pnative
> 3. tar tvzf ./hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT.tar.gz | grep 
> hadoop-hdfs-3.0.0-alpha2-snapshot.jar
> 4. tar tvzf ./hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT.tar.gz | grep 
> hadoop-common-3.0.0-alpha2-snapshot.jar






[jira] [Updated] (HADOOP-13896) disribution tarball is mising almost all of the hadoop jars

2016-12-12 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13896:
--
Description: 
From what I can tell, all of the hdfs and common jars from their respective 
lib dirs are missing, excluding hadoop-hdfs-client and hadoop-hdfs-nfs. But 
there are likely more.

Steps to reproduce:

1. ./start-build-env.sh
2. mvn install -Pdist,src -Dtar -DskipTests -Pnative
3. tar tvzf ./hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT.tar.gz | grep 
hadoop-hdfs-3.0.0-alpha2-snapshot.jar
4. tar tvzf ./hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT.tar.gz | grep 
hadoop-common-3.0.0-alpha2-snapshot.jar

  was:
From what I can tell, all of the hdfs and common jars from their respective 
lib dirs are missing, excluding hadoop-hdfs-client and hadoop-hdfs-nfs. But 
there are likely more.

Steps to reproduce:

1. ./start-build-env.sh
2. mvn install -Pdist,src -Dtar -DskipTests -Pnative


> disribution tarball is mising almost all of the hadoop jars
> ---
>
> Key: HADOOP-13896
> URL: https://issues.apache.org/jira/browse/HADOOP-13896
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> From what I can tell, all of the hdfs and common jars from their respective 
> lib dirs are missing, excluding hadoop-hdfs-client and hadoop-hdfs-nfs. But 
> there are likely more.
> Steps to reproduce:
> 1. ./start-build-env.sh
> 2. mvn install -Pdist,src -Dtar -DskipTests -Pnative
> 3. tar tvzf ./hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT.tar.gz | grep 
> hadoop-hdfs-3.0.0-alpha2-snapshot.jar
> 4. tar tvzf ./hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT.tar.gz | grep 
> hadoop-common-3.0.0-alpha2-snapshot.jar






[jira] [Commented] (HADOOP-13895) Make FileStatus Serializable

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743254#comment-15743254
 ] 

Hadoop QA commented on HADOOP-13895:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 52s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
14s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.security.token.delegation.web.TestWebDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13895 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842867/HADOOP-13895.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 01996ea65a86 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f66f618 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11252/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11252/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs-client U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11252/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Updated] (HADOOP-13896) disribution tarball is mising almost all of the hadoop jars

2016-12-12 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13896:
--
Description: 
From what I can tell, all of the hdfs and common jars from their respective 
lib dirs are missing, excluding hadoop-hdfs-client and hadoop-hdfs-nfs. But 
there are likely more.

Steps to reproduce:

1. ./start-build-env.sh
2. mvn install -Pdist,src -Dtar -DskipTests -Pnative

  was:From what I can tell, all of the hdfs and common jars from their 
respective lib dirs are missing, excluding hadoop-hdfs-client and 
hadoop-hdfs-nfs. But there are likely more.


> disribution tarball is mising almost all of the hadoop jars
> ---
>
> Key: HADOOP-13896
> URL: https://issues.apache.org/jira/browse/HADOOP-13896
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> From what I can tell, all of the hdfs and common jars from their respective 
> lib dirs are missing, excluding hadoop-hdfs-client and hadoop-hdfs-nfs. But 
> there are likely more.
> Steps to reproduce:
> 1. ./start-build-env.sh
> 2. mvn install -Pdist,src -Dtar -DskipTests -Pnative






[jira] [Created] (HADOOP-13896) disribution tarball is mising almost all of the hadoop jars

2016-12-12 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-13896:
-

 Summary: disribution tarball is mising almost all of the hadoop 
jars
 Key: HADOOP-13896
 URL: https://issues.apache.org/jira/browse/HADOOP-13896
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0-alpha2
Reporter: Allen Wittenauer
Priority: Blocker


From what I can tell, all of the hdfs and common jars from their respective 
lib dirs are missing, excluding hadoop-hdfs-client and hadoop-hdfs-nfs. But 
there are likely more.






[jira] [Updated] (HADOOP-13895) Make FileStatus Serializable

2016-12-12 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13895:
---
Assignee: Chris Douglas  (was: Mingliang Liu)

> Make FileStatus Serializable
> 
>
> Key: HADOOP-13895
> URL: https://issues.apache.org/jira/browse/HADOOP-13895
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Minor
> Attachments: HADOOP-13895.000.patch
>
>
> Some frameworks rely on Java serialization to pass objects between processes. 
> FileStatus is a common argument, but it only supports Writable serialization 
> without special handling.
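
For illustration, the gap looks like this (a sketch assuming FileStatus does 
not yet implement java.io.Serializable):

{code}
import java.io.*;

// Sketch: plain Java serialization, as used by frameworks that ship
// objects between processes. Passing a pre-patch FileStatus here would
// throw NotSerializableException, even though Writable serialization
// already works with special handling.
public class JavaSerializationSketch {
  public static byte[] serialize(Object obj) throws IOException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
      out.writeObject(obj);  // requires obj to implement Serializable
    }
    return bytes.toByteArray();
  }
}
{code}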






[jira] [Assigned] (HADOOP-13895) Make FileStatus Serializable

2016-12-12 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HADOOP-13895:
--

Assignee: Mingliang Liu

> Make FileStatus Serializable
> 
>
> Key: HADOOP-13895
> URL: https://issues.apache.org/jira/browse/HADOOP-13895
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Chris Douglas
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HADOOP-13895.000.patch
>
>
> Some frameworks rely on Java serialization to pass objects between processes. 
> FileStatus is a common argument, but it only supports Writable serialization 
> without special handling.






[jira] [Commented] (HADOOP-13890) Unit tests should use SPNEGO principal with realm

2016-12-12 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743229#comment-15743229
 ] 

Arpit Agarwal commented on HADOOP-13890:


+1 for the patch. Verified it fixes the unit tests. [~brahmareddy], do you have 
any additional comments?

> Unit tests should use SPNEGO principal with realm
> -
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch
>
>
> TestWebDelegationToken, TestKMS, TestTrashWithSecureEncryptionZones and 
> TestSecureEncryptionZoneWithKMS started failing in trunk because the SPNEGO 
> principal used in these tests is incomplete: HTTP/localhost, which assumes 
> the default realm will be applied at authentication time. This ticket is 
> opened to fix these unit tests with the complete HTTP principal.
> {noformat}
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid SPNEGO sequence, status code: 403
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.readToken(KerberosAuthenticator.java:371)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.access$300(KerberosAuthenticator.java:53)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:317)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:287)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:782)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:779)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$4.run(TestWebDelegationToken.java:715)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:712)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:778)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
>  {noformat}
>  *Jenkins URL* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/






[jira] [Updated] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-12-12 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HADOOP-13709:
-
Attachment: HADOOP-13709.009.patch

Fixed checkstyle issues

> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> ---
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, 
> HADOOP-13709.003.patch, HADOOP-13709.004.patch, HADOOP-13709.005.patch, 
> HADOOP-13709.006.patch, HADOOP-13709.007.patch, HADOOP-13709.008.patch, 
> HADOOP-13709.009.patch, HADOOP-13709.009.patch
>
>
> The runCommand code in Shell.java can get into a situation where it will 
> ignore InterruptedExceptions and refuse to shut down because it is blocked in 
> I/O, waiting for the return value of the subprocess that was spawned. We need 
> to allow for the subprocess to be interrupted and killed when the shell 
> process gets killed. Currently the JVM will shut down and all of the 
> subprocesses will be orphaned and not killed.






[jira] [Commented] (HADOOP-13578) Add Codec for ZStandard Compression

2016-12-12 Thread churro morales (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743180#comment-15743180
 ] 

churro morales commented on HADOOP-13578:
-

Hi [~jlowe],

Thanks for taking the time to review.  I agree with all of the above comments 
and will correct those issues.  The last question you had was related to 
ZSTD_endStream().  endStream() finishes the frame and writes the epilogue only 
if the uncompressed buffer has been fully consumed; otherwise it basically does 
the same thing as ZSTD_compressStream().

You are correct: if the output buffer is too small, it may not be able to 
flush.  There is a check in ZSTD_endStream() which does this:

{code}
size_t const notEnded = ZSTD_compressStream_generic(zcs, ostart,
    &sizeWritten, &srcSize, &srcSize, zsf_end);
size_t const remainingToFlush = zcs->outBuffContentSize -
    zcs->outBuffFlushedSize;
op += sizeWritten;
if (remainingToFlush) {
    output->pos += sizeWritten;
    return remainingToFlush + ZSTD_BLOCKHEADERSIZE /* final empty block */
        + (zcs->checksum * 4);
}
/* ...then create the epilogue and flush it */
{code}

So if there is still data to be consumed, the library won't finish the frame, 
which makes it safe to call repeatedly from our framework, because we never set 
the finished flag until the epilogue has been written successfully.

The code in CompressorStream.java which calls our codec simply does this:

{code}
@Override
public void finish() throws IOException {
  if (!compressor.finished()) {
    compressor.finish();
    while (!compressor.finished()) {
      compress();
    }
  }
}
{code}

So I believe we won't drop any data with the way things are done.  Please let 
me know if I am missing something obvious here :).
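
For comparison, the JDK's Deflater has the same finish/finished contract that 
the loop above relies on; this analogy (JDK code, not zstd) shows why repeated 
calls with a too-small output buffer are safe:

{code}
import java.util.zip.Deflater;

// JDK analogy, not zstd: keep draining after finish() until finished()
// reports true, even when the output buffer is too small to take the
// whole epilogue in one call.
public class FinishLoopSketch {
  public static void main(String[] args) {
    Deflater deflater = new Deflater();
    deflater.setInput("some input to compress".getBytes());
    deflater.finish();
    byte[] tiny = new byte[4];          // deliberately undersized buffer
    int total = 0;
    while (!deflater.finished()) {
      total += deflater.deflate(tiny);  // epilogue eventually flushes
    }
    deflater.end();
    System.out.println("compressed bytes: " + total);
  }
}
{code}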




> Add Codec for ZStandard Compression
> ---
>
> Key: HADOOP-13578
> URL: https://issues.apache.org/jira/browse/HADOOP-13578
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: churro morales
>Assignee: churro morales
> Attachments: HADOOP-13578.patch, HADOOP-13578.v1.patch, 
> HADOOP-13578.v2.patch, HADOOP-13578.v3.patch, HADOOP-13578.v4.patch, 
> HADOOP-13578.v5.patch, HADOOP-13578.v6.patch
>
>
> ZStandard: https://github.com/facebook/zstd has been used in production for 6 
> months by facebook now.  v1.0 was recently released.  Create a codec for this 
> library.  






[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-12-12 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743171#comment-15743171
 ] 

John Zhuge commented on HADOOP-13597:
-

Thanks for the review.

Will remove hadoop_deprecate_envvar and document the deprecated envvars in 
Release Notes.

Will switch hadoop_using_envvar to use hadoop_debug, and will skip password 
envvars.

Will change other similar places to use hadoop_mkdir.

> Switch KMS from Tomcat to Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13597.001.patch, HADOOP-13597.002.patch, 
> HADOOP-13597.003.patch, HADOOP-13597.004.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have to change client code that much. It would 
> require more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.






[jira] [Commented] (HADOOP-13673) Update scripts to be smarter when running with privilege

2016-12-12 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743157#comment-15743157
 ] 

Allen Wittenauer commented on HADOOP-13673:
---

OK, found a bug in the failure handling.  Since the su isn't exec'd, we continue on and do 
weird things.  Need to add some protection around that in a few spots.

> Update scripts to be smarter when running with privilege
> 
>
> Key: HADOOP-13673
> URL: https://issues.apache.org/jira/browse/HADOOP-13673
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: security
> Attachments: HADOOP-13673.00.patch, HADOOP-13673.01.patch, 
> HADOOP-13673.02.patch
>
>
> As work continues on HADOOP-13397, it's become evident that we need better 
> hooks to start daemons as specifically configured users.  Via the 
> (command)_(subcommand)_USER environment variables in 3.x, we actually have a 
> standardized way to do that.  This in turn means we can make the sbin scripts 
> super functional with a bit of updating:
> * Consolidate start-dfs.sh and start-secure-dns.sh into one script
> * Make start-\*.sh and stop-\*.sh know how to switch users when run as root
> * Undeprecate start/stop-all.sh so that it could be used as root for 
> production purposes and as a single user for non-production users



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-12-12 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743101#comment-15743101
 ] 

Jason Lowe commented on HADOOP-13709:
-

Thanks for updating the patch!  The unit test failure appears to be unrelated.  
It would be good to clean up the checkstyle line-length nits.  Also, there's a 
missing space in "Shellprocesses".


> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> ---
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, 
> HADOOP-13709.003.patch, HADOOP-13709.004.patch, HADOOP-13709.005.patch, 
> HADOOP-13709.006.patch, HADOOP-13709.007.patch, HADOOP-13709.008.patch, 
> HADOOP-13709.009.patch
>
>
> The runCommand code in Shell.java can get into a situation where it will 
> ignore InterruptedExceptions and refuse to shutdown due to being in I/O 
> waiting for the return value of the subprocess that was spawned. We need to 
> allow for the subprocess to be interrupted and killed when the shell process 
> gets killed. Currently the JVM will shutdown and all of the subprocesses will 
> be orphaned and not killed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13895) Make FileStatus Serializable

2016-12-12 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-13895:
---
Status: Patch Available  (was: Open)

> Make FileStatus Serializable
> 
>
> Key: HADOOP-13895
> URL: https://issues.apache.org/jira/browse/HADOOP-13895
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HADOOP-13895.000.patch
>
>
> Some frameworks rely on Java serialization to pass objects between processes. 
> FileStatus is a common argument, but it only supports Writable serialization 
> without special handling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13895) Make FileStatus Serializable

2016-12-12 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743043#comment-15743043
 ] 

Chris Douglas commented on HADOOP-13895:


[~andrew.wang], [~ste...@apache.org] please review

> Make FileStatus Serializable
> 
>
> Key: HADOOP-13895
> URL: https://issues.apache.org/jira/browse/HADOOP-13895
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HADOOP-13895.000.patch
>
>
> Some frameworks rely on Java serialization to pass objects between processes. 
> FileStatus is a common argument, but it only supports Writable serialization 
> without special handling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13886) s3guard: ITestS3AFileOperationCost.testFakeDirectoryDeletion failure

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743038#comment-15743038
 ] 

Hadoop QA commented on HADOOP-13886:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
49s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13886 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842860/HADOOP-13886-HADOOP-13345.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b36a4c20c3f2 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / d354cd1 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11251/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11251/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> s3guard:  ITestS3AFileOperationCost.testFakeDirectoryDeletion failure
> -
>
> Key: HADOOP-13886
> URL: https://issues.apache.org/jira/browse/HADOOP-13886
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Mingliang Liu
> Attachments: HADOOP-13886-HADOOP-13345.000.patch
>
>
> 

[jira] [Updated] (HADOOP-13895) Make FileStatus Serializable

2016-12-12 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-13895:
---
Attachment: HADOOP-13895.000.patch

Moved relevant {{Serializable}} code from HDFS-6984.
{quote}
Changed {{FileStatus}} to be {{Serializable}}, per [~ste...@apache.org]'s 
suggestion. This cascaded to a few other classes; I halted the cascade at 
{{HdfsBlockLocation}} (changing the final ref to transient). Looking through 
its usage, this is probably correct, since the fields not redundant with 
{{BlockLocation}} are things like tokens, which are internal(?) to DFSClient.
{quote}
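
For context, a minimal sketch of the round-trip this enables (illustrative 
only; {{fs}} and {{path}} are assumed to be an initialized FileSystem and Path):

{code}
// Java-serialization round-trip of a FileStatus; this only works once
// FileStatus implements java.io.Serializable.
FileStatus status = fs.getFileStatus(path);

ByteArrayOutputStream bos = new ByteArrayOutputStream();
try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
  oos.writeObject(status);
}

try (ObjectInputStream ois = new ObjectInputStream(
    new ByteArrayInputStream(bos.toByteArray()))) {
  FileStatus copy = (FileStatus) ois.readObject();
  assert copy.getPath().equals(status.getPath());
}
{code}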

> Make FileStatus Serializable
> 
>
> Key: HADOOP-13895
> URL: https://issues.apache.org/jira/browse/HADOOP-13895
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HADOOP-13895.000.patch
>
>
> Some frameworks rely on Java serialization to pass objects between processes. 
> FileStatus is a common argument, but it only supports Writable serialization 
> without special handling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13895) Make FileStatus Serializable

2016-12-12 Thread Chris Douglas (JIRA)
Chris Douglas created HADOOP-13895:
--

 Summary: Make FileStatus Serializable
 Key: HADOOP-13895
 URL: https://issues.apache.org/jira/browse/HADOOP-13895
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Chris Douglas
Priority: Minor


Some frameworks rely on Java serialization to pass objects between processes. 
FileStatus is a common argument, but it only supports Writable serialization 
without special handling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13070) classloading isolation improvements for stricter dependencies

2016-12-12 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742855#comment-15742855
 ] 

Sangjin Lee edited comment on HADOOP-13070 at 12/12/16 8:14 PM:


I’ve been testing the latest patch (HADOOP-13398) and there seems to be one 
interesting issue with the stricter classpath isolation, and that has to do 
with the {{ServiceLoader}}.

(1) {{ServiceLoader}}
The {{ServiceLoader}} essentially uses the following pattern to load service 
classes dynamically:
{code}
Enumeration<URL> defs =
    classloader.getResources("META-INF/services/org.foo.ServiceInterface");
while (defs.hasMoreElements()) {
  URL def = defs.nextElement();
  Iterator<String> names = parse(def);
  while (names.hasNext()) {
    String name = names.next();
    ServiceInterface si = (ServiceInterface)
        Class.forName(name, false, classloader).newInstance();
  }
}
{code}
First off, {{ClassLoader.getResources()}} will return all service files, 
*regardless of* where the service file is, either in the user classpath or the 
parent classpath (bit more discussion on {{ClassLoader.getResources()}} below).

Since all service files have been located and the calling class of 
{{Class.forName()}} is {{ServiceLoader}} which is a system class, all service 
classes will be successfully loaded, *regardless of* where the service class 
is, either in the user classpath or the parent classpath.

Technically this would represent an opportunity to circumvent the isolation and 
load stuff from the parent classpath. That said, we could still regard this as 
a variation of a “system facility providing a way to load things from both 
classpaths” case mentioned in the proposal (section 2-1).

I thought about plugging this possibility, but there doesn’t seem to be an 
unambiguous way to do this.

One approach I considered is to walk up the call stack to identify who’s 
calling {{ServiceLoader.load()}}. Suppose we use that calling class to enforce 
stricter loading. If a user class is the calling class, it would load service 
files from both the user classpath and the parent classpath. However, as it 
iterates over the classes, it will fail to load a non-system parent class. This 
causes a *hard* failure of the iteration in {{ServiceLoader}}.

On the other hand, we could try to determine somehow whether a certain service 
file is a “non-system parent service file” and not return that service file 
resource from {{ClassLoader.getResources()}} to begin with. However, the notion 
of a “non-system parent service file” is not well defined, and I don’t think 
there is a way to define it clearly.

I think the best way forward is to allow {{ServiceLoader}} to load services 
from both the user and the parent classpath. I’d love to hear your thoughts on 
this.

(2) {{ClassLoader.getResources()}}
Currently {{ApplicationClassLoader}} does not override this. The javadoc for 
{{ClassLoader.getResources()}} states:
{noformat}
…
The search order is described in the documentation for getResource(String).
{noformat}
Since we do not override this today, we return the resources from the parent 
first and then from the child, which is not quite the same as what the javadoc 
indicates. So it seems to me that at minimum we want to change the order of 
resources so that it returns the child resources first.
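
A minimal sketch of that reordering (assuming a child-first loader extending 
URLClassLoader; this is not the actual ApplicationClassLoader change):

{code}
@Override
public Enumeration<URL> getResources(String name) throws IOException {
  List<URL> ordered = new ArrayList<>();
  // Child (user classpath) resources first, matching the child-first
  // search order of getResource()...
  ordered.addAll(Collections.list(findResources(name)));
  // ...then whatever the parent can see; a stricter variant could filter
  // out "non-system parent" entries here before returning.
  if (getParent() != null) {
    ordered.addAll(Collections.list(getParent().getResources(name)));
  }
  return Collections.enumeration(ordered);
}
{code}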

The next question is whether it should return a (non-system) parent resource if 
a user class calls this method. We could tighten this to filter out non-system 
parent resources. I am leaning towards making that change.

Thoughts? Feedback? Concerns?
cc [~busbey]



was (Author: sjlee0):
I’ve been testing the latest patch (HADOOP-13998) and there seems to be one 
interesting issue with the stricter classpath isolation, and that has to do 
with the {{ServiceLoader}}.

(1) {{ServiceLoader}}
The {{ServiceLoader}} essentially uses the following pattern to load service 
classes dynamically:
{code}
Enumeration<URL> defs =
    classloader.getResources("META-INF/services/org.foo.ServiceInterface");
while (defs.hasMoreElements()) {
  URL def = defs.nextElement();
  Iterator<String> names = parse(def);
  while (names.hasNext()) {
    String name = names.next();
    ServiceInterface si = (ServiceInterface)
        Class.forName(name, false, classloader).newInstance();
  }
}
{code}
First off, {{ClassLoader.getResources()}} will return all service files, 
*regardless of* where the service file is, either in the user classpath or the 
parent classpath (bit more discussion on {{ClassLoader.getResources()}} below).

Since all service files have been located and the calling class of 
{{Class.forName()}} is {{ServiceLoader}} which is a system class, all service 
classes will be successfully loaded, *regardless of* where the service class 
is, either in the user classpath or the parent classpath.

Technically this would represent an opportunity to circumvent the isolation and 
load stuff from the parent classpath. That said, we could still regard this as 
a variation of a “system facility providing a way to load things from 

[jira] [Updated] (HADOOP-13886) s3guard: ITestS3AFileOperationCost.testFakeDirectoryDeletion failure

2016-12-12 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13886:
---
Target Version/s: HADOOP-13345
  Status: Patch Available  (was: Open)

> s3guard:  ITestS3AFileOperationCost.testFakeDirectoryDeletion failure
> -
>
> Key: HADOOP-13886
> URL: https://issues.apache.org/jira/browse/HADOOP-13886
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
> Attachments: HADOOP-13886-HADOOP-13345.000.patch
>
>
> testFakeDirectoryDeletion(org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost) 
>  Time elapsed: 10.011 sec  <<< FAILURE!
> java.lang.AssertionError: after rename(srcFilePath, destFilePath): 
> directories_created expected:<1> but was:<0>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:555)
> at 
> org.apache.hadoop.fs.s3a.S3ATestUtils$MetricDiff.assertDiffEquals(S3ATestUtils.java:431)
> at 
> org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testFakeDirectoryDeletion(ITestS3AFileOperationCost.java:254)
> More details to follow in comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13886) s3guard: ITestS3AFileOperationCost.testFakeDirectoryDeletion failure

2016-12-12 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13886:
---
Attachment: HADOOP-13886-HADOOP-13345.000.patch

Thanks [~ste...@apache.org] for your suggestion.

The v0 patch disables the integration test case if the metadata store is 
enabled.

In this case the fake directories are not created after rename in S3, and we 
think that's OK. Otherwise, we should keep the test and update the S3AFileSystem 
integration with S3Guard.

> s3guard:  ITestS3AFileOperationCost.testFakeDirectoryDeletion failure
> -
>
> Key: HADOOP-13886
> URL: https://issues.apache.org/jira/browse/HADOOP-13886
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
> Attachments: HADOOP-13886-HADOOP-13345.000.patch
>
>
> testFakeDirectoryDeletion(org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost) 
>  Time elapsed: 10.011 sec  <<< FAILURE!
> java.lang.AssertionError: after rename(srcFilePath, destFilePath): 
> directories_created expected:<1> but was:<0>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:555)
> at 
> org.apache.hadoop.fs.s3a.S3ATestUtils$MetricDiff.assertDiffEquals(S3ATestUtils.java:431)
> at 
> org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testFakeDirectoryDeletion(ITestS3AFileOperationCost.java:254)
> More details to follow in comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13886) s3guard: ITestS3AFileOperationCost.testFakeDirectoryDeletion failure

2016-12-12 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HADOOP-13886:
--

Assignee: Mingliang Liu

> s3guard:  ITestS3AFileOperationCost.testFakeDirectoryDeletion failure
> -
>
> Key: HADOOP-13886
> URL: https://issues.apache.org/jira/browse/HADOOP-13886
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Mingliang Liu
> Attachments: HADOOP-13886-HADOOP-13345.000.patch
>
>
> testFakeDirectoryDeletion(org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost) 
>  Time elapsed: 10.011 sec  <<< FAILURE!
> java.lang.AssertionError: after rename(srcFilePath, destFilePath): 
> directories_created expected:<1> but was:<0>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:555)
> at 
> org.apache.hadoop.fs.s3a.S3ATestUtils$MetricDiff.assertDiffEquals(S3ATestUtils.java:431)
> at 
> org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testFakeDirectoryDeletion(ITestS3AFileOperationCost.java:254)
> More details to follow in comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13673) Update scripts to be smarter when running with privilege

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742960#comment-15742960
 ] 

Hadoop QA commented on HADOOP-13673:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
16s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
11s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
8s{color} | {color:green} The patch generated 0 new + 112 unchanged - 12 fixed 
= 112 total (was 124) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
1s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
49s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
59s{color} | {color:green} hadoop-yarn in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
51s{color} | {color:green} hadoop-mapreduce-project in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13673 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842853/HADOOP-13673.02.patch 
|
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux b55eae3a42ae 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f66f618 |
| shellcheck | v0.4.5 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11250/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11250/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs hadoop-yarn-project/hadoop-yarn 
hadoop-mapreduce-project U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11250/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update scripts to be smarter when running with privilege
> 
>
> Key: HADOOP-13673
> URL: https://issues.apache.org/jira/browse/HADOOP-13673
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: security
> Attachments: HADOOP-13673.00.patch, HADOOP-13673.01.patch, 
> HADOOP-13673.02.patch
>
>
> As work continues on HADOOP-13397, it's become evident that we need better 
> hooks to start daemons as specifically configured users.  Via the 
> (command)_(subcommand)_USER environment variables in 3.x, we actually have a 
> standardized way to do that.  This in turn means we can make the sbin scripts 
> super functional with a bit 

[jira] [Commented] (HADOOP-13890) Unit tests should use SPNEGO principal with realm

2016-12-12 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742948#comment-15742948
 ] 

Xiaoyu Yao commented on HADOOP-13890:
-

The failed unit test is not related to this change. It is tracked by 
https://issues.apache.org/jira/browse/HDFS-11131. 



> Unit tests should use SPNEGO principal with realm
> -
>
> Key: HADOOP-13890
> URL: https://issues.apache.org/jira/browse/HADOOP-13890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13890.00.patch, HADOOP-13890.01.patch
>
>
> TestWebDelegationToken, TestKMS , TestTrashWithSecureEncryptionZones and 
> TestSecureEncryptionZoneWithKMS started failing in trunk because the SPENGO 
> principle used in these test are incomplete: HTTP/localhost assuming the 
> default realm will be applied at authentication time. This ticket is opened 
> to fix these unit test with complete HTTP principal.
> {noformat}
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Invalid SPNEGO sequence, status code: 403
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.readToken(KerberosAuthenticator.java:371)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.access$300(KerberosAuthenticator.java:53)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:317)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:287)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:782)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$5.call(TestWebDelegationToken.java:779)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken$4.run(TestWebDelegationToken.java:715)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:712)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:778)
>   at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
>  {noformat}
>  *Jenkins URL* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/251/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11240/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-12-12 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742931#comment-15742931
 ] 

Allen Wittenauer edited comment on HADOOP-13597 at 12/12/16 7:48 PM:
-

{code}
+  hadoop_deprecate_envvar CATALINA_OUT
+  hadoop_deprecate_envvar CATALINA_PID
+  hadoop_deprecate_envvar KMS_ADMIN_PORT
+  hadoop_deprecate_envvar KMS_CATALINA_HOME
+  hadoop_deprecate_envvar KMS_SSL_TRUSTSTORE_PASS
{code}

We don't do this anywhere in the scripts. Instead, this is documented in the 
release notes.  It's just extra console noise otherwise.

{code}
+  hadoop_using_envvar KMS_HOME
{code}

This doesn't appear to have actually been configurable by users.  I don't see a 
reason to add it now.

{code}
+  hadoop_using_envvar KMS_HTTP_PORT
+  hadoop_using_envvar KMS_LOG
+  hadoop_using_envvar KMS_MAX_HTTP_HEADER_SIZE
+  hadoop_using_envvar KMS_MAX_THREADS
+  hadoop_using_envvar KMS_SSL_ENABLED
+  hadoop_using_envvar KMS_SSL_KEYSTORE_FILE
+  hadoop_using_envvar KMS_SSL_KEYSTORE_PASS
+  hadoop_using_envvar KMS_TEMP
{code}

I know that branch-2 spit out a bunch of stuff, but it always felt wrong. Is 
this actually valuable to anyone who isn't a developer? Would \-\-debug be a 
better fit here? It seems like a lot of noise on the console that's probably 
more appropriate for a log file.

{code}
+  hadoop_using_envvar KMS_SSL_KEYSTORE_PASS
{code}

(!) Is this actually printing a password to the screen?!?  Is there any chance 
we can switch this to being read from a file?  env vars are exposed in /proc on 
some OSes...

{code}
hadoop_mkdir
{code}

We have a bunch of places where this same construct is being used.  We should 
probably replace all of them if we're going to add a function to do it.

FWIW, I definitely prefer the single function for handling kms.  So Much 
Better.  (and I'm really ecstatic about dropping kms-config.sh , etc, etc.)




was (Author: aw):
{code}
+  hadoop_deprecate_envvar CATALINA_OUT
+  hadoop_deprecate_envvar CATALINA_PID
+  hadoop_deprecate_envvar KMS_ADMIN_PORT
+  hadoop_deprecate_envvar KMS_CATALINA_HOME
+  hadoop_deprecate_envvar KMS_SSL_TRUSTSTORE_PASS
{code}

We don't do this anywhere in the scripts. Instead, this is documented in the 
release notes.  It's just extra console noise otherwise.

{code}
+  hadoop_using_envvar KMS_HOME
{code}

This doesn't appear to have actually been configurable by users.  I don't see a 
reason to add it now.

{code}
+  hadoop_using_envvar KMS_HTTP_PORT
+  hadoop_using_envvar KMS_LOG
+  hadoop_using_envvar KMS_MAX_HTTP_HEADER_SIZE
+  hadoop_using_envvar KMS_MAX_THREADS
+  hadoop_using_envvar KMS_SSL_ENABLED
+  hadoop_using_envvar KMS_SSL_KEYSTORE_FILE
+  hadoop_using_envvar KMS_SSL_KEYSTORE_PASS
+  hadoop_using_envvar KMS_TEMP
{code}

I know that branch-2 spit out a bunch of stuff, but it always felt wrong. Is 
this actually valuable to anyone who isn't a developer? Would \-\-debug be a 
better fit here? It seems like a lot of noise on the console that's probably 
more appropriate for a log file.

{code}
+  hadoop_using_envvar KMS_SSL_KEYSTORE_PASS
{code}

(!)






> Switch KMS from Tomcat to Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13597.001.patch, HADOOP-13597.002.patch, 
> HADOOP-13597.003.patch, HADOOP-13597.004.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have change client code that much. It would require 
> more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-12-12 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742931#comment-15742931
 ] 

Allen Wittenauer commented on HADOOP-13597:
---

{code}
+  hadoop_deprecate_envvar CATALINA_OUT
+  hadoop_deprecate_envvar CATALINA_PID
+  hadoop_deprecate_envvar KMS_ADMIN_PORT
+  hadoop_deprecate_envvar KMS_CATALINA_HOME
+  hadoop_deprecate_envvar KMS_SSL_TRUSTSTORE_PASS
{code}

We don't do this anywhere in the scripts. Instead, this is documented in the 
release notes.  It's just extra console noise otherwise.

{code}
+  hadoop_using_envvar KMS_HOME
{code}

This doesn't appear to have actually been configurable by users.  I don't see a 
reason to add it now.

{code}
+  hadoop_using_envvar KMS_HTTP_PORT
+  hadoop_using_envvar KMS_LOG
+  hadoop_using_envvar KMS_MAX_HTTP_HEADER_SIZE
+  hadoop_using_envvar KMS_MAX_THREADS
+  hadoop_using_envvar KMS_SSL_ENABLED
+  hadoop_using_envvar KMS_SSL_KEYSTORE_FILE
+  hadoop_using_envvar KMS_SSL_KEYSTORE_PASS
+  hadoop_using_envvar KMS_TEMP
{code}

I know that branch-2 spit out a bunch of stuff, but it always felt wrong. Is 
this actually valuable to anyone who isn't a developer? Would \-\-debug be a 
better fit here? It seems like a lot of noise on the console that's probably 
more appropriate for a log file.

{code}
+  hadoop_using_envvar KMS_SSL_KEYSTORE_PASS
{code}

(!)






> Switch KMS from Tomcat to Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13597.001.patch, HADOOP-13597.002.patch, 
> HADOOP-13597.003.patch, HADOOP-13597.004.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have change client code that much. It would require 
> more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-12-12 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742930#comment-15742930
 ] 

John Zhuge commented on HADOOP-13597:
-

Agree with your complexity concern. I initially created a separate API but 
decided to multiplex {{hadoop_deprecate_envvar}} because it is such a good name 
:)  Any suggestion on the separate API? How about {{hadoop_retire_envvar}}?

> Switch KMS from Tomcat to Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13597.001.patch, HADOOP-13597.002.patch, 
> HADOOP-13597.003.patch, HADOOP-13597.004.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have change client code that much. It would require 
> more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13890) Unit tests should use SPNEGO principal with realm

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742910#comment-15742910
 ] 

Hadoop QA commented on HADOOP-13890:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
47s{color} | {color:green} root: The patch generated 0 new + 153 unchanged - 2 
fixed = 153 total (was 155) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
25s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
18s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13890 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842818/HADOOP-13890.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1ac439b2ce6d 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f66f618 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11248/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11248/testReport/ |
| modules | C: 

[jira] [Commented] (HADOOP-13578) Add Codec for ZStandard Compression

2016-12-12 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742894#comment-15742894
 ] 

Jason Lowe commented on HADOOP-13578:
-

Thanks for updating the patch!  Sorry for the delay in re-review, as I was 
travelling last week.  The ASF license warnings appear to be unrelated, as does 
the unit test failure.  Comments on the patch:

IO_COMPRESSION_CODEC_ZSTD_BUFFER_SIZE should be 
IO_COMPRESSION_CODEC_ZSTD_BUFFER_SIZE_DEFAULT

The ZStandardCompressor(level, bufferSize) constructor should be implemented in 
terms of ZStandardCompressor(level, inputBufferSize, outputBufferSize) to 
reduce the code redundancy and improve maintainability.
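
Something like (constructor shapes from this thread; parameter names assumed):

{code}
// Two-arg constructor delegates to the three-arg one so the buffer-setup
// logic lives in exactly one place.
public ZStandardCompressor(int level, int bufferSize) {
  this(level, bufferSize, bufferSize);
}
{code}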

If inflateBytesDirect took a dest position then we wouldn't need to slice the 
destination buffer, avoiding a temporary object allocation.
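
Roughly (hypothetical signature, for illustration only; not the actual patch):

{code}
// Hypothetical: taking an explicit destination offset/length lets the JNI
// side write at the right position in the original direct buffer...
private native int inflateBytesDirect(ByteBuffer src, int srcOff, int srcLen,
                                      ByteBuffer dst, int dstOff, int dstLen);

// ...so the Java side avoids the temporary view that a dst.slice() call
// would otherwise allocate on every decompress:
int n = inflateBytesDirect(directBuf, 0, bytesInBuf,
                           dst, dst.position(), dst.remaining());
dst.position(dst.position() + n);
{code}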

I'm not sure about this snippet in ZStandardDirectDecompressor#decompress:
{code}
  endOfInput = !src.hasRemaining();
  super.finished = !src.hasRemaining();
{code}
After adding the super.finished change, it appears the endOfInput and finished 
variables will simply mirror each other, which would make endOfInput redundant.  
Instead, I think endOfInput is still necessary, and the finished handling is 
already taken care of by the JNI code.  In other words, I think we don't need 
the super.finished line that was added.  However, maybe I'm missing something 
here.

I think we need to return after these THROW macros or risk passing null 
pointers to the zstd library:
{code}
void * uncompressed_bytes = (*env)->GetDirectBufferAddress(env,
    uncompressed_direct_buf);
if (!uncompressed_bytes) {
    THROW(env, "java/lang/InternalError",
        "Undefined memory address for uncompressedDirectBuf");
}

// Get the output direct buffer
void * compressed_bytes = (*env)->GetDirectBufferAddress(env,
    compressed_direct_buf);
if (!compressed_bytes) {
    THROW(env, "java/lang/InternalError",
        "Undefined memory address for compressedDirectBuf");
}
{code}
Same comments for similar code on the decompressor JNI side.

Related to this code:
{code}
if (remaining_to_flush) {
    (*env)->SetBooleanField(env, this,
        ZStandardCompressor_finished, JNI_FALSE);
} else {
    (*env)->SetBooleanField(env, this,
        ZStandardCompressor_finished, JNI_TRUE);
}
{code}
What I meant by my previous review was to have the JNI layer clear the finished 
flag in any case we didn't set it, including the case where the finish flag 
isn't asking us to wrap things up.  Actually thinking about this further, I'm 
not sure we need the case where we set it to false.  We just need to set it to 
true in the JNI layer when the end flush indicates there aren't any more bytes 
to flush.  In every other case finished should already be false, and the user 
will need to call reset() to clear both the finish and finished flags to 
continue with more input.  So I think we can put this back to the way it was.  
Sorry for the confusion on my part.

If there are still bytes left to consume in the uncompressed buffer then I 
don't think we want to call ZSTD_endStream.  We should only be calling 
ZSTD_endStream when the uncompressed buffer has been fully consumed, correct?  
Otherwise we may fail to compress the final chunk of input data when the user 
sets the finish flag and the zstd library doesn't fully consume the input on 
the ZSTD_compressStream call.  That could result in a "successful" compression 
that drops data.


> Add Codec for ZStandard Compression
> ---
>
> Key: HADOOP-13578
> URL: https://issues.apache.org/jira/browse/HADOOP-13578
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: churro morales
>Assignee: churro morales
> Attachments: HADOOP-13578.patch, HADOOP-13578.v1.patch, 
> HADOOP-13578.v2.patch, HADOOP-13578.v3.patch, HADOOP-13578.v4.patch, 
> HADOOP-13578.v5.patch, HADOOP-13578.v6.patch
>
>
> ZStandard: https://github.com/facebook/zstd has been used in production for 6 
> months by facebook now.  v1.0 was recently released.  Create a codec for this 
> library.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-12-12 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742886#comment-15742886
 ] 

Allen Wittenauer commented on HADOOP-13597:
---

OK, I misread oldvar as newvar in the patch file. I'm not a fan of the change 
since it just increases the complexity of the code and the run time when oldvar 
is in use.

> Switch KMS from Tomcat to Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13597.001.patch, HADOOP-13597.002.patch, 
> HADOOP-13597.003.patch, HADOOP-13597.004.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have change client code that much. It would require 
> more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13673) Update scripts to be smarter when running with privilege

2016-12-12 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13673:
--
Labels: security  (was: )

> Update scripts to be smarter when running with privilege
> 
>
> Key: HADOOP-13673
> URL: https://issues.apache.org/jira/browse/HADOOP-13673
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: security
> Attachments: HADOOP-13673.00.patch, HADOOP-13673.01.patch, 
> HADOOP-13673.02.patch
>
>
> As work continues on HADOOP-13397, it's become evident that we need better 
> hooks to start daemons as specifically configured users.  Via the 
> (command)_(subcommand)_USER environment variables in 3.x, we actually have a 
> standardized way to do that.  This in turn means we can make the sbin scripts 
> super functional with a bit of updating:
> * Consolidate start-dfs.sh and start-secure-dns.sh into one script
> * Make start-\*.sh and stop-\*.sh know how to switch users when run as root
> * Undeprecate start/stop-all.sh so that it could be used as root for 
> production purposes and as a single user for non-production users



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13397) Add dockerfile for Hadoop

2016-12-12 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742863#comment-15742863
 ] 

Allen Wittenauer edited comment on HADOOP-13397 at 12/12/16 7:21 PM:
-

Just an update.  HADOOP-13673 is nearing completion. After it gets committed, 
it'll be trivial to run multiple daemons as multiple users in a single docker 
image for those releases that have this patch.  This greatly simplifies any 
start up code needed under docker, as the su'ing is handled by Apache Hadoop 
itself.


was (Author: aw):
Just an update.  HADOOP-13673 is nearly completion. After it gets committed, 
it'll be trivial to run multiple daemons as multiple users in a single docker 
image for those releases that have this patch.  This greatly simplifies any 
start up code needed under docker, as the su'ing is handled by Apache Hadoop 
itself.

> Add dockerfile for Hadoop
> -
>
> Key: HADOOP-13397
> URL: https://issues.apache.org/jira/browse/HADOOP-13397
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Klaus Ma
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13397.DNC001.patch
>
>
> For now, there's no community version Dockerfile in Hadoop; most of docker 
> images are provided by vendor, e.g. 
> 1. Cloudera's image: https://hub.docker.com/r/cloudera/quickstart/
> 2.  From HortonWorks sequenceiq: 
> https://hub.docker.com/r/sequenceiq/hadoop-docker/
> 3. MapR provides the mapr-sandbox-base: 
> https://hub.docker.com/r/maprtech/mapr-sandbox-base/
> The proposal of this JIRA is to provide a community version Dockerfile in 
> Hadoop, and here's some requirement:
> 1. Seperated docker image for master & agents, e.g. resource manager & node 
> manager
> 2. Default configuration to start master & agent instead of configurating 
> manually
> 3. Start Hadoop process as no-daemon
> Here's my dockerfile to start master/agent: 
> https://github.com/k82cn/outrider/tree/master/kubernetes/imgs/yarn
> I'd like to contribute it after polishing :).
> Email Thread : 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201607.mbox/%3CSG2PR04MB162977CFE150444FA022510FB6370%40SG2PR04MB1629.apcprd04.prod.outlook.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13397) Add dockerfile for Hadoop

2016-12-12 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742863#comment-15742863
 ] 

Allen Wittenauer commented on HADOOP-13397:
---

Just an update.  HADOOP-13673 is nearing completion. After it gets committed, 
it'll be trivial to run multiple daemons as multiple users in a single docker 
image for those releases that have this patch.  This greatly simplifies any 
start up code needed under docker, as the su'ing is handled by Apache Hadoop 
itself.

> Add dockerfile for Hadoop
> -
>
> Key: HADOOP-13397
> URL: https://issues.apache.org/jira/browse/HADOOP-13397
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Klaus Ma
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13397.DNC001.patch
>
>
> For now, there is no community-version Dockerfile in Hadoop; most Docker 
> images are provided by vendors, e.g. 
> 1. Cloudera's image: https://hub.docker.com/r/cloudera/quickstart/
> 2. From HortonWorks, sequenceiq: 
> https://hub.docker.com/r/sequenceiq/hadoop-docker/
> 3. MapR provides the mapr-sandbox-base: 
> https://hub.docker.com/r/maprtech/mapr-sandbox-base/
> The proposal of this JIRA is to provide a community-version Dockerfile in 
> Hadoop, with these requirements:
> 1. Separate Docker images for master & agents, e.g. resource manager & node 
> manager
> 2. Default configuration to start master & agent instead of configuring 
> manually
> 3. Start Hadoop processes as non-daemons
> Here's my dockerfile to start master/agent: 
> https://github.com/k82cn/outrider/tree/master/kubernetes/imgs/yarn
> I'd like to contribute it after polishing :).
> Email Thread : 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201607.mbox/%3CSG2PR04MB162977CFE150444FA022510FB6370%40SG2PR04MB1629.apcprd04.prod.outlook.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13070) classloading isolation improvements for stricter dependencies

2016-12-12 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742855#comment-15742855
 ] 

Sangjin Lee commented on HADOOP-13070:
--

I’ve been testing the latest patch (HADOOP-13998), and there seems to be one 
interesting issue with the stricter classpath isolation; it has to do with 
{{ServiceLoader}}.

(1) {{ServiceLoader}}
The {{ServiceLoader}} essentially uses the following pattern to load service 
classes dynamically:
{code}
// Simplified view of what ServiceLoader does internally.
Enumeration<URL> defs =
    classloader.getResources("META-INF/services/org.foo.ServiceInterface");
while (defs.hasMoreElements()) {
  URL def = defs.nextElement();
  // parse() reads the class names listed in the service file.
  Iterator<String> names = parse(def);
  while (names.hasNext()) {
    ServiceInterface si = (ServiceInterface)
        Class.forName(names.next(), false, classloader).newInstance();
  }
}
{code}
First off, {{ClassLoader.getResources()}} will return all service files, 
*regardless of* whether the service file is in the user classpath or the 
parent classpath (a bit more discussion on {{ClassLoader.getResources()}} 
below).

Since all service files have been located and the calling class of 
{{Class.forName()}} is {{ServiceLoader}}, which is a system class, all service 
classes will be successfully loaded, *regardless of* whether the service class 
is in the user classpath or the parent classpath.

Technically this would represent an opportunity to circumvent the isolation and 
load stuff from the parent classpath. That said, we could still regard this as 
a variation of a “system facility providing a way to load things from both 
classpaths” case mentioned in the proposal (section 2-1).

I thought about plugging this hole, but there doesn’t seem to be an 
unambiguous way to do it.

One approach I considered is to walk up the call stack to identify who is 
calling {{ServiceLoader.load()}}, and to use that calling class to enforce 
stricter loading. If a user class is the calling class, it would still locate 
service files from both the user classpath and the parent classpath; but as it 
iterates over the classes, it will fail to load any non-system parent class. 
This turns the {{ServiceLoader}} iteration into a *hard* failure.
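
Purely as an illustration of that approach, a minimal sketch; the 
{{CallerFinder}} helper is hypothetical, not anything in Hadoop or the patch:
{code}
// Hypothetical sketch: find the first class on the stack that is neither
// this helper nor ServiceLoader, i.e. the effective caller.
final class CallerFinder {
  static Class<?> findCaller(ClassLoader cl) throws ClassNotFoundException {
    for (StackTraceElement frame : new Throwable().getStackTrace()) {
      String name = frame.getClassName();
      if (!name.equals(CallerFinder.class.getName())
          && !name.startsWith("java.util.ServiceLoader")) {
        // Resolve against the given loader without initializing the class.
        return Class.forName(name, false, cl);
      }
    }
    return null;
  }
}
{code}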

On the other hand, we could try to somehow determine that a certain service 
file is a “non-system parent service file” and not return that resource from 
{{ClassLoader.getResources()}} to begin with. However, the notion of a 
“non-system parent service file” is not well defined, and I don’t think there 
is a way to define it clearly.

I think the best way forward is to allow {{ServiceLoader}} to load services 
from both the user and the parent classpath. I’d love to hear your thoughts on 
this.

(2) {{ClassLoader.getResources()}}
Currently {{ApplicationClassLoader}} does not override this. The javadoc for 
{{ClassLoader.getResources()}} states:
{noformat}
…
The search order is described in the documentation for getResource(String).
{noformat}
Since we do not override this today, we return the resources from the parent 
first and then from the child, which is not quite the same as what the javadoc 
indicates. So it seems to me that, at a minimum, we want to change the order of 
resources so that the child resources are returned first.
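
A minimal sketch of that reordering, assuming a loader that extends 
{{URLClassLoader}} the way {{ApplicationClassLoader}} does (illustration only, 
not the actual change):
{code}
import java.io.IOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Enumeration;
import java.util.List;

class ChildFirstResources extends URLClassLoader {
  ChildFirstResources(URL[] urls, ClassLoader parent) {
    super(urls, parent);
  }

  @Override
  public Enumeration<URL> getResources(String name) throws IOException {
    // findResources() searches only this loader's own URLs (the "child").
    List<URL> urls = new ArrayList<>(Collections.list(findResources(name)));
    // Parent resources follow, matching a child-first getResource(String).
    if (getParent() != null) {
      urls.addAll(Collections.list(getParent().getResources(name)));
    }
    return Collections.enumeration(urls);
  }
}
{code}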

The next question is whether it should return a (non-system) parent resource if 
a user class calls this method. We could tighten this to filter out non-system 
parent resources. I am leaning towards making that change.

Thoughts? Feedback? Concerns?
cc [~busbey]


> classloading isolation improvements for stricter dependencies
> -
>
> Key: HADOOP-13070
> URL: https://issues.apache.org/jira/browse/HADOOP-13070
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Attachments: HADOOP-13070.poc.01.patch, Test.java, TestDriver.java, 
> classloading-improvements-ideas-v.3.pdf, classloading-improvements-ideas.pdf, 
> classloading-improvements-ideas.v.2.pdf, lib.jar
>
>
> Related to HADOOP-11656, we would like to make a number of improvements in 
> terms of classloading isolation so that user-code can run safely without 
> worrying about dependency collisions with the Hadoop dependencies.
> By the same token, it should raise the quality of the user code and its 
> specified classpath so that users get clear signals if they specify incorrect 
> classpaths.
> This will contain a proposal that will include several improvements some of 
> which may not be backward compatible. As such, it should be targeted to the 
> next major revision of Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Comment Edited] (HADOOP-13673) Update scripts to be smarter when running with privilege

2016-12-12 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742849#comment-15742849
 ] 

Allen Wittenauer edited comment on HADOOP-13673 at 12/12/16 7:17 PM:
-

-02:
* minor bug fixes
* add unit tests
* doc fixes
* shellcheck fixes
* verified that users can run daemons as root if they set _USER=root (as 
ill-advised as that is)


was (Author: aw):
-02:
* minor bug fixes
* add unit tests
* doc fixes
* shellcheck fixes


> Update scripts to be smarter when running with privilege
> 
>
> Key: HADOOP-13673
> URL: https://issues.apache.org/jira/browse/HADOOP-13673
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13673.00.patch, HADOOP-13673.01.patch, 
> HADOOP-13673.02.patch
>
>
> As work continues on HADOOP-13397, it's become evident that we need better 
> hooks to start daemons as specifically configured users.  Via the 
> (command)_(subcommand)_USER environment variables in 3.x, we actually have a 
> standardized way to do that.  This in turn means we can make the sbin scripts 
> super functional with a bit of updating:
> * Consolidate start-dfs.sh and start-secure-dns.sh into one script
> * Make start-\*.sh and stop-\*.sh know how to switch users when run as root
> * Undeprecate start/stop-all.sh so that it could be used as root for 
> production purposes and as a single user for non-production users



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13673) Update scripts to be smarter when running with privilege

2016-12-12 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13673:
--
Attachment: HADOOP-13673.02.patch

-02:
* minor bug fixes
* add unit tests
* doc fixes
* shellcheck fixes


> Update scripts to be smarter when running with privilege
> 
>
> Key: HADOOP-13673
> URL: https://issues.apache.org/jira/browse/HADOOP-13673
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13673.00.patch, HADOOP-13673.01.patch, 
> HADOOP-13673.02.patch
>
>
> As work continues on HADOOP-13397, it's become evident that we need better 
> hooks to start daemons as specifically configured users.  Via the 
> (command)_(subcommand)_USER environment variables in 3.x, we actually have a 
> standardized way to do that.  This in turn means we can make the sbin scripts 
> super functional with a bit of updating:
> * Consolidate start-dfs.sh and start-secure-dns.sh into one script
> * Make start-\*.sh and stop-\*.sh know how to switch users when run as root
> * Undeprecate start/stop-all.sh so that it could be used as root for 
> production purposes and as a single user for non-production users



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13893) dynamodb dependency -> compile

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742791#comment-15742791
 ] 

Hadoop QA commented on HADOOP-13893:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
11s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
 3s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13893 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842847/HADOOP-13893-HADOOP-13345.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 8ffc141703db 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / d354cd1 |
| Default Java | 1.8.0_111 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11249/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11249/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> dynamodb dependency -> compile
> --
>
> Key: HADOOP-13893
> URL: https://issues.apache.org/jira/browse/HADOOP-13893
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HADOOP-13893-HADOOP-13345.000.patch
>
>
> Unless/until we can go back to a unified JAR for the AWS SDK, we need to add 
> the dynamoDB dependencies to the compile category, so they get picked up 
> downstream.
> Without this, clients may discover that they can't talk to s3guard endpoints.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Updated] (HADOOP-13893) dynamodb dependency -> compile

2016-12-12 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13893:
---
Target Version/s: HADOOP-13345
  Status: Patch Available  (was: Open)

> dynamodb dependency -> compile
> --
>
> Key: HADOOP-13893
> URL: https://issues.apache.org/jira/browse/HADOOP-13893
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HADOOP-13893-HADOOP-13345.000.patch
>
>
> Unless/until we can go back to a unified JAR for the AWS SDK, we need to add 
> the dynamoDB dependencies to the compile category, so they get picked up 
> downstream.
> Without this, clients may discover that they can't talk to s3guard endpoints.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13893) dynamodb dependency -> compile

2016-12-12 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13893:
---
Attachment: HADOOP-13893-HADOOP-13345.000.patch

Thanks Steve for creating this JIRA.

The exception we got is a missing class:
{code}
Exception in thread "main" java.lang.NoClassDefFoundError: 
com/amazonaws/services/dynamodbv2/model/ResourceNotFoundException
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at 
org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2237)
at 
org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2202)
at 
org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2298)
at 
org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2324)
at 
org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStoreClass(S3Guard.java:164)
at 
org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:128)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:252)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3246)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3295)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3263)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:239)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:222)
at 
org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
at org.apache.hadoop.fs.shell.Command.run(Command.java:166)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:326)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:384)
Caused by: java.lang.ClassNotFoundException: 
com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 24 more
{code}

And the hadoop classpath before the patch was:
{code}
$ hadoop classpath
/Users/mliu/Applications/hadoop/etc/hadoop:/Users/mliu/Applications/hadoop/share/hadoop/common/lib/*:/Users/mliu/Applications/hadoop/share/hadoop/common/*:/Users/mliu/Applications/hadoop/share/hadoop/tools/lib/jmespath-java-1.0.jar:/Users/mliu/Applications/hadoop/share/hadoop/tools/lib/joda-time-2.9.4.jar:/Users/mliu/Applications/hadoop/share/hadoop/tools/lib/aws-java-sdk-s3-1.11.45.jar:/Users/mliu/Applications/hadoop/share/hadoop/tools/lib/aws-java-sdk-core-1.11.45.jar:/Users/mliu/Applications/hadoop/share/hadoop/tools/lib/jackson-dataformat-cbor-2.7.8.jar:/Users/mliu/Applications/hadoop/share/hadoop/tools/lib/ion-java-1.0.1.jar:/Users/mliu/Applications/hadoop/share/hadoop/tools/lib/aws-java-sdk-kms-1.11.45.jar:/Users/mliu/Applications/hadoop/share/hadoop/tools/lib/hadoop-aws-3.0.0-alpha2-SNAPSHOT.jar:/Users/mliu/Applications/hadoop/share/hadoop/hdfs:/Users/mliu/Applications/hadoop/share/hadoop/hdfs/lib/*:/Users/mliu/Applications/hadoop/share/hadoop/hdfs/*:/Users/mliu/Applications/hadoop/share/hadoop/mapreduce/*:/Users/mliu/Applications/hadoop/share/hadoop/yarn/lib/*:/Users/mliu/Applications/hadoop/share/hadoop/yarn/*
{code}
after:
{code}
$ hadoop classpath

[jira] [Created] (HADOOP-13894) s3a troubleshooting to cover the "JSON parse error" message

2016-12-12 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13894:
---

 Summary: s3a troubleshooting to cover the "JSON parse error" 
message
 Key: HADOOP-13894
 URL: https://issues.apache.org/jira/browse/HADOOP-13894
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: documentation, fs/s3
Affects Versions: 2.7.3
Reporter: Steve Loughran
Priority: Minor


Generally problems in s3 IO during list operations surface as JSON parse 
errors, with the underlying cause lost (unchecked HTTP error code, text/plain, 
text/html, interrupted thread).

Document this fact in the troubleshooting section.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13893) dynamodb dependency -> compile

2016-12-12 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13893:
---

 Summary: dynamodb dependency -> compile
 Key: HADOOP-13893
 URL: https://issues.apache.org/jira/browse/HADOOP-13893
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.0.0-alpha2
Reporter: Steve Loughran
Assignee: Mingliang Liu


Unless/until we can go back to a unified JAR for the AWS SDK, we need to add 
the dynamoDB dependencies to the compile category, so they get picked up 
downstream.

Without this, clients may discover that they can't talk to s3guard endpoints.
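
A sketch of the kind of pom.xml change this implies for hadoop-aws; the 
artifact coordinates are my assumption, not taken from the patch:
{code}
<!-- Promote the DynamoDB SDK to compile scope so downstream clients
     pick it up transitively. Coordinates assumed, not from the patch. -->
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk-dynamodb</artifactId>
  <scope>compile</scope>
</dependency>
{code}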



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13336) S3A to support per-bucket configuration

2016-12-12 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742062#comment-15742062
 ] 

Steve Loughran edited comment on HADOOP-13336 at 12/12/16 4:53 PM:
---

This also matters for HADOOP-13345, where different buckets will have different 
MD caching policies, including "none", so increasing its priority.

Possibilities, all of which assume falling back to the standard s3a options as 
the default. This means there is no way to undefine an option.

h3. per-bucket config. 

Lets you define everything for a bucket. 

Examples

* {{s3a://olap2/data/2017}} : s3a URL {{s3a://olap2/data/2017}}, with config 
set {{fs.s3a.bucket.olap2}} in configuration
* {{s3a://landsat}} : s3a URL {{s3a://landsat}}, with config set 
{{fs.s3a.landsat}} for anonymous credentials and no dynamo



Pro
* Conceptually simple
* easy to get started
* trivial to move between other s3 clients, just change the prefix/redeclare 
the prefix binding

Con
* Expensive/complicated to maintain configurations.
* Need to delve into the configuration file to see what the mappings are. I can 
see this mattering a lot in support calls related to authentication.
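
To make this first option concrete, a hypothetical {{core-site.xml}} entry; 
the exact per-bucket key syntax here is my assumption, not anything settled:
{code}
<!-- Hypothetical per-bucket override; anything unset falls back to fs.s3a.* -->
<property>
  <name>fs.s3a.bucket.landsat.aws.credentials.provider</name>
  <value>org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider</value>
</property>
{code}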

h3. config via domain name in URL

This is what swift does: you define a domain, with the domain defining 
everything.


* {{s3a://olap2.dynamo/data/2017}} with config set {{fs.s3a.binding.dynamo}}
* {{s3a://landsat.anon}} with config set {{fs.s3a.binding.anon}} for anonymous 
credentials and no dynamo

Pro:
* shared config across multiple buckets
* easy to see when buckets have different config options without having to 
delve into the configuration file to see what the mappings are.
* Matches {{swift://}}
* Similar-ish to {{wasb}}

Con:
* the need to explicitly declare a domain stops you from transparently moving a 
bucket to a different set of options, unless you add a way to also bind a 
bucket to a "configuration domain" behind the scenes.
* S3 supports FQDNs already
* not going to be compatible with previous versions or external s3 clients 
(e.g. EMR)

h3. Config via user:pass property in URL

This is a bit like Azure, where the FQDN defines the binding, and the username 
defines the bucket. Here I'm proposing the ability to define a new user which 
declares the binding info.

Examples

* {{s3a://dynamo@olap2/data/2017}} : s3a URL {{s3a://olap2/data/2017}}, with 
config set {{fs.s3a.binding.dynamo}}
* {{s3a://anon@landsat}} : s3a URL {{s3a://landsat}}, with config set 
{{fs.s3a.binding.anon}} for anonymous credentials.


Pro:
* Better for sharing configuration options across buckets
* consistent with the AWSID:secret mechanism used today
* see at a glance what the configuration set used is, easy to change.
* no complications related to domain naming
* Easy to switch between configuration sets on the command line, without adding 
new properties.

Con:
* needs different URLs if you don't want the default.

h3. Fundamentally rework Hadoop configuration to support a hierarchical 
configuration mechanism.

I'm not really proposing this; I just wanted to mention it as the nominal 
ultimate option, instead of what we have today, where different subsystems (HA, 
Swift, Azure, etc.) each define their own mechanism for tuning customisation.




was (Author: ste...@apache.org):
This also matters for HADOOP-13345, where different buckets will have different 
MD caching policies, including "none", so increasing its priority.

Possibilities —all of which assume fallling back to the s3a standard options as 
default. This means: no way to undefine an option.

h3. per-bucket config. 

Lets you define everything for a bucket. 

Examples

* {{s3a://olap2/data/2017}} : s3a URL {{s3a://olap2/data/2017}}, with config 
set {{fs.s3a.bucket.olap2}} in configuration
* {{s3a://anon@landsat}} : s3a URL {{s3a://landsat}}, with config set 
{{fs.s3a.landsat}} for anonymous credentials and no dynamo



Pro
* Conceptually simple
* easy to get started
* trivial to move between other s3 clients, just change the prefix/redeclare 
the prefix binding

Con
* Expensive/complicated to maintain configurations.
* Need to delve into the configuration file to see what the mappings are. I can 
see this mattering a lot in support calls related to authentication.

h3. config via domain name in URL

This is what swift does: you define a domain, with the domain defining 
everything.


* {{s3a://olap2.dynamo/data/2017}} with config sett {{fs.s3a.binding.dynamo}}
* {{s3a://landsat.anon}} with config set {{fs.s3a.binding.anon}} for anonymous 
credentials and no dynamo

Pro:
* shared config across multiple buckets
* easy to see when buckets have different config options without having delve 
into the configuration file to see what the mappings are.
* Matches {{swift://}}
* Similar-ish to {{wasb}}

Con:
* the need to explicitly declare a domain stops you transparently moving a 
bucket to a different set of options, unless you add 

[jira] [Updated] (HADOOP-13863) Hadoop - Azure: Add a new SAS key mode for WASB.

2016-12-12 Thread Dushyanth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dushyanth updated HADOOP-13863:
---
Attachment: HADOOP-13863.002.patch

I apologize; I was not seeing this issue after the initial build was 
successful. I was able to repro the problem with a clean build. I have fixed 
the issue and removed the implementation checks. The check was put in place to 
avoid affecting the Mock tests, but I have disabled those tests for SAS mode in 
the new patch.

> Hadoop - Azure: Add a new SAS key mode for WASB.
> 
>
> Key: HADOOP-13863
> URL: https://issues.apache.org/jira/browse/HADOOP-13863
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure, fs/azure
>Affects Versions: 2.8.0
>Reporter: Dushyanth
>Assignee: Dushyanth
> Attachments: HADOOP-13863.001.patch, HADOOP-13863.002.patch, WASB-SAS 
> Key Mode-Design Proposal.pdf
>
>
> The current implementation of WASB only supports Azure storage keys and SAS 
> keys provided via org.apache.hadoop.conf.Configuration, which results in 
> these secrets residing in the same address space as the WASB process and 
> provides complete access to the Azure storage account and its containers. 
> Added to the fact that WASB does not inherently support ACLs, WASB in its 
> current implementation cannot be securely used in environments like a secure 
> hadoop cluster. This JIRA is created to add a new mode in WASB that 
> operates on Azure Storage SAS keys, which can provide fine-grained, timed 
> access to containers and blobs, providing a segue into supporting WASB for 
> secure hadoop clusters.
> More details about the issue and the proposal are provided in the design 
> proposal document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


