[jira] [Commented] (HADOOP-13206) Delegation token cannot be fetched and used by different versions of client

2016-06-01 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311795#comment-15311795
 ] 

Zhe Zhang commented on HADOOP-13206:


The Jenkins failure is unrelated to the change; the test passes locally.

I took another look at the error I was getting. A possible cause is that the 
two clients use different {{hadoop.security.token.service.use_ip}} config values.
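
For reference, a minimal sketch of the setting in question (the value shown is 
illustrative; {{conf}} is just a fresh {{Configuration}}):

{code}
import org.apache.hadoop.conf.Configuration;

// If one client resolves token services to IP:port (use_ip=true, the default)
// and the other to host:port (use_ip=false), a token fetched by one client
// will not string-match the service computed by the other.
Configuration conf = new Configuration();
conf.setBoolean("hadoop.security.token.service.use_ip", false);
{code}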

Basically, this {{selectToken}} method goes over the {{tokens}} list and finds 
the first matching token. There are only two matching criteria: the {{token}} 
has the right {{kind}} (e.g. it is an HDFS delegation token rather than a YARN 
one), and the {{service}} text matches the given {{service}} parameter.

So any {{Text}} can be used as the input parameter, and a token's {{service}} 
field can likewise be arbitrary {{Text}}. This JIRA only aims at improving the 
matching logic for the two {{service}} strings such that an IP address matches 
a {{host:port}} string pointing to the same node. If the given {{service}} or 
the {{service}} in the {{token}} is in some other format and doesn't 
string-match, we should just pass over that {{token}} instead of throwing an 
exception or printing a WARN.
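
A minimal sketch of that rule (illustrative only, not the actual selector code):

{code}
import java.util.Collection;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;

static Token<?> selectToken(Text kind, Text service,
    Collection<Token<?>> tokens) {
  for (Token<?> token : tokens) {
    if (kind.equals(token.getKind()) && service.equals(token.getService())) {
      return token;  // first match wins
    }
    // otherwise pass over the token quietly: no exception, no WARN
  }
  return null;
}
{code}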

> Delegation token cannot be fetched and used by different versions of client
> ---
>
> Key: HADOOP-13206
> URL: https://issues.apache.org/jira/browse/HADOOP-13206
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.3.0, 2.6.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HADOOP-13206.00.patch, HADOOP-13206.01.patch, 
> HADOOP-13206.02.patch
>
>
> We have observed that an HDFS delegation token fetched by a 2.3.0 client 
> cannot be used by a 2.6.1 client, and vice versa. Through some debugging I 
> found that it's a mismatch between the token's {{service}} and the 
> {{service}} of the filesystem (e.g. {{webhdfs://host.something.com:50070/}}). 
> One would be in numerical IP address format and the other in non-numerical 
> hostname format.






[jira] [Commented] (HADOOP-13155) Implement TokenRenewer to renew and cancel delegation tokens in KMS

2016-06-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311702#comment-15311702
 ] 

Hadoop QA commented on HADOOP-13155:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 9s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 42s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 43s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 23s 
{color} | {color:red} root: The patch generated 2 new + 160 unchanged - 6 fixed 
= 162 total (was 166) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 14s 
{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 28s 
{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 53s 
{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 22s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807604/HADOOP-13155.07.patch 
|
| JIRA Issue | HADOOP-13155 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3b27fd943a0a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 16b1cc7 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9644/artifact/patchprocess/diff-checkstyle-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9644/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-common-project/hadoop-kms hadoop-hdfs-project/hadoop-hdfs-client U: . |
| Console output | 

[jira] [Updated] (HADOOP-13155) Implement TokenRenewer to renew and cancel delegation tokens in KMS

2016-06-01 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13155:
---
Attachment: HADOOP-13155.07.patch

Thanks again [~andrew.wang] for the discussion, and for getting HADOOP-13228 in.
Attached patch 7 is a rebase on the latest trunk.

Regarding the config and compat:
- {{DFSUtilClient}} now has a static variable for storing the config name, and 
defaults to dfs._ for compat. It can be set via 
{{DFSUtilClient#setKeyProviderUriKeyName}}.
- The provider creation logic is extracted to a new class {{KMSUtil}} in common.
- The newly added renewer classes use {{KMSUtil}}, but since they're in 
common, they use the hadoop._ configs. Of course, this means one will need to 
have both configs set correctly so that the token can be renewed, but this 
seems to be the most compatible way. A rough sketch of the wiring is below.
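
A hedged sketch of the wiring described above; the config key names and the 
{{KMSUtil}} method signature here are assumptions for illustration, not quotes 
from the patch:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.hdfs.DFSUtilClient;

// HDFS side: override the config key used to locate the provider (assumed name).
DFSUtilClient.setKeyProviderUriKeyName("dfs.encryption.key.provider.uri");

// Common side: the renewers resolve the provider through KMSUtil (new class in
// common) using the hadoop.* config key (method name and key assumed).
Configuration conf = new Configuration();
KeyProvider provider =
    KMSUtil.createKeyProvider(conf, "hadoop.security.key.provider.path");
{code}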

> Implement TokenRenewer to renew and cancel delegation tokens in KMS
> ---
>
> Key: HADOOP-13155
> URL: https://issues.apache.org/jira/browse/HADOOP-13155
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13155.01.patch, HADOOP-13155.02.patch, 
> HADOOP-13155.03.patch, HADOOP-13155.04.patch, HADOOP-13155.05.patch, 
> HADOOP-13155.06.patch, HADOOP-13155.07.patch, HADOOP-13155.pre.patch
>
>
> Service DelegationToken (DT) renewal is done in Yarn by 
> {{org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer}},
>  where it calls {{Token#renew}} and uses ServiceLoader to find the renewer 
> class 
> ([code|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java#L382]),
>  and invokes the renew method from it.
> We seem to be missing the token renewer class in KMS / HttpFSFileSystem, and 
> hence Yarn defaults to {{TrivialRenewer}} for DTs of these kinds, resulting in 
> the token not being renewed.
> As a side note, {{HttpFSFileSystem}} does have a {{renewDelegationToken}} 
> API, but I don't see it invoked in the hadoop code base. KMS does not have any 
> renew hook.
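
For illustration, a hedged sketch of what such a renewer could look like; the 
class name and token kind below are assumptions, not from the patch. The real 
implementation would ship with a 
META-INF/services/org.apache.hadoop.security.token.TokenRenewer entry so that 
the ServiceLoader lookup in {{Token#renew}} can find it:

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenRenewer;

public class KMSDelegationTokenRenewer extends TokenRenewer {  // hypothetical name
  @Override
  public boolean handleKind(Text kind) {
    return new Text("kms-dt").equals(kind);  // assumed KMS token kind
  }

  @Override
  public boolean isManaged(Token<?> token) {
    return true;  // renew/cancel supported, so no TrivialRenewer fallback
  }

  @Override
  public long renew(Token<?> token, Configuration conf) throws IOException {
    // would call the KMS renew-token endpoint here (elided in this sketch)
    throw new UnsupportedOperationException("sketch only");
  }

  @Override
  public void cancel(Token<?> token, Configuration conf) throws IOException {
    // would call the KMS cancel-token endpoint here (elided in this sketch)
    throw new UnsupportedOperationException("sketch only");
  }
}
{code}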






[jira] [Commented] (HADOOP-13232) Typo in exception in ValueQueue.java

2016-06-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311518#comment-15311518
 ] 

Hadoop QA commented on HADOOP-13232:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 5s 
{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 37s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807566/HADOOP-13232.001.patch
 |
| JIRA Issue | HADOOP-13232 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 62799157d3eb 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 16b1cc7 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9643/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9643/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Typo in exception in ValueQueue.java
> 
>
> Key: HADOOP-13232
> URL: https://issues.apache.org/jira/browse/HADOOP-13232
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Trivial
> Attachments: HADOOP-13232.001.patch
>
>
> Typo in exception. Missing a 'c' 

[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311508#comment-15311508
 ] 

Hadoop QA commented on HADOOP-12893:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 9m 27s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 47s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 38s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 41s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 54s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 7s 
{color} | {color:red} hadoop-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 6s 
{color} | {color:red} hadoop-project-dist in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 4s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 3s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 37s {color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 154m 41s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeLifeline |
|   | 
hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807535/HADOOP-12893.009.patch
 |
| JIRA Issue | HADOOP-12893 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux a3f137e0b439 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 16b1cc7 |
| Default Java | 1.8.0_91 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9642/artifact/patchprocess/patch-mvninstall-hadoop-project.txt
 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9642/artifact/patchprocess/patch-mvninstall-hadoop-project-dist.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9642/artifact/patchprocess/patch-unit-root.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/9642/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9642/testReport/ |
| modules | C: hadoop-project hadoop-project-dist . 

[jira] [Commented] (HADOOP-13232) Typo in exception in ValueQueue.java

2016-06-01 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311450#comment-15311450
 ] 

Andrew Wang commented on HADOOP-13232:
--

Patch LGTM, thanks for the contribution Jiayi. I added you to the contributor 
role on HADOOP and HDFS so you can assign JIRAs to yourself now.

I'll commit once Jenkins comes back.

> Typo in exception in ValueQueue.java
> 
>
> Key: HADOOP-13232
> URL: https://issues.apache.org/jira/browse/HADOOP-13232
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Trivial
> Attachments: HADOOP-13232.001.patch
>
>
> Typo in exception. Missing a 'c' in method getAtMost in ValueQueue






[jira] [Updated] (HADOOP-13232) Typo in exception in ValueQueue.java

2016-06-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13232:
-
Assignee: Jiayi Zhou  (was: Andrew Wang)

> Typo in exception in ValueQueue.java
> 
>
> Key: HADOOP-13232
> URL: https://issues.apache.org/jira/browse/HADOOP-13232
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Trivial
> Attachments: HADOOP-13232.001.patch
>
>
> Typo in exception. Missing a 'c' in method getAtMost in ValueQueue






[jira] [Updated] (HADOOP-13232) Typo in exception in ValueQueue.java

2016-06-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13232:
-
Target Version/s: 2.8.0  (was: 3.0.0-alpha1)

> Typo in exception in ValueQueue.java
> 
>
> Key: HADOOP-13232
> URL: https://issues.apache.org/jira/browse/HADOOP-13232
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Trivial
> Attachments: HADOOP-13232.001.patch
>
>
> Typo in exception. Missing a 'c' in method getAtMost in ValueQueue






[jira] [Updated] (HADOOP-13232) Typo in exception in ValueQueue.java

2016-06-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13232:
-
Affects Version/s: (was: 3.0.0-alpha1)
   2.6.0

> Typo in exception in ValueQueue.java
> 
>
> Key: HADOOP-13232
> URL: https://issues.apache.org/jira/browse/HADOOP-13232
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Trivial
> Attachments: HADOOP-13232.001.patch
>
>
> Typo in exception. Missing a 'c' in method getAtMost in ValueQueue






[jira] [Assigned] (HADOOP-13232) Typo in exception in ValueQueue.java

2016-06-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reassigned HADOOP-13232:


Assignee: Andrew Wang

> Typo in exception in ValueQueue.java
> 
>
> Key: HADOOP-13232
> URL: https://issues.apache.org/jira/browse/HADOOP-13232
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0-alpha1
>Reporter: Jiayi Zhou
>Assignee: Andrew Wang
>Priority: Trivial
> Attachments: HADOOP-13232.001.patch
>
>
> Typo in exception. Missing a 'c' in method getAtMost in ValueQueue






[jira] [Updated] (HADOOP-13232) Typo in exception in ValueQueue.java

2016-06-01 Thread Jiayi Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiayi Zhou updated HADOOP-13232:

Attachment: HADOOP-13232.001.patch

Trivial fix for the typo.

> Typo in exception in ValueQueue.java
> 
>
> Key: HADOOP-13232
> URL: https://issues.apache.org/jira/browse/HADOOP-13232
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0-alpha1
>Reporter: Jiayi Zhou
>Priority: Trivial
> Attachments: HADOOP-13232.001.patch
>
>
> Typo in exception. Missing a 'c' in method getAtMost in ValueQueue






[jira] [Updated] (HADOOP-13232) Typo in exception in ValueQueue.java

2016-06-01 Thread Jiayi Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiayi Zhou updated HADOOP-13232:

Status: Patch Available  (was: Open)

> Typo in exception in ValueQueue.java
> 
>
> Key: HADOOP-13232
> URL: https://issues.apache.org/jira/browse/HADOOP-13232
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0-alpha1
>Reporter: Jiayi Zhou
>Priority: Trivial
> Attachments: HADOOP-13232.001.patch
>
>
> Typo in exception. Missing a 'c' in method getAtMost in ValueQueue






[jira] [Created] (HADOOP-13232) Typo in exception in ValueQueue.java

2016-06-01 Thread Jiayi Zhou (JIRA)
Jiayi Zhou created HADOOP-13232:
---

 Summary: Typo in exception in ValueQueue.java
 Key: HADOOP-13232
 URL: https://issues.apache.org/jira/browse/HADOOP-13232
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 3.0.0-alpha1
Reporter: Jiayi Zhou
Priority: Trivial


Typo in exception. Missing a 'c' in method getAtMost in ValueQueue






[jira] [Commented] (HADOOP-13131) Add tests to verify that S3A supports SSE-S3 encryption

2016-06-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311306#comment-15311306
 ] 

Hudson commented on HADOOP-13131:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9896 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9896/])
HADOOP-13131. Add tests to verify that S3A supports SSE-S3 encryption. 
(cnauroth: rev 16b1cc7af9bd63b65ef50e1056f275a7baf111a2)
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestConstants.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AEncryptionFastOutputStream.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AEncryption.java
* hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/S3AScaleTestBase.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3ATestBase.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AFileSystemContract.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AEncryptionAlgorithmPropagation.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java


> Add tests to verify that S3A supports SSE-S3 encryption
> ---
>
> Key: HADOOP-13131
> URL: https://issues.apache.org/jira/browse/HADOOP-13131
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13131-001.patch, HADOOP-13131-002.patch, 
> HADOOP-13131-003.patch, HADOOP-13131-004.patch, HADOOP-13131-005.patch, 
> HADOOP-13131-branch-2-006.patch, HADOOP-13131-branch-2-007.patch, 
> HADOOP-13131-branch-2-008.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Although S3A claims to support server-side S3 encryption (and does, if you 
> set the option), we don't have any test to verify this. Of course, as the 
> encryption is transparent, it's hard to test.
> Here's what I propose
> # a test which sets encryption = AES256; expects things to work as normal.
> # a test which sets encryption = DES and expects any operation creating a file 
> or directory to fail with a 400 "bad request" error
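
For reference, a hedged sketch of proposal #2 (the property name matches the 
S3A configuration constant; the rest is illustrative):

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Configuration conf = new Configuration();
conf.set("fs.s3a.server-side-encryption-algorithm", "DES");  // unsupported
FileSystem fs = FileSystem.get(URI.create("s3a://bucket/"), conf);
// Expect any create()/mkdirs() to fail with an AWS 400 "Bad Request".
fs.create(new Path("/should-fail"));
{code}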






[jira] [Commented] (HADOOP-12537) s3a: Add flag for session ID to allow Amazon STS temporary credentials

2016-06-01 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311279#comment-15311279
 ] 

Sean Mackrory commented on HADOOP-12537:


It looks to me like you can append to the session token and it will still work 
- it must just be parsing what it expects and not looking at the rest. I can 
make the access fail by prepending to the token, or by simply setting the empty 
string as the session token. But as soon as I append to an otherwise valid 
session token, the test fails because the S3 access still works.

I thought I also observed behavior where authorization seemed to be cached for 
a short period of time, but I may have been misinterpreting the behavior I just 
described. I'm going to run through a few more tweaks of the test case to 
ensure this is not the case.
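
For reference, a hedged sketch of the tampering check described above (the 
session-token property is what this patch introduces; treat the exact names as 
assumptions, and the sts* variables as placeholders for STS-issued values):

{code}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
conf.set("fs.s3a.access.key", stsAccessKey);    // temporary credentials
conf.set("fs.s3a.secret.key", stsSecretKey);    // issued by Amazon STS
conf.set("fs.s3a.session.token", sessionToken + "JUNK");  // appended garbage
// Expected: S3 access is rejected. Observed: it still succeeds, so the
// test's assertion fails.
{code}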

> s3a: Add flag for session ID to allow Amazon STS temporary credentials
> --
>
> Key: HADOOP-12537
> URL: https://issues.apache.org/jira/browse/HADOOP-12537
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Minor
> Attachments: HADOOP-12537-branch-2.005.patch, HADOOP-12537.001.patch, 
> HADOOP-12537.002.patch, HADOOP-12537.003.patch, HADOOP-12537.004.patch, 
> HADOOP-12537.diff, HADOOP-12537.diff
>
>
> Amazon STS allows you to issue temporary access key id / secret key pairs for 
> a user / role. However, using these credentials also requires specifying a 
> session ID. There is currently no such configuration property, nor the 
> required code to pass it through to the API (at least not that I can find), in 
> any of the S3 connectors.






[jira] [Commented] (HADOOP-13220) Ignore findbugs checking in MiniKdc#stop and add the kerby version hadoop-project/pom.xml

2016-06-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311277#comment-15311277
 ] 

Hadoop QA commented on HADOOP-13220:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 9s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 6s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s 
{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-minikdc in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 27s 
{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 59s 
{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 40s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807065/HADOOP-13220-V1.patch 
|
| JIRA Issue | HADOOP-13220 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux a95006280351 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0bc05e4 |
| Default Java | 1.8.0_91 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9641/testReport/ |
| modules | C: hadoop-project hadoop-common-project/hadoop-minikdc 
hadoop-common-project/hadoop-auth hadoop-common-project/hadoop-common U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9641/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ignore findbugs checking in MiniKdc#stop and add the kerby version 
> hadoop-project/pom.xml
> 

[jira] [Commented] (HADOOP-13207) Specify FileSystem listStatus and listFiles

2016-06-01 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311270#comment-15311270
 ] 

Aaron Fabbri commented on HADOOP-13207:
---

Took a quick scan of the attached patch.

Love the documentation, thanks.

{{+value is an instance of the `LocatedFileStatus` subclass of a 
`FileSystatus`,}}

Typo in last word (FileStatus)

{quote}
+Callers MUST assume that if the iterator is not used immediately then
+the iteration operation itself MAY fail.
{quote}

Not sure if this is well-defined.  "Immediately" has little meaning on a 
non-realtime system.  Maybe specify that, after some time, the iteration may 
fail, and thus clients should handle this with a retry?
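
For example, a hedged sketch of the caller-side pattern that wording implies 
({{fs}} and {{path}} are placeholders):

{code}
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.RemoteIterator;

RemoteIterator<LocatedFileStatus> it = fs.listFiles(path, true);
while (it.hasNext()) {               // results may be fetched lazily, so
  LocatedFileStatus st = it.next();  // hasNext()/next() can throw IOException
  // ... process st; a caller may need to restart the listing on failure
}
{code}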

{{+is defined as a generator of a `LocatedFileS`tatus` instance `ls`}}

Typo.

{quote}
+The ordering in which the elements of `resultset` are returned in the iterator
+is undefined.
+
+It follows, that the set of paths in
+
{quote}
Unfinished sentence?

{{+of data which must be collected in a single}}
Ditto.

{{+Get the block size for a s}}
Here as well.

Test code looks good to me.


> Specify FileSystem listStatus and listFiles
> ---
>
> Key: HADOOP-13207
> URL: https://issues.apache.org/jira/browse/HADOOP-13207
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13207-branch-2-001.patch, 
> HADOOP-13207-branch-2-002.patch
>
>
> The many `listStatus`, `listLocatedStatus` and `listFiles` operations have 
> not been completely covered in the FS specification. There's lots of implicit 
> use of the {{listStatus()}} path, but no coverage or tests of the others.






[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311256#comment-15311256
 ] 

Allen Wittenauer commented on HADOOP-12893:
---

Actually, is there some reason this is a separate module from 
hadoop-build-tools?  (It has the same problem, FWIW.)  

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, 
> HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.009.patch, 
> HADOOP-12893.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.






[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311250#comment-15311250
 ] 

Allen Wittenauer commented on HADOOP-12893:
---

Put this in the hadoop-resource-bundle's pom.xml:

{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <id>dummy</id>
      <phase>validate</phase>
      <goals>
        <goal>run</goal>
      </goals>
    </execution>
  </executions>
</plugin>
{code}

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, 
> HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.009.patch, 
> HADOOP-12893.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.






[jira] [Commented] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations

2016-06-01 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311241#comment-15311241
 ] 

Chris Nauroth commented on HADOOP-13171:


[~ste...@apache.org], sorry, I just realized this needs to be rebased now that 
I have committed HADOOP-13131.  I guess one of them had to lose the race.  
We'll need one more patch revision for both trunk and branch-2.

> Add StorageStatistics to S3A; instrument some more operations
> -
>
> Key: HADOOP-13171
> URL: https://issues.apache.org/jira/browse/HADOOP-13171
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13171-014.patch, HADOOP-13171-branch-2-001.patch, 
> HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, 
> HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch, 
> HADOOP-13171-branch-2-006.patch, HADOOP-13171-branch-2-007.patch, 
> HADOOP-13171-branch-2-008.patch, HADOOP-13171-branch-2-009.patch, 
> HADOOP-13171-branch-2-010.patch, HADOOP-13171-branch-2-011.patch, 
> HADOOP-13171-branch-2-012.patch, HADOOP-13171-branch-2-013.patch
>
>
> Add {{StorageStatistics}} support to S3A, collecting the same metrics as the 
> instrumentation, but sharing across all instances.
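
A hedged sketch of how a caller could read the shared statistics once this 
lands ({{FileSystem#getStorageStatistics}} is the existing API this builds on; 
{{fs}} is a placeholder):

{code}
import java.util.Iterator;
import org.apache.hadoop.fs.StorageStatistics;

StorageStatistics stats = fs.getStorageStatistics();
Iterator<StorageStatistics.LongStatistic> it = stats.getLongStatistics();
while (it.hasNext()) {
  StorageStatistics.LongStatistic s = it.next();
  System.out.println(s.getName() + " = " + s.getValue());
}
{code}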






[jira] [Commented] (HADOOP-13206) Delegation token cannot be fetched and used by different versions of client

2016-06-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311239#comment-15311239
 ] 

Hadoop QA commented on HADOOP-13206:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 39s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 28s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s 
{color} | {color:red} hadoop-common-project/hadoop-common: The patch generated 
1 new + 2 unchanged - 0 fixed = 3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 40s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 0s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807528/HADOOP-13206.02.patch 
|
| JIRA Issue | HADOOP-13206 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a2945a5bc7e2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0bc05e4 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9640/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9640/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9640/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/9640/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9640/testReport/ |
| modules | C: 

[jira] [Updated] (HADOOP-13131) Add tests to verify that S3A supports SSE-S3 encryption

2016-06-01 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13131:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I have committed this to trunk, branch-2 and branch-2.8.  Steve, thank you for 
this patch.

> Add tests to verify that S3A supports SSE-S3 encryption
> ---
>
> Key: HADOOP-13131
> URL: https://issues.apache.org/jira/browse/HADOOP-13131
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13131-001.patch, HADOOP-13131-002.patch, 
> HADOOP-13131-003.patch, HADOOP-13131-004.patch, HADOOP-13131-005.patch, 
> HADOOP-13131-branch-2-006.patch, HADOOP-13131-branch-2-007.patch, 
> HADOOP-13131-branch-2-008.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Although S3A claims to support server-side S3 encryption (and does, if you 
> set the option), we don't have any test to verify this. Of course, as the 
> encryption is transparent, it's hard to test.
> Here's what I propose
> # a test which sets encryption = AES256; expects things to work as normal.
> # a test which sets encryption = DES and expects any operation creating a file 
> or directory to fail with a 400 "bad request" error






[jira] [Updated] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-01 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-12893:
---
Attachment: HADOOP-12893.009.patch

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, 
> HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.009.patch, 
> HADOOP-12893.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.






[jira] [Updated] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-01 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-12893:
---
Attachment: HADOOP-12893.09.patch

Thanks [~aw] for looking into this. I think these are different problems: the 
pom in hadoop-resource-bundle didn't have a version for its 
{{maven-remote-resources-plugin}}, good catch. I'm attaching patch 9 to fix 
that.

I think jenkins is saying hadoop-project cannot be built alone - I can 
reproduce that locally, but I'm not sure what the fix is yet:
- remove the local m2 cache for hadoop-resource-bundle
- cd hadoop-project (or any project)
- mvn install fails. :(

Seems we need to tell maven to build hadoop-resource-bundle before building 
any of the other projects; otherwise its dependency resolution will try to 
download it and fail.

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, 
> HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.






[jira] [Updated] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-01 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-12893:
---
Attachment: (was: HADOOP-12893.09.patch)

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, 
> HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.






[jira] [Updated] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations

2016-06-01 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13171:
---
Hadoop Flags: Reviewed

+1 for patch 014 on trunk.  I'll wait for a pre-commit run before committing.

bq. Tests working against S3 Ireland; couple of FNFE failures on the 
{{TestS3ABlockingThreadPool}} multipart tests with parallel test runs enabled; 
running non-parallel and all is well. Maybe that's one to isolate.

{{TestS3ABlockingThreadPool}} is run during the sequential phase now, so it's 
odd that running in parallel mode would make it fail.  Do you have a full stack 
trace for that failure?

I did notice a few potential small improvements to parallel test execution, so 
I filed HADOOP-13231 to track that.  I wouldn't expect any of that to impact 
{{TestS3ABlockingThreadPool}} though.

> Add StorageStatistics to S3A; instrument some more operations
> -
>
> Key: HADOOP-13171
> URL: https://issues.apache.org/jira/browse/HADOOP-13171
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13171-014.patch, HADOOP-13171-branch-2-001.patch, 
> HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, 
> HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch, 
> HADOOP-13171-branch-2-006.patch, HADOOP-13171-branch-2-007.patch, 
> HADOOP-13171-branch-2-008.patch, HADOOP-13171-branch-2-009.patch, 
> HADOOP-13171-branch-2-010.patch, HADOOP-13171-branch-2-011.patch, 
> HADOOP-13171-branch-2-012.patch, HADOOP-13171-branch-2-013.patch
>
>
> Add {{StorageStatistics}} support to S3A, collecting the same metrics as the 
> instrumentation, but sharing across all instances.






[jira] [Commented] (HADOOP-13231) Isolate test path used by a few S3A tests for more reliable parallel execution.

2016-06-01 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311189#comment-15311189
 ] 

Chris Nauroth commented on HADOOP-13231:


{{TestS3A}} uses a path of /tests3afc, without consideration of the fork ID in 
parallel mode.

{{TestS3ADeleteManyFiles#testOpenCreate}} uses a path of /tests3a, also without 
consideration of the parallel fork ID.

I don't believe there is any reason these tests need to run in isolation during 
the sequential phase.  We can make the paths include the fork ID. 
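
For illustration, a minimal sketch of the idea, assuming the parallel-tests 
profile passes a per-JVM fork id down as a system property (the property name 
below is an assumption, not a confirmed part of the build):

{code}
import org.apache.hadoop.fs.Path;

// Sketch only: derive a per-fork test path so parallel runs don't collide.
// "test.unique.fork.id" is a hypothetical property name here.
String forkId = System.getProperty("test.unique.fork.id", "");
Path testPath = new Path("/tests3afc" + (forkId.isEmpty() ? "" : "-" + forkId));
{code}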

> Isolate test path used by a few S3A tests for more reliable parallel 
> execution.
> ---
>
> Key: HADOOP-13231
> URL: https://issues.apache.org/jira/browse/HADOOP-13231
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
>
> I have noticed a few more spots in S3A tests that do not make use of the 
> isolated test directory path when running in parallel mode.  While I don't 
> have any evidence that this is really causing problems for parallel test runs 
> right now, it would still be good practice to clean these up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13220) Ignore findbugs checking in MiniKdc#stop and add the kerby version hadoop-project/pom.xml

2016-06-01 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-13220:
---
Status: Patch Available  (was: Open)

> Ignore findbugs checking in MiniKdc#stop and add the kerby version 
> hadoop-project/pom.xml
> -
>
> Key: HADOOP-13220
> URL: https://issues.apache.org/jira/browse/HADOOP-13220
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Jiajia Li
> Attachments: HADOOP-13220-V1.patch
>
>
> This is a follow-up jira from HADOOP-12911.
> 1. We currently have this findbugs warning:
> {noformat}
> org.apache.hadoop.minikdc.MiniKdc.stop() calls Thread.sleep() with a lock 
> held At MiniKdc.java:lock held At MiniKdc.java:[line 345] 
> {noformat}
> As discussed in HADOOP-12911:
> bq. Why was this committed with a findbugs error rather than adding the 
> necessary plumbing in pom.xml to make it go away?
> We will add the findbugsExcludeFile.xml and will get rid of this once the 
> kerby-1.0.0-rc3 release is available.
> 2. Add the kerby version to hadoop-project/pom.xml
> bq. hadoop-project/pom.xml contains the dependencies of all libraries used in 
> all modules of hadoop, under dependencyManagement. Only here will the version 
> be mentioned. All other Hadoop modules inherit hadoop-project, so all 
> submodules will use the same version. In submodules, the version need not be 
> mentioned in pom.xml. This will make version management easier.
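
For context, a minimal illustration (not MiniKdc's actual code) of the pattern 
behind that warning, and the usual alternative:

{code}
// Illustration only, not MiniKdc's actual code. Thread.sleep() inside a
// synchronized method keeps the monitor held for the whole sleep, which is
// what findbugs' SWL_SLEEP_WITH_LOCK_HELD pattern flags.
class StopIllustration {
  synchronized void stopFlagged() throws InterruptedException {
    Thread.sleep(1000);   // flagged: lock held while sleeping
  }

  // wait(timeout) releases the monitor while waiting, so it is not flagged.
  synchronized void stopAlternative() throws InterruptedException {
    wait(1000);
  }
}
{code}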



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13231) Isolate test path used by a few S3A tests for more reliable parallel execution.

2016-06-01 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-13231:
--

 Summary: Isolate test path used by a few S3A tests for more 
reliable parallel execution.
 Key: HADOOP-13231
 URL: https://issues.apache.org/jira/browse/HADOOP-13231
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor


I have noticed a few more spots in S3A tests that do not make use of the 
isolated test directory path when running in parallel mode.  While I don't have 
any evidence that this is really causing problems for parallel test runs right 
now, it would still be good practice to clean these up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13206) Delegation token cannot be fetched and used by different versions of client

2016-06-01 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15311169#comment-15311169
 ] 

Zhe Zhang commented on HADOOP-13206:


Thanks for the discussion Yongjun. Based on my tests, tokens fetched by a 
version 2.3 client have an IP address, and tokens fetched by a version 2.6 
client have host names. I'm still trying to find the code which makes this 
difference.

bq. 1. Do we expect the {{service}} to be either host name or IP address, or is 
only host name allowed?
bq. 2. Do we intend to support both hostname and IP address formats here? Based 
on my read of the jira description, seems we intend to support both.
So yes, we should expect both host names and IP addresses in the {{service}} 
field.

This JIRA just serves as an incremental fix to match an IP address and a host 
name pointing to the same host. In general, I guess {{service}} can be any 
text. That's why I'm using {{DEBUG}} level logging -- if {{service}} is not in 
{{host:port}} format, it might not indicate a bug.

Good point about the log message; attaching a patch to address it.
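
To make the intended matching concrete, here's a rough sketch of the idea (my 
reading of the approach, not the patch itself; the helper name is made up):

{code}
import java.net.InetSocketAddress;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.net.NetUtils;

// Sketch only: helper name is invented; the real patch may differ.
class ServiceMatch {
  // Treat an IP-based service and a host:port service as equal when they
  // resolve to the same address and port; anything not in host:port form is
  // simply not a match, rather than an error.
  static boolean sameService(Text a, Text b) {
    try {
      InetSocketAddress sa = NetUtils.createSocketAddr(a.toString());
      InetSocketAddress sb = NetUtils.createSocketAddr(b.toString());
      return !sa.isUnresolved() && !sb.isUnresolved()
          && sa.getPort() == sb.getPort()
          && sa.getAddress().equals(sb.getAddress());
    } catch (IllegalArgumentException e) {
      return false;   // not host:port; caller falls back to string matching
    }
  }
}
{code}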

> Delegation token cannot be fetched and used by different versions of client
> ---
>
> Key: HADOOP-13206
> URL: https://issues.apache.org/jira/browse/HADOOP-13206
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.3.0, 2.6.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HADOOP-13206.00.patch, HADOOP-13206.01.patch, 
> HADOOP-13206.02.patch
>
>
> We have observed that an HDFS delegation token fetched by a 2.3.0 client 
> cannot be used by a 2.6.1 client, and vice versa. Through some debugging I 
> found that it's a mismatch between the token's {{service}} and the 
> {{service}} of the filesystem (e.g. {{webhdfs://host.something.com:50070/}}). 
> One would be in numerical IP address and one would be in non-numerical 
> hostname format.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13206) Delegation token cannot be fetched and used by different versions of client

2016-06-01 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-13206:
---
Attachment: HADOOP-13206.02.patch

> Delegation token cannot be fetched and used by different versions of client
> ---
>
> Key: HADOOP-13206
> URL: https://issues.apache.org/jira/browse/HADOOP-13206
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.3.0, 2.6.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HADOOP-13206.00.patch, HADOOP-13206.01.patch, 
> HADOOP-13206.02.patch
>
>
> We have observed that an HDFS delegation token fetched by a 2.3.0 client 
> cannot be used by a 2.6.1 client, and vice versa. Through some debugging I 
> found that it's a mismatch between the token's {{service}} and the 
> {{service}} of the filesystem (e.g. {{webhdfs://host.something.com:50070/}}). 
> One would be in numerical IP address and one would be in non-numerical 
> hostname format.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15311122#comment-15311122
 ] 

Allen Wittenauer edited comment on HADOOP-12893 at 6/1/16 9:34 PM:
---

I'm guessing the dependencies are in the wrong order, which is why yetus 
rejected it:

{code}
C: hadoop-project hadoop-project-dist . hadoop-resource-bundle
{code}

If hadoop-project and hadoop-project-dist require hadoop-resource-bundle, then 
that needs to be dealt with before this gets committed.

EDIT:

Yup, maven is having problems with that module:

https://builds.apache.org/job/PreCommit-HADOOP-Build/9630/artifact/patchprocess/maven-patch-validate-root.txt

As a result, Yetus can't figure out the order properly so tacks it onto the 
end, thus breaking the build when trying to do the dependencies for individual 
modules.

FWIW, Yetus' maven dependency bits expect *something* to generate a line that 
ends in '@ module-name' so that it can properly order the modules.
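
For illustration, standard Maven plugin-execution output already produces lines 
of that shape, e.g. (format only, not taken from this build):

{noformat}
[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ hadoop-common ---
{noformat}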


was (Author: aw):
I'm guessing the dependencies are in the wrong order, which is why yetus 
rejected it:

{code}
C: hadoop-project hadoop-project-dist . hadoop-resource-bundle
{code}

If hadoop-project and hadoop-project-dist require hadoop-resource-bundle, then 
that needs to be dealt with before this gets committed.

EDIT:

Yup, maven is having problems with that module:

https://builds.apache.org/job/PreCommit-HADOOP-Build/9630/artifact/patchprocess/maven-patch-validate-root.txt

As a result, Yetus can't figure out the order properly so tacks it onto the 
end, thus breaking the build when trying to do the dependencies for individual 
modules.


> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, 
> HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15311122#comment-15311122
 ] 

Allen Wittenauer edited comment on HADOOP-12893 at 6/1/16 9:32 PM:
---

I'm guessing the dependencies are in the wrong order, which is why yetus 
rejected it:

{code}
C: hadoop-project hadoop-project-dist . hadoop-resource-bundle
{code}

If hadoop-project and hadoop-project-dist require hadoop-resource-bundle, then 
that needs to be dealt with before this gets committed.

EDIT:

Yup, maven is having problems with that module:

https://builds.apache.org/job/PreCommit-HADOOP-Build/9630/artifact/patchprocess/maven-patch-validate-root.txt

As a result, Yetus can't figure out the order properly so tacks it onto the 
end, thus breaking the build when trying to do the dependencies for individual 
modules.



was (Author: aw):
I'm guessing the dependencies are in the wrong order, which is why yetus 
rejected it:

{code}
C: hadoop-project hadoop-project-dist . hadoop-resource-bundle
{code}

If hadoop-project and hadoop-project-dist require hadoop-resource-bundle, then 
that needs to be dealt with before this gets committed.

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, 
> HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13230) s3a's use of fake empty directory blobs does not interoperate with other s3 tools

2016-06-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15311154#comment-15311154
 ] 

Steve Loughran commented on HADOOP-13230:
-

# Have you tested this against branch-2 yet? HADOOP-11694 covers changes there.
# This should be testable: you'll need to bypass the s3a code after a mkdir and 
PUT up a file; listing the dir probably won't find the path (see the sketch 
below).
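
A rough sketch of that repro (illustrative only: assumes {{fs}} is an s3a 
filesystem bound to {{bucket}}, and uses the raw AWS SDK client directly):

{code}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

// Sketch only: create the dir through s3a (which writes the fake empty-dir
// marker), then PUT a child object behind s3a's back with the raw SDK.
fs.mkdirs(new Path("/test/interop"));
AmazonS3 s3 = new AmazonS3Client();   // assumes ambient AWS credentials
s3.putObject(bucket, "test/interop/child.txt", "data");

// Under the reported bug, the listing may still show the directory as empty
// because the fake empty-dir marker is still present.
FileStatus[] listing = fs.listStatus(new Path("/test/interop"));
{code}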

I don't see us changing the policy of creating those empty paths; it's how we 
emulate empty dirs, and there is a core assumption in the Hadoop FS APIs: after 
a call to fs.mkdirs(path), exists(path) holds.


But:

* HADOOP-13208 proposes making listFiles(recursive) do a bulk list call; that 
would bypass the directory walk. We'll take a patch there, with tests.
* We sort of do that in rename already. Does playing with that make any 
difference? Maybe rename() is copying the empty/ dir entries too, even though 
there are children, thus propagating the problem. Again, we'll take a patch 
there.

Finally, there is always the possibility of bypassing that HEAD for the empty 
dir and going straight to a listing. 
# That listing will need to recognise the diff between an empty dir entry and 
the children.
# You have to consider that the cost of a LIST operation is >> that of a HEAD, 
due to the need to parse the XML response. That means it may get bounced for 
cost reasons. On the other hand, if you can show that overall, on a populated 
directory path, things come out lower (and we are now counting individual 
operations for you to add those measurements to your tests), then they will get 
in.

Note that any patches against S3 will need to be tested by you before anyone 
will look at them:

https://wiki.apache.org/hadoop/HowToContribute#Submitting_patches_against_object_stores_such_as_Amazon_S3.2C_OpenStack_Swift_and_Microsoft_Azure

That's a policy which we ourselves have to abide by.




> s3a's use of fake empty directory blobs does not interoperate with other s3 
> tools
> -
>
> Key: HADOOP-13230
> URL: https://issues.apache.org/jira/browse/HADOOP-13230
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Aaron Fabbri
>
> Users of s3a may not realize that, in some cases, it does not interoperate 
> well with other s3 tools, such as the AWS CLI.  (See HIVE-13778, IMPALA-3558).
> Specifically, if a user:
> - Creates an empty directory with hadoop fs -mkdir s3a://bucket/path
> - Copies data into that directory via another tool, e.g. the AWS CLI.
> - Tries to access the data in that directory with any Hadoop software.
> Then the last step fails because the fake empty directory blob that s3a wrote 
> in the first step causes s3a (listStatus() etc.) to continue to treat that 
> directory as empty, even though the second step was supposed to populate the 
> directory with data.
> I wanted to document this fact for users. We may mark this as won't-fix, "by 
> design". It may also be interesting to brainstorm solutions and/or a config 
> option to change the behavior if folks care.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13131) Add tests to verify that S3A supports SSE-S3 encryption

2016-06-01 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13131:
---
Hadoop Flags: Reviewed

+1 for patch 008 on both trunk and branch-2.  I verified that all tests passed, 
running against us-west-2.  I'm planning to clean up a few unused imports 
before I commit this.

> Add tests to verify that S3A supports SSE-S3 encryption
> ---
>
> Key: HADOOP-13131
> URL: https://issues.apache.org/jira/browse/HADOOP-13131
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13131-001.patch, HADOOP-13131-002.patch, 
> HADOOP-13131-003.patch, HADOOP-13131-004.patch, HADOOP-13131-005.patch, 
> HADOOP-13131-branch-2-006.patch, HADOOP-13131-branch-2-007.patch, 
> HADOOP-13131-branch-2-008.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Although S3A claims to support server-side S3 encryption (and does, if you 
> set the option), we don't have any test to verify this. Of course, as the 
> encryption is transparent, it's hard to test.
> Here's what I propose
> # a test which sets encryption = AES256; expects things to work as normal.
> # a test which sets encryption = DES and expects any operation creating a file 
> or directory to fail with a 400 "bad request" error



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15311122#comment-15311122
 ] 

Allen Wittenauer commented on HADOOP-12893:
---

I'm guessing the dependencies are in the wrong order, which is why yetus 
rejected it:

{code}
C: hadoop-project hadoop-project-dist . hadoop-resource-bundle
{code}

If hadoop-project and hadoop-project-dist require hadoop-resource-bundle, then 
that needs to be dealt with before this gets committed.

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, 
> HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-01 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1535#comment-1535
 ] 

Andrew Wang commented on HADOOP-12893:
--

I ran Xiao's latest patch and then my check script, and it seems to have 
worked. Thanks for fixing my rev!

If no one objects, I'd like to commit this to trunk tomorrow. More review 
though would always be appreciated.

branch-2 and backwards will need some customization, since the deps are 
different.

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, 
> HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13228) Add delegation token to the connection in DelegationTokenAuthenticator

2016-06-01 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15311095#comment-15311095
 ] 

Xiao Chen commented on HADOOP-13228:


Thank you [~andrew.wang]! Now heading back to HADOOP-13155. :)

> Add delegation token to the connection in DelegationTokenAuthenticator
> --
>
> Key: HADOOP-13228
> URL: https://issues.apache.org/jira/browse/HADOOP-13228
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.8.0
>
> Attachments: HADOOP-13228.01.patch, HADOOP-13228.02.patch, 
> HADOOP-13228.03.patch
>
>
> Following [a comment from another 
> jira|https://issues.apache.org/jira/browse/HADOOP-13155?focusedCommentId=15308715=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15308715],
>  create this to specifically handle the delegation token renewal/cancellation 
> bug in {{DelegationTokenAuthenticatedURL}} and 
> {{DelegationTokenAuthenticator}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations

2016-06-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13171:

Attachment: HADOOP-13171-014.patch

This is patch 014: the -013 patch as merged with trunk.

Tests working against S3 Ireland; a couple of FNFE failures on the 
{{TestS3ABlockingThreadPool}} multipart tests with parallel test runs enabled; 
running non-parallel and all is well. Maybe that's one to isolate.

> Add StorageStatistics to S3A; instrument some more operations
> -
>
> Key: HADOOP-13171
> URL: https://issues.apache.org/jira/browse/HADOOP-13171
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13171-014.patch, HADOOP-13171-branch-2-001.patch, 
> HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, 
> HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch, 
> HADOOP-13171-branch-2-006.patch, HADOOP-13171-branch-2-007.patch, 
> HADOOP-13171-branch-2-008.patch, HADOOP-13171-branch-2-009.patch, 
> HADOOP-13171-branch-2-010.patch, HADOOP-13171-branch-2-011.patch, 
> HADOOP-13171-branch-2-012.patch, HADOOP-13171-branch-2-013.patch
>
>
> Add {{StorageStatistics}} support to S3A, collecting the same metrics as the 
> instrumentation, but sharing across all instances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations

2016-06-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13171:

Status: Open  (was: Patch Available)

> Add StorageStatistics to S3A; instrument some more operations
> -
>
> Key: HADOOP-13171
> URL: https://issues.apache.org/jira/browse/HADOOP-13171
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13171-branch-2-001.patch, 
> HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, 
> HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch, 
> HADOOP-13171-branch-2-006.patch, HADOOP-13171-branch-2-007.patch, 
> HADOOP-13171-branch-2-008.patch, HADOOP-13171-branch-2-009.patch, 
> HADOOP-13171-branch-2-010.patch, HADOOP-13171-branch-2-011.patch, 
> HADOOP-13171-branch-2-012.patch, HADOOP-13171-branch-2-013.patch
>
>
> Add {{StorageStatistics}} support to S3A, collecting the same metrics as the 
> instrumentation, but sharing across all instances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13228) Add delegation token to the connection in DelegationTokenAuthenticator

2016-06-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15311059#comment-15311059
 ] 

Hudson commented on HADOOP-13228:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9894 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9894/])
HADOOP-13228. Add delegation token to the connection in (wang: rev 
35356de1ba1cad0fa469ff546263290109c61b77)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestWebDelegationToken.java


> Add delegation token to the connection in DelegationTokenAuthenticator
> --
>
> Key: HADOOP-13228
> URL: https://issues.apache.org/jira/browse/HADOOP-13228
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.8.0
>
> Attachments: HADOOP-13228.01.patch, HADOOP-13228.02.patch, 
> HADOOP-13228.03.patch
>
>
> Following [a comment from another 
> jira|https://issues.apache.org/jira/browse/HADOOP-13155?focusedCommentId=15308715=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15308715],
>  create this to specifically handle the delegation token renewal/cancellation 
> bug in {{DelegationTokenAuthenticatedURL}} and 
> {{DelegationTokenAuthenticator}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13228) Add delegation token to the connection in DelegationTokenAuthenticator

2016-06-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13228:
-
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed back through 2.8. Thanks for the contribution, Xiao!

> Add delegation token to the connection in DelegationTokenAuthenticator
> --
>
> Key: HADOOP-13228
> URL: https://issues.apache.org/jira/browse/HADOOP-13228
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.8.0
>
> Attachments: HADOOP-13228.01.patch, HADOOP-13228.02.patch, 
> HADOOP-13228.03.patch
>
>
> Following [a comment from another 
> jira|https://issues.apache.org/jira/browse/HADOOP-13155?focusedCommentId=15308715=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15308715],
>  create this to specifically handle the delegation token renewal/cancellation 
> bug in {{DelegationTokenAuthenticatedURL}} and 
> {{DelegationTokenAuthenticator}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12537) s3a: Add flag for session ID to allow Amazon STS temporary credentials

2016-06-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15311003#comment-15311003
 ] 

Hadoop QA commented on HADOOP-12537:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
33s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 3s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 33s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
34s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 45s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} branch-2 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
3s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 41s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 21s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 21s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 20s 
{color} | {color:red} root: The patch generated 2 new + 9 unchanged - 0 fixed = 
11 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 29s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s 
{color} | {color:green} hadoop-project in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 45s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit 

[jira] [Created] (HADOOP-13230) s3a's use of fake empty directory blobs does not interoperate with other s3 tools

2016-06-01 Thread Aaron Fabbri (JIRA)
Aaron Fabbri created HADOOP-13230:
-

 Summary: s3a's use of fake empty directory blobs does not 
interoperate with other s3 tools
 Key: HADOOP-13230
 URL: https://issues.apache.org/jira/browse/HADOOP-13230
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.9.0
Reporter: Aaron Fabbri


Users of s3a may not realize that, in some cases, it does not interoperate well 
with other s3 tools, such as the AWS CLI.  (See HIVE-13778, IMPALA-3558).

Specifically, if a user:

- Creates an empty directory with hadoop fs -mkdir s3a://bucket/path
- Copies data into that directory via another tool, e.g. the AWS CLI.
- Tries to access the data in that directory with any Hadoop software.

Then the last step fails because the fake empty directory blob that s3a wrote 
in the first step causes s3a (listStatus() etc.) to continue to treat that 
directory as empty, even though the second step was supposed to populate the 
directory with data.

I wanted to document this fact for users. We may mark this as won't-fix, "by 
design". It may also be interesting to brainstorm solutions and/or a config 
option to change the behavior if folks care.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13223) winutils.exe is a bug nexus and should be killed with an axe.

2016-06-01 Thread john lilley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310888#comment-15310888
 ] 

john lilley edited comment on HADOOP-13223 at 6/1/16 6:56 PM:
--

[~chliu], I completely understand how this came about.  Engineering decisions 
that are appropriate for the present often do not withstand the test of time.  
I hope we agree that the shell-command-callout and winutils.exe in particular 
is a less-than-ideal solution that should be replaced, although there may be 
obstacles to doing so.  I don't know what shell callouts are actually performed 
throughout Hadoop (that is part of the problem, the inability to analyze 
external code dependencies), but if a solid API replacement were made and the 
use of shell callouts deprecated, this would at least establish a path to 
eventual removal of winutils.


was (Author: john.lil...@redpoint.net):
[~chliu], I completely understand how this came about.  Good engineering 
decisions often do not withstand the test of time.  I hope we agree that the 
shell-command-callout and winutils.exe in particular is a less-than-ideal 
solution that should be replaced, although there may be obstacles to doing so.  
I don't know what shell callouts are actually performed throughout Hadoop (that 
is part of the problem, the inability to analyze external code dependencies), 
but if a solid API replacement were made and the use of shell callouts 
deprecated, this would at least establish a path to eventual removal of 
winutils.

> winutils.exe is a bug nexus and should be killed with an axe.
> -
>
> Key: HADOOP-13223
> URL: https://issues.apache.org/jira/browse/HADOOP-13223
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: bin
>Affects Versions: 2.6.0
> Environment: Microsoft Windows, all versions
>Reporter: john lilley
>
> winutils.exe was apparently created as a stopgap measure to allow Hadoop to 
> "work" on Windows platforms, because the NativeIO libraries aren't 
> implemented there (edit: even NativeIO probably doesn't cover the operations 
> that winutils.exe is used for).  Rather than building a DLL that makes native 
> OS calls, the creators of winutils.exe must have decided that it would be 
> more expedient to create an EXE to carry out file system operations in a 
> linux-like fashion.  Unfortunately, like many stopgap measures in software, 
> this one has persisted well beyond its expected lifetime and usefulness.  My 
> team creates software that runs on Windows and Linux, and winutils.exe is 
> probably responsible for 20% of all issues we encounter, both during 
> development and in the field.
> Problem #1 with winutils.exe is that it is simply missing from many popular 
> distros and/or the client-side software installation for said distros, when 
> supplied, fails to install winutils.exe.  Thus, as software developers, we 
> are forced to pick one version and distribute and install it with our 
> software.
> Which leads to problem #2: winutils.exe builds are not always compatible.  In 
> particular, MapR MUST have its winutils.exe in the system path, but doing so 
> breaks the Hadoop distro for every other Hadoop vendor.  This makes creating 
> and maintaining test environments that work with all of the Hadoop distros we 
> want to test unnecessarily tedious and error-prone.
> Problem #3 is that the mechanism by which you inform the Hadoop client 
> software where to find winutils.exe is poorly documented and fragile.  First, 
> it can be in the PATH.  If it is in the PATH, that is where it is found.  
> However, the documentation, such as it is, makes no mention of this, and 
> instead says that you should set the HADOOP_HOME environment variable, which 
> does NOT override the winutils.exe found in your system PATH.
> Which leads to problem #4: There is no logging that says where winutils.exe 
> was actually found and loaded.  Because of this, fixing problems of finding 
> the wrong winutils.exe are extremely difficult.
> Problem #5 is that most of the time, such as when accessing straight up HDFS 
> and YARN, one does not *need* winutils.exe.  But if it is missing, the log 
> messages complain about its absence.  When we are trying to diagnose an 
> obscure issue in Hadoop (of which there are many), the presence of this red 
> herring leads to all sorts of time wasted until someone on the team points 
> out that winutils.exe is not the problem, at least not this time.
> Problem #6 is that errors and stack traces from issues involving winutils.exe 
> are not helpful.  The Java stack trace ends at the ProcessBuilder call.  Only 
> through bitter experience is one able to connect the dots from 
> "ProcessBuilder is the last thing on the stack" to "something is wrong with 
> 

[jira] [Commented] (HADOOP-13223) winutils.exe is a bug nexus and should be killed with an axe.

2016-06-01 Thread john lilley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310888#comment-15310888
 ] 

john lilley commented on HADOOP-13223:
--

[~chliu], I completely understand how this came about.  Good engineering 
decisions often do not withstand the test of time.  I hope we agree that the 
shell-command-callout and winutils.exe in particular is a less-than-ideal 
solution that should be replaced, although there may be obstacles to doing so.  
I don't know what shell callouts are actually performed throughout Hadoop (that 
is part of the problem, the inability to analyze external code dependencies), 
but if a solid API replacement were made and the use of shell callouts 
deprecated, this would at least establish a path to eventual removal of 
winutils.

> winutils.exe is a bug nexus and should be killed with an axe.
> -
>
> Key: HADOOP-13223
> URL: https://issues.apache.org/jira/browse/HADOOP-13223
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: bin
>Affects Versions: 2.6.0
> Environment: Microsoft Windows, all versions
>Reporter: john lilley
>
> winutils.exe was apparently created as a stopgap measure to allow Hadoop to 
> "work" on Windows platforms, because the NativeIO libraries aren't 
> implemented there (edit: even NativeIO probably doesn't cover the operations 
> that winutils.exe is used for).  Rather than building a DLL that makes native 
> OS calls, the creators of winutils.exe must have decided that it would be 
> more expedient to create an EXE to carry out file system operations in a 
> linux-like fashion.  Unfortunately, like many stopgap measures in software, 
> this one has persisted well beyond its expected lifetime and usefulness.  My 
> team creates software that runs on Windows and Linux, and winutils.exe is 
> probably responsible for 20% of all issues we encounter, both during 
> development and in the field.
> Problem #1 with winutils.exe is that it is simply missing from many popular 
> distros and/or the client-side software installation for said distros, when 
> supplied, fails to install winutils.exe.  Thus, as software developers, we 
> are forced to pick one version and distribute and install it with our 
> software.
> Which leads to problem #2: winutils.exe builds are not always compatible.  In 
> particular, MapR MUST have its winutils.exe in the system path, but doing so 
> breaks the Hadoop distro for every other Hadoop vendor.  This makes creating 
> and maintaining test environments that work with all of the Hadoop distros we 
> want to test unnecessarily tedious and error-prone.
> Problem #3 is that the mechanism by which you inform the Hadoop client 
> software where to find winutils.exe is poorly documented and fragile.  First, 
> it can be in the PATH.  If it is in the PATH, that is where it is found.  
> However, the documentation, such as it is, makes no mention of this, and 
> instead says that you should set the HADOOP_HOME environment variable, which 
> does NOT override the winutils.exe found in your system PATH.
> Which leads to problem #4: There is no logging that says where winutils.exe 
> was actually found and loaded.  Because of this, fixing problems of finding 
> the wrong winutils.exe are extremely difficult.
> Problem #5 is that most of the time, such as when accessing straight up HDFS 
> and YARN, one does not *need* winutils.exe.  But if it is missing, the log 
> messages complain about its absence.  When we are trying to diagnose an 
> obscure issue in Hadoop (of which there are many), the presence of this red 
> herring leads to all sorts of time wasted until someone on the team points 
> out that winutils.exe is not the problem, at least not this time.
> Problem #6 is that errors and stack traces from issues involving winutils.exe 
> are not helpful.  The Java stack trace ends at the ProcessBuilder call.  Only 
> through bitter experience is one able to connect the dots from 
> "ProcessBuilder is the last thing on the stack" to "something is wrong with 
> winutils.exe".
> Note that none of these involve running Hadoop on Windows.  They are only 
> encountered when using Hadoop client libraries to access a cluster from 
> Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11127) Improve versioning and compatibility support in native library for downstream hadoop-common users.

2016-06-01 Thread john lilley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310870#comment-15310870
 ] 

john lilley commented on HADOOP-11127:
--

My two cents: if there is some way to migrate shell-command/winutils calls into 
a library, I think this would reduce the number of opportunities for 
configuration and environment errors (see 
https://issues.apache.org/jira/browse/HADOOP-13223, which is my laundry list of 
gripes about how winutils causes various problems).  I prefer approach #2 of 
the three listed at the beginning.  While it has some challenges, mostly 
related to extracting a singleton of the .so/.dll in a multi-threaded, 
multi-process environment, it has the definite advantage of being able to find 
and load exactly the library version we want.
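
A bare-bones sketch of that extract-and-load approach (illustrative only; the 
class and resource names are made up, and real code would need per-process 
unique temp names, cleanup, and concurrency handling):

{code}
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch only: extract the bundled native library from the jar to a temp
// file and load it from there, so the jar and its native code stay in sync.
final class NativeLoader {
  static void load() throws Exception {
    try (InputStream in =
             NativeLoader.class.getResourceAsStream("/native/hadoop.dll")) {
      Path tmp = Files.createTempFile("hadoop-native-", ".dll");
      Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
      System.load(tmp.toAbsolutePath().toString());
    }
  }
}
{code}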

> Improve versioning and compatibility support in native library for downstream 
> hadoop-common users.
> --
>
> Key: HADOOP-11127
> URL: https://issues.apache.org/jira/browse/HADOOP-11127
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Reporter: Chris Nauroth
>Assignee: Alan Burlison
> Attachments: HADOOP-11064.003.patch, proposal.01.txt
>
>
> There is no compatibility policy enforced on the JNI function signatures 
> implemented in the native library.  This library typically is deployed to all 
> nodes in a cluster, built from a specific source code version.  However, 
> downstream applications that want to run in that cluster might choose to 
> bundle a hadoop-common jar at a different version.  Since there is no 
> compatibility policy, this can cause link errors at runtime when the native 
> function signatures expected by hadoop-common.jar do not exist in 
> libhadoop.so/hadoop.dll.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12537) s3a: Add flag for session ID to allow Amazon STS temporary credentials

2016-06-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12537:

Status: Patch Available  (was: Open)

Submitting against branch-2.

> s3a: Add flag for session ID to allow Amazon STS temporary credentials
> --
>
> Key: HADOOP-12537
> URL: https://issues.apache.org/jira/browse/HADOOP-12537
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Minor
> Attachments: HADOOP-12537-branch-2.005.patch, HADOOP-12537.001.patch, 
> HADOOP-12537.002.patch, HADOOP-12537.003.patch, HADOOP-12537.004.patch, 
> HADOOP-12537.diff, HADOOP-12537.diff
>
>
> Amazon STS allows you to issue temporary access key id / secret key pairs for 
> your a user / role. However, using these credentials also requires specifying 
> a session ID. There is currently no such configuration property or the 
> required code to pass it through to the API (at least not that I can find) in 
> any of the S3 connectors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12537) s3a: Add flag for session ID to allow Amazon STS temporary credentials

2016-06-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12537:

Attachment: HADOOP-12537-branch-2.005.patch

This is patch 005: patch 004 merged in and reworked somewhat.

Production:
# throwing an explicit {{CredentialInitializationException}} (backported to the 
other providers). This declares itself as non-retryable and, being an explicit 
type, can be looked for by name.
# Changed the exception text message.

Tests:
# moved the STS option flag out of the contract tests, instead adding one to 
the S3A tests only.
# added a test for missing session token.
# extended the existing test with an attempt to create an FS with an invalid 
token.

Docs:
# more detail, an example, and some test docs.

The new test, taking the newly issued session token and trying to init with a 
now-invalid triple of (key-id, key-secret, sessionId), is something I'd have 
expected to fail. It isn't. This is something that needs to be fixed.

Hypotheses

* I've misunderstood something
* credential setup isn't working as expected; perhaps the permanent keys are 
being picked up, not these new ones.
* AWS is doing something underneath.

[~mackrorysd]: could you look at that? Once that test is passing I think we're 
pretty much good to go here
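
For anyone following along, a sketch of how a client would wire up the STS 
temporary credentials under this patch series (the property and class names 
follow my reading of the patch and may differ from what finally lands):

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Sketch only: stsAccessKey/stsSecretKey/sessionToken are placeholders for
// the values returned by the STS call; names below are not guaranteed to
// match the committed version.
Configuration conf = new Configuration();
conf.set("fs.s3a.aws.credentials.provider",
    "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider");
conf.set("fs.s3a.access.key", stsAccessKey);     // from the STS call
conf.set("fs.s3a.secret.key", stsSecretKey);     // from the STS call
conf.set("fs.s3a.session.token", sessionToken);  // from the STS call
FileSystem fs = FileSystem.get(URI.create("s3a://bucket/"), conf);
{code}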


> s3a: Add flag for session ID to allow Amazon STS temporary credentials
> --
>
> Key: HADOOP-12537
> URL: https://issues.apache.org/jira/browse/HADOOP-12537
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Minor
> Attachments: HADOOP-12537-branch-2.005.patch, HADOOP-12537.001.patch, 
> HADOOP-12537.002.patch, HADOOP-12537.003.patch, HADOOP-12537.004.patch, 
> HADOOP-12537.diff, HADOOP-12537.diff
>
>
> Amazon STS allows you to issue temporary access key id / secret key pairs for 
> your a user / role. However, using these credentials also requires specifying 
> a session ID. There is currently no such configuration property or the 
> required code to pass it through to the API (at least not that I can find) in 
> any of the S3 connectors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12537) s3a: Add flag for session ID to allow Amazon STS temporary credentials

2016-06-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12537:

Assignee: Sean Mackrory
  Status: Open  (was: Patch Available)

> s3a: Add flag for session ID to allow Amazon STS temporary credentials
> --
>
> Key: HADOOP-12537
> URL: https://issues.apache.org/jira/browse/HADOOP-12537
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Minor
> Attachments: HADOOP-12537.001.patch, HADOOP-12537.002.patch, 
> HADOOP-12537.003.patch, HADOOP-12537.004.patch, HADOOP-12537.diff, 
> HADOOP-12537.diff
>
>
> Amazon STS allows you to issue temporary access key id / secret key pairs for 
> your a user / role. However, using these credentials also requires specifying 
> a session ID. There is currently no such configuration property or the 
> required code to pass it through to the API (at least not that I can find) in 
> any of the S3 connectors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13223) winutils.exe is a bug nexus and should be killed with an axe.

2016-06-01 Thread Chuan Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310835#comment-15310835
 ] 

Chuan Liu commented on HADOOP-13223:


I can add some context here. Back then:
# Native library inclusion was *optional* in Hadoop on both Windows and Linux.
# Accessing a Linux cluster from a Windows client was not supported.
# There was no ASF build of Hadoop on Windows (because Apache had no Windows CI 
machine).

Because of 1), all native and Java API gaps were addressed by calling out to 
external commands. The original Hadoop-on-Windows promise was that the Windows 
implementation should never break the Linux side. We agreed to create 
"winutils" to address missing command line utilities on Windows when porting 
Hadoop to Windows. Later, when fixing some other file IO issues, we made the 
native library mandatory on Windows, but the existing "winutils.exe" was not 
replaced with JNI calls due to the amount of engineering work involved.

I read through your complaints. I think problems 1 & 2 are a distro issue; an 
official Apache build that includes Windows binaries could help with that. 
[~ste...@apache.org] also provides a partial solution. Problems 3 - 6 can be 
summarized as poor error messages when calling external commands on Windows. I 
agree with that, and there are various places where such error messages should 
be improved. However, I do not think removing "winutils.exe" is a fix to all 
your problems. It will likely just replace the ".exe" problem with a ".dll" 
problem.

That said, personally, I am also in favor of getting rid of "winutils.exe" and 
replacing the necessary calls with the JNI implementation. The "winutils" code 
is designed to have all the main implementations in "libwinutils", so both the 
command line implementation "winutils.exe" and the JNI implementation 
"hadoop.dll" are surface-level wrappers that statically link to the same 
underlying library. On this front, it should not be too difficult to move all 
the "winutils.exe" implementations into JNI calls.

> winutils.exe is a bug nexus and should be killed with an axe.
> -
>
> Key: HADOOP-13223
> URL: https://issues.apache.org/jira/browse/HADOOP-13223
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: bin
>Affects Versions: 2.6.0
> Environment: Microsoft Windows, all versions
>Reporter: john lilley
>
> winutils.exe was apparently created as a stopgap measure to allow Hadoop to 
> "work" on Windows platforms, because the NativeIO libraries aren't 
> implemented there (edit: even NativeIO probably doesn't cover the operations 
> that winutils.exe is used for).  Rather than building a DLL that makes native 
> OS calls, the creators of winutils.exe must have decided that it would be 
> more expedient to create an EXE to carry out file system operations in a 
> linux-like fashion.  Unfortunately, like many stopgap measures in software, 
> this one has persisted well beyond its expected lifetime and usefulness.  My 
> team creates software that runs on Windows and Linux, and winutils.exe is 
> probably responsible for 20% of all issues we encounter, both during 
> development and in the field.
> Problem #1 with winutils.exe is that it is simply missing from many popular 
> distros and/or the client-side software installation for said distros, when 
> supplied, fails to install winutils.exe.  Thus, as software developers, we 
> are forced to pick one version and distribute and install it with our 
> software.
> Which leads to problem #2: winutils.exe builds are not always compatible.  In 
> particular, MapR MUST have its winutils.exe in the system path, but doing so 
> breaks the Hadoop distro for every other Hadoop vendor.  This makes creating 
> and maintaining test environments that work with all of the Hadoop distros we 
> want to test unnecessarily tedious and error-prone.
> Problem #3 is that the mechanism by which you inform the Hadoop client 
> software where to find winutils.exe is poorly documented and fragile.  First, 
> it can be in the PATH.  If it is in the PATH, that is where it is found.  
> However, the documentation, such as it is, makes no mention of this, and 
> instead says that you should set the HADOOP_HOME environment variable, which 
> does NOT override the winutils.exe found in your system PATH.
> Which leads to problem #4: There is no logging that says where winutils.exe 
> was actually found and loaded.  Because of this, fixing problems of finding 
> the wrong winutils.exe are extremely difficult.
> Problem #5 is that most of the time, such as when accessing straight up HDFS 
> and YARN, one does not *need* winutils.exe.  But if it is missing, the log 
> messages complain about its absence.  When we are trying to diagnose an 
> obscure issue in Hadoop (of which there are many), the presence of this red 

[jira] [Commented] (HADOOP-13223) winutils.exe is a bug nexus and should be killed with an axe.

2016-06-01 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310735#comment-15310735
 ] 

Chris Nauroth commented on HADOOP-13223:


bq. I think that moving functions to a DLL could improve matters if the DLL was 
embedded in the jar itself as a resource.

There is a JIRA tracking this: HADOOP-11127.  There is a lot of discussion 
about the trade-offs there, though there is not yet consensus on how to proceed.

> winutils.exe is a bug nexus and should be killed with an axe.
> -
>
> Key: HADOOP-13223
> URL: https://issues.apache.org/jira/browse/HADOOP-13223
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: bin
>Affects Versions: 2.6.0
> Environment: Microsoft Windows, all versions
>Reporter: john lilley
>
> winutils.exe was apparently created as a stopgap measure to allow Hadoop to 
> "work" on Windows platforms, because the NativeIO libraries aren't 
> implemented there (edit: even NativeIO probably doesn't cover the operations 
> that winutils.exe is used for).  Rather than building a DLL that makes native 
> OS calls, the creators of winutils.exe must have decided that it would be 
> more expedient to create an EXE to carry out file system operations in a 
> linux-like fashion.  Unfortunately, like many stopgap measures in software, 
> this one has persisted well beyond its expected lifetime and usefulness.  My 
> team creates software that runs on Windows and Linux, and winutils.exe is 
> probably responsible for 20% of all issues we encounter, both during 
> development and in the field.
> Problem #1 with winutils.exe is that it is simply missing from many popular 
> distros and/or the client-side software installation for said distros, when 
> supplied, fails to install winutils.exe.  Thus, as software developers, we 
> are forced to pick one version and distribute and install it with our 
> software.
> Which leads to problem #2: winutils.exe are not always compatible.  In 
> particular, MapR MUST have its winutils.exe in the system path, but doing so 
> breaks the Hadoop distro for every other Hadoop vendor.  This makes creating 
> and maintaining test environments that work with all of the Hadoop distros we 
> want to test unnecessarily tedious and error-prone.
> Problem #3 is that the mechanism by which you inform the Hadoop client 
> software where to find winutils.exe is poorly documented and fragile.  First, 
> it can be in the PATH.  If it is in the PATH, that is where it is found.  
> However, the documentation, such as it is, makes no mention of this, and 
> instead says that you should set the HADOOP_HOME environment variable, which 
> does NOT override the winutils.exe found in your system PATH.
> Which leads to problem #4: There is no logging that says where winutils.exe 
> was actually found and loaded.  Because of this, fixing problems of finding 
> the wrong winutils.exe are extremely difficult.
> Problem #5 is that most of the time, such as when accessing straight up HDFS 
> and YARN, one does not *need* winutils.exe.  But if it is missing, the log 
> messages complain about its absence.  When we are trying to diagnose an 
> obscure issue in Hadoop (of which there are many), the presence of this red 
> herring leads to all sorts of time wasted until someone on the team points 
> out that winutils.exe is not the problem, at least not this time.
> Problem #6 is that errors and stack traces from issues involving winutils.exe 
> are not helpful.  The Java stack trace ends at the ProcessBuilder call.  Only 
> through bitter experience is one able to connect the dots from 
> "ProcessBuilder is the last thing on the stack" to "something is wrong with 
> winutils.exe".
> Note that none of these involve running Hadoop on Windows.  They are only 
> encountered when using Hadoop client libraries to access a cluster from 
> Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-06-01 Thread Illes S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illes S updated HADOOP-13214:
-
Status: Open  (was: Patch Available)

> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Illes S
>  Labels: easyfix, newbie, patch
> Attachments: HADOOP-13214.2.patch, HADOOP-13214.patch
>
>
> Calling {{sync(0); next(...);}} on a block-compressed {{SequenceFile.Reader}} 
> that has already been used may not yield the key-values at the beginning, but 
> those following the previous position. The issue is caused by {{sync(0)}} not 
> releasing previously buffered keys and values. The issue was introduced by 
> HADOOP-6196.
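
For reference, the failing call pattern looks roughly like this (a minimal 
reproduction sketch; it assumes an existing block-compressed SequenceFile with 
{{Text}} keys and values at the path given in args[0]):

{code}
// A minimal reproduction sketch, assuming an existing block-compressed
// SequenceFile with Text keys and values at the path given in args[0].
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class Sync0Repro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (SequenceFile.Reader reader = new SequenceFile.Reader(
        conf, SequenceFile.Reader.file(new Path(args[0])))) {
      Text key = new Text();
      Text value = new Text();
      while (reader.next(key, value)) {
        // read to the end so the reader buffers keys/values from a later block
      }
      reader.sync(0);          // rewind to the start of the file
      reader.next(key, value); // expected: the first record; observed: a later one
      System.out.println(key);
    }
  }
}
{code}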



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-06-01 Thread Illes S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illes S updated HADOOP-13214:
-
Status: Patch Available  (was: Open)

> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Illes S
>  Labels: easyfix, newbie, patch
> Attachments: HADOOP-13214.2.patch, HADOOP-13214.patch
>
>
> Calling {{sync(0); next(...);}} on a block-compressed {{SequenceFile.Reader}} 
> that has already been used may not yield the key-values at the beginning, but 
> those following the previous position. The issue is caused by {{sync(0)}} not 
> releasing previously buffered keys and values. The issue was introduced by 
> HADOOP-6196.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13228) Add delegation token to the connection in DelegationTokenAuthenticator

2016-06-01 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310565#comment-15310565
 ] 

Xiao Chen commented on HADOOP-13228:


Failed tests seem unrelated and passed locally.

> Add delegation token to the connection in DelegationTokenAuthenticator
> --
>
> Key: HADOOP-13228
> URL: https://issues.apache.org/jira/browse/HADOOP-13228
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13228.01.patch, HADOOP-13228.02.patch, 
> HADOOP-13228.03.patch
>
>
> Following [a comment from another 
> jira|https://issues.apache.org/jira/browse/HADOOP-13155?focusedCommentId=15308715=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15308715],
>  create this to specifically handle the delegation token renewal/cancellation 
> bug in {{DelegationTokenAuthenticatedURL}} and 
> {{DelegationTokenAuthenticator}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13223) winutils.exe is a bug nexus and should be killed with an axe.

2016-06-01 Thread john lilley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310519#comment-15310519
 ] 

john lilley commented on HADOOP-13223:
--

The other bug that showed up was reported by my colleague Nathan:

In org.apache.hadoop.util, there's a function, getWinUtilsPath, that looks like 
this:
  public static final String getWinUtilsPath() {
    String winUtilsPath = null;
    try {
      if (WINDOWS) {
        winUtilsPath = getQualifiedBinPath("winutils.exe");
      }
    } catch (IOException ioe) {
      LOG.error("Failed to locate the winutils binary in the hadoop binary path",
          ioe);
    }
    return winUtilsPath;
  }

Unfortunately, if HADOOP_HOME is set but bogus, this returns null, and the 
caller then fails with an ambiguous NullPointerException.
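
A defensive caller-side sketch of what a fix could look like (hypothetical; it 
assumes the method above lives on {{org.apache.hadoop.util.Shell}}):

{code}
// A hypothetical caller-side guard, not the actual Hadoop code: fail fast
// with a descriptive message instead of a bare NullPointerException later.
import org.apache.hadoop.util.Shell;

public class WinUtilsCheck {
  public static String requireWinUtils() {
    String winUtils = Shell.getWinUtilsPath();
    if (winUtils == null) {
      throw new IllegalStateException(
          "winutils.exe not found; check HADOOP_HOME and the system PATH");
    }
    return winUtils;
  }
}
{code}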

> winutils.exe is a bug nexus and should be killed with an axe.
> -
>
> Key: HADOOP-13223
> URL: https://issues.apache.org/jira/browse/HADOOP-13223
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: bin
>Affects Versions: 2.6.0
> Environment: Microsoft Windows, all versions
>Reporter: john lilley
>
> winutils.exe was apparently created as a stopgap measure to allow Hadoop to 
> "work" on Windows platforms, because the NativeIO libraries aren't 
> implemented there (edit: even NativeIO probably doesn't cover the operations 
> that winutils.exe is used for).  Rather than building a DLL that makes native 
> OS calls, the creators of winutils.exe must have decided that it would be 
> more expedient to create an EXE to carry out file system operations in a 
> linux-like fashion.  Unfortunately, like many stopgap measures in software, 
> this one has persisted well beyond its expected lifetime and usefulness.  My 
> team creates software that runs on Windows and Linux, and winutils.exe is 
> probably responsible for 20% of all issues we encounter, both during 
> development and in the field.
> Problem #1 with winutils.exe is that it is simply missing from many popular 
> distros and/or the client-side software installation for said distros, when 
> supplied, fails to install winutils.exe.  Thus, as software developers, we 
> are forced to pick one version and distribute and install it with our 
> software.
> Which leads to problem #2: winutils.exe are not always compatible.  In 
> particular, MapR MUST have its winutils.exe in the system path, but doing so 
> breaks the Hadoop distro for every other Hadoop vendor.  This makes creating 
> and maintaining test environments that work with all of the Hadoop distros we 
> want to test unnecessarily tedious and error-prone.
> Problem #3 is that the mechanism by which you inform the Hadoop client 
> software where to find winutils.exe is poorly documented and fragile.  First, 
> it can be in the PATH.  If it is in the PATH, that is where it is found.  
> However, the documentation, such as it is, makes no mention of this, and 
> instead says that you should set the HADOOP_HOME environment variable, which 
> does NOT override the winutils.exe found in your system PATH.
> Which leads to problem #4: There is no logging that says where winutils.exe 
> was actually found and loaded.  Because of this, fixing problems of finding 
> the wrong winutils.exe are extremely difficult.
> Problem #5 is that most of the time, such as when accessing straight up HDFS 
> and YARN, one does not *need* winutils.exe.  But if it is missing, the log 
> messages complain about its absence.  When we are trying to diagnose an 
> obscure issue in Hadoop (of which there are many), the presence of this red 
> herring leads to all sorts of time wasted until someone on the team points 
> out that winutils.exe is not the problem, at least not this time.
> Problem #6 is that errors and stack traces from issues involving winutils.exe 
> are not helpful.  The Java stack trace ends at the ProcessBuilder call.  Only 
> through bitter experience is one able to connect the dots from 
> "ProcessBuilder is the last thing on the stack" to "something is wrong with 
> winutils.exe".
> Note that none of these involve running Hadoop on Windows.  They are only 
> encountered when using Hadoop client libraries to access a cluster from 
> Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations

2016-06-01 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310505#comment-15310505
 ] 

Chris Nauroth commented on HADOOP-13171:


+1 for patch 013 on branch-2.

> Add StorageStatistics to S3A; instrument some more operations
> -
>
> Key: HADOOP-13171
> URL: https://issues.apache.org/jira/browse/HADOOP-13171
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13171-branch-2-001.patch, 
> HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, 
> HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch, 
> HADOOP-13171-branch-2-006.patch, HADOOP-13171-branch-2-007.patch, 
> HADOOP-13171-branch-2-008.patch, HADOOP-13171-branch-2-009.patch, 
> HADOOP-13171-branch-2-010.patch, HADOOP-13171-branch-2-011.patch, 
> HADOOP-13171-branch-2-012.patch, HADOOP-13171-branch-2-013.patch
>
>
> Add {{StorageStatistics}} support to S3A, collecting the same metrics as the 
> instrumentation, but sharing across all instances.
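
As a usage sketch (assuming the trunk-side {{StorageStatistics}} / 
{{GlobalStorageStatistics}} API; exact method names may differ), the shared 
statistics could be dumped like this after running some S3A operations:

{code}
// A usage sketch under the assumption that the trunk StorageStatistics API
// is available: iterate the shared per-scheme statistics and print them.
import java.util.Iterator;
import org.apache.hadoop.fs.GlobalStorageStatistics;
import org.apache.hadoop.fs.StorageStatistics;

public class DumpStats {
  public static void main(String[] args) {
    for (StorageStatistics stats : GlobalStorageStatistics.INSTANCE) {
      System.out.println("scheme: " + stats.getName());
      Iterator<StorageStatistics.LongStatistic> it = stats.getLongStatistics();
      while (it.hasNext()) {
        StorageStatistics.LongStatistic s = it.next();
        System.out.println("  " + s.getName() + " = " + s.getValue());
      }
    }
  }
}
{code}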



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13162) Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs

2016-06-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13162:

   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

> Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs
> ---
>
> Key: HADOOP-13162
> URL: https://issues.apache.org/jira/browse/HADOOP-13162
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13162-branch-2-002.patch, 
> HADOOP-13162-branch-2-003.patch, HADOOP-13162-branch-2-004.patch, 
> HADOOP-13162.001.patch
>
>
> getFileStatus is relatively expensive call and mkdirs invokes it multiple 
> times depending on how deep the directory structure is. It would be good to 
> reduce the number of getFileStatus calls in such cases.
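
To illustrate the cost (an assumed simplification, not the actual S3A code): a 
naive mkdirs probes every ancestor with getFileStatus, so creating /a/b/c/d 
issues one remote metadata call per path component, roughly like this:

{code}
// Illustration only (assumed simplification, not the actual S3A code): a
// naive mkdirs probes every ancestor with getFileStatus, one remote
// metadata call per path component.
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MkdirsWalk {
  static void walkAncestors(FileSystem fs, Path dir) throws IOException {
    for (Path p = dir; p != null; p = p.getParent()) {
      try {
        FileStatus st = fs.getFileStatus(p);  // one remote probe per level
        if (st.isDirectory()) {
          return;                             // found an existing ancestor
        }
      } catch (FileNotFoundException e) {
        // this component does not exist yet; keep walking up
      }
    }
  }
}
{code}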



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13209) replace slaves with workers

2016-06-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310438#comment-15310438
 ] 

Allen Wittenauer edited comment on HADOOP-13209 at 6/1/16 3:01 PM:
---

Test failures are probably because this patch is huge. Anyway, we should 
probably at least read the slaves file if it exists and warn that it's 
deprecated, for backwards compatibility.  It's great that this removes a bunch 
of whitespace, but it'd be nice to fix some of the checkstyle issues too while 
we're here.


was (Author: aw):
Test failures are probably because this patch is huge. Anyway, we should 
probably at least read the slaves file if it exists and warn that it's 
deprecated, for backwards compatibility.

> replace slaves with workers
> ---
>
> Key: HADOOP-13209
> URL: https://issues.apache.org/jira/browse/HADOOP-13209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: John Smith
> Attachments: HADOOP-13209.v01.patch
>
>
> slaves.sh and the slaves file should get replace with workers.sh and a 
> workers file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13209) replace slaves with workers

2016-06-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310438#comment-15310438
 ] 

Allen Wittenauer commented on HADOOP-13209:
---

Test failures are probably because this patch is huge. Anyway, we should 
probably at least read the slaves file if it exists and warn that it's 
deprecated, for backwards compatibility.

> replace slaves with workers
> ---
>
> Key: HADOOP-13209
> URL: https://issues.apache.org/jira/browse/HADOOP-13209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: John Smith
> Attachments: HADOOP-13209.v01.patch
>
>
> slaves.sh and the slaves file should get replace with workers.sh and a 
> workers file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13213) Small Documentation bug with AuthenticatedURL in hadoop-auth

2016-06-01 Thread Tom Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310436#comment-15310436
 ] 

Tom Ellis commented on HADOOP-13213:


Is there anything else I need to do for this [~jojochuang]?

> Small Documentation bug with AuthenticatedURL in hadoop-auth
> 
>
> Key: HADOOP-13213
> URL: https://issues.apache.org/jira/browse/HADOOP-13213
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Tom Ellis
>Priority: Trivial
>  Labels: documentation, patch
>
> Small documentation error in hadoop-auth.
> AuthenticatedURL doesn't have a constructor that takes URL and Token, these 
> params are passed to openConnection(url, token).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13162) Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs

2016-06-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310310#comment-15310310
 ] 

Hudson commented on HADOOP-13162:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9893 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9893/])
HADOOP-13162. Consider reducing number of getFileStatus calls in (stevel: rev 
587061103097160d8aceb60dbef6958cafdd30ae)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextCreateMkdirBaseTest.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java


> Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs
> ---
>
> Key: HADOOP-13162
> URL: https://issues.apache.org/jira/browse/HADOOP-13162
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13162-branch-2-002.patch, 
> HADOOP-13162-branch-2-003.patch, HADOOP-13162-branch-2-004.patch, 
> HADOOP-13162.001.patch
>
>
> getFileStatus is relatively expensive call and mkdirs invokes it multiple 
> times depending on how deep the directory structure is. It would be good to 
> reduce the number of getFileStatus calls in such cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations

2016-06-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310302#comment-15310302
 ] 

Hadoop QA commented on HADOOP-13171:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
30s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 9s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
21s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
14s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
1s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 43s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 16s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 24s 
{color} | {color:red} root: The patch generated 3 new + 87 unchanged - 20 fixed 
= 90 total (was 107) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 17s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 33s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} 

[jira] [Updated] (HADOOP-13162) Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs

2016-06-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13162:

Assignee: Rajesh Balamohan

> Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs
> ---
>
> Key: HADOOP-13162
> URL: https://issues.apache.org/jira/browse/HADOOP-13162
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13162-branch-2-002.patch, 
> HADOOP-13162-branch-2-003.patch, HADOOP-13162-branch-2-004.patch, 
> HADOOP-13162.001.patch
>
>
> getFileStatus is relatively expensive call and mkdirs invokes it multiple 
> times depending on how deep the directory structure is. It would be good to 
> reduce the number of getFileStatus calls in such cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13162) Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs

2016-06-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310269#comment-15310269
 ] 

Steve Loughran commented on HADOOP-13162:
-

+1

* Tested against S3A Ireland, parallel threads=2; all clear.
* As it changed a base test case, checked with the other implementation 
outside hadoop-common, {{TestFcHdfsCreateMkdir}}. All well.


> Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs
> ---
>
> Key: HADOOP-13162
> URL: https://issues.apache.org/jira/browse/HADOOP-13162
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13162-branch-2-002.patch, 
> HADOOP-13162-branch-2-003.patch, HADOOP-13162-branch-2-004.patch, 
> HADOOP-13162.001.patch
>
>
> getFileStatus is relatively expensive call and mkdirs invokes it multiple 
> times depending on how deep the directory structure is. It would be good to 
> reduce the number of getFileStatus calls in such cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-06-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310247#comment-15310247
 ] 

Steve Loughran commented on HADOOP-12910:
-

I'm going to highlight that vast quantities of code people rely on are tagged 
as "Unstable".

Having things marked as such is not a green light to cut things on a whim. It's 
always good to check across the downstream projects who is actually using a 
method or interface before breaking things.

I've been thinking we need an extra unstable tag, {{@Experimental}}, which 
would mean "this entire feature could be removed without warning". This async 
API would fit that category.

bq. The down streams are intelligent people. They can decide whether they want 
to use the unstable API. 

It's always insightful to work downstream, especially on downstream code you 
are trying to get to compile and work against multiple versions. HDFS changes 
are things I fear, though generally it's the new packaging changes rather than 
any interface or behaviour; that's more at the YARN level. And we can't say 
"never trust unstable" or "never use private/limited private", as it's 
impossible to get things done otherwise. We end up picking things up without 
noticing their stability guarantees (i.e. we cut and paste from working code 
and test cases), or pull them in without scanning the entire tree of dependent 
classes.

Here are some examples of things downstream apps depend on

* UGI {{@LimitedPrivate, @Evolving}}
* the YARN API records: {{ApplicationAttemptReport, ContainerReport, 
ContainerExitStatus}} @Unstable

These are things we use every day, and we don't make a conscious decision to 
use them in the expectation that they will suddenly vanish; it's more that "we 
hope they don't break".

If the stability of a new API is lower than that, then I think an 
{{@Experimental}} tag would be good. Ship it, learn from the experience, be 
prepared to rewrite. And the tag would make clear that this stuff is really, 
really unstable. Then follow through by removing the {{@Experimental}} tag 
once it's no longer considered an experiment: this stops the tag becoming as 
devalued as the rest are.




(Holding off on any opinion about the API; I'm just highlighting that there 
are issues with our tags, and we cannot treat private/limited private and 
unstable as hardcoded freedom to play, not without discovering how things get 
used. This is why [~stack]'s comments are so welcome.)
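
A sketch of what such a tag could look like, modeled on the existing 
{{org.apache.hadoop.classification}} annotations; this annotation does not 
exist in Hadoop and only illustrates the proposal:

{code}
// A sketch of the proposed tag; this annotation does not exist in Hadoop
// and only illustrates the idea of marking a whole feature as removable.
import java.lang.annotation.Documented;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Documented
@Retention(RetentionPolicy.RUNTIME)
public @interface Experimental {
  /** Why the feature is experimental and what might change or vanish. */
  String value() default "";
}
{code}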


> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12910-HDFS-9924.000.patch, 
> HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch
>
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13131) Add tests to verify that S3A supports SSE-S3 encryption

2016-06-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310238#comment-15310238
 ] 

Hadoop QA commented on HADOOP-13131:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 51s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
38s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
32s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-tools/hadoop-aws: The patch generated 6 new + 35 
unchanged - 7 fixed = 41 total (was 42) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 10s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 33s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:babe025 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807414/HADOOP-13131-branch-2-008.patch
 |
| JIRA Issue | HADOOP-13131 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5a9f98a04099 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| 

[jira] [Commented] (HADOOP-13223) winutils.exe is a bug nexus and should be killed with an axe.

2016-06-01 Thread john lilley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310216#comment-15310216
 ] 

john lilley commented on HADOOP-13223:
--

Steve,
I think that moving functions to a DLL could improve matters if the DLL was 
embedded in the jar itself as a resource.  This is apparently not magic -- the 
DLL still must be extracted to disk and loaded -- but at least you can do this 
in a way that is free of PATH issues: 
http://stackoverflow.com/questions/1611357/how-to-make-a-jar-file-that-includes-dll-files
Of course, you then get different issues: needing a valid temp space, and 
possible collisions and race conditions.
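
A sketch of the load-from-jar approach (the class and method names are 
assumed, not an actual Hadoop API):

{code}
// A sketch of the load-from-jar approach described above (names assumed,
// not an actual Hadoop API): extract the bundled DLL to a temporary file
// and load it by absolute path, so the system PATH never matters.
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public final class NativeLibLoader {
  public static void loadFromJar(String resource) throws Exception {
    Path tmp = Files.createTempFile("hadoop-native", ".dll");
    tmp.toFile().deleteOnExit();
    try (InputStream in = NativeLibLoader.class.getResourceAsStream(resource)) {
      Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
    }
    System.load(tmp.toAbsolutePath().toString()); // absolute path, no PATH lookup
  }
}
{code}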

> winutils.exe is a bug nexus and should be killed with an axe.
> -
>
> Key: HADOOP-13223
> URL: https://issues.apache.org/jira/browse/HADOOP-13223
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: bin
>Affects Versions: 2.6.0
> Environment: Microsoft Windows, all versions
>Reporter: john lilley
>
> winutils.exe was apparently created as a stopgap measure to allow Hadoop to 
> "work" on Windows platforms, because the NativeIO libraries aren't 
> implemented there (edit: even NativeIO probably doesn't cover the operations 
> that winutils.exe is used for).  Rather than building a DLL that makes native 
> OS calls, the creators of winutils.exe must have decided that it would be 
> more expedient to create an EXE to carry out file system operations in a 
> linux-like fashion.  Unfortunately, like many stopgap measures in software, 
> this one has persisted well beyond its expected lifetime and usefulness.  My 
> team creates software that runs on Windows and Linux, and winutils.exe is 
> probably responsible for 20% of all issues we encounter, both during 
> development and in the field.
> Problem #1 with winutils.exe is that it is simply missing from many popular 
> distros and/or the client-side software installation for said distros, when 
> supplied, fails to install winutils.exe.  Thus, as software developers, we 
> are forced to pick one version and distribute and install it with our 
> software.
> Which leads to problem #2: winutils.exe are not always compatible.  In 
> particular, MapR MUST have its winutils.exe in the system path, but doing so 
> breaks the Hadoop distro for every other Hadoop vendor.  This makes creating 
> and maintaining test environments that work with all of the Hadoop distros we 
> want to test unnecessarily tedious and error-prone.
> Problem #3 is that the mechanism by which you inform the Hadoop client 
> software where to find winutils.exe is poorly documented and fragile.  First, 
> it can be in the PATH.  If it is in the PATH, that is where it is found.  
> However, the documentation, such as it is, makes no mention of this, and 
> instead says that you should set the HADOOP_HOME environment variable, which 
> does NOT override the winutils.exe found in your system PATH.
> Which leads to problem #4: There is no logging that says where winutils.exe 
> was actually found and loaded.  Because of this, fixing problems of finding 
> the wrong winutils.exe are extremely difficult.
> Problem #5 is that most of the time, such as when accessing straight up HDFS 
> and YARN, one does not *need* winutils.exe.  But if it is missing, the log 
> messages complain about its absence.  When we are trying to diagnose an 
> obscure issue in Hadoop (of which there are many), the presence of this red 
> herring leads to all sorts of time wasted until someone on the team points 
> out that winutils.exe is not the problem, at least not this time.
> Problem #6 is that errors and stack traces from issues involving winutils.exe 
> are not helpful.  The Java stack trace ends at the ProcessBuilder call.  Only 
> through bitter experience is one able to connect the dots from 
> "ProcessBuilder is the last thing on the stack" to "something is wrong with 
> winutils.exe".
> Note that none of these involve running Hadoop on Windows.  They are only 
> encountered when using Hadoop client libraries to access a cluster from 
> Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13131) Add tests to verify that S3A supports SSE-S3 encryption

2016-06-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13131:

Status: Patch Available  (was: Open)

> Add tests to verify that S3A supports SSE-S3 encryption
> ---
>
> Key: HADOOP-13131
> URL: https://issues.apache.org/jira/browse/HADOOP-13131
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13131-001.patch, HADOOP-13131-002.patch, 
> HADOOP-13131-003.patch, HADOOP-13131-004.patch, HADOOP-13131-005.patch, 
> HADOOP-13131-branch-2-006.patch, HADOOP-13131-branch-2-007.patch, 
> HADOOP-13131-branch-2-008.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Although S3A claims to support server-side S3 encryption (and does, if you 
> set the option), we don't have any test to verify this. Of course, as the 
> encryption is transparent, it's hard to test.
> Here's what I propose
> # a test which sets encryption = AES256; expects things to work as normal.
> # a test which sets encryption = DES and expects any operation creating a file 
> or directory to fail with a 400 "bad request" error
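
A sketch of the settings the two proposed tests would use; the property name 
is the usual S3A server-side encryption option (assumed here):

{code}
// A sketch of the settings the two proposed tests would use; the property
// name is the usual S3A server-side encryption option (assumed here).
import org.apache.hadoop.conf.Configuration;

public class SseTestSettings {
  public static Configuration forAes256() {
    Configuration conf = new Configuration();
    // Test 1: a supported algorithm; everything should work as normal.
    conf.set("fs.s3a.server-side-encryption-algorithm", "AES256");
    return conf;
  }

  public static Configuration forUnsupported() {
    Configuration conf = new Configuration();
    // Test 2: an unsupported algorithm; expect creates to fail with HTTP 400.
    conf.set("fs.s3a.server-side-encryption-algorithm", "DES");
    return conf;
  }
}
{code}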



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13131) Add tests to verify that S3A supports SSE-S3 encryption

2016-06-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13131:

Status: Open  (was: Patch Available)

> Add tests to verify that S3A supports SSE-S3 encryption
> ---
>
> Key: HADOOP-13131
> URL: https://issues.apache.org/jira/browse/HADOOP-13131
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13131-001.patch, HADOOP-13131-002.patch, 
> HADOOP-13131-003.patch, HADOOP-13131-004.patch, HADOOP-13131-005.patch, 
> HADOOP-13131-branch-2-006.patch, HADOOP-13131-branch-2-007.patch, 
> HADOOP-13131-branch-2-008.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Although S3A claims to support server-side S3 encryption (and does, if you 
> set the option), we don't have any test to verify this. Of course, as the 
> encryption is transparent, it's hard to test.
> Here's what I propose
> # a test which sets encryption = AES256; expects things to work as normal.
> # a test which sets encryption = DES and expects any operation creating a file 
> or directory to fail with a 400 "bad request" error



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13131) Add tests to verify that S3A supports SSE-S3 encryption

2016-06-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13131:

Attachment: HADOOP-13131-branch-2-008.patch

Attached the wrong patch; deleting it and adding the correct one: patch 008.

> Add tests to verify that S3A supports SSE-S3 encryption
> ---
>
> Key: HADOOP-13131
> URL: https://issues.apache.org/jira/browse/HADOOP-13131
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13131-001.patch, HADOOP-13131-002.patch, 
> HADOOP-13131-003.patch, HADOOP-13131-004.patch, HADOOP-13131-005.patch, 
> HADOOP-13131-branch-2-006.patch, HADOOP-13131-branch-2-007.patch, 
> HADOOP-13131-branch-2-008.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Although S3A claims to support server-side S3 encryption (and does, if you 
> set the option), we don't have any test to verify this. Of course, as the 
> encryption is transparent, it's hard to test.
> Here's what I propose
> # a test which sets encryption = AES256; expects things to work as normal.
> # a test which sets encryption = DES and expects any operation creating a file 
> or directory to fail with a 400 "bad request" error



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13131) Add tests to verify that S3A supports SSE-S3 encryption

2016-06-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13131:

Attachment: (was: HADOOP-13171-branch-2-008.patch)

> Add tests to verify that S3A supports SSE-S3 encryption
> ---
>
> Key: HADOOP-13131
> URL: https://issues.apache.org/jira/browse/HADOOP-13131
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13131-001.patch, HADOOP-13131-002.patch, 
> HADOOP-13131-003.patch, HADOOP-13131-004.patch, HADOOP-13131-005.patch, 
> HADOOP-13131-branch-2-006.patch, HADOOP-13131-branch-2-007.patch, 
> HADOOP-13131-branch-2-008.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Although S3A claims to support server-side S3 encryption (and does, if you 
> set the option), we don't have any test to verify this. Of course, as the 
> encryption is transparent, it's hard to test.
> Here's what I propose
> # a test which sets encryption = AES256; expects things to work as normal.
> # a test which sets encryption = DES and expects any operation creating a file 
> or directory to fail with a 400 "bad request" error



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13223) winutils.exe is a bug nexus and should be killed with an axe.

2016-06-01 Thread john lilley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310203#comment-15310203
 ] 

john lilley edited comment on HADOOP-13223 at 6/1/16 12:16 PM:
---

I think that the problem is something more than winutils.exe... NativeIO only 
goes so far in replacing shell commands like chmod and chown, and that's really 
the heart of the problem.  I was taught in college that shelling out to 
external commands was a bad idea, and well, now we can see why.  I think that a 
root resolution of the underlying issues would mean a search-and-replace 
mission of shell commands with calls into an enhanced NativeIO.  Maybe a 
NativeIO that also covers chown?  I'm unsure of what shell commands are 
actually executed.  But of course that in and of itself speaks volumes about 
the nature of the problem.  There is no _interface_ to these operations, so it 
is a blind spot for code quality and for the ability to refactor or analyze.


was (Author: john.lil...@redpoint.net):
I think that the problem is something more than winutils.exe... NativeIO only 
goes so far in replacing shell commands like chmod and chown, and that's really 
the heart of the problem.  I think I was taught in college that shelling out to 
external commands was a bad idea, and well, now we can see why.  I think that a 
root resolution of the underlying issues would mean a search-and-replace 
mission of shell commands with calls into an enhanced NativeIO.  Maybe a 
NativeIO that also covers chown?  I'm unsure of what shell commands are 
actually executed.  But of course that in and of itself speaks volumes about 
the nature of the problem.  There is no _interface_ to these operations, so it 
is a blind spot for code quality and for the ability to refactor or analyze.

> winutils.exe is a bug nexus and should be killed with an axe.
> -
>
> Key: HADOOP-13223
> URL: https://issues.apache.org/jira/browse/HADOOP-13223
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: bin
>Affects Versions: 2.6.0
> Environment: Microsoft Windows, all versions
>Reporter: john lilley
>
> winutils.exe was apparently created as a stopgap measure to allow Hadoop to 
> "work" on Windows platforms, because the NativeIO libraries aren't 
> implemented there (edit: even NativeIO probably doesn't cover the operations 
> that winutils.exe is used for).  Rather than building a DLL that makes native 
> OS calls, the creators of winutils.exe must have decided that it would be 
> more expedient to create an EXE to carry out file system operations in a 
> linux-like fashion.  Unfortunately, like many stopgap measures in software, 
> this one has persisted well beyond its expected lifetime and usefulness.  My 
> team creates software that runs on Windows and Linux, and winutils.exe is 
> probably responsible for 20% of all issues we encounter, both during 
> development and in the field.
> Problem #1 with winutils.exe is that it is simply missing from many popular 
> distros and/or the client-side software installation for said distros, when 
> supplied, fails to install winutils.exe.  Thus, as software developers, we 
> are forced to pick one version and distribute and install it with our 
> software.
> Which leads to problem #2: winutils.exe are not always compatible.  In 
> particular, MapR MUST have its winutils.exe in the system path, but doing so 
> breaks the Hadoop distro for every other Hadoop vendor.  This makes creating 
> and maintaining test environments that work with all of the Hadoop distros we 
> want to test unnecessarily tedious and error-prone.
> Problem #3 is that the mechanism by which you inform the Hadoop client 
> software where to find winutils.exe is poorly documented and fragile.  First, 
> it can be in the PATH.  If it is in the PATH, that is where it is found.  
> However, the documentation, such as it is, makes no mention of this, and 
> instead says that you should set the HADOOP_HOME environment variable, which 
> does NOT override the winutils.exe found in your system PATH.
> Which leads to problem #4: There is no logging that says where winutils.exe 
> was actually found and loaded.  Because of this, fixing problems of finding 
> the wrong winutils.exe are extremely difficult.
> Problem #5 is that most of the time, such as when accessing straight up HDFS 
> and YARN, one does not *need* winutils.exe.  But if it is missing, the log 
> messages complain about its absence.  When we are trying to diagnose an 
> obscure issue in Hadoop (of which there are many), the presence of this red 
> herring leads to all sorts of time wasted until someone on the team points 
> out that winutils.exe is not the problem, at least not this time.
> Problem #6 is that errors and stack traces from issues involving winutils.exe 
> are not helpful.  The Java stack trace ends at the ProcessBuilder call.  Only 
> through bitter experience is one able to connect the dots from 
> "ProcessBuilder is the last thing on the stack" to "something is wrong with 
> winutils.exe".
> Note that none of these involve running Hadoop on Windows.  They are only 
> encountered when using Hadoop client libraries to access a cluster from 
> Windows.

[jira] [Updated] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations

2016-06-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13171:

Status: Patch Available  (was: Open)

> Add StorageStatistics to S3A; instrument some more operations
> -
>
> Key: HADOOP-13171
> URL: https://issues.apache.org/jira/browse/HADOOP-13171
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13171-branch-2-001.patch, 
> HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, 
> HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch, 
> HADOOP-13171-branch-2-006.patch, HADOOP-13171-branch-2-007.patch, 
> HADOOP-13171-branch-2-008.patch, HADOOP-13171-branch-2-009.patch, 
> HADOOP-13171-branch-2-010.patch, HADOOP-13171-branch-2-011.patch, 
> HADOOP-13171-branch-2-012.patch, HADOOP-13171-branch-2-013.patch
>
>
> Add {{StorageStatistics}} support to S3A, collecting the same metrics as the 
> instrumentation, but sharing across all instances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations

2016-06-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13171:

Attachment: HADOOP-13171-branch-2-013.patch

Patch 013: addresses the checkstyle complaints where applicable. The one on 
function parameter count isn't covered.

I'll do a trunk build. Life would be easier if HADOOP-13139 got into branch-2; 
the diff between the two branches would go away.

> Add StorageStatistics to S3A; instrument some more operations
> -
>
> Key: HADOOP-13171
> URL: https://issues.apache.org/jira/browse/HADOOP-13171
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13171-branch-2-001.patch, 
> HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, 
> HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch, 
> HADOOP-13171-branch-2-006.patch, HADOOP-13171-branch-2-007.patch, 
> HADOOP-13171-branch-2-008.patch, HADOOP-13171-branch-2-009.patch, 
> HADOOP-13171-branch-2-010.patch, HADOOP-13171-branch-2-011.patch, 
> HADOOP-13171-branch-2-012.patch, HADOOP-13171-branch-2-013.patch
>
>
> Add {{StorageStatistics}} support to S3A, collecting the same metrics as the 
> instrumentation, but sharing across all instances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13166) add getFileStatus("/") test to AbstractContractGetFileStatusTest

2016-06-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-13166:
---

Assignee: Steve Loughran

> add getFileStatus("/") test to AbstractContractGetFileStatusTest
> 
>
> Key: HADOOP-13166
> URL: https://issues.apache.org/jira/browse/HADOOP-13166
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13166-001.patch
>
>
> The test suite {{AbstractContractGetFileStatusTest}} doesn't have a test for 
> {{getFileStatus("/")}} working.
> While it may seem "obvious" that this will work, on object stores, its 
> actually a special case



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13223) winutils.exe is a bug nexus and should be killed with an axe.

2016-06-01 Thread john lilley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

john lilley updated HADOOP-13223:
-
Description: 
winutils.exe was apparently created as a stopgap measure to allow Hadoop to 
"work" on Windows platforms, because the NativeIO libraries aren't implemented 
there (edit: even NativeIO probably doesn't cover the operations that 
winutils.exe is used for).  Rather than building a DLL that makes native OS 
calls, the creators of winutils.exe must have decided that it would be more 
expedient to create an EXE to carry out file system operations in a linux-like 
fashion.  Unfortunately, like many stopgap measures in software, this one has 
persisted well beyond its expected lifetime and usefulness.  My team creates 
software that runs on Windows and Linux, and winutils.exe is probably 
responsible for 20% of all issues we encounter, both during development and in 
the field.

Problem #1 with winutils.exe is that it is simply missing from many popular 
distros, and/or the client-side software installation for those distros, when 
one is supplied, fails to install winutils.exe.  Thus, as software developers, 
we are forced to pick one version and distribute and install it with our 
software.

Which leads to problem #2: winutils.exe versions are not always compatible.  In 
particular, MapR MUST have its winutils.exe in the system path, but doing so 
breaks the Hadoop distro for every other Hadoop vendor.  This makes creating 
and maintaining test environments that work with all of the Hadoop distros we 
want to test unnecessarily tedious and error-prone.

Problem #3 is that the mechanism by which you inform the Hadoop client software 
where to find winutils.exe is poorly documented and fragile.  First, it can be 
in the PATH.  If it is in the PATH, that is where it is found.  However, the 
documentation, such as it is, makes no mention of this, and instead says that 
you should set the HADOOP_HOME environment variable, which does NOT override 
the winutils.exe found in your system PATH.

Which leads to problem #4: There is no logging that says where winutils.exe was 
actually found and loaded.  Because of this, fixing problems caused by picking 
up the wrong winutils.exe is extremely difficult.

Problem #5 is that most of the time, such as when accessing straight up HDFS 
and YARN, one does not *need* winutils.exe.  But if it is missing, the log 
messages complain about its absence.  When we are trying to diagnose an obscure 
issue in Hadoop (of which there are many), the presence of this red herring 
leads to all sorts of time wasted until someone on the team points out that 
winutils.exe is not the problem, at least not this time.

Problem #6 is that errors and stack traces from issues involving winutils.exe 
are not helpful.  The Java stack trace ends at the ProcessBuilder call.  Only 
through bitter experience is one able to connect the dots from "ProcessBuilder 
is the last thing on the stack" to "something is wrong with winutils.exe".

Note that none of these involve running Hadoop on Windows.  They are only 
encountered when using Hadoop client libraries to access a cluster from Windows.

  was:
winutils.exe was apparently created as a stopgap measure to allow Hadoop to 
"work" on Windows platforms, because the NativeIO libraries aren't implemented 
there.  Rather than building a DLL that makes native OS calls, the creators of 
winutils.exe must have decided that it would be more expedient to create an EXE 
to carry out file system operations in a linux-like fashion.  Unfortunately, 
like many stopgap measures in software, this one has persisted well beyond its 
expected lifetime and usefulness.  My team creates software that runs on 
Windows and Linux, and winutils.exe is probably responsible for 20% of all 
issues we encounter, both during development and in the field.

Problem #1 with winutils.exe is that it is simply missing from many popular 
distros and/or the client-side software installation for said distros, when 
supplied, fails to install winutils.exe.  Thus, as software developers, we are 
forced to pick one version and distribute and install it with our software.

Which leads to problem #2: winutils.exe are not always compatible.  In 
particular, MapR MUST have its winutils.exe in the system path, but doing so 
breaks the Hadoop distro for every other Hadoop vendor.  This makes creating 
and maintaining test environments that work with all of the Hadoop distros we 
want to test unnecessarily tedious and error-prone.

Problem #3 is that the mechanism by which you inform the Hadoop client software 
where to find winutils.exe is poorly documented and fragile.  First, it can be 
in the PATH.  If it is in the PATH, that is where it is found.  However, the 
documentation, such as it is, makes no mention of this, and instead says that 
you should set the HADOOP_HOME environment variable, which does NOT override 
the 

[jira] [Comment Edited] (HADOOP-13223) winutils.exe is a bug nexus and should be killed with an axe.

2016-06-01 Thread john lilley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310203#comment-15310203
 ] 

john lilley edited comment on HADOOP-13223 at 6/1/16 12:13 PM:
---

I think that the problem is something more than winutils.exe... NativeIO only 
goes so far in replacing shell commands like chmod and chown, and that's really 
the heart of the problem.  I think I was taught in college that shelling out to 
external commands was a bad idea, and well, now we can see why.  I think that a 
root resolution of the underlying issues would mean a search-and-replace 
mission of shell commands with calls into an enhanced NativeIO.  Maybe a 
NativeIO that also covers chown?  I'm unsure of what shell commands are 
actually executed.  But of course that in and of itself speaks volumes about 
the nature of the problem.  There is no _interface_ to these operations, so 
they are a blind spot for code quality and for the ability to refactor or 
analyze.


was (Author: john.lil...@redpoint.net):
I think that the problem is something more than winutils.exe... NativeIO only 
goes so far in replacing shell commands like chmod and chown, and that's really 
the heart of the problem.  I think I was taught in college that shelling out to 
external commands was a bad idea, and well, now we can see why.  I think that a 
root resolution of the underlying issues would mean a search-and-replace 
mission of shell commands with calls into an enhanced NativeIO that also covers 
chown.  But of course that in and of itself speaks highly of the nature of the 
problem.  There is no _interface_ to these operations, so it is a blind spot of 
code quality and ability to refactor or analyze.

> winutils.exe is a bug nexus and should be killed with an axe.
> -
>
> Key: HADOOP-13223
> URL: https://issues.apache.org/jira/browse/HADOOP-13223
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: bin
>Affects Versions: 2.6.0
> Environment: Microsoft Windows, all versions
>Reporter: john lilley
>
> winutils.exe was apparently created as a stopgap measure to allow Hadoop to 
> "work" on Windows platforms, because the NativeIO libraries aren't 
> implemented there.  Rather than building a DLL that makes native OS calls, 
> the creators of winutils.exe must have decided that it would be more 
> expedient to create an EXE to carry out file system operations in a 
> linux-like fashion.  Unfortunately, like many stopgap measures in software, 
> this one has persisted well beyond its expected lifetime and usefulness.  My 
> team creates software that runs on Windows and Linux, and winutils.exe is 
> probably responsible for 20% of all issues we encounter, both during 
> development and in the field.
> Problem #1 with winutils.exe is that it is simply missing from many popular 
> distros and/or the client-side software installation for said distros, when 
> supplied, fails to install winutils.exe.  Thus, as software developers, we 
> are forced to pick one version and distribute and install it with our 
> software.
> Which leads to problem #2: winutils.exe are not always compatible.  In 
> particular, MapR MUST have its winutils.exe in the system path, but doing so 
> breaks the Hadoop distro for every other Hadoop vendor.  This makes creating 
> and maintaining test environments that work with all of the Hadoop distros we 
> want to test unnecessarily tedious and error-prone.
> Problem #3 is that the mechanism by which you inform the Hadoop client 
> software where to find winutils.exe is poorly documented and fragile.  First, 
> it can be in the PATH.  If it is in the PATH, that is where it is found.  
> However, the documentation, such as it is, makes no mention of this, and 
> instead says that you should set the HADOOP_HOME environment variable, which 
> does NOT override the winutils.exe found in your system PATH.
> Which leads to problem #4: There is no logging that says where winutils.exe 
> was actually found and loaded.  Because of this, fixing problems of finding 
> the wrong winutils.exe are extremely difficult.
> Problem #5 is that most of the time, such as when accessing straight up HDFS 
> and YARN, one does not *need* winutils.exe.  But if it is missing, the log 
> messages complain about its absence.  When we are trying to diagnose an 
> obscure issue in Hadoop (of which there are many), the presence of this red 
> herring leads to all sorts of time wasted until someone on the team points 
> out that winutils.exe is not the problem, at least not this time.
> Problem #6 is that errors and stack traces from issues involving winutils.exe 
> are not helpful.  The Java stack trace ends at the ProcessBuilder call.  Only 
> through bitter experience is one able to connect the dots from 
> 

[jira] [Commented] (HADOOP-13223) winutils.exe is a bug nexus and should be killed with an axe.

2016-06-01 Thread john lilley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310203#comment-15310203
 ] 

john lilley commented on HADOOP-13223:
--

I think that the problem is something more than winutils.exe... NativeIO only 
goes so far in replacing shell commands like chmod and chown, and that's really 
the heart of the problem.  I think I was taught in college that shelling out to 
external commands was a bad idea, and well, now we can see why.  I think that a 
root resolution of the underlying issues would mean a search-and-replace 
mission of shell commands with calls into an enhanced NativeIO that also covers 
chown.  But of course that in and of itself speaks volumes about the nature of 
the problem.  There is no _interface_ to these operations, so they are a blind 
spot for code quality and for the ability to refactor or analyze.
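
To illustrate the shape of that idea, a sketch (not project code) of a chmod 
shell-out becoming an in-process call via java.nio. Note that POSIX permission 
views are unsupported on NTFS, which is exactly the gap NativeIO/winutils 
exists to fill on Windows:

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

// Sketch only: replacing exec("chmod", ...) with an API call.
final class ChmodSketch {
  static void chmod(Path p, String perms) throws IOException {
    // e.g. perms = "rwxr-xr-x"; throws UnsupportedOperationException on
    // filesystems without a POSIX view, such as NTFS.
    Set<PosixFilePermission> set = PosixFilePermissions.fromString(perms);
    Files.setPosixFilePermissions(p, set);
  }
}
{code}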

> winutils.exe is a bug nexus and should be killed with an axe.
> -
>
> Key: HADOOP-13223
> URL: https://issues.apache.org/jira/browse/HADOOP-13223
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: bin
>Affects Versions: 2.6.0
> Environment: Microsoft Windows, all versions
>Reporter: john lilley
>
> winutils.exe was apparently created as a stopgap measure to allow Hadoop to 
> "work" on Windows platforms, because the NativeIO libraries aren't 
> implemented there.  Rather than building a DLL that makes native OS calls, 
> the creators of winutils.exe must have decided that it would be more 
> expedient to create an EXE to carry out file system operations in a 
> linux-like fashion.  Unfortunately, like many stopgap measures in software, 
> this one has persisted well beyond its expected lifetime and usefulness.  My 
> team creates software that runs on Windows and Linux, and winutils.exe is 
> probably responsible for 20% of all issues we encounter, both during 
> development and in the field.
> Problem #1 with winutils.exe is that it is simply missing from many popular 
> distros and/or the client-side software installation for said distros, when 
> supplied, fails to install winutils.exe.  Thus, as software developers, we 
> are forced to pick one version and distribute and install it with our 
> software.
> Which leads to problem #2: winutils.exe are not always compatible.  In 
> particular, MapR MUST have its winutils.exe in the system path, but doing so 
> breaks the Hadoop distro for every other Hadoop vendor.  This makes creating 
> and maintaining test environments that work with all of the Hadoop distros we 
> want to test unnecessarily tedious and error-prone.
> Problem #3 is that the mechanism by which you inform the Hadoop client 
> software where to find winutils.exe is poorly documented and fragile.  First, 
> it can be in the PATH.  If it is in the PATH, that is where it is found.  
> However, the documentation, such as it is, makes no mention of this, and 
> instead says that you should set the HADOOP_HOME environment variable, which 
> does NOT override the winutils.exe found in your system PATH.
> Which leads to problem #4: There is no logging that says where winutils.exe 
> was actually found and loaded.  Because of this, fixing problems of finding 
> the wrong winutils.exe are extremely difficult.
> Problem #5 is that most of the time, such as when accessing straight up HDFS 
> and YARN, one does not *need* winutils.exe.  But if it is missing, the log 
> messages complain about its absence.  When we are trying to diagnose an 
> obscure issue in Hadoop (of which there are many), the presence of this red 
> herring leads to all sorts of time wasted until someone on the team points 
> out that winutils.exe is not the problem, at least not this time.
> Problem #6 is that errors and stack traces from issues involving winutils.exe 
> are not helpful.  The Java stack trace ends at the ProcessBuilder call.  Only 
> through bitter experience is one able to connect the dots from 
> "ProcessBuilder is the last thing on the stack" to "something is wrong with 
> winutils.exe".
> Note that none of these involve running Hadoop on Windows.  They are only 
> encountered when using Hadoop client libraries to access a cluster from 
> Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13223) winutils.exe is a bug nexus and should be killed with an axe.

2016-06-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310052#comment-15310052
 ] 

Steve Loughran edited comment on HADOOP-13223 at 6/1/16 12:09 PM:
--

Normally JIRA titles this negative are viewed as a bit rude, but here I think 
we can all agree you are merely stating the problem and the ideal solution in 
the title. Maybe for the release notes the title could be rephrased "winutils 
killed with an axe —for its own good".

I wasn't aware that MapR bundled something that wasn't quite the ASF 
{{WINUTILS.EXE}} in their bundling of what-isn't-Apache Hadoop. Makes sense, 
though it is painfully reminiscent of AOL's bundling of their own 
{{WINSOCK.DLL}}; yes, it added TCP-over-AOL dial-up, but broke TCP everywhere 
else.

Which brings me round to this; DLLs are just as bad as EXEs for getting on the 
path then ruining your life —and can often be even more brittle at load. I say 
that as someone whose first introduction to windows coding was actually 
Windows/386. That doesn't mean it's not needed, just that it's not a silver 
bullet.

Having multiple WINUTILS versions on the PATH is probably going to break you 
either way. You've just been the one to encounter this, because the rest of us 
who set up and run ASF-based Windows installations (or cloud-hosted versions) 
operate a one-release-only process, so we haven't hit the same path-of-pain as 
you. 

And as you note, Hadoop isn't helpful enough when things don't work. We've done 
our best with networking, and Kerberos diagnostics is now an ongoing bit of 
work (HADOOP-12649).

In HADOOP-10775 we put effort into making failures to find winutils useful: 
* stack traces are more meaningful than an NPE in process launch
* problems are only logged at time of use, not the time that the {{Shell}} 
class is instantiated. Like you say: a distraction otherwise.
* we try to include a bit more on the cause of failure (no {{HADOOP_HOME}}, no 
{{WINUTILS.EXE}}, etc.)
* the messages point to a wiki entry on the topic 
https://wiki.apache.org/hadoop/WindowsProblems
* which points to where I've been building windows binaries off the ASF 
commits. https://github.com/steveloughran/winutils
* we're explicitly picking up the winutils file from 
{{%HADOOP_HOME%/BIN/WINUTILS.EXE}}
* And you can set the system property {{hadoop.home.dir}} to point to a hadoop 
home of your choice (a minimal sketch follows this list).
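
A minimal sketch of that last option; the path is an example only:

{code}
// Point the Hadoop client at a local unpacked distribution before any
// FileSystem/Shell use; Shell then looks for %HADOOP_HOME%\bin\winutils.exe
// under this directory.
public class HadoopHomeSketch {
  public static void main(String[] args) {
    System.setProperty("hadoop.home.dir", "C:\\opt\\hadoop-2.8.0");
    // ... create Configuration / FileSystem instances after this point ...
  }
}
{code}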

I hope you can agree, this will make life less painful, though it sounds like 
your multi-install setup may expose you to problems we haven't hit ourselves. 

# Can you download & build a windows version of 2.8.0 to see how well the new 
codepath works for you? Before we ship is the time to improve it.
# You can add more details onto that WindowsProblems wiki page: create an 
account on the hadoop wiki, then ask on the dev list (or email me directly) 
for write access. This can be done after the 2.8.x release.

There's not much we can do about the MapR codebase, other than mention it on 
the WindowsProblems wiki page.
 
You have made me realise one thing; we could look at adding a way to verify 
that the winutils version is compatible. At the very least, we should be 
printing some version info and the path where it is, so that a {{WINUTILS 
version}} will tell you what's causing needless pain.

And, finally; yes, let's get the axe out and take it behind the shed, never to 
be seen again. Contributions there *are* welcome. 


was (Author: ste...@apache.org):
Normally JIRA titles this negative are viewed as a bit rude, but here I think 
we can all agree you are merely stating the problem and the ideal solution in 
the title. Maybe for the release notes the title could be rephrased "winutils 
killed with an axe —for its own good".

I wasn't aware that MapR bundled something what wasn't quite ASF 
{{WINUTILS.EXE}}in their bunding of what-isn't-Apache Hadoop. Makes sense, 
though it is painfully reminiscent of AOL's bundling of their own 
{{WINSOCK.DLL}}; yes, it added TCP-over-AOL dial, up, but broke TCP everywhere 
else.

Which brings me round to this; DLLs are just as bad as EXEs for getting on the 
path then ruining your life —and can often be even more brittle at load. I say 
that as someone whose first introduction to windows coding was actually 
Windows/386. That doesn't mean it's not needed, just that it's not a silver 
bullet.

Having multiple WINUTILs versions on the PATH is probably going to break you 
either way. You've just got encounter this because the rest of us who setup and 
run ASF-based windows installations (or cloud-hosted versions) do operate a 
one-release-only process so haven't hit the same path-of-pain as you. 

And as you note, Hadoop isn't helpful enough when things don't work. We've done 
our best with networking, and Kerberos diagnostics is now an ongoing bit of 
work (HADOOP-12649).

In HADOOP-10775 we put effort in to making failures to find 

[jira] [Commented] (HADOOP-13223) winutils.exe is a bug nexus and should be killed with an axe.

2016-06-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310197#comment-15310197
 ] 

Steve Loughran commented on HADOOP-13223:
-

we're still trying to stabilise 2.8.0 features; I'm one of the people holding 
things up with my S3a work.

I can do a quick build of it for you if you want, just so you can see how the 
failure handling has improved. You don't want to suffer the pain of building a 
windows release setup if you can avoid it.

As you note, all the winutil operations are being done in a windows binary. By 
inference, they can all be done in a DLL. I don't think it will make the 
problems go away, but it could, possibly, lessen the pain. We've looked at 
moving the whole of RawLocalFileSystem to nio; nobody has done it, and we 
suspect a couple of things won't be there, but again, it can only be a good 
thing. I also suspect the missing bits will be related to: permissions and 
symlinks.
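
As a rough sketch of where a pure-NIO port would get awkward (illustrative, 
not a plan):

{code}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch: the java.nio calls a pure-NIO RawLocalFileSystem would lean on,
// and where Windows pushes back.
final class NioGapsSketch {
  static void probe() throws Exception {
    Path target = Paths.get("target.txt");
    Path link = Paths.get("link.txt");
    // Needs SeCreateSymbolicLinkPrivilege (admin/Developer Mode) on Windows.
    Files.createSymbolicLink(link, target);
    Files.readSymbolicLink(link);
    // Throws UnsupportedOperationException on filesystems with no POSIX
    // view, e.g. NTFS: the permissions gap winutils currently papers over.
    Files.getPosixFilePermissions(target);
  }
}
{code}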

ps, don't apologise for the tone, it's a pretty reasonable summary of the 
experience

> winutils.exe is a bug nexus and should be killed with an axe.
> -
>
> Key: HADOOP-13223
> URL: https://issues.apache.org/jira/browse/HADOOP-13223
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: bin
>Affects Versions: 2.6.0
> Environment: Microsoft Windows, all versions
>Reporter: john lilley
>
> winutils.exe was apparently created as a stopgap measure to allow Hadoop to 
> "work" on Windows platforms, because the NativeIO libraries aren't 
> implemented there.  Rather than building a DLL that makes native OS calls, 
> the creators of winutils.exe must have decided that it would be more 
> expedient to create an EXE to carry out file system operations in a 
> linux-like fashion.  Unfortunately, like many stopgap measures in software, 
> this one has persisted well beyond its expected lifetime and usefulness.  My 
> team creates software that runs on Windows and Linux, and winutils.exe is 
> probably responsible for 20% of all issues we encounter, both during 
> development and in the field.
> Problem #1 with winutils.exe is that it is simply missing from many popular 
> distros and/or the client-side software installation for said distros, when 
> supplied, fails to install winutils.exe.  Thus, as software developers, we 
> are forced to pick one version and distribute and install it with our 
> software.
> Which leads to problem #2: winutils.exe are not always compatible.  In 
> particular, MapR MUST have its winutils.exe in the system path, but doing so 
> breaks the Hadoop distro for every other Hadoop vendor.  This makes creating 
> and maintaining test environments that work with all of the Hadoop distros we 
> want to test unnecessarily tedious and error-prone.
> Problem #3 is that the mechanism by which you inform the Hadoop client 
> software where to find winutils.exe is poorly documented and fragile.  First, 
> it can be in the PATH.  If it is in the PATH, that is where it is found.  
> However, the documentation, such as it is, makes no mention of this, and 
> instead says that you should set the HADOOP_HOME environment variable, which 
> does NOT override the winutils.exe found in your system PATH.
> Which leads to problem #4: There is no logging that says where winutils.exe 
> was actually found and loaded.  Because of this, fixing problems of finding 
> the wrong winutils.exe are extremely difficult.
> Problem #5 is that most of the time, such as when accessing straight up HDFS 
> and YARN, one does not *need* winutils.exe.  But if it is missing, the log 
> messages complain about its absence.  When we are trying to diagnose an 
> obscure issue in Hadoop (of which there are many), the presence of this red 
> herring leads to all sorts of time wasted until someone on the team points 
> out that winutils.exe is not the problem, at least not this time.
> Problem #6 is that errors and stack traces from issues involving winutils.exe 
> are not helpful.  The Java stack trace ends at the ProcessBuilder call.  Only 
> through bitter experience is one able to connect the dots from 
> "ProcessBuilder is the last thing on the stack" to "something is wrong with 
> winutils.exe".
> Note that none of these involve running Hadoop on Windows.  They are only 
> encountered when using Hadoop client libraries to access a cluster from 
> Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13223) winutils.exe is a bug nexus and should be killed with an axe.

2016-06-01 Thread john lilley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310196#comment-15310196
 ] 

john lilley commented on HADOOP-13223:
--

OK, I made the title a bit less rude ;-)

> winutils.exe is a bug nexus and should be killed with an axe.
> -
>
> Key: HADOOP-13223
> URL: https://issues.apache.org/jira/browse/HADOOP-13223
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: bin
>Affects Versions: 2.6.0
> Environment: Microsoft Windows, all versions
>Reporter: john lilley
>
> winutils.exe was apparently created as a stopgap measure to allow Hadoop to 
> "work" on Windows platforms, because the NativeIO libraries aren't 
> implemented there.  Rather than building a DLL that makes native OS calls, 
> the creators of winutils.exe must have decided that it would be more 
> expedient to create an EXE to carry out file system operations in a 
> linux-like fashion.  Unfortunately, like many stopgap measures in software, 
> this one has persisted well beyond its expected lifetime and usefulness.  My 
> team creates software that runs on Windows and Linux, and winutils.exe is 
> probably responsible for 20% of all issues we encounter, both during 
> development and in the field.
> Problem #1 with winutils.exe is that it is simply missing from many popular 
> distros and/or the client-side software installation for said distros, when 
> supplied, fails to install winutils.exe.  Thus, as software developers, we 
> are forced to pick one version and distribute and install it with our 
> software.
> Which leads to problem #2: winutils.exe are not always compatible.  In 
> particular, MapR MUST have its winutils.exe in the system path, but doing so 
> breaks the Hadoop distro for every other Hadoop vendor.  This makes creating 
> and maintaining test environments that work with all of the Hadoop distros we 
> want to test unnecessarily tedious and error-prone.
> Problem #3 is that the mechanism by which you inform the Hadoop client 
> software where to find winutils.exe is poorly documented and fragile.  First, 
> it can be in the PATH.  If it is in the PATH, that is where it is found.  
> However, the documentation, such as it is, makes no mention of this, and 
> instead says that you should set the HADOOP_HOME environment variable, which 
> does NOT override the winutils.exe found in your system PATH.
> Which leads to problem #4: There is no logging that says where winutils.exe 
> was actually found and loaded.  Because of this, fixing problems of finding 
> the wrong winutils.exe are extremely difficult.
> Problem #5 is that most of the time, such as when accessing straight up HDFS 
> and YARN, one does not *need* winutils.exe.  But if it is missing, the log 
> messages complain about its absence.  When we are trying to diagnose an 
> obscure issue in Hadoop (of which there are many), the presence of this red 
> herring leads to all sorts of time wasted until someone on the team points 
> out that winutils.exe is not the problem, at least not this time.
> Problem #6 is that errors and stack traces from issues involving winutils.exe 
> are not helpful.  The Java stack trace ends at the ProcessBuilder call.  Only 
> through bitter experience is one able to connect the dots from 
> "ProcessBuilder is the last thing on the stack" to "something is wrong with 
> winutils.exe".
> Note that none of these involve running Hadoop on Windows.  They are only 
> encountered when using Hadoop client libraries to access a cluster from 
> Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13223) winutils.exe is a bug nexus and should be killed with an axe.

2016-06-01 Thread john lilley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

john lilley updated HADOOP-13223:
-
Summary: winutils.exe is a bug nexus and should be killed with an axe.  
(was: winutils.exe is an abomination and should be killed with an axe.)

> winutils.exe is a bug nexus and should be killed with an axe.
> -
>
> Key: HADOOP-13223
> URL: https://issues.apache.org/jira/browse/HADOOP-13223
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: bin
>Affects Versions: 2.6.0
> Environment: Microsoft Windows, all versions
>Reporter: john lilley
>
> winutils.exe was apparently created as a stopgap measure to allow Hadoop to 
> "work" on Windows platforms, because the NativeIO libraries aren't 
> implemented there.  Rather than building a DLL that makes native OS calls, 
> the creators of winutils.exe must have decided that it would be more 
> expedient to create an EXE to carry out file system operations in a 
> linux-like fashion.  Unfortunately, like many stopgap measures in software, 
> this one has persisted well beyond its expected lifetime and usefulness.  My 
> team creates software that runs on Windows and Linux, and winutils.exe is 
> probably responsible for 20% of all issues we encounter, both during 
> development and in the field.
> Problem #1 with winutils.exe is that it is simply missing from many popular 
> distros and/or the client-side software installation for said distros, when 
> supplied, fails to install winutils.exe.  Thus, as software developers, we 
> are forced to pick one version and distribute and install it with our 
> software.
> Which leads to problem #2: winutils.exe are not always compatible.  In 
> particular, MapR MUST have its winutils.exe in the system path, but doing so 
> breaks the Hadoop distro for every other Hadoop vendor.  This makes creating 
> and maintaining test environments that work with all of the Hadoop distros we 
> want to test unnecessarily tedious and error-prone.
> Problem #3 is that the mechanism by which you inform the Hadoop client 
> software where to find winutils.exe is poorly documented and fragile.  First, 
> it can be in the PATH.  If it is in the PATH, that is where it is found.  
> However, the documentation, such as it is, makes no mention of this, and 
> instead says that you should set the HADOOP_HOME environment variable, which 
> does NOT override the winutils.exe found in your system PATH.
> Which leads to problem #4: There is no logging that says where winutils.exe 
> was actually found and loaded.  Because of this, fixing problems of finding 
> the wrong winutils.exe are extremely difficult.
> Problem #5 is that most of the time, such as when accessing straight up HDFS 
> and YARN, one does not *need* winutils.exe.  But if it is missing, the log 
> messages complain about its absence.  When we are trying to diagnose an 
> obscure issue in Hadoop (of which there are many), the presence of this red 
> herring leads to all sorts of time wasted until someone on the team points 
> out that winutils.exe is not the problem, at least not this time.
> Problem #6 is that errors and stack traces from issues involving winutils.exe 
> are not helpful.  The Java stack trace ends at the ProcessBuilder call.  Only 
> through bitter experience is one able to connect the dots from 
> "ProcessBuilder is the last thing on the stack" to "something is wrong with 
> winutils.exe".
> Note that none of these involve running Hadoop on Windows.  They are only 
> encountered when using Hadoop client libraries to access a cluster from 
> Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13223) winutils.exe is an abomination and should be killed with an axe.

2016-06-01 Thread john lilley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310189#comment-15310189
 ] 

john lilley commented on HADOOP-13223:
--

One more winutils.exe issue seen only on MapR.  I don't think this affects the 
mainline Hadoop development, but it is indicative of the kinds of errors you 
can get out of winutils.  This is the thing that drove us to put MapR's 
winutils.exe folder in the PATH, which in turn broke the other distros:

I’m still trying to narrow this down more precisely, but I’ve found that on 
Windows, attempting to access Hive on Mapr 5.1 in secure mode will fail with 
the stack trace below, unless we put the mapr bin folder (e.g. 
C:\opt\mapr\hadoop\hadoop-2.7.0\bin) in the PATH.  Otherwise, we have a 
winutils.exe in the normal place for Hadoop relative to the jar files, and it 
is found, but a weird error ensues when it is called.  

The smoking gun in the stack trace is the invalid mode ‘00777’.  Indeed, no 
version of winutils, including the one in mapr’s bin, will accept “chmod 
00777”.  This was a bug in Paleolithic versions of RawLocalFileSystem.  I have 
to go back to before 2.2.0 to find a RawLocalFileSystem.setPermission that 
formats five digits instead of four:
  public void setPermission(Path p, FsPermission permission)
      throws IOException {
    if (NativeIO.isAvailable()) {
      // In-process chmod through JNI when the native library is loaded
      NativeIO.chmod(pathToFile(p).getCanonicalPath(),
                     permission.toShort());
    } else {
      // Shell out to chmod; "%05o" produces the five-digit "00777"
      // that current winutils rejects
      execCommand(pathToFile(p), Shell.SET_PERMISSION_COMMAND,
                  String.format("%05o", permission.toShort()));
    }
  }
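
For illustration, a standalone snippet (not Hadoop code) showing the four- vs 
five-digit formatting:

{code}
public class OctalModeSketch {
  public static void main(String[] args) {
    short mode = 0777; // 511 decimal
    System.out.println(String.format("%04o", mode)); // "0777": accepted by winutils chmod
    System.out.println(String.format("%05o", mode)); // "00777": the rejected five-digit form
  }
}
{code}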

Either way, it is mysterious why putting MapR's bin in the path helps.  

This is the error stack:
org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:466)
net.redpoint.hiveclient.internal.DMHiveClientMetastoreImpl.<init>(DMHiveClientMetastoreImpl.java:257)
net.redpoint.hiveclient.internal.DMHiveClientMetastoreImpl.newInstance(DMHiveClientMetastoreImpl.java:60)
net.redpoint.hiveclient.DMHiveClientCreator.createHiveClient(DMHiveClientCreator.java:16)
Caused by: ExitCodeException exitCode=1: Invalid mode: '00777'
Incorrect command line arguments.
org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
org.apache.hadoop.util.Shell.run(Shell.java:456)
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
org.apache.hadoop.util.Shell.execCommand(Shell.java:815)
org.apache.hadoop.util.Shell.execCommand(Shell.java:798)
org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:772)
org.apache.hadoop.fs.RawLocalFileSystem.mkOneDirWithMode(RawLocalFileSystem.java:487)
org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:527)
org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:505)
org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:305)
org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:642)
org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:570)
org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:466)
net.redpoint.hiveclient.internal.DMHiveClientMetastoreImpl.<init>(DMHiveClientMetastoreImpl.java:257)
net.redpoint.hiveclient.internal.DMHiveClientMetastoreImpl.newInstance(DMHiveClientMetastoreImpl.java:60)
net.redpoint.hiveclient.DMHiveClientCreator.createHiveClient(DMHiveClientCreator.java:16)


> winutils.exe is an abomination and should be killed with an axe.
> 
>
> Key: HADOOP-13223
> URL: https://issues.apache.org/jira/browse/HADOOP-13223
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: bin
>Affects Versions: 2.6.0
> Environment: Microsoft Windows, all versions
>Reporter: john lilley
>
> winutils.exe was apparently created as a stopgap measure to allow Hadoop to 
> "work" on Windows platforms, because the NativeIO libraries aren't 
> implemented there.  Rather than building a DLL that makes native OS calls, 
> the creators of winutils.exe must have decided that it would be more 
> expedient to create an EXE to carry out file system operations in a 
> linux-like fashion.  Unfortunately, like many stopgap measures in software, 
> this one has persisted well beyond its expected lifetime and usefulness.  My 
> team creates software that runs on Windows and Linux, and winutils.exe is 
> probably responsible for 20% of all issues we encounter, both during 
> development and in the field.
> Problem #1 with winutils.exe is that it is simply missing from many popular 
> 

[jira] [Comment Edited] (HADOOP-13223) winutils.exe is an abomination and should be killed with an axe.

2016-06-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310052#comment-15310052
 ] 

Steve Loughran edited comment on HADOOP-13223 at 6/1/16 12:03 PM:
--

Normally JIRA titles this negative are viewed as a bit rude, but here I think 
we can all agree you are merely stating the problem and the ideal solution in 
the title. Maybe for the release notes the title could be rephrased "winutils 
killed with an axe —for its own good".

I wasn't aware that MapR bundled something that wasn't quite the ASF 
{{WINUTILS.EXE}} in their bundling of what-isn't-Apache Hadoop. Makes sense, 
though it is painfully reminiscent of AOL's bundling of their own 
{{WINSOCK.DLL}}; yes, it added TCP-over-AOL dial-up, but broke TCP everywhere 
else.

Which brings me round to this; DLLs are just as bad as EXEs for getting on the 
path then ruining your life —and can often be even more brittle at load. I say 
that as someone whose first introduction to windows coding was actually 
Windows/386. That doesn't mean it's not needed, just that it's not a silver 
bullet.

Having multiple WINUTILS versions on the PATH is probably going to break you 
either way. You've just been the one to encounter this, because the rest of us 
who set up and run ASF-based Windows installations (or cloud-hosted versions) 
operate a one-release-only process, so we haven't hit the same path-of-pain as 
you. 

And as you note, Hadoop isn't helpful enough when things don't work. We've done 
our best with networking, and Kerberos diagnostics is now an ongoing bit of 
work (HADOOP-12649).

In HADOOP-10775 we put effort into making failures to find winutils useful: 
* stack traces are more meaningful than an NPE in process launch
* problems are only logged at time of use, not the time that the {{Shell}} 
class is instantiated. Like you say: a distraction otherwise.
* we try to include a bit more on the cause of failure (no {{HADOOP_HOME}}, no 
{{WINUTILS.EXE}}, etc.)
* the messages point to a wiki entry on the topic 
https://wiki.apache.org/hadoop/WindowsProblems
* which points to where I've been building windows binaries off the ASF 
commits. https://github.com/steveloughran/winutils
* we're explicitly picking up the winutils file from 
{{%HADOOP_HOME%/BIN/WINUTILS.EXE}}
* And you can set the system property {{hadoop.home.dir}} to point to a hadoop 
home of your choice.

I hope you can agree, this will make life less painful, though it sounds like 
your multi-install setup may expose you to problems we haven't hit ourselves. 

# Can you download & build a windows version of 2.8.0 to see how well the new 
codepath works for you? Before we ship is the time to improve it.
# You can add more details onto that WindowsProblems wiki page: create an 
account on the hadoop wiki, then ask on the dev list (or email me directly) 
for write access. This can be done after the 2.8.x release.

There's not much we can do about the MapR codebase, other than mention it on 
the WindowsProblems wiki page.
 
You have made me realise one thing; we could look at adding a way to verify 
that the winutils version is compatible. At the very least, we should be 
printing some version info and the path where it is, so that a {{WINUTILS 
version}} will tell you what's causing needless pain.

And, finally; yes, let's get the axe out and take it behind the shed, never to 
be seen again. Contributions there *are* welcome. 


was (Author: ste...@apache.org):
Normally JIRA titles this negative are viewed as a bit rude, but here I think 
we can all agree you are merely stating the problem and the ideal solution in 
the title. Maybe for the release notes the title could be rephrased "winutils 
killed with an axe —for its own good".

I wasn't aware that MapR bundled something what wasn't quite ASF {{WINUTILS.EXE 
}}in their bunding of what-isn't-Apache Hadoop. Makes sense, though it is 
painfully reminiscent of AOL's bundling of their own {{WINSOCK.DLL}}; yes, it 
added TCP-over-AOL dial, up, but broke TCP everywhere else.

Which brings me round to this; DLLs are just as bad as EXEs for getting on the 
path then ruining your life —and can often be even more brittle at load. I say 
that as someone whose first introduction to windows coding was actually 
Windows/386. That doesn't mean it's not needed, just that it's not a silver 
bullet.

Having multiple WINUTILs versions on the PATH is probably going to break you 
either way. You've just got encounter this because the rest of us who setup and 
run ASF-based windows installations (or cloud-hosted versions) do operate a 
one-release-only process so haven't hit the same path-of-pain as you. 

And as you note, Hadoop isn't helpful enough when things don't work. We've done 
our best with networking, and Kerberos diagnostics is now an ongoing bit of 
work (HADOOP-12649).

In HADOOP-10775 we put effort in to making failures to find 

[jira] [Commented] (HADOOP-13223) winutils.exe is an abomination and should be killed with an axe.

2016-06-01 Thread john lilley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310185#comment-15310185
 ] 

john lilley commented on HADOOP-13223:
--

Hi Steve,

Sorry for the tone -- I was venting some frustration at how often this 
component causes difficulties for us.

Regarding the MapR incompatibility, let me be more clear.  If you put the 
folder containing MapR's winutils.exe in the PATH on Windows -- something that 
is required for various client-side functions to work on MapR -- then you will 
get failures when attempting to use other Hadoop distros.  I *think* it is 
winutils.exe that must be in the PATH for MapR (as opposed to other DLLs in the 
same folder) because the errors that result otherwise implicate winutils-like 
operations (e.g. setting folder permissions).  But I don't know for sure that 
it is the presence of winutils.exe in the PATH is the thing that breaks other 
distros (it could very well be hadoop.dll or some other DLL).  

Overall, I'm very glad to hear that I'm not the only one seeing issues, and 
that steps are being taken to make it better.  I don't know that I have time to 
replace NativeIO for Windows, but if you point me to the winutil.exe source and 
the interface that is being implemented (all of NativeIO?  I was never really 
clear on that...), I'll take a look at it.  Seems like the function of winutils 
could be wrapped into a native-method-calling class instead.  The devil is 
in the details, of course.

Is there a built 2.8.0 distro anywhere?  I see up to 2.7.2 on Apache.  I'm not 
a Hadoop internals expert.  I built it once, long long ago, but I'm a bit rusty 
there.

PS: I forgot one other weird symptom of a missing winutils.exe that was 
reported by a colleague.  Let me corral him and get the full story. 

Thanks
john


> winutils.exe is an abomination and should be killed with an axe.
> 
>
> Key: HADOOP-13223
> URL: https://issues.apache.org/jira/browse/HADOOP-13223
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: bin
>Affects Versions: 2.6.0
> Environment: Microsoft Windows, all versions
>Reporter: john lilley
>
> winutils.exe was apparently created as a stopgap measure to allow Hadoop to 
> "work" on Windows platforms, because the NativeIO libraries aren't 
> implemented there.  Rather than building a DLL that makes native OS calls, 
> the creators of winutils.exe must have decided that it would be more 
> expedient to create an EXE to carry out file system operations in a 
> linux-like fashion.  Unfortunately, like many stopgap measures in software, 
> this one has persisted well beyond its expected lifetime and usefulness.  My 
> team creates software that runs on Windows and Linux, and winutils.exe is 
> probably responsible for 20% of all issues we encounter, both during 
> development and in the field.
> Problem #1 with winutils.exe is that it is simply missing from many popular 
> distros and/or the client-side software installation for said distros, when 
> supplied, fails to install winutils.exe.  Thus, as software developers, we 
> are forced to pick one version and distribute and install it with our 
> software.
> Which leads to problem #2: winutils.exe are not always compatible.  In 
> particular, MapR MUST have its winutils.exe in the system path, but doing so 
> breaks the Hadoop distro for every other Hadoop vendor.  This makes creating 
> and maintaining test environments that work with all of the Hadoop distros we 
> want to test unnecessarily tedious and error-prone.
> Problem #3 is that the mechanism by which you inform the Hadoop client 
> software where to find winutils.exe is poorly documented and fragile.  First, 
> it can be in the PATH.  If it is in the PATH, that is where it is found.  
> However, the documentation, such as it is, makes no mention of this, and 
> instead says that you should set the HADOOP_HOME environment variable, which 
> does NOT override the winutils.exe found in your system PATH.
> Which leads to problem #4: There is no logging that says where winutils.exe 
> was actually found and loaded.  Because of this, fixing problems of finding 
> the wrong winutils.exe are extremely difficult.
> Problem #5 is that most of the time, such as when accessing straight up HDFS 
> and YARN, one does not *need* winutils.exe.  But if it is missing, the log 
> messages complain about its absence.  When we are trying to diagnose an 
> obscure issue in Hadoop (of which there are many), the presence of this red 
> herring leads to all sorts of time wasted until someone on the team points 
> out that winutils.exe is not the problem, at least not this time.
> Problem #6 is that errors and stack traces from issues involving winutils.exe 
> are not helpful.  The Java stack trace ends at 

[jira] [Commented] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations

2016-06-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310066#comment-15310066
 ] 

Steve Loughran commented on HADOOP-13171:
-

I'll see about a trunk patch too.

Checkstyle-wise, only one complaint is in production code
{code}
public S3AFastOutputStream(AmazonS3Client client,:10: More than 7 parameters 
(found 9).
{code}

that's a wontfix. On the test-code side there are some complaints on line 
length and imports, which I'll fix, and a few on fields in the test data 
structures that aren't encapsulated. I really think that bit of style checking 
in test code is overkill. For production code I'm happy with it, for the sake 
of long-term maintenance and especially for managing derived code. But for 
tests? Overkill.

Even so, those contract tests are externally used, so I'll do the wrapping. I 
just think checkstyle is whining.
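
For context, the textbook remedy for a parameter-count complaint is a builder; 
a minimal sketch with illustrative names (not the actual S3A classes), which 
also shows the amount of ceremony that makes a wontfix reasonable for an 
internal constructor:

{code}
// Illustrative builder sketch: the usual way to satisfy checkstyle's
// "more than 7 parameters" rule by collapsing arguments into one object.
final class UploadConfig {
  private String bucket;
  private String key;
  private long partSize;

  static final class Builder {
    private final UploadConfig c = new UploadConfig();
    Builder bucket(String b) { c.bucket = b; return this; }
    Builder key(String k) { c.key = k; return this; }
    Builder partSize(long s) { c.partSize = s; return this; }
    UploadConfig build() { return c; }
  }
}
{code}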

> Add StorageStatistics to S3A; instrument some more operations
> -
>
> Key: HADOOP-13171
> URL: https://issues.apache.org/jira/browse/HADOOP-13171
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13171-branch-2-001.patch, 
> HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, 
> HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch, 
> HADOOP-13171-branch-2-006.patch, HADOOP-13171-branch-2-007.patch, 
> HADOOP-13171-branch-2-008.patch, HADOOP-13171-branch-2-009.patch, 
> HADOOP-13171-branch-2-010.patch, HADOOP-13171-branch-2-011.patch, 
> HADOOP-13171-branch-2-012.patch
>
>
> Add {{StorageStatistics}} support to S3A, collecting the same metrics as the 
> instrumentation, but sharing across all instances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13223) winutils.exe is an abomination and should be killed with an axe.

2016-06-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310052#comment-15310052
 ] 

Steve Loughran edited comment on HADOOP-13223 at 6/1/16 10:00 AM:
--

Normally JIRA titles this negative are viewed as a bit rude, but here I think 
we can all agree you are merely stating the problem and the ideal solution in 
the title. Maybe for the release notes the title could be rephrased "winutils 
killed with an axe —for its own good".

I wasn't aware that MapR bundled something that wasn't quite the ASF 
{{WINUTILS.EXE}} in their bundling of what-isn't-Apache Hadoop. Makes sense, 
though it is painfully reminiscent of AOL's bundling of their own 
{{WINSOCK.DLL}}; yes, it added TCP-over-AOL dial-up, but broke TCP everywhere 
else.

Which brings me round to this; DLLs are just as bad as EXEs for getting on the 
path then ruining your life —and can often be even more brittle at load. I say 
that as someone whose first introduction to windows coding was actually 
Windows/386. That doesn't mean it's not needed, just that it's not a silver 
bullet.

Having multiple WINUTILS versions on the PATH is probably going to break you 
either way. You've just been the one to encounter this, because the rest of us 
who set up and run ASF-based Windows installations (or cloud-hosted versions) 
operate a one-release-only process, so we haven't hit the same path-of-pain as 
you. 

And as you note, Hadoop isn't helpful enough when things don't work. We've done 
our best with networking, and Kerberos diagnostics is now an ongoing bit of 
work (HADOOP-12649).

In HADOOP-10775 we put effort into making failures to find winutils useful: 
* stack traces are more meaningful than an NPE in process launch
* problems are only logged at time of use, not the time that the {{Shell}} 
class is instantiated. Like you say: a distraction otherwise.
* we try to include a bit more on the cause of failure (no {{HADOOP_HOME}}, no 
{{WINUTILS.EXE}}, etc.)
* the messages point to a wiki entry on the topic 
https://wiki.apache.org/hadoop/WindowsProblems
* which points to where I've been building windows binaries off the ASF 
commits. https://github.com/steveloughran/winutils
* we're explicitly picking up the winutils file from 
{{%HADOOP_HOME%/BIN/WINUTILS.EXE}}
* And you can set the system property {{hadoop.home.dir}} to point to a hadoop 
home of your choice.

I hope you can agree, this will make life less painful, though it sounds like 
your multi-install setup may expose you to problems we haven't hit ourselves. 

# Can you download & build a windows version of 2.8.0 to see how well the new 
codepath works for you? Before we ship is the time to improve it.
# You can add more details onto that WindowsProblems wiki page: create an 
account on the hadoop wiki, then ask on the dev list (or email me directly) 
for write access. This can be done after the 2.8.x release.

There's not much we can do about the MapR codebase, other than mention it on 
the WindowsProblems wiki page.
 
You have made me realise one thing; we could look at adding a way to verify 
that the winutils version is compatible. At the very least, we should be 
printing some version info and the path where it is, so that a {{WINUTILS 
version}} will tell you what's causing needless pain.

And, finally; yes, let's get the axe out and take it behind the shed, never to 
be seen again. Contributions there *are* welcome. 


was (Author: ste...@apache.org):
Normally JIRA titles this negative are viewed as a bit rude, but here I think 
we can all disagree you are merely stating the problem an ideal solution in the 
title. Maybe for the release notes the title could be rephrased "winutils 
killed with an axe".

I wasn't aware that MapR bundled something what wasn't quite ASF WINUTILS.EXE 
in their bunding of what-isn't-Apache Hadoop. Makes sense, though it is 
painfully reminiscent of AOL's bundling of their own WINSOCK.DLL; yes, it added 
TCP-over-AOL dial, up, but broke TCP everywhere else.

Which brings me round to this; DLLs are just as bad as EXEs for getting on the 
path then ruining your life. I say that as someone whose first introduction to 
windows coding was actually Windows/386.

Having multiple versions on the PATH is probably going to break you either way. 
You've just got encounter this because the rest of us who setup and run 
ASF-based windows installations (or cloud-hosted versions) do operate a 
one-release-only process so haven't hit the same path-of-pain as you. 

And as you note, Hadoop isn't helpful enough when things don't work. We've done 
our best with networking, and Kerberos diagnostics is now an ongoing bit of 
work.

In HADOOP-10775 we put effort in to making failures to find winutils useful, 
* stack traces are more meaningful than an NPE in process launch
* problems are only logged at time of use, not the time that the {{Shell}} 
class is 

[jira] [Commented] (HADOOP-13223) winutils.exe is an abomination and should be killed with an axe.

2016-06-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310052#comment-15310052
 ] 

Steve Loughran commented on HADOOP-13223:
-

Normally JIRA titles this negative are viewed as a bit rude, but here I think 
we can all agree you are merely stating the problem and the ideal solution in 
the title. Maybe for the release notes the title could be rephrased "winutils 
killed with an axe".

I wasn't aware that MapR bundled something that wasn't quite the ASF 
WINUTILS.EXE in their bundling of what-isn't-Apache Hadoop. Makes sense, though 
it is painfully reminiscent of AOL's bundling of their own WINSOCK.DLL; yes, it 
added TCP-over-AOL dial-up, but broke TCP everywhere else.

Which brings me round to this; DLLs are just as bad as EXEs for getting on the 
path then ruining your life. I say that as someone whose first introduction to 
windows coding was actually Windows/386.

Having multiple versions on the PATH is probably going to break you either way. 
You've just been the one to encounter this, because the rest of us who set up 
and run ASF-based Windows installations (or cloud-hosted versions) operate a 
one-release-only process, so we haven't hit the same path-of-pain as you. 

And as you note, Hadoop isn't helpful enough when things don't work. We've done 
our best with networking, and Kerberos diagnostics is now an ongoing bit of 
work.

In HADOOP-10775 we put effort into making failures to find winutils useful: 
* stack traces are more meaningful than an NPE in process launch
* problems are only logged at time of use, not the time that the {{Shell}} 
class is instantiated. Like you say: a distraction otherwise.
* we try to include a bit more on the cause of failure (no HADOOP_HOME, no 
WINUTILS.EXE, etc.)
* the messages point to a wiki entry on the topic 
https://wiki.apache.org/hadoop/WindowsProblems
* which points to where I've been building windows binaries off the ASF 
commits. https://github.com/steveloughran/winutils
* we're explicitly picking up the winutils file from 
{{%HADOOP_HOME%/BIN/WINUTILS.EXE}}
* And you can set the system property {{hadoop.home.dir}} to point to a hadoop 
home of your choice.

I hope you can agree, this will make life less painful, though it sounds like 
your multi-install setup may expose you to problems we haven't hit ourselves. 

# Can you download & build a windows version of 2.8.0 to see how well the new 
codepath works for you? Before we ship is the time to improve it.
# You can add more details onto that WindowsProblems wiki page: create an 
account on the hadoop wiki, then ask on the dev list (or email me directly) 
for write access. This can be done after the 2.8.x release.

There's not much we can do about the MapR codebase. Though you have made me 
realise one thing; we could look at adding a way to verify that the winutils 
version is compatible. At the very least, we should be printing some version 
info and the path where it is, so that a {{WINUTILS version}} will tell you 
what's causing needless pain.
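
A rough sketch of the kind of diagnostic I mean, assuming the Hadoop 2.x 
{{Shell.WINUTILS}} field; note the {{version}} subcommand is hypothetical, 
today's winutils has no such command, which is exactly the gap:

{code:java}
// Hypothetical diagnostic: report where winutils.exe was resolved and what
// version it claims to be. Shell.WINUTILS is the path the Hadoop 2.x Shell
// class computed at load time (it may be null if resolution failed).
import org.apache.hadoop.util.Shell;

public class WinutilsDiag {
  public static void main(String[] args) throws Exception {
    System.out.println("winutils resolved to: " + Shell.WINUTILS);
    // "version" is a hypothetical subcommand; current winutils.exe builds
    // would reject it, which is the problem being discussed.
    System.out.println(Shell.execCommand(Shell.WINUTILS, "version"));
  }
}
{code}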

And, finally: yes, let's get the axe out and take it behind the shed, never to 
be seen again. Contributions there *are* welcome. 

> winutils.exe is an abomination and should be killed with an axe.
> 
>
> Key: HADOOP-13223
> URL: https://issues.apache.org/jira/browse/HADOOP-13223
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: bin
>Affects Versions: 2.6.0
> Environment: Microsoft Windows, all versions
>Reporter: john lilley
>
> winutils.exe was apparently created as a stopgap measure to allow Hadoop to 
> "work" on Windows platforms, because the NativeIO libraries aren't 
> implemented there.  Rather than building a DLL that makes native OS calls, 
> the creators of winutils.exe must have decided that it would be more 
> expedient to create an EXE to carry out file system operations in a 
> linux-like fashion.  Unfortunately, like many stopgap measures in software, 
> this one has persisted well beyond its expected lifetime and usefulness.  My 
> team creates software that runs on Windows and Linux, and winutils.exe is 
> probably responsible for 20% of all issues we encounter, both during 
> development and in the field.
> Problem #1 with winutils.exe is that it is simply missing from many popular 
> distros and/or the client-side software installation for said distros, when 
> supplied, fails to install winutils.exe.  Thus, as software developers, we 
> are forced to pick one version and distribute and install it with our 
> software.
> Which leads to problem #2: winutils.exe versions are not always compatible.  In 
> particular, MapR MUST have its winutils.exe in the system path, but doing so 
> breaks the Hadoop distro for every other Hadoop vendor.  This makes creating 
> and maintaining test 

[jira] [Commented] (HADOOP-13209) replace slaves with workers

2016-06-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15309885#comment-15309885
 ] 

Hadoop QA commented on HADOOP-13209:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
1s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 12s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
15s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 54s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 43s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 26s 
{color} | {color:red} root: The patch generated 28 new + 186 unchanged - 27 
fixed = 214 total (was 213) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
13s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 
9s {color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 49s 
{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 25s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 47m 59s {color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 7s 
{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 113m 39s 
{color} | {color:red} hadoop-mapreduce-client-jobclient in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
38s {color} | {color:green} The patch does not generate ASF License warnings. 

[jira] [Commented] (HADOOP-13228) Add delegation token to the connection in DelegationTokenAuthenticator

2016-06-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15309871#comment-15309871
 ] 

Hadoop QA commented on HADOOP-13228:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 110 unchanged - 2 fixed = 110 total (was 112) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 37s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 27s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
|   | hadoop.ha.TestZKFailoverController |
|   | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807349/HADOOP-13228.03.patch 
|
| JIRA Issue | HADOOP-13228 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2ae830f27ffc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d749cf6 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9636/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/9636/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9636/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9636/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add delegation token to the connection in DelegationTokenAuthenticator
> 

[jira] [Updated] (HADOOP-13228) Add delegation token to the connection in DelegationTokenAuthenticator

2016-06-01 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13228:
---
Attachment: HADOOP-13228.03.patch

Patch 3 to fix the new style warnings.

> Add delegation token to the connection in DelegationTokenAuthenticator
> --
>
> Key: HADOOP-13228
> URL: https://issues.apache.org/jira/browse/HADOOP-13228
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13228.01.patch, HADOOP-13228.02.patch, 
> HADOOP-13228.03.patch
>
>
> Following [a comment from another 
> jira|https://issues.apache.org/jira/browse/HADOOP-13155?focusedCommentId=15308715=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15308715],
>  create this to specifically handle the delegation token renewal/cancellation 
> bug in {{DelegationTokenAuthenticatedURL}} and 
> {{DelegationTokenAuthenticator}}.
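
For readers following along, a minimal sketch of the client-side flow through 
these classes, assuming the API in 
{{org.apache.hadoop.security.token.delegation.web}}; the endpoint URL and 
renewer below are placeholders:

{code:java}
// Sketch of the DelegationTokenAuthenticatedURL flow under discussion.
// The endpoint and renewer are placeholders, not real services.
import java.net.HttpURLConnection;
import java.net.URL;
import org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL;

public class DtConnectionSketch {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://kms.example.com:16000/kms");  // placeholder
    DelegationTokenAuthenticatedURL.Token token =
        new DelegationTokenAuthenticatedURL.Token();
    DelegationTokenAuthenticatedURL aUrl = new DelegationTokenAuthenticatedURL();

    // Fetch a delegation token (via SPNEGO) and store it in 'token'.
    aUrl.getDelegationToken(url, token, "renewer-placeholder");

    // Open an authenticated connection carrying the token.
    HttpURLConnection conn = aUrl.openConnection(url, token);
    System.out.println("HTTP " + conn.getResponseCode());

    // Renewal and cancellation: the calls whose token handling this
    // JIRA is fixing.
    aUrl.renewDelegationToken(url, token);
    aUrl.cancelDelegationToken(url, token);
  }
}
{code}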



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13228) Add delegation token to the connection in DelegationTokenAuthenticator

2016-06-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15309401#comment-15309401
 ] 

Hadoop QA commented on HADOOP-13228:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
5s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 45s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 24s 
{color} | {color:red} hadoop-common-project/hadoop-common: The patch generated 
2 new + 110 unchanged - 2 fixed = 112 total (was 112) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 24s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 11s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807337/HADOOP-13228.02.patch 
|
| JIRA Issue | HADOOP-13228 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8dc9a8e1084c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d749cf6 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9635/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9635/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/9635/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9635/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9635/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add delegation 

[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-01 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15309396#comment-15309396
 ] 

Xiao Chen commented on HADOOP-12893:


Looks like jenkins says trunk compiled fine, but {{hadoop-project}} and 
{{hadoop-project-dist}} need to be compiled as well. I'll try to fix this, but 
I'm not sure of the best way to handle it.
If anyone has suggestions or experience with this, feel free to comment or 
post a patch. Thanks!

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, 
> HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org