[jira] [Commented] (HADOOP-14119) Remove unused imports from GzipCodec.java

2017-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15884097#comment-15884097
 ] 

Hadoop QA commented on HADOOP-14119:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 11s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14119 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12854647/HADOOP-14119.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5824b63e05b7 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 120bef7 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11716/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11716/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11716/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove unused imports from GzipCodec.java
> -
>
> Key: HADOOP-14119
> URL: https://issues.apache.org/jira/browse/HADOOP-14119
> Project: Hadoop Common
>  Issue Type: Bug
>

[jira] [Updated] (HADOOP-14119) Remove unused imports from GzipCodec.java

2017-02-24 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HADOOP-14119:
---
Target Version/s: 3.0.0-alpha2, 2.9.0  (was: 2.9.0, 3.0.0-alpha2)
  Status: Patch Available  (was: Open)

Attaching a patch to clean this up. It looks strange that Jenkins does not show 
the checkstyle issues.

> Remove unused imports from GzipCodec.java
> -
>
> Key: HADOOP-14119
> URL: https://issues.apache.org/jira/browse/HADOOP-14119
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14119.001.patch
>
>
> There are some unused imports after HADOOP-14097. Let's fix this.






[jira] [Updated] (HADOOP-14119) Remove unused imports from GzipCodec.java

2017-02-24 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HADOOP-14119:
---
Attachment: HADOOP-14119.001.patch

> Remove unused imports from GzipCodec.java
> -
>
> Key: HADOOP-14119
> URL: https://issues.apache.org/jira/browse/HADOOP-14119
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14119.001.patch
>
>
> There are some unused imports after HADOOP-14097. Let's fix this.






[jira] [Assigned] (HADOOP-14119) Remove unused imports from GzipCodec.java

2017-02-24 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin reassigned HADOOP-14119:
--

Assignee: Yiqun Lin

> Remove unused imports from GzipCodec.java
> -
>
> Key: HADOOP-14119
> URL: https://issues.apache.org/jira/browse/HADOOP-14119
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
>
> There are some unused imports after HADOOP-14097. Let's fix this.






[jira] [Commented] (HADOOP-14106) Fix deprecated FileSystem inefficient calls in test

2017-02-24 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883967#comment-15883967
 ] 

Mingliang Liu commented on HADOOP-14106:


This can be an uber JIRA to track all changes across different modules, using 
"contains" links, since subtasks can only be used within the same project.

> Fix deprecated FileSystem inefficient calls in test
> ---
>
> Key: HADOOP-14106
> URL: https://issues.apache.org/jira/browse/HADOOP-14106
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Mingliang Liu
>Assignee: Yiqun Lin
>
> [HADOOP-13321] deprecates FileSystem APIs that promote inefficient call 
> patterns. There are existing code patterns in tests and this JIRA is to 
> address them.
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractFSContractTestBase.java:[372,10]
>  [deprecation] isDirectory(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[249,14]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[251,16]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[264,14]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[266,16]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[280,14]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[282,16]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[288,14]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[290,16]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[306,14]
>  [deprecation] isDirectory(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[308,16]
>  [deprecation] isDirectory(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[313,14]
>  [deprecation] isDirectory(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[315,16]
>  [deprecation] isDirectory(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[340,14]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[342,16]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[351,14]
>  [deprecation] isDirectory(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[353,16]
>  [deprecation] isDirectory(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[400,14]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[759,14]
>  [deprecation] isFile(Path) in File
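
The replacement these warnings point at is a single {{getFileStatus}} call in 
place of the deprecated one-shot predicates. A minimal sketch, assuming an 
illustrative path and filesystem setup (nothing here is from the actual patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DeprecatedPredicateSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path p = new Path("/tmp/example");  // illustrative path, not from the patch

    // Deprecated pattern: each predicate is a separate RPC and swallows
    // FileNotFoundException into a boolean:
    //   fs.isFile(p); fs.isDirectory(p);

    // Efficient pattern: one getFileStatus() call, then query the result.
    FileStatus st = fs.getFileStatus(p); // throws FileNotFoundException if absent
    System.out.println("isFile=" + st.isFile()
        + ", isDirectory=" + st.isDirectory());
  }
}
{code}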

[jira] [Assigned] (HADOOP-14106) Fix deprecated FileSystem inefficient calls in test

2017-02-24 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HADOOP-14106:
--

Assignee: Yiqun Lin
 Summary: Fix deprecated FileSystem inefficient calls in test  (was: Fix 
deprecated FileSystem inefficient calls in unit test)

unit test -> test

> Fix deprecated FileSystem inefficient calls in test
> ---
>
> Key: HADOOP-14106
> URL: https://issues.apache.org/jira/browse/HADOOP-14106
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Mingliang Liu
>Assignee: Yiqun Lin
>
> [HADOOP-13321] deprecates FileSystem APIs that promote inefficient call 
> patterns. There are existing code patterns in tests and this JIRA is to 
> address them.
> {code}
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractFSContractTestBase.java:[372,10]
>  [deprecation] isDirectory(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[249,14]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[251,16]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[264,14]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[266,16]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[280,14]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[282,16]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[288,14]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[290,16]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[306,14]
>  [deprecation] isDirectory(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[308,16]
>  [deprecation] isDirectory(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[313,14]
>  [deprecation] isDirectory(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[315,16]
>  [deprecation] isDirectory(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[340,14]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[342,16]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[351,14]
>  [deprecation] isDirectory(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[353,16]
>  [deprecation] isDirectory(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[400,14]
>  [deprecation] isFile(Path) in FileSystem has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java:[759,14]
>  [deprecation] isFile(Path) in FileSystem has been deprecat

[jira] [Commented] (HADOOP-14104) Client should always ask namenode for kms provider path.

2017-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883960#comment-15883960
 ] 

Hadoop QA commented on HADOOP-14104:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
27s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-client generated 1 new 
+ 1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 27s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
20s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}175m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
|   | hadoop.hdfs.TestDFSUtil |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14104 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12854608/HADOOP-14104-trunk.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 6936cba05be6 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d2b3ba9 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11714/artifact/patchprocess/diff-javadoc-javad

[jira] [Commented] (HADOOP-6801) io.sort.mb and io.sort.factor were renamed and moved to mapreduce but are still in CommonConfigurationKeysPublic.java and used in SequenceFile.java

2017-02-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883958#comment-15883958
 ] 

ASF GitHub Bot commented on HADOOP-6801:


Github user QwertyManiac commented on the issue:

https://github.com/apache/hadoop/pull/146
  
Rebased the patch.


> io.sort.mb and io.sort.factor were renamed and moved to mapreduce but are 
> still in CommonConfigurationKeysPublic.java and used in SequenceFile.java
> ---
>
> Key: HADOOP-6801
> URL: https://issues.apache.org/jira/browse/HADOOP-6801
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Erik Steffl
>Assignee: Harsh J
>Priority: Minor
> Attachments: HADOOP-6801.05.patch, HADOOP-6801.r1.diff, 
> HADOOP-6801.r2.diff, HADOOP-6801.r3.diff, HADOOP-6801.r4.diff, 
> HADOOP-6801.r5.diff
>
>
> Following configuration keys in CommonConfigurationKeysPublic.java (former 
> CommonConfigurationKeys.java):
> public static final String  IO_SORT_MB_KEY = "io.sort.mb";
> public static final String  IO_SORT_FACTOR_KEY = "io.sort.factor";
> are partially moved:
>   - they were renamed to mapreduce.task.io.sort.mb and 
> mapreduce.task.io.sort.factor respectively
>   - they were moved to mapreduce project, documented in mapred-default.xml
> However:
>   - they are still listed in CommonConfigurationKeysPublic.java as quoted 
> above
>   - strings "io.sort.mb" and "io.sort.factor" are used in SequenceFile.java 
> in Hadoop Common project
> Not sure what the solution is, these constants should probably be removed 
> from CommonConfigurationKeysPublic.java but I am not sure what's the best 
> solution for SequenceFile.java.
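
For reference, Hadoop's usual mechanism for key renames is the deprecation 
table in {{Configuration}}. A minimal sketch of forwarding the old names (the 
mapping below is illustrative, not the committed fix):

{code}
import org.apache.hadoop.conf.Configuration;

public class SortKeyDeprecationSketch {
  static {
    // Reads and writes of the old keys are transparently forwarded.
    Configuration.addDeprecation("io.sort.mb", "mapreduce.task.io.sort.mb");
    Configuration.addDeprecation("io.sort.factor", "mapreduce.task.io.sort.factor");
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.set("io.sort.mb", "100");  // old key, logs a deprecation warning
    // The value is visible under the new key via the deprecation mapping.
    System.out.println(conf.get("mapreduce.task.io.sort.mb"));  // prints 100
  }
}
{code}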






[jira] [Created] (HADOOP-14121) Fix occasional BindException in TestNameNodeMetricsLogger

2017-02-24 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-14121:
--

 Summary: Fix occasional BindException in TestNameNodeMetricsLogger
 Key: HADOOP-14121
 URL: https://issues.apache.org/jira/browse/HADOOP-14121
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


TestNameNodeMetricsLogger occasionally hits BindException even though it uses 
ServerSocketUtil.getPort to get a random port number.

It's better to specify a port number of 0 and let the OS allocate an unused 
port.
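
The port-0 pattern looks roughly like the following; a sketch with plain 
{{java.net}}, not the actual test code:

{code}
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class EphemeralPortSketch {
  public static void main(String[] args) throws Exception {
    // Binding to port 0 lets the OS pick a free ephemeral port, so there
    // is no find-then-bind window for another process to steal the port.
    try (ServerSocket socket = new ServerSocket()) {
      socket.bind(new InetSocketAddress("localhost", 0));
      System.out.println("bound to port " + socket.getLocalPort());
    }
  }
}
{code}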






[jira] [Updated] (HADOOP-14104) Client should always ask namenode for kms provider path.

2017-02-24 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HADOOP-14104:

Status: Open  (was: Patch Available)

Canceling the patch to address review comments.
Will post a new patch shortly.

> Client should always ask namenode for kms provider path.
> 
>
> Key: HADOOP-14104
> URL: https://issues.apache.org/jira/browse/HADOOP-14104
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14104-trunk.patch
>
>
> According to the current implementation of the kms provider in the client 
> conf, there can only be one kms.
> In a multi-cluster environment, if a client is reading encrypted data from 
> multiple clusters it will only get a kms token for the local cluster.
> Not sure whether the target version is correct or not.






[jira] [Comment Edited] (HADOOP-14094) Rethink S3GuardTool options

2017-02-24 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883831#comment-15883831
 ] 

Sean Mackrory edited comment on HADOOP-14094 at 2/25/17 12:33 AM:
--

Attaching a patch to get started on this. So far, it does the following:

* Change s3a -> s3guard when invoking the command from the Hadoop CLI (e.g. 
hadoop s3guard init ...)
* Change single-letter options to full-word options to avoid future collisions 
(e.g. hadoop s3guard init -read 500 -write 100). I've thought about using 
double-dashes as that's the convention with long-form options, but *most* other 
Hadoop commands use single dashes, so it's better to be consistent with most of 
the rest of Hadoop, I guess. That also has the bonus of not *needing* a 
framework for processing the command-line (e.g. jcommander).
* Change the argument to initMetadataStore from "create" to "forceCreate". In 
other words, init can force the table to be created if it doesn't exist, but 
other commands will create the table IFF fs.s3a.s3guard.ddb.table.create=true 
(or equivalent). Of course this is muddying the line between implementation 
details a bit, which is one thing I'd like to improve.
* Change all docs to use s3a://bucket instead of s3a://bucket/path/ - maybe we 
should allow commands to operate on subdirectories, but for now everything is 
at the bucket / table level so we shouldn't advertise otherwise.
* Improve the way exceptions are logged, especially in the case of an invalid 
argument. Before, the -h flag wasn't doing anything and you wouldn't see usage 
information for a specific command if you provided a bad argument, but now you 
do.

Before committing this, I would still like to:

* Add more detail to USAGE messages, as well as output logged to confirm the 
results of running the command
* Figure out how to make the commands more generic. It's nice if everything 
implementation-specific can be encoded into a URI, but that still requires 
custom code for any implementation. 

I'd also like to make the commands more generic and make sure it's clear 
exactly how a connection is going to be configured (e.g. for various 
combinations of providing / not providing an endpoint in configs, providing / 
not providing an S3 URL, providing / not providing the -e flag). A few thoughts 
I've had along these lines:

* Split -m option into separate options for the implementation (e.g. -impl 
DynamoDB, we can have shortcuts for built-in classes, but also accept a full 
class name, -impl com.example.CustomMetadataStore) and then have other 
implementation specific flags for things like the table name.
* Remove -e option, and require endpoints be specified with "-D 
fs.s3a.s3guard.ddb.endpoint=...". It's very much an implementation-specific 
configuration, but it fills a role that can also be filled by specifying the S3 
URL, so it's a bit messy too. The -e option is already not always allowed to be 
used (there was some discussion along those lines in HADOOP-13995).


was (Author: mackrorysd):
Attaching a patch to get started on this. So far, it does the following:

* Change s3a -> s3guard when invoking the command from the Hadoop CLI (e.g. 
hadoop s3guard init ...)
* Change single-letter options to full-word options to avoid future collisions 
(e.g. hadoop s3guard init -read 500 -write 100). I've thought about using 
double-dashes as that's the convention with long-form options, but *most* other 
Hadoop commands use single dashes, so it's better to be consistent with most 
of the rest of Hadoop, I guess.
* Change the argument to initMetadataStore from "create" to "forceCreate". In 
other words, init can force the table to be created if it doesn't exist, but 
other commands will create the table IFF fs.s3a.s3guard.ddb.table.create=true 
(or equivalent). Of course this is muddying the line between implementation 
details a bit, which is one thing I'd like to improve.
* Change all docs to use s3a://bucket instead of s3a://bucket/path/ - maybe we 
should allow commands to operate on subdirectories, but for now everything is 
at the bucket / table level so we shouldn't advertise otherwise.
* Improve the way exceptions are logged, especially in the case of an invalid 
argument. Before, the -h flag wasn't doing anything and you wouldn't see usage 
information for a specific command if you provided a bad argument, but now you 
do.

Before committing this, I would still like to:

* Add more detail to USAGE messages, as well as output logged to confirm the 
results of running the command
* Figure out how to make the commands more generic. It's nice if everything 
implementation-specific can be encoded into a URI, but that still requires 
custom code for any implementation. 

I'd also like to make the commands more generic and make sure it's clear 
exactly how a connection is going to be configured (e.g. for various 
combinations of provid

[jira] [Updated] (HADOOP-14094) Rethink S3GuardTool options

2017-02-24 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14094:
---
Attachment: HADOOP-14094-HADOOP-13345.001.patch

Attaching a patch to get started on this. So far, it does the following:

* Change s3a -> s3guard when invoking the command from the Hadoop CLI (e.g. 
hadoop s3guard init ...)
* Change single-letter options to full-word options to avoid future collisions 
(e.g. hadoop s3guard init -read 500 -write 100). I've thought about using 
double-dashes as that's the convention with long-form options, but *most* other 
Hadoop commands use single dashes, so it's better to be consistent with most 
of the rest of Hadoop, I guess.
* Change the argument to initMetadataStore from "create" to "forceCreate". In 
other words, init can force the table to be created if it doesn't exist, but 
other commands will create the table IFF fs.s3a.s3guard.ddb.table.create=true 
(or equivalent). Of course this is muddying the line between implementation 
details a bit, which is one thing I'd like to improve.
* Change all docs to use s3a://bucket instead of s3a://bucket/path/ - maybe we 
should allow commands to operate on subdirectories, but for now everything is 
at the bucket / table level so we shouldn't advertise otherwise.
* Improve the way exceptions are logged, especially in the case of an invalid 
argument. Before, the -h flag wasn't doing anything and you wouldn't see usage 
information for a specific command if you provided a bad argument, but now you 
do.

Before committing this, I would still like to:

* Add more detail to USAGE messages, as well as output logged to confirm the 
results of running the command
* Figure out how to make the commands more generic. It's nice if everything 
implementation-specific can be encoded into a URI, but that still requires 
custom code for any implementation. 

I'd also like to make the commands more generic and make sure it's clear 
exactly how a connection is going to be configured (e.g. for various 
combinations of providing / not providing an endpoint in configs, providing / 
not providing an S3 URL, providing / not providing the -e flag). A few thoughts 
I've had along these lines:

* Split -m option into separate options for the implementation (e.g. -impl 
DynamoDB, we can have shortcuts for built-in classes, but also accept a full 
class name, -impl com.example.CustomMetadataStore) and then have other 
implementation specific flags for things like the table name.
* Remove -e option, and require endpoints be specified with "-D 
fs.s3a.s3guard.ddb.endpoint=...". It's very much an implementation-specific 
configuration, but it fills a role that can also be filled by specifying the S3 
URL, so it's a bit messy too. The -e option is already not always allowed to be 
used (there was some discussion along those lines in HADOOP-13995).
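
For the {{-D}} route, Hadoop already has generic plumbing via {{ToolRunner}} 
and {{GenericOptionsParser}}. A minimal sketch, with an illustrative tool class 
(not the actual S3GuardTool):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class S3GuardToolSketch extends Configured implements Tool {
  @Override
  public int run(String[] args) throws Exception {
    // ToolRunner has already parsed generic options such as
    //   -D fs.s3a.s3guard.ddb.endpoint=...
    // into the Configuration before run() is invoked.
    System.out.println("endpoint = "
        + getConf().get("fs.s3a.s3guard.ddb.endpoint", "(unset)"));
    return 0;
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new Configuration(), new S3GuardToolSketch(), args));
  }
}
{code}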

> Rethink S3GuardTool options
> ---
>
> Key: HADOOP-14094
> URL: https://issues.apache.org/jira/browse/HADOOP-14094
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
> Attachments: HADOOP-14094-HADOOP-13345.001.patch
>
>
> I think we need to rework the S3GuardTool options. A couple of problems I've 
> observed in the patches I've done on top of that and seeing other developers 
> trying it out:
> * We should probably wrap the current commands in an S3Guard-specific 
> command, since 'init', 'destroy', etc. don't touch the buckets at all.
> * Convert to whole-word options, as the single-letter options are already 
> getting overloaded. Some patches I've submitted have added functionality 
> where the obvious flag is already in use (e.g. -r for region, and read 
> throughput, -m for minutes, and metadatastore uri).  I may do this early as 
> part of HADOOP-14090.
> * We have some options that must be in the config in some cases, and can be 
> in the command in other cases. But I've seen someone try to specify the table 
> name in the config and leave out the -m option, with no luck. Also, since 
> commands hard-code table auto-creation, you might have configured table 
> auto-creation, try to import to a non-existent table, and it tells you table 
> auto-creation is off.
> We need a more consistent policy for how things should get configured that 
> addresses these problems and future-proofs the command a bit more.






[jira] [Commented] (HADOOP-14104) Client should always ask namenode for kms provider path.

2017-02-24 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883797#comment-15883797
 ] 

Rushabh S Shah commented on HADOOP-14104:
-

bq. PB can differentiate between not specified, empty string, and set.
Please ignore my previous comment. I understood what you actually meant.
I will update the patch by Monday.

> Client should always ask namenode for kms provider path.
> 
>
> Key: HADOOP-14104
> URL: https://issues.apache.org/jira/browse/HADOOP-14104
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14104-trunk.patch
>
>
> According to the current implementation of the kms provider in the client 
> conf, there can only be one kms.
> In a multi-cluster environment, if a client is reading encrypted data from 
> multiple clusters it will only get a kms token for the local cluster.
> Not sure whether the target version is correct or not.






[jira] [Resolved] (HADOOP-13994) explicitly declare the commons-lang3 dependency as 3.4

2017-02-24 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu resolved HADOOP-13994.

Resolution: Fixed

> explicitly declare the commons-lang3 dependency as 3.4
> --
>
> Key: HADOOP-13994
> URL: https://issues.apache.org/jira/browse/HADOOP-13994
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, fs/azure, fs/s3
>Affects Versions: 3.0.0-alpha2, HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-13994-HADOOP-13345-001.patch, 
> HADOOP-13994-HADOOP-13445-001.patch
>
>
> Other people aren't seeing this (yet?), but unless you explicitly exclude v 
> 3.4 of commons-lang3 from the azure build (which HADOOP-13660 does), then the 
> dependency declaration of commons-lang3 v 3.3.2 is creating a resolution 
> conflict. That's a dependency only needed for the local dynamodb & tests.
> I propose to fix this in s3guard by explicitly declaring the version used in 
> the tests to be that of the azure-storage one, excluding that you get for 
> free. It doesn't impact anything shipped in production, but puts the hadoop 
> build in control of what versions of commons-lang are coming in everywhere by 
> way of the commons-config version declared in hadoop-common






[jira] [Commented] (HADOOP-14104) Client should always ask namenode for kms provider path.

2017-02-24 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883787#comment-15883787
 ] 

Rushabh S Shah commented on HADOOP-14104:
-

[~andrew.wang]: Thanks for the quick review.
bq. Could you update the documentation and core-default.xml about this new 
feature?
Will update in the next revision.

bq. Do we actually need the PB boolean? PB can differentiate between not 
specified, empty string, and set
The default value for a string field in protobuf is "" (empty string).
If the client receives an empty string, it either means it's talking to an old 
namenode or the namenode does not have the provider path config set.
Maybe if the namenode doesn't have the provider path set, I can return a null 
value and, on the client side, differentiate between null and empty string. 
But I need to test whether protobuf converts a null string to an empty string 
in the response.
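
For what it's worth, proto2 optional fields carry presence information, so the 
generated Java can distinguish "never set" from "set to empty string". A sketch 
with an illustrative message and field name (not the actual hdfs.proto):

{code}
// proto2 syntax (the default, and what Hadoop RPC used at the time).
// Message and field names here are illustrative.
message FsServerDefaultsSketchProto {
  optional string keyProviderUri = 1;
}

// The generated Java exposes presence explicitly:
//   proto.hasKeyProviderUri() -> false when an old namenode omits the field
//   proto.getKeyProviderUri() -> "" both as the default and when set empty
// so hasKeyProviderUri() alone separates "old NN" from "no provider
// configured", without a separate boolean field.
{code}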


> Client should always ask namenode for kms provider path.
> 
>
> Key: HADOOP-14104
> URL: https://issues.apache.org/jira/browse/HADOOP-14104
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14104-trunk.patch
>
>
> According to the current implementation of the kms provider in the client 
> conf, there can only be one kms.
> In a multi-cluster environment, if a client is reading encrypted data from 
> multiple clusters it will only get a kms token for the local cluster.
> Not sure whether the target version is correct or not.






[jira] [Commented] (HADOOP-14104) Client should always ask namenode for kms provider path.

2017-02-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883770#comment-15883770
 ] 

Andrew Wang commented on HADOOP-14104:
--

Thanks for posting the patch Rushabh! I like this idea, it'll simplify client 
configuration.

I took a quick look; a few questions:

* Could you update the documentation and core-default.xml about this new 
feature?
* Do we actually need the PB boolean? PB can differentiate between not 
specified, empty string, and set. IIUC, if it's not specified, then it's an old 
NN and we can fall back.

> Client should always ask namenode for kms provider path.
> 
>
> Key: HADOOP-14104
> URL: https://issues.apache.org/jira/browse/HADOOP-14104
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14104-trunk.patch
>
>
> According to the current implementation of the kms provider in the client 
> conf, there can only be one kms.
> In a multi-cluster environment, if a client is reading encrypted data from 
> multiple clusters it will only get a kms token for the local cluster.
> Not sure whether the target version is correct or not.






[jira] [Updated] (HADOOP-14104) Client should always ask namenode for kms provider path.

2017-02-24 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HADOOP-14104:

Status: Patch Available  (was: Open)

Added 2 member variables (optional in the protobuf format) to FsServerDefaults:
1. namenodeSupportsProviderUri: to differentiate between an empty 
keyProviderUri and a namenode that does not support encryption zones, 
preserving backward compatibility.
2. keyProviderUri: the key provider path returned by the namenode.

A new namenode returns namenodeSupportsProviderUri as true.
If namenodeSupportsProviderUri is true, the client will always trust whatever 
the namenode returned in keyProviderUri.
If namenodeSupportsProviderUri is false (meaning the namenode has not been 
upgraded), the client will use its own conf as before, thereby preserving 
backwards compatibility.

{code:title=DFSClient.java|borderStyle=solid}
  public boolean isHDFSEncryptionEnabled() {
try {
  return DFSUtilClient.isHDFSEncryptionEnabled(getKeyProviderUri());
} catch(IOException ioe) {
  return false;
}
  }
{code}

{code:title=DFSClient.java|borderStyle=solid}
  private String getKeyProviderUri() throws IOException {
FsServerDefaults serverDefaults = getServerDefaults();
return serverDefaults.getNamenodeSupportsProviderUri() ?
serverDefaults.getKeyProviderUri() : conf.getTrimmed(
CommonConfigurationKeysPublic.HADOOP_SECURITY_KEY_PROVIDER_PATH, "");
  }
{code}

{{DFSClient#getServerDefaults}} throws IOException (StandbyException).
If {{DFSUtilClient#isHDFSEncryptionEnabled}} throws IOException then I am 
returning false.
Not sure what I should return.
Comments are welcome.

Please review.



> Client should always ask namenode for kms provider path.
> 
>
> Key: HADOOP-14104
> URL: https://issues.apache.org/jira/browse/HADOOP-14104
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14104-trunk.patch
>
>
> According to the current implementation of the kms provider in the client 
> conf, there can only be one kms.
> In a multi-cluster environment, if a client is reading encrypted data from 
> multiple clusters it will only get a kms token for the local cluster.
> Not sure whether the target version is correct or not.






[jira] [Commented] (HADOOP-14028) S3A BlockOutputStreams doesn't delete temporary files in multipart uploads or handle part upload failures

2017-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883736#comment-15883736
 ] 

Hadoop QA commented on HADOOP-14028:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 6 
new + 31 unchanged - 1 fixed = 37 total (was 32) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_121. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-14028 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12854597/HADOOP-14028-branch-2-009.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c6458574a225 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 5c509f5 |
| Default Java | 1.7.0_121 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_121 
/usr/lib

[jira] [Updated] (HADOOP-14104) Client should always ask namenode for kms provider path.

2017-02-24 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HADOOP-14104:

Attachment: HADOOP-14104-trunk.patch

> Client should always ask namenode for kms provider path.
> 
>
> Key: HADOOP-14104
> URL: https://issues.apache.org/jira/browse/HADOOP-14104
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14104-trunk.patch
>
>
> According to the current implementation of the kms provider in the client 
> conf, there can only be one kms.
> In a multi-cluster environment, if a client is reading encrypted data from 
> multiple clusters it will only get a kms token for the local cluster.
> Not sure whether the target version is correct or not.






[jira] [Reopened] (HADOOP-13994) explicitly declare the commons-lang3 dependency as 3.4

2017-02-24 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reopened HADOOP-13994:


> explicitly declare the commons-lang3 dependency as 3.4
> --
>
> Key: HADOOP-13994
> URL: https://issues.apache.org/jira/browse/HADOOP-13994
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, fs/azure, fs/s3
>Affects Versions: 3.0.0-alpha2, HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-13994-HADOOP-13345-001.patch, 
> HADOOP-13994-HADOOP-13445-001.patch
>
>
> Other people aren't seeing this (yet?), but unless you explicitly exclude v 
> 3.4 of commons-lang3 from the azure build (which HADOOP-13660 does), then the 
> dependency declaration of commons-lang3 v 3.3.2 is creating a resolution 
> conflict. That's a dependency only needed for the local dynamodb & tests.
> I propose to fix this in s3guard by explicitly declaring the version used in 
> the tests to be that of the azure-storage one, excluding that you get for 
> free. It doesn't impact anything shipped in production, but puts the hadoop 
> build in control of what versions of commons-lang are coming in everywhere by 
> way of the commons-config version declared in hadoop-common






[jira] [Updated] (HADOOP-14028) S3A BlockOutputStreams doesn't delete temporary files in multipart uploads or handle part upload failures

2017-02-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14028:

Status: Patch Available  (was: Open)

> S3A BlockOutputStreams doesn't delete temporary files in multipart uploads or 
> handle part upload failures
> -
>
> Key: HADOOP-14028
> URL: https://issues.apache.org/jira/browse/HADOOP-14028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: JDK 8 + ORC 1.3.0 + hadoop-aws 3.0.0-alpha2
>Reporter: Seth Fitzsimmons
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-14028-006.patch, HADOOP-14028-007.patch, 
> HADOOP-14028-branch-2-001.patch, HADOOP-14028-branch-2-008.patch, 
> HADOOP-14028-branch-2-009.patch, HADOOP-14028-branch-2.8-002.patch, 
> HADOOP-14028-branch-2.8-003.patch, HADOOP-14028-branch-2.8-004.patch, 
> HADOOP-14028-branch-2.8-005.patch, HADOOP-14028-branch-2.8-007.patch, 
> HADOOP-14028-branch-2.8-008.patch
>
>
> I have `fs.s3a.fast.upload` enabled with 3.0.0-alpha2 (it's exactly what I 
> was looking for after running into the same OOM problems) and don't see it 
> cleaning up the disk-cached blocks.
> I'm generating a ~50GB file on an instance with ~6GB free when the process 
> starts. My expectation is that local copies of the blocks would be deleted 
> after those parts finish uploading, but I'm seeing more than 15 blocks in 
> /tmp (and none of them have been deleted thus far).
> I see that DiskBlock deletes temporary files when closed, but is it closed 
> after individual blocks have finished uploading or when the entire file has 
> been fully written to the FS (full upload completed, including all parts)?
> As a temporary workaround to avoid running out of space, I'm listing files, 
> sorting by atime, and deleting anything older than the first 20: `ls -ut | 
> tail -n +21 | xargs rm`
> Steve Loughran says:
> > They should be deleted as soon as the upload completes; the close() call 
> > that the AWS httpclient makes on the input stream triggers the deletion. 
> > Though there aren't tests for it, as I recall.






[jira] [Updated] (HADOOP-10584) ActiveStandbyElector goes down if ZK quorum become unavailable

2017-02-24 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-10584:
--
Attachment: HADOOP-10584.prelim.patch

Here's a rough first patch that removes the fatal errors from the 
{{ActiveStandbyElector}}.  I haven't done much testing yet, but I wanted to get 
it posted for comments.  This patch also eliminates wrapping 
{{InterruptedException}} in {{IOException}} so that it's possible to tell the 
two apart.  Feedback welcome!
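
A toy illustration of why the unwrapping helps (the helper below is 
hypothetical, not from the patch): with distinct exception types, a caller can 
treat an interrupt as shutdown and an I/O failure as retryable.

{code}
import java.io.IOException;

public class UnwrappedInterruptSketch {
  // Hypothetical stand-in for a ZK operation that may fail either way.
  static void doZkOperation() throws IOException, InterruptedException {
    throw new IOException("simulated ZK connection loss");
  }

  public static void main(String[] args) {
    try {
      doZkOperation();
    } catch (InterruptedException ie) {
      // An interrupt means shut down: restore the flag and stop retrying.
      Thread.currentThread().interrupt();
    } catch (IOException ioe) {
      // Transient ZK trouble: log it, become standby, keep reconnecting.
      System.out.println("retryable: " + ioe.getMessage());
    }
  }
}
{code}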

> ActiveStandbyElector goes down if ZK quorum become unavailable
> --
>
> Key: HADOOP-10584
> URL: https://issues.apache.org/jira/browse/HADOOP-10584
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: hadoop-10584-prelim.patch, HADOOP-10584.prelim.patch, 
> rm.log
>
>
> ActiveStandbyElector retries operations a few times. If the ZK quorum 
> itself is down, the elector goes down too, and the daemons will have to be 
> brought up again. 
> Instead, it should log the fact that it is unable to talk to ZK, call 
> becomeStandby on its client, and continue trying to connect to ZK.
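
In outline, the behaviour the description asks for could look like the
following sketch; {{becomeStandby()}} mirrors the real elector callback,
everything else is hypothetical glue:

{code}
import java.io.IOException;

// Sketch only: keep trying to rejoin the election instead of dying.
abstract class ReconnectingElectorSketch {
  private volatile boolean stopped;
  private static final long RETRY_INTERVAL_MS = 5000;

  abstract void connectToZkAndJoinElection() throws IOException;
  abstract void becomeStandby();

  void run() throws InterruptedException {
    while (!stopped) {
      try {
        connectToZkAndJoinElection();
        return;                          // joined the election; done here
      } catch (IOException e) {
        System.err.println("Unable to talk to ZK quorum, will retry: " + e);
        becomeStandby();                 // drop to standby rather than exit
        Thread.sleep(RETRY_INTERVAL_MS); // back off, then reconnect
      }
    }
  }
}
{code}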






[jira] [Assigned] (HADOOP-10584) ActiveStandbyElector goes down if ZK quorum become unavailable

2017-02-24 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton reassigned HADOOP-10584:
-

Assignee: Daniel Templeton

> ActiveStandbyElector goes down if ZK quorum become unavailable
> --
>
> Key: HADOOP-10584
> URL: https://issues.apache.org/jira/browse/HADOOP-10584
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: hadoop-10584-prelim.patch, rm.log
>
>
> ActiveStandbyElector retries operations a few times. If the ZK quorum 
> itself is down, the elector goes down too, and the daemons will have to be 
> brought up again. 
> Instead, it should log the fact that it is unable to talk to ZK, call 
> becomeStandby on its client, and continue trying to connect to ZK.






[jira] [Updated] (HADOOP-14028) S3A BlockOutputStreams doesn't delete temporary files in multipart uploads or handle part upload failures

2017-02-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14028:

Attachment: HADOOP-14028-branch-2-009.patch

Interesting: the branch-2 yetus checkstyle run complained about stuff that was 
already in the -2.8 branch patch but hadn't been picked up on there.

Patch 009 quiets checkstyle as far as makes sense; no other changes.

Tested at scale against S3 Ireland with AES256 encryption turned on.

> S3A BlockOutputStreams doesn't delete temporary files in multipart uploads or 
> handle part upload failures
> -
>
> Key: HADOOP-14028
> URL: https://issues.apache.org/jira/browse/HADOOP-14028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: JDK 8 + ORC 1.3.0 + hadoop-aws 3.0.0-alpha2
>Reporter: Seth Fitzsimmons
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-14028-006.patch, HADOOP-14028-007.patch, 
> HADOOP-14028-branch-2-001.patch, HADOOP-14028-branch-2-008.patch, 
> HADOOP-14028-branch-2-009.patch, HADOOP-14028-branch-2.8-002.patch, 
> HADOOP-14028-branch-2.8-003.patch, HADOOP-14028-branch-2.8-004.patch, 
> HADOOP-14028-branch-2.8-005.patch, HADOOP-14028-branch-2.8-007.patch, 
> HADOOP-14028-branch-2.8-008.patch
>
>
> I have `fs.s3a.fast.upload` enabled with 3.0.0-alpha2 (it's exactly what I 
> was looking for after running into the same OOM problems) and don't see it 
> cleaning up the disk-cached blocks.
> I'm generating a ~50GB file on an instance with ~6GB free when the process 
> starts. My expectation is that local copies of the blocks would be deleted 
> after those parts finish uploading, but I'm seeing more than 15 blocks in 
> /tmp (and none of them have been deleted thus far).
> I see that DiskBlock deletes temporary files when closed, but is it closed 
> after individual blocks have finished uploading or when the entire file has 
> been fully written to the FS (full upload completed, including all parts)?
> As a temporary workaround to avoid running out of space, I'm listing files, 
> sorting by atime, and deleting anything older than the first 20: `ls -ut | 
> tail -n +21 | xargs rm`
> Steve Loughran says:
> > They should be deleted as soon as the upload completes; the close() call 
> > that the AWS httpclient makes on the input stream triggers the deletion. 
> > Though there aren't tests for it, as I recall.






[jira] [Updated] (HADOOP-14028) S3A BlockOutputStreams doesn't delete temporary files in multipart uploads or handle part upload failures

2017-02-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14028:

Status: Open  (was: Patch Available)

> S3A BlockOutputStreams doesn't delete temporary files in multipart uploads or 
> handle part upload failures
> -
>
> Key: HADOOP-14028
> URL: https://issues.apache.org/jira/browse/HADOOP-14028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: JDK 8 + ORC 1.3.0 + hadoop-aws 3.0.0-alpha2
>Reporter: Seth Fitzsimmons
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-14028-006.patch, HADOOP-14028-007.patch, 
> HADOOP-14028-branch-2-001.patch, HADOOP-14028-branch-2-008.patch, 
> HADOOP-14028-branch-2.8-002.patch, HADOOP-14028-branch-2.8-003.patch, 
> HADOOP-14028-branch-2.8-004.patch, HADOOP-14028-branch-2.8-005.patch, 
> HADOOP-14028-branch-2.8-007.patch, HADOOP-14028-branch-2.8-008.patch
>
>
> I have `fs.s3a.fast.upload` enabled with 3.0.0-alpha2 (it's exactly what I 
> was looking for after running into the same OOM problems) and don't see it 
> cleaning up the disk-cached blocks.
> I'm generating a ~50GB file on an instance with ~6GB free when the process 
> starts. My expectation is that local copies of the blocks would be deleted 
> after those parts finish uploading, but I'm seeing more than 15 blocks in 
> /tmp (and none of them have been deleted thus far).
> I see that DiskBlock deletes temporary files when closed, but is it closed 
> after individual blocks have finished uploading or when the entire file has 
> been fully written to the FS (full upload completed, including all parts)?
> As a temporary workaround to avoid running out of space, I'm listing files, 
> sorting by atime, and deleting anything older than the first 20: `ls -ut | 
> tail -n +21 | xargs rm`
> Steve Loughran says:
> > They should be deleted as soon as the upload completes; the close() call 
> > that the AWS httpclient makes on the input stream triggers the deletion. 
> > Though there aren't tests for it, as I recall.






[jira] [Commented] (HADOOP-14118) move jets3t into a dependency on hadoop-aws JAR

2017-02-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883617#comment-15883617
 ] 

Steve Loughran commented on HADOOP-14118:
-

+1


I normally turn off the s3n stuff unless I touch any of the FS contract tests. 
I turned them on here, and at first all the tests failed (the endpoint was S3 
Frankfurt). After switching to S3 Ireland, two failures remained. Looks like 
consistency, though the second one is weirder: I'd have expected that test to 
create something in the root path, but clearly it isn't. Nothing to do with 
this patch, though.



{code}
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.751 sec - in 
org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesDiskBlocks
Running org.apache.hadoop.fs.s3native.ITestJets3tNativeS3FileSystemContract
Tests run: 52, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 60.125 sec <<< 
FAILURE! - in 
org.apache.hadoop.fs.s3native.ITestJets3tNativeS3FileSystemContract
testListStatusForRoot(org.apache.hadoop.fs.s3native.ITestJets3tNativeS3FileSystemContract)
  Time elapsed: 0.299 sec  <<< FAILURE!
junit.framework.AssertionFailedError: Root directory is not empty;  
expected:<0> but was:<1>
at junit.framework.Assert.fail(Assert.java:57)
at junit.framework.Assert.failNotEquals(Assert.java:329)
at junit.framework.Assert.assertEquals(Assert.java:78)
at junit.framework.Assert.assertEquals(Assert.java:234)
at junit.framework.TestCase.assertEquals(TestCase.java:401)
at 
org.apache.hadoop.fs.s3native.NativeS3FileSystemContractBaseTest.testListStatusForRoot(NativeS3FileSystemContractBaseTest.java:66)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at junit.framework.TestCase.runTest(TestCase.java:176)
at junit.framework.TestCase.runBare(TestCase.java:141)
at junit.framework.TestResult$1.protect(TestResult.java:122)
at junit.framework.TestResult.runProtected(TestResult.java:142)
at junit.framework.TestResult.run(TestResult.java:125)
at junit.framework.TestCase.run(TestCase.java:129)
at junit.framework.TestSuite.runTest(TestSuite.java:255)
at junit.framework.TestSuite.run(TestSuite.java:250)
at 
org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

testLSRootDir(org.apache.hadoop.fs.s3native.ITestJets3tNativeS3FileSystemContract)
  Time elapsed: 1.274 sec  <<< ERROR!
java.io.FileNotFoundException: File 
s3n://hwdev-steve-ireland-new/Users/stevel/Projects/hadoop-trunk/hadoop-tools/hadoop-aws/target/test/data/62zuTzHEJZ
 does not exist.
at 
org.apache.hadoop.fs.s3native.NativeS3FileSystem.listStatus(NativeS3FileSystem.java:603)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1823)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1865)
at org.apache.hadoop.fs.FileSystem$4.(FileSystem.java:2027)
at 
org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:2026)
at 
org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:2009)
at 
org.apache.hadoop.fs.FileSystem$5.handleFileStat(FileSystem.java:2167)
at org.apache.hadoop.fs.FileSystem$5.hasNext(FileSystem.java:2144)
at 
org.apache.hadoop.fs.FileSystemContractBaseTest.assertListFilesFinds(FileSystemContractBaseTest.java:755)
at 
org.apache.hadoop.fs.FileSystemContractBaseTest.testLSRootDir(FileSystemContractBaseTest.java:740)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at junit.framework.TestCase.runTest(TestCase.java:176)
at junit.framework.TestCase.runBare(TestCase.java:141)
at junit.framework.TestResult$1.protect(TestResult.java:122)
at junit.framework.TestResult.runProtected(TestResult.java:142)
  

[jira] [Commented] (HADOOP-13994) explicitly declare the commons-lang3 dependency as 3.4

2017-02-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883591#comment-15883591
 ] 

Steve Loughran commented on HADOOP-13994:
-

Just reopen this and sync up the diff.

It might be good to split it into the "bits for trunk" and the special case 
for DDB.

> explicitly declare the commons-lang3 dependency as 3.4
> --
>
> Key: HADOOP-13994
> URL: https://issues.apache.org/jira/browse/HADOOP-13994
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, fs/azure, fs/s3
>Affects Versions: 3.0.0-alpha2, HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-13994-HADOOP-13345-001.patch, 
> HADOOP-13994-HADOOP-13445-001.patch
>
>
> Other people aren't seeing this (yet?), but unless you explicitly exclude v 
> 3.4 of commons-lang3 from the azure build (which HADOOP-13660 does), the 
> dependency declaration of commons-lang3 v 3.3.2 creates a resolution 
> conflict. That's a dependency only needed for the local dynamodb & tests.
> I propose to fix this in s3guard by explicitly declaring the version used in 
> the tests to be that of the azure-storage one, and excluding the one you get 
> for free. It doesn't impact anything shipped in production, but it puts the 
> hadoop build in control of which versions of commons-lang come in 
> everywhere, by way of the commons-config version declared in hadoop-common.






[jira] [Commented] (HADOOP-14028) S3A BlockOutputStreams doesn't delete temporary files in multipart uploads or handle part upload failures

2017-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883586#comment-15883586
 ] 

Hadoop QA commented on HADOOP-14028:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
37s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 11 
new + 31 unchanged - 1 fixed = 42 total (was 32) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_121. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-14028 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12854576/HADOOP-14028-branch-2-008.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 08d517f49f19 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 5c509f5 |
| Default Java | 1.7.0_121 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_121 
/usr/li

[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-02-24 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883581#comment-15883581
 ] 

Junping Du commented on HADOOP-13866:
-

Just a heads-up that YARN-6143 is already fixed, so there is no other blocker 
now for the 2.8.0 release. We need to make a quick decision on whether or not 
to get this into 2.8.0. CC [~vinodkv], [~jlowe] and [~sjlee0].

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch, 
> HADOOP-13866.v7.patch, HADOOP-13866.v8.patch, HADOOP-13866.v8.patch, 
> HADOOP-13866.v8.patch, HADOOP-13866.v9.patch
>
>
> netty-all 4.1.1.Final is a stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}
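
One way to confirm at runtime which netty artifacts actually won on the
classpath, assuming netty 4.x (whose {{io.netty.util.Version}} reports the
bundled build metadata):

{code}
import io.netty.util.Version;
import java.util.Map;

public class NettyVersionCheck {
  public static void main(String[] args) {
    // Prints one entry per netty artifact discovered on the classpath.
    for (Map.Entry<String, Version> e : Version.identify().entrySet()) {
      System.out.println(e.getKey() + " -> " + e.getValue());
    }
  }
}
{code}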






[jira] [Commented] (HADOOP-14028) S3A BlockOutputStreams doesn't delete temporary files in multipart uploads or handle part upload failures

2017-02-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883580#comment-15883580
 ] 

Steve Loughran commented on HADOOP-14028:
-

Seth: thanks for that.

Regarding the branch-2 patch (which I consider +1'd anyway; it's just the 
small diff going through yetus): tested against S3 Frankfurt with -Dscale and 
32M files.

> S3A BlockOutputStreams doesn't delete temporary files in multipart uploads or 
> handle part upload failures
> -
>
> Key: HADOOP-14028
> URL: https://issues.apache.org/jira/browse/HADOOP-14028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: JDK 8 + ORC 1.3.0 + hadoop-aws 3.0.0-alpha2
>Reporter: Seth Fitzsimmons
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-14028-006.patch, HADOOP-14028-007.patch, 
> HADOOP-14028-branch-2-001.patch, HADOOP-14028-branch-2-008.patch, 
> HADOOP-14028-branch-2.8-002.patch, HADOOP-14028-branch-2.8-003.patch, 
> HADOOP-14028-branch-2.8-004.patch, HADOOP-14028-branch-2.8-005.patch, 
> HADOOP-14028-branch-2.8-007.patch, HADOOP-14028-branch-2.8-008.patch
>
>
> I have `fs.s3a.fast.upload` enabled with 3.0.0-alpha2 (it's exactly what I 
> was looking for after running into the same OOM problems) and don't see it 
> cleaning up the disk-cached blocks.
> I'm generating a ~50GB file on an instance with ~6GB free when the process 
> starts. My expectation is that local copies of the blocks would be deleted 
> after those parts finish uploading, but I'm seeing more than 15 blocks in 
> /tmp (and none of them have been deleted thus far).
> I see that DiskBlock deletes temporary files when closed, but is it closed 
> after individual blocks have finished uploading or when the entire file has 
> been fully written to the FS (full upload completed, including all parts)?
> As a temporary workaround to avoid running out of space, I'm listing files, 
> sorting by atime, and deleting anything older than the first 20: `ls -ut | 
> tail -n +21 | xargs rm`
> Steve Loughran says:
> > They should be deleted as soon as the upload completes; the close() call 
> > that the AWS httpclient makes on the input stream triggers the deletion. 
> > Though there aren't tests for it, as I recall.






[jira] [Commented] (HADOOP-14028) S3A BlockOutputStreams doesn't delete temporary files in multipart uploads or handle part upload failures

2017-02-24 Thread Seth Fitzsimmons (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883571#comment-15883571
 ] 

Seth Fitzsimmons commented on HADOOP-14028:
---

You can disregard my comments about prematurely / incorrectly closing the 
output file when the process terminates. It turns out that the input producer 
was the process being OOM killed, so I need to fix handling of premature end of 
inputs.

> S3A BlockOutputStreams doesn't delete temporary files in multipart uploads or 
> handle part upload failures
> -
>
> Key: HADOOP-14028
> URL: https://issues.apache.org/jira/browse/HADOOP-14028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: JDK 8 + ORC 1.3.0 + hadoop-aws 3.0.0-alpha2
>Reporter: Seth Fitzsimmons
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-14028-006.patch, HADOOP-14028-007.patch, 
> HADOOP-14028-branch-2-001.patch, HADOOP-14028-branch-2-008.patch, 
> HADOOP-14028-branch-2.8-002.patch, HADOOP-14028-branch-2.8-003.patch, 
> HADOOP-14028-branch-2.8-004.patch, HADOOP-14028-branch-2.8-005.patch, 
> HADOOP-14028-branch-2.8-007.patch, HADOOP-14028-branch-2.8-008.patch
>
>
> I have `fs.s3a.fast.upload` enabled with 3.0.0-alpha2 (it's exactly what I 
> was looking for after running into the same OOM problems) and don't see it 
> cleaning up the disk-cached blocks.
> I'm generating a ~50GB file on an instance with ~6GB free when the process 
> starts. My expectation is that local copies of the blocks would be deleted 
> after those parts finish uploading, but I'm seeing more than 15 blocks in 
> /tmp (and none of them have been deleted thus far).
> I see that DiskBlock deletes temporary files when closed, but is it closed 
> after individual blocks have finished uploading or when the entire file has 
> been fully written to the FS (full upload completed, including all parts)?
> As a temporary workaround to avoid running out of space, I'm listing files, 
> sorting by atime, and deleting anything older than the first 20: `ls -ut | 
> tail -n +21 | xargs rm`
> Steve Loughran says:
> > They should be deleted as soon as the upload completes; the close() call 
> > that the AWS httpclient makes on the input stream triggers the deletion. 
> > Though there aren't tests for it, as I recall.






[jira] [Updated] (HADOOP-14028) S3A BlockOutputStreams doesn't delete temporary files in multipart uploads or handle part upload failures

2017-02-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14028:

Attachment: HADOOP-14028-branch-2-008.patch

Committed to branch-2.8. Here's the branch-2 patch for checkstyle to look at; 
it's slightly different due to some minor merge conflicts with the SSE code of 
HADOOP-13075.

> S3A BlockOutputStreams doesn't delete temporary files in multipart uploads or 
> handle part upload failures
> -
>
> Key: HADOOP-14028
> URL: https://issues.apache.org/jira/browse/HADOOP-14028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: JDK 8 + ORC 1.3.0 + hadoop-aws 3.0.0-alpha2
>Reporter: Seth Fitzsimmons
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-14028-006.patch, HADOOP-14028-007.patch, 
> HADOOP-14028-branch-2-001.patch, HADOOP-14028-branch-2-008.patch, 
> HADOOP-14028-branch-2.8-002.patch, HADOOP-14028-branch-2.8-003.patch, 
> HADOOP-14028-branch-2.8-004.patch, HADOOP-14028-branch-2.8-005.patch, 
> HADOOP-14028-branch-2.8-007.patch, HADOOP-14028-branch-2.8-008.patch
>
>
> I have `fs.s3a.fast.upload` enabled with 3.0.0-alpha2 (it's exactly what I 
> was looking for after running into the same OOM problems) and don't see it 
> cleaning up the disk-cached blocks.
> I'm generating a ~50GB file on an instance with ~6GB free when the process 
> starts. My expectation is that local copies of the blocks would be deleted 
> after those parts finish uploading, but I'm seeing more than 15 blocks in 
> /tmp (and none of them have been deleted thus far).
> I see that DiskBlock deletes temporary files when closed, but is it closed 
> after individual blocks have finished uploading or when the entire file has 
> been fully written to the FS (full upload completed, including all parts)?
> As a temporary workaround to avoid running out of space, I'm listing files, 
> sorting by atime, and deleting anything older than the first 20: `ls -ut | 
> tail -n +21 | xargs rm`
> Steve Loughran says:
> > They should be deleted as soon as the upload completes; the close() call 
> > that the AWS httpclient makes on the input stream triggers the deletion. 
> > Though there aren't tests for it, as I recall.






[jira] [Updated] (HADOOP-14028) S3A BlockOutputStreams doesn't delete temporary files in multipart uploads or handle part upload failures

2017-02-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14028:

Status: Patch Available  (was: Open)

> S3A BlockOutputStreams doesn't delete temporary files in multipart uploads or 
> handle part upload failures
> -
>
> Key: HADOOP-14028
> URL: https://issues.apache.org/jira/browse/HADOOP-14028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: JDK 8 + ORC 1.3.0 + hadoop-aws 3.0.0-alpha2
>Reporter: Seth Fitzsimmons
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-14028-006.patch, HADOOP-14028-007.patch, 
> HADOOP-14028-branch-2-001.patch, HADOOP-14028-branch-2-008.patch, 
> HADOOP-14028-branch-2.8-002.patch, HADOOP-14028-branch-2.8-003.patch, 
> HADOOP-14028-branch-2.8-004.patch, HADOOP-14028-branch-2.8-005.patch, 
> HADOOP-14028-branch-2.8-007.patch, HADOOP-14028-branch-2.8-008.patch
>
>
> I have `fs.s3a.fast.upload` enabled with 3.0.0-alpha2 (it's exactly what I 
> was looking for after running into the same OOM problems) and don't see it 
> cleaning up the disk-cached blocks.
> I'm generating a ~50GB file on an instance with ~6GB free when the process 
> starts. My expectation is that local copies of the blocks would be deleted 
> after those parts finish uploading, but I'm seeing more than 15 blocks in 
> /tmp (and none of them have been deleted thus far).
> I see that DiskBlock deletes temporary files when closed, but is it closed 
> after individual blocks have finished uploading or when the entire file has 
> been fully written to the FS (full upload completed, including all parts)?
> As a temporary workaround to avoid running out of space, I'm listing files, 
> sorting by atime, and deleting anything older than the first 20: `ls -ut | 
> tail -n +21 | xargs rm`
> Steve Loughran says:
> > They should be deleted as soon as the upload completes; the close() call 
> > that the AWS httpclient makes on the input stream triggers the deletion. 
> > Though there aren't tests for it, as I recall.






[jira] [Updated] (HADOOP-14028) S3A BlockOutputStreams doesn't delete temporary files in multipart uploads or handle part upload failures

2017-02-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14028:

Status: Open  (was: Patch Available)

> S3A BlockOutputStreams doesn't delete temporary files in multipart uploads or 
> handle part upload failures
> -
>
> Key: HADOOP-14028
> URL: https://issues.apache.org/jira/browse/HADOOP-14028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: JDK 8 + ORC 1.3.0 + hadoop-aws 3.0.0-alpha2
>Reporter: Seth Fitzsimmons
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-14028-006.patch, HADOOP-14028-007.patch, 
> HADOOP-14028-branch-2-001.patch, HADOOP-14028-branch-2.8-002.patch, 
> HADOOP-14028-branch-2.8-003.patch, HADOOP-14028-branch-2.8-004.patch, 
> HADOOP-14028-branch-2.8-005.patch, HADOOP-14028-branch-2.8-007.patch, 
> HADOOP-14028-branch-2.8-008.patch
>
>
> I have `fs.s3a.fast.upload` enabled with 3.0.0-alpha2 (it's exactly what I 
> was looking for after running into the same OOM problems) and don't see it 
> cleaning up the disk-cached blocks.
> I'm generating a ~50GB file on an instance with ~6GB free when the process 
> starts. My expectation is that local copies of the blocks would be deleted 
> after those parts finish uploading, but I'm seeing more than 15 blocks in 
> /tmp (and none of them have been deleted thus far).
> I see that DiskBlock deletes temporary files when closed, but is it closed 
> after individual blocks have finished uploading or when the entire file has 
> been fully written to the FS (full upload completed, including all parts)?
> As a temporary workaround to avoid running out of space, I'm listing files, 
> sorting by atime, and deleting anything older than the first 20: `ls -ut | 
> tail -n +21 | xargs rm`
> Steve Loughran says:
> > They should be deleted as soon as the upload completes; the close() call 
> > that the AWS httpclient makes on the input stream triggers the deletion. 
> > Though there aren't tests for it, as I recall.






[jira] [Created] (HADOOP-14120) needless S3AFileSystem.setOptionalPutRequestParameters in S3ABlockOutputStream putObject()

2017-02-24 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14120:
---

 Summary: needless S3AFileSystem.setOptionalPutRequestParameters in 
S3ABlockOutputStream putObject()
 Key: HADOOP-14120
 URL: https://issues.apache.org/jira/browse/HADOOP-14120
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.9.0
Reporter: Steve Loughran
Priority: Minor


There's a call to {{S3AFileSystem.setOptionalPutRequestParameters()}} in 
{{S3ABlockOutputStream.putObject()}}.

The put request has already been created by the FS; this call is superfluous 
and potentially confusing.

Proposed: cut it, and make the {{setOptionalPutRequestParameters()}} method private.
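
Schematically, with hypothetical helper names standing in for the real code:

{code}
import com.amazonaws.services.s3.model.PutObjectRequest;

// Sketch only; S3Ops and its methods are hypothetical stand-ins.
class PutObjectSketch {
  interface S3Ops {
    PutObjectRequest newPutRequest();  // already applies optional params
    void setOptionalPutRequestParameters(PutObjectRequest request);
    void upload(PutObjectRequest request);
  }

  // Before: the optional parameters get applied a second time.
  void putObjectBefore(S3Ops fs) {
    PutObjectRequest request = fs.newPutRequest();
    fs.setOptionalPutRequestParameters(request);  // redundant
    fs.upload(request);
  }

  // After: trust the request factory; the setter goes private in the FS.
  void putObjectAfter(S3Ops fs) {
    fs.upload(fs.newPutRequest());
  }
}
{code}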






[jira] [Updated] (HADOOP-14028) S3A BlockOutputStreams doesn't delete temporary files in multipart uploads or handle part upload failures

2017-02-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14028:

Summary: S3A BlockOutputStreams doesn't delete temporary files in multipart 
uploads or handle part upload failures  (was: S3A block output streams don't 
delete temporary files in multipart uploads)

> S3A BlockOutputStreams doesn't delete temporary files in multipart uploads or 
> handle part upload failures
> -
>
> Key: HADOOP-14028
> URL: https://issues.apache.org/jira/browse/HADOOP-14028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: JDK 8 + ORC 1.3.0 + hadoop-aws 3.0.0-alpha2
>Reporter: Seth Fitzsimmons
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-14028-006.patch, HADOOP-14028-007.patch, 
> HADOOP-14028-branch-2-001.patch, HADOOP-14028-branch-2.8-002.patch, 
> HADOOP-14028-branch-2.8-003.patch, HADOOP-14028-branch-2.8-004.patch, 
> HADOOP-14028-branch-2.8-005.patch, HADOOP-14028-branch-2.8-007.patch, 
> HADOOP-14028-branch-2.8-008.patch
>
>
> I have `fs.s3a.fast.upload` enabled with 3.0.0-alpha2 (it's exactly what I 
> was looking for after running into the same OOM problems) and don't see it 
> cleaning up the disk-cached blocks.
> I'm generating a ~50GB file on an instance with ~6GB free when the process 
> starts. My expectation is that local copies of the blocks would be deleted 
> after those parts finish uploading, but I'm seeing more than 15 blocks in 
> /tmp (and none of them have been deleted thus far).
> I see that DiskBlock deletes temporary files when closed, but is it closed 
> after individual blocks have finished uploading or when the entire file has 
> been fully written to the FS (full upload completed, including all parts)?
> As a temporary workaround to avoid running out of space, I'm listing files, 
> sorting by atime, and deleting anything older than the first 20: `ls -ut | 
> tail -n +21 | xargs rm`
> Steve Loughran says:
> > They should be deleted as soon as the upload completes; the close() call 
> > that the AWS httpclient makes on the input stream triggers the deletion. 
> > Though there aren't tests for it, as I recall.






[jira] [Commented] (HADOOP-14118) move jets3t into a dependency on hadoop-aws JAR

2017-02-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883500#comment-15883500
 ] 

Steve Loughran commented on HADOOP-14118:
-

Ooh, you know in the cloud subprojects we're being pretty strict about 
certifying test runs: 
https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md

Did you do one here? If not, as it's a special case, I'll do it.

> move jets3t into a dependency on hadoop-aws JAR
> ---
>
> Key: HADOOP-14118
> URL: https://issues.apache.org/jira/browse/HADOOP-14118
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
> Attachments: HADOOP-14118.01.patch
>
>
> hadoop-common still declares a dependency on jets3t, which allows downstream 
> projects to pick it up. But as they can't get s3n to work without the 
> hadoop-aws JAR, it's hard to see how much use this is.
> I propose: moving it to a dependency of hadoop-aws JAR alone.
> Marking as incompatible as it will be in the specific situation
> * downstream maven build doesn't pull in hadoop-aws
> * command line doesn't have hadoop-aws JAR dependencies on CP and still wants 
> to use jets3t. Hard to imagine how this arises.






[jira] [Commented] (HADOOP-14118) move jets3t into a dependency on hadoop-aws JAR

2017-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883462#comment-15883462
 ] 

Hadoop QA commented on HADOOP-14118:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
7s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 10s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
58s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
10s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
16s{color} | {color:green} hadoop-cloud-storage in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14118 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12854552/HADOOP-14118.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 7281a1a9ce70 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 53d372a |
| Default Java | 1.8.0_121 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11711/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org

[jira] [Commented] (HADOOP-14116) FailoverOnNetworkExceptionRetry does not wait when failover on certain exception

2017-02-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883437#comment-15883437
 ] 

Hudson commented on HADOOP-14116:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11303 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11303/])
HADOOP-14116:FailoverOnNetworkExceptionRetry does not wait when failover 
(xgong: rev 289bc50e663b882956878eeaefe0eaa1ef4ed39e)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryPolicies.java


> FailoverOnNetworkExceptionRetry does not wait when failover on certain 
> exception 
> -
>
> Key: HADOOP-14116
> URL: https://issues.apache.org/jira/browse/HADOOP-14116
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14116.1.patch, HADOOP-14116.2.patch
>
>
> In the code below, when doing failover, it does not wait like the other 
> conditions do, which leads to a busy loop. 
> {code}
>} else if (e instanceof SocketException
>   || (e instanceof IOException && !(e instanceof RemoteException))) {
> if (isIdempotentOrAtMostOnce) {
>   return RetryAction.FAILOVER_AND_RETRY;
> {code}
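
The shape of the fix is to return a failover action that carries a delay, as
the other branches do. A fragment-level sketch; {{calculateSleepTime()}} is a
stand-in for whatever backoff helper {{RetryPolicies}} actually uses:

{code}
} else if (e instanceof SocketException
    || (e instanceof IOException && !(e instanceof RemoteException))) {
  if (isIdempotentOrAtMostOnce) {
    // Sketch: wait like the other branches instead of spinning.
    return new RetryAction(RetryAction.RetryDecision.FAILOVER_AND_RETRY,
        calculateSleepTime(failovers));
  }
{code}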






[jira] [Updated] (HADOOP-14116) FailoverOnNetworkExceptionRetry does not wait when failover on certain exception

2017-02-24 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated HADOOP-14116:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha3
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed into trunk and branch-2. Thanks, Jian!

> FailoverOnNetworkExceptionRetry does not wait when failover on certain 
> exception 
> -
>
> Key: HADOOP-14116
> URL: https://issues.apache.org/jira/browse/HADOOP-14116
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14116.1.patch, HADOOP-14116.2.patch
>
>
> In the code below, when doing failover, it does not wait like the other 
> conditions do, which leads to a busy loop. 
> {code}
>} else if (e instanceof SocketException
>   || (e instanceof IOException && !(e instanceof RemoteException))) {
> if (isIdempotentOrAtMostOnce) {
>   return RetryAction.FAILOVER_AND_RETRY;
> {code}






[jira] [Commented] (HADOOP-14028) S3A block output streams don't delete temporary files in multipart uploads

2017-02-24 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883395#comment-15883395
 ] 

Mingliang Liu commented on HADOOP-14028:


{quote}
IOUtilsClient#cleanup() because it's in the wrong package
{quote}
I missed that... sorry for the noise.

+1

> S3A block output streams don't delete temporary files in multipart uploads
> --
>
> Key: HADOOP-14028
> URL: https://issues.apache.org/jira/browse/HADOOP-14028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: JDK 8 + ORC 1.3.0 + hadoop-aws 3.0.0-alpha2
>Reporter: Seth Fitzsimmons
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-14028-006.patch, HADOOP-14028-007.patch, 
> HADOOP-14028-branch-2-001.patch, HADOOP-14028-branch-2.8-002.patch, 
> HADOOP-14028-branch-2.8-003.patch, HADOOP-14028-branch-2.8-004.patch, 
> HADOOP-14028-branch-2.8-005.patch, HADOOP-14028-branch-2.8-007.patch, 
> HADOOP-14028-branch-2.8-008.patch
>
>
> I have `fs.s3a.fast.upload` enabled with 3.0.0-alpha2 (it's exactly what I 
> was looking for after running into the same OOM problems) and don't see it 
> cleaning up the disk-cached blocks.
> I'm generating a ~50GB file on an instance with ~6GB free when the process 
> starts. My expectation is that local copies of the blocks would be deleted 
> after those parts finish uploading, but I'm seeing more than 15 blocks in 
> /tmp (and none of them have been deleted thus far).
> I see that DiskBlock deletes temporary files when closed, but is it closed 
> after individual blocks have finished uploading or when the entire file has 
> been fully written to the FS (full upload completed, including all parts)?
> As a temporary workaround to avoid running out of space, I'm listing files, 
> sorting by atime, and deleting anything older than the first 20: `ls -ut | 
> tail -n +21 | xargs rm`
> Steve Loughran says:
> > They should be deleted as soon as the upload completes; the close() call 
> > that the AWS httpclient makes on the input stream triggers the deletion. 
> > Though there aren't tests for it, as I recall.






[jira] [Updated] (HADOOP-14036) S3Guard: intermittent duplicate item keys failure

2017-02-24 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14036:
---
Affects Version/s: HADOOP-13345

> S3Guard: intermittent duplicate item keys failure
> -
>
> Key: HADOOP-14036
> URL: https://issues.apache.org/jira/browse/HADOOP-14036
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Mingliang Liu
>
> I see this occasionally when running integration tests with -Ds3guard 
> -Ddynamo:
> {noformat}
> testRenameToDirWithSamePrefixAllowed(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)
>   Time elapsed: 2.756 sec  <<< ERROR!
> org.apache.hadoop.fs.s3a.AWSServiceIOException: move: 
> com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: Provided 
> list of item keys contains duplicates (Service: AmazonDynamoDBv2; Status 
> Code: 400; Error Code: ValidationException; Request ID: 
> QSBVQV69279UGOB4AJ4NO9Q86VVV4KQNSO5AEMVJF66Q9ASUAAJG): Provided list of item 
> keys contains duplicates (Service: AmazonDynamoDBv2; Status Code: 400; Error 
> Code: ValidationException; Request ID: 
> QSBVQV69279UGOB4AJ4NO9Q86VVV4KQNSO5AEMVJF66Q9ASUAAJG)
> at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:178)
> at 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.move(DynamoDBMetadataStore.java:408)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerRename(S3AFileSystem.java:869)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:662)
> at 
> org.apache.hadoop.fs.FileSystemContractBaseTest.rename(FileSystemContractBaseTest.java:525)
> at 
> org.apache.hadoop.fs.FileSystemContractBaseTest.testRenameToDirWithSamePrefixAllowed(FileSystemContractBaseTest.java:669)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAcces
> {noformat}
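
Since DynamoDB's BatchWriteItem rejects a batch that names the same key twice,
one plausible fix is to collapse duplicates before building the batch. A
generic sketch, with the real s3guard types abstracted away:

{code}
import java.util.ArrayList;
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch only: collapse items sharing a primary key (keeping the first
// occurrence) before handing the batch to DynamoDB.
class BatchDedupSketch {
  static <K, V> Collection<V> dedupByKey(Collection<V> items,
      Function<V, K> keyOf) {
    Map<K, V> unique = new LinkedHashMap<>();
    for (V item : items) {
      unique.putIfAbsent(keyOf.apply(item), item);
    }
    return new ArrayList<>(unique.values());
  }
}
{code}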






[jira] [Assigned] (HADOOP-14036) S3Guard: intermittent duplicate item keys failure

2017-02-24 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HADOOP-14036:
--

Assignee: Mingliang Liu

Assigned this to me. I'll have a look, as I'm seeing this more frequently. 
Feel free to re-assign, as I don't have a patch yet. Thanks.

> S3Guard: intermittent duplicate item keys failure
> -
>
> Key: HADOOP-14036
> URL: https://issues.apache.org/jira/browse/HADOOP-14036
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Mingliang Liu
>
> I see this occasionally when running integration tests with -Ds3guard 
> -Ddynamo:
> {noformat}
> testRenameToDirWithSamePrefixAllowed(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)
>   Time elapsed: 2.756 sec  <<< ERROR!
> org.apache.hadoop.fs.s3a.AWSServiceIOException: move: 
> com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: Provided 
> list of item keys contains duplicates (Service: AmazonDynamoDBv2; Status 
> Code: 400; Error Code: ValidationException; Request ID: 
> QSBVQV69279UGOB4AJ4NO9Q86VVV4KQNSO5AEMVJF66Q9ASUAAJG): Provided list of item 
> keys contains duplicates (Service: AmazonDynamoDBv2; Status Code: 400; Error 
> Code: ValidationException; Request ID: 
> QSBVQV69279UGOB4AJ4NO9Q86VVV4KQNSO5AEMVJF66Q9ASUAAJG)
> at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:178)
> at 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.move(DynamoDBMetadataStore.java:408)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerRename(S3AFileSystem.java:869)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:662)
> at 
> org.apache.hadoop.fs.FileSystemContractBaseTest.rename(FileSystemContractBaseTest.java:525)
> at 
> org.apache.hadoop.fs.FileSystemContractBaseTest.testRenameToDirWithSamePrefixAllowed(FileSystemContractBaseTest.java:669)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAcces
> {noformat}






[jira] [Updated] (HADOOP-14110) In S3AFileSystem, make getAmazonClient() package private; export getBucketLocation()

2017-02-24 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14110:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HADOOP-13345
   Status: Resolved  (was: Patch Available)

I have committed this to the S3Guard feature branch. Thanks for your contribution, Steve.

> In S3AFileSystem, make getAmazonClient() package private; export 
> getBucketLocation()
> 
>
> Key: HADOOP-14110
> URL: https://issues.apache.org/jira/browse/HADOOP-14110
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-14110-HADOOP-13345-001.patch
>
>
> I've just noticed we are making the S3 client visible again. I really don't 
> want that to happen, as at that point you lose control of how it gets used. 
> Better to make getBucketLocation a visible method, possibly on the 
> internal-use-only {{WriteOperationHelper}} class.
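
Roughly, the visibility change being proposed (sketch only; names approximate):

{code}
import com.amazonaws.services.s3.AmazonS3;

// Sketch only: approximate shape, not the actual S3AFileSystem code.
public class S3AFileSystemSketch {
  private AmazonS3 s3;

  // Was public; package-private keeps the raw client inside fs/s3a.
  AmazonS3 getAmazonS3Client() {
    return s3;
  }

  // Exported instead: callers get just the information they need.
  public String getBucketLocation(String bucket) {
    return s3.getBucketLocation(bucket);
  }
}
{code}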






[jira] [Updated] (HADOOP-14110) In S3AFileSystem, make getAmazonClient() package private; export getBucketLocation()

2017-02-24 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14110:
---
Summary: In S3AFileSystem, make getAmazonClient() package private; export 
getBucketLocation()  (was: mark S3AFileSystem.getAmazonClient package private, 
export getBucketLocation(fs.getBucket());)

> In S3AFileSystem, make getAmazonClient() package private; export 
> getBucketLocation()
> 
>
> Key: HADOOP-14110
> URL: https://issues.apache.org/jira/browse/HADOOP-14110
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-14110-HADOOP-13345-001.patch
>
>
> I've just noticed we are making the S3 client visible again. I really don't 
> want that to happen, as at that point you lose control of how it gets used. 
> Better to make getBucketLocation a visible method, possibly on the 
> internal-use-only {{WriteOperationHelper}} class.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14118) move jets3t into a dependency on hadoop-aws JAR

2017-02-24 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14118:
---
Status: Patch Available  (was: Open)

> move jets3t into a dependency on hadoop-aws JAR
> ---
>
> Key: HADOOP-14118
> URL: https://issues.apache.org/jira/browse/HADOOP-14118
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
> Attachments: HADOOP-14118.01.patch
>
>
> hadoop-common still declares a dependency on jets3t, which allows downstream 
> projects to pick it up. But as they can't get s3n to work without the 
> hadoop-aws JAR, it's hard to see how much use this is.
> I propose: moving it to a dependency of hadoop-aws JAR alone.
> Marking as incompatible, as it will only matter in the specific situation where:
> * downstream maven build doesn't pull in hadoop-aws
> * command line doesn't have hadoop-aws JAR dependencies on CP and still wants 
> to use jets3t. Hard to imagine how this arises.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14118) move jets3t into a dependency on hadoop-aws JAR

2017-02-24 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14118:
---
Attachment: HADOOP-14118.01.patch

Attaching a patch
* Moved the jets3t dependency from hadoop-common to hadoop-aws (see the sketch 
below)
* Removed the exclusion of jets3t from the hadoop-common dependency, since the 
jets3t dependency was removed from hadoop-common
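In pom.xml terms, the move looks roughly like this (a sketch of the idea, not the attached patch; the version is assumed to stay managed in the parent POM):

{code:xml}
<!-- hadoop-tools/hadoop-aws/pom.xml: jets3t is now declared here only,
     and the declaration disappears from hadoop-common entirely -->
<dependency>
  <groupId>net.java.dev.jets3t</groupId>
  <artifactId>jets3t</artifactId>
  <scope>compile</scope>
</dependency>
{code}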

> move jets3t into a dependency on hadoop-aws JAR
> ---
>
> Key: HADOOP-14118
> URL: https://issues.apache.org/jira/browse/HADOOP-14118
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
> Attachments: HADOOP-14118.01.patch
>
>
> hadoop-common still declares a dependency on jets3t, which allows downstream 
> projects to pick it up. But as they can't get s3n to work without the 
> hadoop-aws JAR, it's hard to see how much use this is.
> I propose: moving it to a dependency of hadoop-aws JAR alone.
> Marking as incompatible, as it will only matter in the specific situation where:
> * downstream maven build doesn't pull in hadoop-aws
> * command line doesn't have hadoop-aws JAR dependencies on CP and still wants 
> to use jets3t. Hard to imagine how this arises.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14110) mark S3AFileSystem.getAmazonClient package private, export getBucketLocation(fs.getBucket());

2017-02-24 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883347#comment-15883347
 ] 

Mingliang Liu commented on HADOOP-14110:


I have to admit it was me who made it public. The JIRA (point+patch) makes 
sense to me. 

+1

Will commit shortly.

> mark S3AFileSystem.getAmazonClient package private, export 
> getBucketLocation(fs.getBucket());
> -
>
> Key: HADOOP-14110
> URL: https://issues.apache.org/jira/browse/HADOOP-14110
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-14110-HADOOP-13345-001.patch
>
>
> I've just noticed we are making the S3 client visible again. I really don't 
> want that to happen, as at that point you lose control of how it gets used. 
> Better to make getBucketLocation a visible method, possibly on the 
> internal-use-only {{WriteOperationHelper}} class.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14118) move jets3t into a dependency on hadoop-aws JAR

2017-02-24 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-14118:
--

Assignee: Akira Ajisaka

> move jets3t into a dependency on hadoop-aws JAR
> ---
>
> Key: HADOOP-14118
> URL: https://issues.apache.org/jira/browse/HADOOP-14118
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
>
> hadoop-common still declares a dependency on jets3t, which allows downstream 
> projects to pick it up. But as they can't get s3n to work without the 
> hadoop-aws JAR, it's hard to see how much use this is.
> I propose: moving it to a dependency of hadoop-aws JAR alone.
> Marking as incompatible, as it will only matter in the specific situation where:
> * downstream maven build doesn't pull in hadoop-aws
> * command line doesn't have hadoop-aws JAR dependencies on CP and still wants 
> to use jets3t. Hard to imagine how this arises.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14068) Add integration test version of TestMetadataStore for DynamoDB

2017-02-24 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883313#comment-15883313
 ] 

Sean Mackrory commented on HADOOP-14068:


There are a lot of things in those failures that are definitely unrelated. Yetus 
passed for me locally, and trying to rerun this seems to be hitting infra 
issues at the moment: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11710/console

> Add integration test version of TestMetadataStore for DynamoDB
> --
>
> Key: HADOOP-14068
> URL: https://issues.apache.org/jira/browse/HADOOP-14068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14068-HADOOP-13345.001.patch, 
> HADOOP-14068-HADOOP-13345.002.patch, HADOOP-14068-HADOOP-13345.003.patch, 
> HADOOP-14068-HADOOP-13345.004.patch, HADOOP-14068-HADOOP-13345.005.patch
>
>
> I tweaked TestDynamoDBMetadataStore to run against the actual Amazon DynamoDB 
> service (as opposed to the "local" edition). Several tests failed because of 
> minor variations in behavior. I think the differences that are clearly 
> possible are enough to warrant extending that class as an ITest (but 
> obviously keeping the existing test so 99% of the coverage remains even 
> when not configured for actual DynamoDB usage).
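A sketch of what such an extension could look like (the class name follows the ITest convention mentioned above, but the hook is hypothetical; the actual patch may structure this differently):

{code:java}
// Hypothetical sketch: rerun the whole unit suite against the real
// DynamoDB service instead of the "local" edition by flipping a
// setup hook.
public class ITestDynamoDBMetadataStore extends TestDynamoDBMetadataStore {
  @Override
  protected boolean useLocalDynamoDB() {
    // assumption: the base test exposes (or gains) a hook like this so
    // subclasses can select the real endpoint
    return false;
  }
}
{code}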



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14116) FailoverOnNetworkExceptionRetry does not wait when failover on certain exception

2017-02-24 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883308#comment-15883308
 ] 

Xuan Gong commented on HADOOP-14116:


+1 LGTM. Will commit it shortly

> FailoverOnNetworkExceptionRetry does not wait when failover on certain 
> exception 
> -
>
> Key: HADOOP-14116
> URL: https://issues.apache.org/jira/browse/HADOOP-14116
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: HADOOP-14116.1.patch, HADOOP-14116.2.patch
>
>
> In the code below, when doing a failover, it does not wait as the other 
> conditions do, which leads to a busy loop. 
> {code}
>} else if (e instanceof SocketException
>   || (e instanceof IOException && !(e instanceof RemoteException))) {
> if (isIdempotentOrAtMostOnce) {
>   return RetryAction.FAILOVER_AND_RETRY;
> {code}
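For comparison, a sketch of the kind of fix that branch needs: return a failover action that carries a backoff delay, the way the other conditions do (a minimal sketch, assuming the policy supplies the delay; not the committed patch):

{code:java}
import org.apache.hadoop.io.retry.RetryPolicy.RetryAction;
import org.apache.hadoop.io.retry.RetryPolicy.RetryAction.RetryDecision;

class FailoverWithWaitSketch {
  // Instead of returning the shared zero-delay constant
  // RetryAction.FAILOVER_AND_RETRY, build an action that carries a
  // delay, which breaks the busy loop. How the delay is computed is
  // left to the policy's existing sleep/backoff settings.
  static RetryAction failoverAndRetry(long delayMillis) {
    return new RetryAction(RetryDecision.FAILOVER_AND_RETRY, delayMillis);
  }
}
{code}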



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14114) S3A can no longer handle unencoded + in URIs

2017-02-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883121#comment-15883121
 ] 

Steve Loughran commented on HADOOP-14114:
-

thx. Note that with per-bucket config, there's no longer any justification for 
secrets in URIs. You can do a distcp using -D options to declare (ideally 
session) credentials for the URL you are working with; the same goes for 
anything else.

> S3A can no longer handle unencoded + in URIs 
> -
>
> Key: HADOOP-14114
> URL: https://issues.apache.org/jira/browse/HADOOP-14114
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Minor
> Fix For: 2.8.1
>
> Attachments: HADOOP-14114.001.patch
>
>
> Amazon secret access keys can include alphanumeric characters, but also / and 
> + (I wish there were an official source that was really specific about what 
> they can contain, but I'll have to rely on a few blog posts and my own 
> experience).
> Keys containing slashes used to be impossible to embed in the URL (e.g. 
> s3a://access_key:secret_key@bucket/) but it is now possible to do it via URL 
> encoding. Pluses used to work, but that is now *only* possible via URL 
> encoding.
> In the case of pluses, they don't appear to cause any other problems for 
> parsing. So IMO the best all-around solution here is for people to always 
> URL-encode these keys; but so that keys that used to work just fine can 
> continue to work, all we need to do is detect that case, log a warning, and 
> re-encode it for the user.
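A standalone sketch of that detect/warn/re-encode idea (illustrative class and logging, not the attached patch):

{code:java}
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

final class SecretKeyFixup {
  // If a secret lifted from a URI still contains a raw '+', the user
  // did not URL-encode it; warn and encode it so the old-style URIs
  // keep working. (Sketch: the real code would log through SLF4J.)
  static String reencodeIfNeeded(String rawSecret)
      throws UnsupportedEncodingException {
    if (rawSecret.contains("+")) {
      System.err.println("WARNING: '+' found in secret key; please "
          + "URL-encode secrets embedded in URIs.");
      return URLEncoder.encode(rawSecret, "UTF-8");
    }
    return rawSecret;
  }
}
{code}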



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13817) Add a finite shell command timeout to ShellBasedUnixGroupsMapping

2017-02-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883032#comment-15883032
 ] 

Hudson commented on HADOOP-13817:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11301 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11301/])
HADOOP-13817. Add a finite shell command timeout to (harsh: rev 
e8694deb6ad180449f8ce6c1c8b4f84873c0587a)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestGroupsCaching.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestShellBasedUnixGroupsMapping.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedUnixGroupsMapping.java


> Add a finite shell command timeout to ShellBasedUnixGroupsMapping
> -
>
> Key: HADOOP-13817
> URL: https://issues.apache.org/jira/browse/HADOOP-13817
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-13817.000.patch, HADOOP-13817-branch-2.000.patch
>
>
> The ShellBasedUnixGroupsMapping runs various {{id}} commands via the 
> ShellCommandExecutor modules without a timeout set (it is set to 0, which 
> implies an infinite wait).
> If this command hangs for a long time on the OS end due to an unresponsive 
> groups backend or other reasons, it also blocks the handlers that use it on 
> the NameNode (or other services that use this class). That inadvertently 
> causes odd timeout troubles on the client end, where it's forced to retry (only 
> to likely run into such hangs again with every attempt until at least one 
> command returns).
> It would be helpful to have a finite command timeout after which we may give 
> up on the command and return the result equivalent of no groups found.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13817) Add a finite shell command timeout to ShellBasedUnixGroupsMapping

2017-02-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883003#comment-15883003
 ] 

ASF GitHub Bot commented on HADOOP-13817:
-

Github user QwertyManiac commented on the issue:

https://github.com/apache/hadoop/pull/161
  
Done via e8694deb6ad180449f8ce6c1c8b4f84873c0587a.


> Add a finite shell command timeout to ShellBasedUnixGroupsMapping
> -
>
> Key: HADOOP-13817
> URL: https://issues.apache.org/jira/browse/HADOOP-13817
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-13817.000.patch, HADOOP-13817-branch-2.000.patch
>
>
> The ShellBasedUnixGroupsMapping runs various {{id}} commands via the 
> ShellCommandExecutor modules without a timeout set (it is set to 0, which 
> implies an infinite wait).
> If this command hangs for a long time on the OS end due to an unresponsive 
> groups backend or other reasons, it also blocks the handlers that use it on 
> the NameNode (or other services that use this class). That inadvertently 
> causes odd timeout troubles on the client end, where it's forced to retry (only 
> to likely run into such hangs again with every attempt until at least one 
> command returns).
> It would be helpful to have a finite command timeout after which we may give 
> up on the command and return the result equivalent of no groups found.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13817) Add a finite shell command timeout to ShellBasedUnixGroupsMapping

2017-02-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883004#comment-15883004
 ] 

ASF GitHub Bot commented on HADOOP-13817:
-

Github user QwertyManiac closed the pull request at:

https://github.com/apache/hadoop/pull/161


> Add a finite shell command timeout to ShellBasedUnixGroupsMapping
> -
>
> Key: HADOOP-13817
> URL: https://issues.apache.org/jira/browse/HADOOP-13817
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-13817.000.patch, HADOOP-13817-branch-2.000.patch
>
>
> The ShellBasedUnixGroupsMapping runs various {{id}} commands via the 
> ShellCommandExecutor modules without a timeout set (it is set to 0, which 
> implies an infinite wait).
> If this command hangs for a long time on the OS end due to an unresponsive 
> groups backend or other reasons, it also blocks the handlers that use it on 
> the NameNode (or other services that use this class). That inadvertently 
> causes odd timeout troubles on the client end, where it's forced to retry (only 
> to likely run into such hangs again with every attempt until at least one 
> command returns).
> It would be helpful to have a finite command timeout after which we may give 
> up on the command and return the result equivalent of no groups found.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13817) Add a finite shell command timeout to ShellBasedUnixGroupsMapping

2017-02-24 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-13817:
-
Release Note: A newly introduced configuration key 
"hadoop.security.groups.shell.command.timeout" allows applying a finite wait 
timeout over the 'id' commands launched by the ShellBasedUnixGroupsMapping 
plugin. Values specified can be in any valid time duration units: 
https://hadoop.apache.org/docs/current/api/org/apache/hadoop/conf/Configuration.html#getTimeDuration-java.lang.String-long-java.util.concurrent.TimeUnit-
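For reference, keys of this kind are read via {{Configuration.getTimeDuration}}; a minimal sketch (the default of 0 and the millisecond unit are assumptions based on the note above, not the patch itself):

{code:java}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

class ShellTimeoutSketch {
  static long shellTimeoutMs(Configuration conf) {
    // accepts suffixed values such as "45s" or "2m"; 0 is taken to
    // mean "no timeout", i.e. the pre-patch infinite wait
    return conf.getTimeDuration(
        "hadoop.security.groups.shell.command.timeout",
        0L, TimeUnit.MILLISECONDS);
  }
}
{code}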

> Add a finite shell command timeout to ShellBasedUnixGroupsMapping
> -
>
> Key: HADOOP-13817
> URL: https://issues.apache.org/jira/browse/HADOOP-13817
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-13817.000.patch, HADOOP-13817-branch-2.000.patch
>
>
> The ShellBasedUnixGroupsMapping runs various {{id}} commands via the 
> ShellCommandExecutor modules without a timeout set (it is set to 0, which 
> implies an infinite wait).
> If this command hangs for a long time on the OS end due to an unresponsive 
> groups backend or other reasons, it also blocks the handlers that use it on 
> the NameNode (or other services that use this class). That inadvertently 
> causes odd timeout troubles on the client end, where it's forced to retry (only 
> to likely run into such hangs again with every attempt until at least one 
> command returns).
> It would be helpful to have a finite command timeout after which we may give 
> up on the command and return the result equivalent of no groups found.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13817) Add a finite shell command timeout to ShellBasedUnixGroupsMapping

2017-02-24 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-13817:
-
Release Note: 
A newly introduced configuration key 
"hadoop.security.groups.shell.command.timeout" allows applying a finite wait 
timeout over the 'id' commands launched by the ShellBasedUnixGroupsMapping 
plugin. Values specified can be in any valid time duration units: 
https://hadoop.apache.org/docs/current/api/org/apache/hadoop/conf/Configuration.html#getTimeDuration-java.lang.String-long-java.util.concurrent.TimeUnit-

Value defaults to 0, indicating infinite wait (preserving existing behaviour).

  was:A newly introduced configuration key 
"hadoop.security.groups.shell.command.timeout" allows applying a finite wait 
timeout over the 'id' commands launched by the ShellBasedUnixGroupsMapping 
plugin. Values specified can be in any valid time duration units: 
https://hadoop.apache.org/docs/current/api/org/apache/hadoop/conf/Configuration.html#getTimeDuration-java.lang.String-long-java.util.concurrent.TimeUnit-


> Add a finite shell command timeout to ShellBasedUnixGroupsMapping
> -
>
> Key: HADOOP-13817
> URL: https://issues.apache.org/jira/browse/HADOOP-13817
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-13817.000.patch, HADOOP-13817-branch-2.000.patch
>
>
> The ShellBasedUnixGroupsMapping runs various {{id}} commands via the 
> ShellCommandExecutor modules without a timeout set (it is set to 0, which 
> implies an infinite wait).
> If this command hangs for a long time on the OS end due to an unresponsive 
> groups backend or other reasons, it also blocks the handlers that use it on 
> the NameNode (or other services that use this class). That inadvertently 
> causes odd timeout troubles on the client end, where it's forced to retry (only 
> to likely run into such hangs again with every attempt until at least one 
> command returns).
> It would be helpful to have a finite command timeout after which we may give 
> up on the command and return the result equivalent of no groups found.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13817) Add a finite shell command timeout to ShellBasedUnixGroupsMapping

2017-02-24 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-13817:
-
  Resolution: Fixed
   Fix Version/s: 3.0.0-beta1
  2.9.0
Target Version/s:   (was: 2.9.0, 3.0.0-beta1)
  Status: Resolved  (was: Patch Available)

> Add a finite shell command timeout to ShellBasedUnixGroupsMapping
> -
>
> Key: HADOOP-13817
> URL: https://issues.apache.org/jira/browse/HADOOP-13817
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-13817.000.patch, HADOOP-13817-branch-2.000.patch
>
>
> The ShellBasedUnixGroupsMapping runs various {{id}} commands via the 
> ShellCommandExecutor modules without a timeout set (it is set to 0, which 
> implies an infinite wait).
> If this command hangs for a long time on the OS end due to an unresponsive 
> groups backend or other reasons, it also blocks the handlers that use it on 
> the NameNode (or other services that use this class). That inadvertently 
> causes odd timeout troubles on the client end, where it's forced to retry (only 
> to likely run into such hangs again with every attempt until at least one 
> command returns).
> It would be helpful to have a finite command timeout after which we may give 
> up on the command and return the result equivalent of no groups found.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13817) Add a finite shell command timeout to ShellBasedUnixGroupsMapping

2017-02-24 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-13817:
-
Hadoop Flags: Reviewed

> Add a finite shell command timeout to ShellBasedUnixGroupsMapping
> -
>
> Key: HADOOP-13817
> URL: https://issues.apache.org/jira/browse/HADOOP-13817
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-13817.000.patch, HADOOP-13817-branch-2.000.patch
>
>
> The ShellBasedUnixGroupsMapping runs various {{id}} commands via the 
> ShellCommandExecutor modules without a timeout set (it is set to 0, which 
> implies an infinite wait).
> If this command hangs for a long time on the OS end due to an unresponsive 
> groups backend or other reasons, it also blocks the handlers that use it on 
> the NameNode (or other services that use this class). That inadvertently 
> causes odd timeout troubles on the client end, where it's forced to retry (only 
> to likely run into such hangs again with every attempt until at least one 
> command returns).
> It would be helpful to have a finite command timeout after which we may give 
> up on the command and return the result equivalent of no groups found.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14114) S3A can no longer handle unencoded + in URIs

2017-02-24 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882970#comment-15882970
 ] 

Sean Mackrory commented on HADOOP-14114:


Thanks [~ste...@apache.org]. The error message is identical from a V4-only 
endpoint.

> S3A can no longer handle unencoded + in URIs 
> -
>
> Key: HADOOP-14114
> URL: https://issues.apache.org/jira/browse/HADOOP-14114
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Minor
> Fix For: 2.8.1
>
> Attachments: HADOOP-14114.001.patch
>
>
> Amazon secret access keys can include alphanumeric characters, but also / and 
> + (I wish there were an official source that was really specific about what 
> they can contain, but I'll have to rely on a few blog posts and my own 
> experience).
> Keys containing slashes used to be impossible to embed in the URL (e.g. 
> s3a://access_key:secret_key@bucket/) but it is now possible to do it via URL 
> encoding. Pluses used to work, but that is now *only* possible via URL 
> encoding.
> In the case of pluses, they don't appear to cause any other problems for 
> parsing. So IMO the best all-around solution here is for people to always 
> URL-encode these keys; but so that keys that used to work just fine can 
> continue to work, all we need to do is detect that case, log a warning, and 
> re-encode it for the user.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14097) Remove Java6 specific code from GzipCodec.java

2017-02-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882944#comment-15882944
 ] 

Hudson commented on HADOOP-14097:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11300 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11300/])
HADOOP-14097. Remove Java6 specific code from GzipCodec.java. (aajisaka: rev 
50decd36130945e184734dcd55b8912be6f4550a)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/GzipCodec.java


> Remove Java6 specific code from GzipCodec.java
> --
>
> Key: HADOOP-14097
> URL: https://issues.apache.org/jira/browse/HADOOP-14097
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Elek, Marton
>Priority: Minor
>  Labels: newbie
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HDFS-14097.001.patch
>
>
> GzipCodec has Java 6 specific code and it can be removed.
> {code:title=GzipCodec.java}
>   public static final String JVMVersion= 
> System.getProperty("java.version");
>   private static final boolean HAS_BROKEN_FINISH =
>   (IBM_JAVA && JVMVersion.contains("1.6.0"));
> {code}
> Hadoop 2.7+ dropped Java 6 support.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13817) Add a finite shell command timeout to ShellBasedUnixGroupsMapping

2017-02-24 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882938#comment-15882938
 ] 

Wei-Chiu Chuang commented on HADOOP-13817:
--

Test failures are not reproducible in my local tree.

+1 based on my review and [~xyao]'s review on GitHub.

> Add a finite shell command timeout to ShellBasedUnixGroupsMapping
> -
>
> Key: HADOOP-13817
> URL: https://issues.apache.org/jira/browse/HADOOP-13817
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Attachments: HADOOP-13817.000.patch, HADOOP-13817-branch-2.000.patch
>
>
> The ShellBasedUnixGroupsMapping runs various {{id}} commands via the 
> ShellCommandExecutor modules without a timeout set (it is set to 0, which 
> implies an infinite wait).
> If this command hangs for a long time on the OS end due to an unresponsive 
> groups backend or other reasons, it also blocks the handlers that use it on 
> the NameNode (or other services that use this class). That inadvertently 
> causes odd timeout troubles on the client end, where it's forced to retry (only 
> to likely run into such hangs again with every attempt until at least one 
> command returns).
> It would be helpful to have a finite command timeout after which we may give 
> up on the command and return the result equivalent of no groups found.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14097) Remove Java6 specific code from GzipCodec.java

2017-02-24 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882930#comment-15882930
 ] 

Akira Ajisaka commented on HADOOP-14097:


Filed HADOOP-14119.

> Remove Java6 specific code from GzipCodec.java
> --
>
> Key: HADOOP-14097
> URL: https://issues.apache.org/jira/browse/HADOOP-14097
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Elek, Marton
>Priority: Minor
>  Labels: newbie
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HDFS-14097.001.patch
>
>
> GzipCodec has Java 6 specific code and it can be removed.
> {code:title=GzipCodec.java}
>   public static final String JVMVersion= 
> System.getProperty("java.version");
>   private static final boolean HAS_BROKEN_FINISH =
>   (IBM_JAVA && JVMVersion.contains("1.6.0"));
> {code}
> Hadoop 2.7+ dropped Java 6 support.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14119) Remove unused imports from GzipCodec.java

2017-02-24 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-14119:
--

 Summary: Remove unused imports from GzipCodec.java
 Key: HADOOP-14119
 URL: https://issues.apache.org/jira/browse/HADOOP-14119
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Akira Ajisaka
Priority: Minor


There are some unused imports after HADOOP-14097. Let's fix this.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14097) Remove Java6 specific code from GzipCodec.java

2017-02-24 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882925#comment-15882925
 ] 

Akira Ajisaka commented on HADOOP-14097:


I found there are some unused imports after committing this (sorry for that). 
I'll file a separate jira to track this.

> Remove Java6 specific code from GzipCodec.java
> --
>
> Key: HADOOP-14097
> URL: https://issues.apache.org/jira/browse/HADOOP-14097
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Elek, Marton
>Priority: Minor
>  Labels: newbie
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HDFS-14097.001.patch
>
>
> GzipCodec has Java 6 specific code and it can be removed.
> {code:title=GzipCodec.java}
>   public static final String JVMVersion= 
> System.getProperty("java.version");
>   private static final boolean HAS_BROKEN_FINISH =
>   (IBM_JAVA && JVMVersion.contains("1.6.0"));
> {code}
> Hadoop 2.7+ dropped Java 6 support.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14097) Remove Java6 specific code from GzipCodec.java

2017-02-24 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14097:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha3
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~elek] for the contribution.

> Remove Java6 specific code from GzipCodec.java
> --
>
> Key: HADOOP-14097
> URL: https://issues.apache.org/jira/browse/HADOOP-14097
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Elek, Marton
>Priority: Minor
>  Labels: newbie
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HDFS-14097.001.patch
>
>
> GzipCodec has Java 6 specific code and it can be removed.
> {code:title=GzipCodec.java}
>   public static final String JVMVersion= 
> System.getProperty("java.version");
>   private static final boolean HAS_BROKEN_FINISH =
>   (IBM_JAVA && JVMVersion.contains("1.6.0"));
> {code}
> Hadoop 2.7+ dropped Java 6 support.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14097) Remove Java6 specific code from GzipCodec.java

2017-02-24 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882876#comment-15882876
 ] 

Akira Ajisaka commented on HADOOP-14097:


+1, committing this shortly.

> Remove Java6 specific code from GzipCodec.java
> --
>
> Key: HADOOP-14097
> URL: https://issues.apache.org/jira/browse/HADOOP-14097
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Elek, Marton
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-14097.001.patch
>
>
> GzipCodec has Java 6 specific code and it can be removed.
> {code:title=GzipCodec.java}
>   public static final String JVMVersion= 
> System.getProperty("java.version");
>   private static final boolean HAS_BROKEN_FINISH =
>   (IBM_JAVA && JVMVersion.contains("1.6.0"));
> {code}
> Hadoop 2.7+ dropped Java 6 support.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14038) Add ADLS credential properties to core-default.xml

2017-02-24 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882843#comment-15882843
 ] 

Vishwajeet Dusane commented on HADOOP-14038:


[~jzhuge] - Also take a look at HADOOP-14113. That patch has updated the ADL 
doc, so you might need to merge your changes with it.

> Add ADLS credential properties to core-default.xml
> --
>
> Key: HADOOP-14038
> URL: https://issues.apache.org/jira/browse/HADOOP-14038
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha3
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14038.001.patch, HADOOP-14038.002.patch, 
> HADOOP-14038.003.patch
>
>
> Add ADLS credential configuration properties to {{core-default.xml}}. 
> Set/document the default value for 
> {{dfs.adls.oauth2.access.token.provider.type}} to {{ClientCredential}}.
> Fix {{AdlFileSystem#getAccessTokenProvider}} which implies the provider type 
> is {{Custom}}.
> Fix several unit tests that set {{dfs.adls.oauth2.access.token.provider}} but 
> do not set {{dfs.adls.oauth2.access.token.provider.type}}.
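A sketch of the documented default (the key and value names come from this JIRA; the lookup shape is illustrative, not the AdlFileSystem source):

{code:java}
import org.apache.hadoop.conf.Configuration;

class AdlProviderTypeSketch {
  static String providerType(Configuration conf) {
    // after the change, an unset key is documented to behave as
    // ClientCredential rather than implying Custom
    return conf.get("dfs.adls.oauth2.access.token.provider.type",
        "ClientCredential");
  }
}
{code}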



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13817) Add a finite shell command timeout to ShellBasedUnixGroupsMapping

2017-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882822#comment-15882822
 ] 

Hadoop QA commented on HADOOP-13817:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 11s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13817 |
| GITHUB PR | https://github.com/apache/hadoop/pull/161 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 93f53f4231a1 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e60c654 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11709/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11709/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11709/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add a finite shell command timeout to ShellBasedUnixGroupsMapping
> -
>
> Key: HADOOP-13817
> URL: https://issues.apache.org/jira/browse/HADOOP-13817
> Project: Hadoop Common

[jira] [Resolved] (HADOOP-13991) Retry management in NativeS3FileSystem to avoid file upload problem

2017-02-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13991.
-
Resolution: Won't Fix

> Retry management in NativeS3FileSystem to avoid file upload problem
> ---
>
> Key: HADOOP-13991
> URL: https://issues.apache.org/jira/browse/HADOOP-13991
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Musaddique Hossain
>Priority: Minor
>
> NativeS3FileSystem does not support any retry management for failed uploads 
> to S3.
> If an upload to an S3 bucket fails due to a socket timeout or any other 
> network exception, the upload fails and the temporary file gets deleted: 
> java.net.SocketException: Connection reset
>   at java.net.SocketInputStream.read(SocketInputStream.java:196)
>   at java.net.SocketInputStream.read(SocketInputStream.java:122)
>   at org.jets3t.service.S3Service.putObject(S3Service.java:2265)
>   at 
> org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.storeFile(Jets3tNativeFileSystemStore.java:122)
>   at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.fs.s3native.$Proxy8.storeFile(Unknown Source)
>   at 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsOutputStream.close(NativeS3FileSystem.java:284)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
>   at 
> org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream.close(CBZip2OutputStream.java:737)
>   at 
> org.apache.hadoop.io.compress.BZip2Codec$BZip2CompressionOutputStream.close(BZip2Codec.java:336)
>   at 
> org.apache.flume.sink.hdfs.HDFSCompressedDataStream.close(HDFSCompressedDataStream.java:155)
>   at org.apache.flume.sink.hdfs.BucketWriter$3.call(BucketWriter.java:312)
>   at org.apache.flume.sink.hdfs.BucketWriter$3.call(BucketWriter.java:308)
>   at 
> org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:679)
>   at 
> org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
>   at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:676)
> This can be solved by using asynchronous retry management.
> We have made the following modifications to NativeS3FileSystem to add retry 
> management, which has been working fine in our production system without any 
> upload failures:
> {code:title=NativeS3FileSystem.java|borderStyle=solid}
> @@ -36,6 +36,7 @@
>  import java.util.Map;
>  import java.util.Set;
>  import java.util.TreeSet;
> +import java.util.concurrent.Callable;
>  import java.util.concurrent.TimeUnit;
>  import com.google.common.base.Preconditions;
> @@ -279,9 +280,19 @@
>backupStream.close();
>LOG.info("OutputStream for key '{}' closed. Now beginning upload", 
> key);
> +  Callable task = new Callable() {
> + private final byte[] md5Hash = digest == null ? null : 
> digest.digest();
> + public Void call() throws IOException {
> +store.storeFile(key, backupFile, md5Hash);
> +return null;
> + }
> +  };
> +  RetriableTask r = new RetriableTask(task);
> +  
>try {
> -byte[] md5Hash = digest == null ? null : digest.digest();
> -store.storeFile(key, backupFile, md5Hash);
> + r.call();
> +  } catch (Exception e) {
> + throw new IOException(e);
>} finally {
>  if (!backupFile.delete()) {
>LOG.warn("Could not delete temporary s3n file: " + backupFile);
> {code}
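{{RetriableTask}} itself is not included in the report; a hypothetical minimal version of such a wrapper is sketched below (the attempt count and linear backoff are assumptions):

{code:java}
import java.util.concurrent.Callable;

// Hypothetical retry wrapper matching the usage above: call() retries
// the wrapped task a few times before rethrowing the last failure.
class RetriableTask<V> implements Callable<V> {
  private static final int MAX_ATTEMPTS = 3;
  private final Callable<V> task;

  RetriableTask(Callable<V> task) {
    this.task = task;
  }

  @Override
  public V call() throws Exception {
    Exception last = null;
    for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
      try {
        return task.call();
      } catch (Exception e) {
        last = e;
        Thread.sleep(1000L * attempt); // simple linear backoff
      }
    }
    throw last;
  }
}
{code}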



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-12376) S3NInputStream.close() downloads the remaining bytes of the object from S3

2017-02-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-12376.
-
Resolution: Won't Fix

Closing as a won't-fix, as there is a solution: "move to S3A". S3A has a lot of 
logic about when to skip forwards vs. close, seek optimisation for different I/O 
policies, metrics on all of this, etc.

> S3NInputStream.close() downloads the remaining bytes of the object from S3
> --
>
> Key: HADOOP-12376
> URL: https://issues.apache.org/jira/browse/HADOOP-12376
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.6.0, 2.7.1
>Reporter: Steve Loughran
>Assignee: Ajith S
>
> This is the same as HADOOP-11570; possibly the swift code has the same 
> problem.
> Apparently (as raised on ASF lists), when you close an s3n input stream, it
> reads through the remainder of the file. This kills performance on partial 
> reads of large files.
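For contrast, a sketch of the kind of close() policy S3A applies: drain the stream only when that is cheap, otherwise abort the HTTP connection (names and threshold are illustrative, not the actual S3AInputStream code):

{code:java}
// Illustrative only, not the actual S3A code.
class CloseChoiceSketch {
  // Draining a few remaining bytes lets the HTTP connection be reused;
  // draining megabytes is slower than aborting and reconnecting later.
  void closeStream(long bytesRemaining, long drainThreshold) {
    if (bytesRemaining <= drainThreshold) {
      drain(); // cheap: finish the body, keep the connection poolable
    } else {
      abort(); // expensive tail: kill the connection instead
    }
  }

  private void drain() { /* read and discard the remaining bytes */ }

  private void abort() { /* abort the underlying HTTP request */ }
}
{code}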



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14118) move jets3t into a dependency on hadoop-aws JAR

2017-02-24 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14118:
---

 Summary: move jets3t into a dependency on hadoop-aws JAR
 Key: HADOOP-14118
 URL: https://issues.apache.org/jira/browse/HADOOP-14118
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build, fs/s3
Affects Versions: 2.9.0
Reporter: Steve Loughran


hadoop-common still declares a dependency on jets3t, which allows downstream 
projects to pick it up. But as they can't get s3n to work without the 
hadoop-aws JAR, it's hard to see how much use this is.

I propose: moving it to a dependency of hadoop-aws JAR alone.
Marking as incompatible, as it will only matter in the specific situation where:

* downstream maven build doesn't pull in hadoop-aws
* command line doesn't have hadoop-aws JAR dependencies on CP and still wants to 
use jets3t. Hard to imagine how this arises.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14113) review ADL Docs

2017-02-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882710#comment-15882710
 ] 

Hudson commented on HADOOP-14113:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11299 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11299/])
HADOOP-14113. Review ADL Docs. Contributed by Steve Loughran (stevel: rev 
e60c6543d57611039b0438d5dcb4cb19ee239bb6)
* (edit) hadoop-tools/hadoop-azure-datalake/src/site/markdown/index.md


> review ADL Docs
> ---
>
> Key: HADOOP-14113
> URL: https://issues.apache.org/jira/browse/HADOOP-14113
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14113-001.patch
>
>
> Do a quick review of the ADL docs and edit where appropriate.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13817) Add a finite shell command timeout to ShellBasedUnixGroupsMapping

2017-02-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882705#comment-15882705
 ] 

ASF GitHub Bot commented on HADOOP-13817:
-

Github user QwertyManiac commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/161#discussion_r102943834
  
--- Diff: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedUnixGroupsMapping.java
 ---
@@ -133,8 +177,26 @@ protected ShellCommandExecutor 
createGroupIDExecutor(String userName) {
 groups = resolvePartialGroupNames(user, e.getMessage(),
 executor.getOutput());
   } catch (PartialGroupNameException pge) {
-LOG.warn("unable to return groups for user " + user, pge);
-return new LinkedList<>();
+LOG.warn("unable to return groups for user {}", user, pge);
+return EMPTY_GROUPS;
+  }
+} catch (IOException ioe) {
+  // If its a shell executor timeout, indicate so in the message
+  // but treat the result as empty instead of throwing it up,
+  // similar to how partial resolution failures are handled above
+  if (executor.isTimedOut()) {
+LOG.warn(
+"Unable to return groups for user '{}' as shell group lookup " 
+
+"command '{}' ran longer than the configured timeout limit of 
" +
+"{} seconds.",
+user,
+Arrays.asList(executor.getExecString()),
--- End diff --

Thank you again @jojochuang. I've added a new change here addressing your 
comments. I've also uploaded the full patch directly to JIRA to trigger a build 
test.


> Add a finite shell command timeout to ShellBasedUnixGroupsMapping
> -
>
> Key: HADOOP-13817
> URL: https://issues.apache.org/jira/browse/HADOOP-13817
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Attachments: HADOOP-13817.000.patch, HADOOP-13817-branch-2.000.patch
>
>
> The ShellBasedUnixGroupsMapping runs various {{id}} commands via the 
> ShellCommandExecutor modules without a timeout set (it is set to 0, which 
> implies an infinite wait).
> If this command hangs for a long time on the OS end due to an unresponsive 
> groups backend or other reasons, it also blocks the handlers that use it on 
> the NameNode (or other services that use this class). That inadvertently 
> causes odd timeout troubles on the client end, where it's forced to retry (only 
> to likely run into such hangs again with every attempt until at least one 
> command returns).
> It would be helpful to have a finite command timeout after which we may give 
> up on the command and return the result equivalent of no groups found.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13817) Add a finite shell command timeout to ShellBasedUnixGroupsMapping

2017-02-24 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-13817:
-
Target Version/s: 2.9.0, 3.0.0-beta1

> Add a finite shell command timeout to ShellBasedUnixGroupsMapping
> -
>
> Key: HADOOP-13817
> URL: https://issues.apache.org/jira/browse/HADOOP-13817
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Attachments: HADOOP-13817.000.patch, HADOOP-13817-branch-2.000.patch
>
>
> The ShellBasedUnixGroupsMapping runs various {{id}} commands via the 
> ShellCommandExecutor modules without a timeout set (it is set to 0, which 
> implies an infinite wait).
> If this command hangs for a long time on the OS end due to an unresponsive 
> groups backend or other reasons, it also blocks the handlers that use it on 
> the NameNode (or other services that use this class). That inadvertently 
> causes odd timeout troubles on the client end, where it's forced to retry (only 
> to likely run into such hangs again with every attempt until at least one 
> command returns).
> It would be helpful to have a finite command timeout after which we may give 
> up on the command and return the result equivalent of no groups found.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13817) Add a finite shell command timeout to ShellBasedUnixGroupsMapping

2017-02-24 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-13817:
-
Attachment: HADOOP-13817.000.patch
HADOOP-13817-branch-2.000.patch

Uploading direct patches to trigger a Jenkins build.

> Add a finite shell command timeout to ShellBasedUnixGroupsMapping
> -
>
> Key: HADOOP-13817
> URL: https://issues.apache.org/jira/browse/HADOOP-13817
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Attachments: HADOOP-13817.000.patch, HADOOP-13817-branch-2.000.patch
>
>
> The ShellBasedUnixGroupsMapping runs various {{id}} commands via the 
> ShellCommandExecutor modules without a timeout set (it is set to 0, which 
> implies an infinite wait).
> If this command hangs for a long time on the OS end due to an unresponsive 
> groups backend or other reasons, it also blocks the handlers that use it on 
> the NameNode (or other services that use this class). That inadvertently 
> causes odd timeout troubles on the client end, where it's forced to retry (only 
> to likely run into such hangs again with every attempt until at least one 
> command returns).
> It would be helpful to have a finite command timeout after which we may give 
> up on the command and return the result equivalent of no groups found.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14028) S3A block output streams don't delete temporary files in multipart uploads

2017-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882691#comment-15882691
 ] 

Hadoop QA commented on HADOOP-14028:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
34s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 4 
new + 27 unchanged - 2 fixed = 31 total (was 29) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_121. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5af2af1 |
| JIRA Issue | HADOOP-14028 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12854467/HADOOP-14028-branch-2.8-008.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux de828f17db34 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2.8 / ff87ca8 |
| Default Java | 1.7.0_121 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-orac

[jira] [Updated] (HADOOP-14113) review ADL Docs

2017-02-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14113:

   Resolution: Fixed
Fix Version/s: 3.0.0-alpha3
   Status: Resolved  (was: Patch Available)

thanks, committed

> review ADL Docs
> ---
>
> Key: HADOOP-14113
> URL: https://issues.apache.org/jira/browse/HADOOP-14113
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14113-001.patch
>
>
> Do a quick review of the ADL docs and edit where appropriate.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14028) S3A block output streams don't delete temporary files in multipart uploads

2017-02-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14028:

Attachment: HADOOP-14028-branch-2.8-008.patch

patch 008; addresses patch feedback:
* make the field private
* use the passed-in log (with a null check first) in closeAll
Also: used xor in the precondition check so it's clearer; added a comment too.

I've not moved to the hdfs-client {{IOUtilsClient#cleanup()}} because it's in 
the wrong package. Also, looking at it and the Hadoop client one, both are at 
risk of failing if a closeable raises an exception in close() and then throws 
an NPE in {{toString()}}. The one here relies on SLF4J to be more robust in its 
string evaluation. I'd have pulled it into hadoop-common, except then I'd have 
to write the test to create the failure conditions.
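
For context, a rough sketch of what such a null-tolerant {{closeAll}} can look 
like; this illustrates the approach described above rather than reproducing 
the patch:

{code:title=CloseHelper.java}
import java.io.Closeable;

import org.slf4j.Logger;

public final class CloseHelper {

  private CloseHelper() {
  }

  /**
   * Close all the supplied closeables, skipping nulls; failures are logged
   * at debug level if (and only if) a log was supplied.
   */
  public static void closeAll(Logger log, Closeable... closeables) {
    if (closeables == null) {
      return;
    }
    for (Closeable c : closeables) {
      if (c == null) {
        continue;
      }
      try {
        c.close();
      } catch (Exception e) {
        if (log != null) {
          // SLF4J expands the {} lazily and guards against a throwing
          // toString(), so one bad closeable can't abort the rest.
          log.debug("Exception in closing {}", c, e);
        }
      }
    }
  }
}
{code}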

> S3A block output streams don't delete temporary files in multipart uploads
> --
>
> Key: HADOOP-14028
> URL: https://issues.apache.org/jira/browse/HADOOP-14028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: JDK 8 + ORC 1.3.0 + hadoop-aws 3.0.0-alpha2
>Reporter: Seth Fitzsimmons
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-14028-006.patch, HADOOP-14028-007.patch, 
> HADOOP-14028-branch-2-001.patch, HADOOP-14028-branch-2.8-002.patch, 
> HADOOP-14028-branch-2.8-003.patch, HADOOP-14028-branch-2.8-004.patch, 
> HADOOP-14028-branch-2.8-005.patch, HADOOP-14028-branch-2.8-007.patch, 
> HADOOP-14028-branch-2.8-008.patch
>
>
> I have `fs.s3a.fast.upload` enabled with 3.0.0-alpha2 (it's exactly what I 
> was looking for after running into the same OOM problems) and don't see it 
> cleaning up the disk-cached blocks.
> I'm generating a ~50GB file on an instance with ~6GB free when the process 
> starts. My expectation is that local copies of the blocks would be deleted 
> after those parts finish uploading, but I'm seeing more than 15 blocks in 
> /tmp (and none of them have been deleted thus far).
> I see that DiskBlock deletes temporary files when closed, but is it closed 
> after individual blocks have finished uploading or when the entire file has 
> been fully written to the FS (full upload completed, including all parts)?
> As a temporary workaround to avoid running out of space, I'm listing files, 
> sorting by atime, and deleting anything older than the first 20: `ls -ut | 
> tail -n +21 | xargs rm`
> Steve Loughran says:
> > They should be deleted as soon as the upload completes; the close() call 
> > that the AWS httpclient makes on the input stream triggers the deletion. 
> > Though there aren't tests for it, as I recall.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14028) S3A block output streams don't delete temporary files in multipart uploads

2017-02-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14028:

Status: Patch Available  (was: Open)

> S3A block output streams don't delete temporary files in multipart uploads
> --
>
> Key: HADOOP-14028
> URL: https://issues.apache.org/jira/browse/HADOOP-14028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: JDK 8 + ORC 1.3.0 + hadoop-aws 3.0.0-alpha2
>Reporter: Seth Fitzsimmons
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-14028-006.patch, HADOOP-14028-007.patch, 
> HADOOP-14028-branch-2-001.patch, HADOOP-14028-branch-2.8-002.patch, 
> HADOOP-14028-branch-2.8-003.patch, HADOOP-14028-branch-2.8-004.patch, 
> HADOOP-14028-branch-2.8-005.patch, HADOOP-14028-branch-2.8-007.patch, 
> HADOOP-14028-branch-2.8-008.patch
>
>
> I have `fs.s3a.fast.upload` enabled with 3.0.0-alpha2 (it's exactly what I 
> was looking for after running into the same OOM problems) and don't see it 
> cleaning up the disk-cached blocks.
> I'm generating a ~50GB file on an instance with ~6GB free when the process 
> starts. My expectation is that local copies of the blocks would be deleted 
> after those parts finish uploading, but I'm seeing more than 15 blocks in 
> /tmp (and none of them have been deleted thus far).
> I see that DiskBlock deletes temporary files when closed, but is it closed 
> after individual blocks have finished uploading or when the entire file has 
> been fully written to the FS (full upload completed, including all parts)?
> As a temporary workaround to avoid running out of space, I'm listing files, 
> sorting by atime, and deleting anything older than the first 20: `ls -ut | 
> tail -n +21 | xargs rm`
> Steve Loughran says:
> > They should be deleted as soon as the upload completes; the close() call 
> > that the AWS httpclient makes on the input stream triggers the deletion. 
> > Though there aren't tests for it, as I recall.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14028) S3A block output streams don't delete temporary files in multipart uploads

2017-02-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14028:

Status: Open  (was: Patch Available)

> S3A block output streams don't delete temporary files in multipart uploads
> --
>
> Key: HADOOP-14028
> URL: https://issues.apache.org/jira/browse/HADOOP-14028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: JDK 8 + ORC 1.3.0 + hadoop-aws 3.0.0-alpha2
>Reporter: Seth Fitzsimmons
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-14028-006.patch, HADOOP-14028-007.patch, 
> HADOOP-14028-branch-2-001.patch, HADOOP-14028-branch-2.8-002.patch, 
> HADOOP-14028-branch-2.8-003.patch, HADOOP-14028-branch-2.8-004.patch, 
> HADOOP-14028-branch-2.8-005.patch, HADOOP-14028-branch-2.8-007.patch
>
>
> I have `fs.s3a.fast.upload` enabled with 3.0.0-alpha2 (it's exactly what I 
> was looking for after running into the same OOM problems) and don't see it 
> cleaning up the disk-cached blocks.
> I'm generating a ~50GB file on an instance with ~6GB free when the process 
> starts. My expectation is that local copies of the blocks would be deleted 
> after those parts finish uploading, but I'm seeing more than 15 blocks in 
> /tmp (and none of them have been deleted thus far).
> I see that DiskBlock deletes temporary files when closed, but is it closed 
> after individual blocks have finished uploading or when the entire file has 
> been fully written to the FS (full upload completed, including all parts)?
> As a temporary workaround to avoid running out of space, I'm listing files, 
> sorting by atime, and deleting anything older than the first 20: `ls -ut | 
> tail -n +21 | xargs rm`
> Steve Loughran says:
> > They should be deleted as soon as the upload completes; the close() call 
> > that the AWS httpclient makes on the input stream triggers the deletion. 
> > Though there aren't tests for it, as I recall.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14097) Remove Java6 specific code from GzipCodec.java

2017-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882514#comment-15882514
 ] 

Hadoop QA commented on HADOOP-14097:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  6s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14097 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/1285/HDFS-14097.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3ce17089d3e0 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9c22a91 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11707/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11707/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11707/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove Java6 specific code from GzipCodec.java
> --
>
> Key: HADOOP-14097
> URL: https://issues.apache.org/jira/browse/HADOOP-14097
> Project: Hadoop Common
>  Issue Type

[jira] [Commented] (HADOOP-14028) S3A block output streams don't delete temporary files in multipart uploads

2017-02-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882504#comment-15882504
 ] 

Steve Loughran commented on HADOOP-14028:
-

Hadn't seen {{IOUtilsClient}}... it's in HDFS and we don't actually depend on 
it. Move those methods into hadoop-common and I can use them.

> S3A block output streams don't delete temporary files in multipart uploads
> --
>
> Key: HADOOP-14028
> URL: https://issues.apache.org/jira/browse/HADOOP-14028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: JDK 8 + ORC 1.3.0 + hadoop-aws 3.0.0-alpha2
>Reporter: Seth Fitzsimmons
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-14028-006.patch, HADOOP-14028-007.patch, 
> HADOOP-14028-branch-2-001.patch, HADOOP-14028-branch-2.8-002.patch, 
> HADOOP-14028-branch-2.8-003.patch, HADOOP-14028-branch-2.8-004.patch, 
> HADOOP-14028-branch-2.8-005.patch, HADOOP-14028-branch-2.8-007.patch
>
>
> I have `fs.s3a.fast.upload` enabled with 3.0.0-alpha2 (it's exactly what I 
> was looking for after running into the same OOM problems) and don't see it 
> cleaning up the disk-cached blocks.
> I'm generating a ~50GB file on an instance with ~6GB free when the process 
> starts. My expectation is that local copies of the blocks would be deleted 
> after those parts finish uploading, but I'm seeing more than 15 blocks in 
> /tmp (and none of them have been deleted thus far).
> I see that DiskBlock deletes temporary files when closed, but is it closed 
> after individual blocks have finished uploading or when the entire file has 
> been fully written to the FS (full upload completed, including all parts)?
> As a temporary workaround to avoid running out of space, I'm listing files, 
> sorting by atime, and deleting anything older than the first 20: `ls -ut | 
> tail -n +21 | xargs rm`
> Steve Loughran says:
> > They should be deleted as soon as the upload completes; the close() call 
> > that the AWS httpclient makes on the input stream triggers the deletion. 
> > Though there aren't tests for it, as I recall.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14114) S3A can no longer handle unencoded + in URIs

2017-02-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882465#comment-15882465
 ] 

Hudson commented on HADOOP-14114:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11298 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11298/])
HADOOP-14114 S3A can no longer handle unencoded + in URIs. Contributed (stevel: 
rev 9c22a91662af24569191ce45289ef8266e8755cc)
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/TestS3xLoginHelper.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3native/S3xLoginHelper.java


> S3A can no longer handle unencoded + in URIs 
> -
>
> Key: HADOOP-14114
> URL: https://issues.apache.org/jira/browse/HADOOP-14114
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Minor
> Fix For: 2.8.1
>
> Attachments: HADOOP-14114.001.patch
>
>
> Amazon secret access keys can include alphanumeric characters, but also / and 
> + (I wish there were an official source that was really specific on what they 
> can contain, but I'll have to rely on a few blog posts and my own experience).
> Keys containing slashes used to be impossible to embed in the URL (e.g. 
> s3a://access_key:secret_key@bucket/) but it is now possible to do so via URL 
> encoding. Pluses used to work, but that is now *only* possible via URL 
> encoding.
> In the case of pluses, they don't appear to cause any other problems for 
> parsing. So IMO the best all-around solution here is for people to always 
> URL-encode these keys; but so that keys that used to work just fine can 
> continue to work, all we need to do is detect an unencoded plus, log a 
> warning, and re-encode it for the user.
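
A minimal sketch of that detect, warn, and re-encode idea (hypothetical code, 
not necessarily the committed {{S3xLoginHelper}} change):

{code:title=PlusReEncode.java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class PlusReEncode {

  private static final Logger LOG =
      LoggerFactory.getLogger(PlusReEncode.class);

  private PlusReEncode() {
  }

  /**
   * If a secret extracted from a URI still contains a raw '+', treat it as
   * an unencoded key: warn and percent-encode the plus so the rest of the
   * URI handling sees the encoded form.
   */
  static String reEncodeSecret(String secret) {
    if (secret != null && secret.contains("+")) {
      LOG.warn("Secret key contains an unencoded '+'; re-encoding it. "
          + "Please URL-encode secret keys embedded in URIs.");
      return secret.replace("+", "%2B");
    }
    return secret;
  }
}
{code}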



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14114) S3A can no longer handle unencoded + in URIs

2017-02-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14114:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> S3A can no longer handle unencoded + in URIs 
> -
>
> Key: HADOOP-14114
> URL: https://issues.apache.org/jira/browse/HADOOP-14114
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Minor
> Fix For: 2.8.1
>
> Attachments: HADOOP-14114.001.patch
>
>
> Amazon secret access keys can include alphanumeric characters, but also / and 
> + (I wish there were an official source that was really specific on what they 
> can contain, but I'll have to rely on a few blog posts and my own experience).
> Keys containing slashes used to be impossible to embed in the URL (e.g. 
> s3a://access_key:secret_key@bucket/) but it is now possible to do so via URL 
> encoding. Pluses used to work, but that is now *only* possible via URL 
> encoding.
> In the case of pluses, they don't appear to cause any other problems for 
> parsing. So IMO the best all-around solution here is for people to always 
> URL-encode these keys; but so that keys that used to work just fine can 
> continue to work, all we need to do is detect an unencoded plus, log a 
> warning, and re-encode it for the user.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14114) S3A can no longer handle unencoded + in URIs

2017-02-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14114:

 Priority: Minor  (was: Major)
Fix Version/s: 2.8.1
  Component/s: fs/s3

+1, committed to 2.8.1+. For the curious, I've signed that commit; a {{git log 
--show-signature ff87ca84418a710c6}} will show whether or not you trust me.

> S3A can no longer handle unencoded + in URIs 
> -
>
> Key: HADOOP-14114
> URL: https://issues.apache.org/jira/browse/HADOOP-14114
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Minor
> Fix For: 2.8.1
>
> Attachments: HADOOP-14114.001.patch
>
>
> Amazon secret access keys can include alphanumeric characters, but also / and 
> + (I wish there were an official source that was really specific on what they 
> can contain, but I'll have to rely on a few blog posts and my own experience).
> Keys containing slashes used to be impossible to embed in the URL (e.g. 
> s3a://access_key:secret_key@bucket/) but it is now possible to do so via URL 
> encoding. Pluses used to work, but that is now *only* possible via URL 
> encoding.
> In the case of pluses, they don't appear to cause any other problems for 
> parsing. So IMO the best all-around solution here is for people to always 
> URL-encode these keys; but so that keys that used to work just fine can 
> continue to work, all we need to do is detect an unencoded plus, log a 
> warning, and re-encode it for the user.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14097) Remove Java6 specific code from GzipCodec.java

2017-02-24 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HADOOP-14097:
--
Status: Patch Available  (was: Open)

Removed the code introduced by HADOOP-8419 (except the unit test).

> Remove Java6 specific code from GzipCodec.java
> --
>
> Key: HADOOP-14097
> URL: https://issues.apache.org/jira/browse/HADOOP-14097
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Elek, Marton
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-14097.001.patch
>
>
> GzipCodec has Java 6 specific code and it can be removed.
> {code:title=GzipCodec.java}
>   public static final String JVMVersion= 
> System.getProperty("java.version");
>   private static final boolean HAS_BROKEN_FINISH =
>   (IBM_JAVA && JVMVersion.contains("1.6.0"));
> {code}
> Hadoop 2.7+ dropped Java 6 support.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14097) Remove Java6 specific code from GzipCodec.java

2017-02-24 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HADOOP-14097:
--
Attachment: HDFS-14097.001.patch

> Remove Java6 specific code from GzipCodec.java
> --
>
> Key: HADOOP-14097
> URL: https://issues.apache.org/jira/browse/HADOOP-14097
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Elek, Marton
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-14097.001.patch
>
>
> GzipCodec has Java 6 specific code and it can be removed.
> {code:title=GzipCodec.java}
>   public static final String JVMVersion= 
> System.getProperty("java.version");
>   private static final boolean HAS_BROKEN_FINISH =
>   (IBM_JAVA && JVMVersion.contains("1.6.0"));
> {code}
> Hadoop 2.7+ dropped Java 6 support.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14114) S3A can no longer handle unencoded + in URIs

2017-02-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882411#comment-15882411
 ] 

Steve Loughran commented on HADOOP-14114:
-

OK, +1.

Impressed by your checking there.

Out of curiosity, did you test against a v4 endpoint to see what the error 
message was? I wonder if it is different.

> S3A can no longer handle unencoded + in URIs 
> -
>
> Key: HADOOP-14114
> URL: https://issues.apache.org/jira/browse/HADOOP-14114
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14114.001.patch
>
>
> Amazon secret access keys can include alphanumeric characters, but also / and 
> + (I wish there were an official source that was really specific on what they 
> can contain, but I'll have to rely on a few blog posts and my own experience).
> Keys containing slashes used to be impossible to embed in the URL (e.g. 
> s3a://access_key:secret_key@bucket/) but it is now possible to do so via URL 
> encoding. Pluses used to work, but that is now *only* possible via URL 
> encoding.
> In the case of pluses, they don't appear to cause any other problems for 
> parsing. So IMO the best all-around solution here is for people to always 
> URL-encode these keys; but so that keys that used to work just fine can 
> continue to work, all we need to do is detect an unencoded plus, log a 
> warning, and re-encode it for the user.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14038) Add ADLS credential properties to core-default.xml

2017-02-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882407#comment-15882407
 ] 

Steve Loughran commented on HADOOP-14038:
-

I agree with tagging everything as deprecated; we don't want to break anyone's 
setup.

> Add ADLS credential properties to core-default.xml
> --
>
> Key: HADOOP-14038
> URL: https://issues.apache.org/jira/browse/HADOOP-14038
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha3
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14038.001.patch, HADOOP-14038.002.patch, 
> HADOOP-14038.003.patch
>
>
> Add ADLS credential configuration properties to {{core-default.xml}}. 
> Set/document the default value for 
> {{dfs.adls.oauth2.access.token.provider.type}} to {{ClientCredential}}.
> Fix {{AdlFileSystem#getAccessTokenProvider}} which implies the provider type 
> is {{Custom}}.
> Fix several unit tests that set {{dfs.adls.oauth2.access.token.provider}} but 
> do not set {{dfs.adls.oauth2.access.token.provider.type}}.
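
For reference, the kind of {{core-default.xml}} entry being asked for might 
look like the following; the property name and default value come from the 
description above, while the description text inside is illustrative:

{code:title=core-default.xml}
<property>
  <name>dfs.adls.oauth2.access.token.provider.type</name>
  <value>ClientCredential</value>
  <description>
    The type of the OAuth2 access token provider used for ADL
    file system access. Defaults to ClientCredential.
  </description>
</property>
{code}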



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org