[jira] [Commented] (HADOOP-14600) LocatedFileStatus constructor forces RawLocalFS to exec a process to get the permissions

2017-12-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16275443#comment-16275443
 ] 

Hudson commented on HADOOP-14600:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13312 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13312/])
HADOOP-14600. LocatedFileStatus constructor forces RawLocalFS to exec a 
(cdouglas: rev f9d195dfe9cc2c3e4659c3475319ac7c937b5c44)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/StatUtils.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/permission/TestFsPermission.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/FsPermission.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestRawLocalFileSystemContract.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/nativeio/TestNativeIO.java


> LocatedFileStatus constructor forces RawLocalFS to exec a process to get the 
> permissions
> 
>
> Key: HADOOP-14600
> URL: https://issues.apache.org/jira/browse/HADOOP-14600
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.3
> Environment: file:// in a dir with many files
>Reporter: Steve Loughran
>Assignee: Ping Liu
> Fix For: 3.1.0
>
> Attachments: HADOOP-14600.001.patch, HADOOP-14600.002.patch, 
> HADOOP-14600.003.patch, HADOOP-14600.004.patch, HADOOP-14600.005.patch, 
> HADOOP-14600.006.patch, HADOOP-14600.007.patch, HADOOP-14600.008.patch, 
> HADOOP-14600.009.patch, TestRawLocalFileSystemContract.java, 
> command_line_test_result__linux.txt, command_line_test_result__windows.txt
>
>
> Reported in SPARK-21137. A {{FileSystem.listStatus}} call really crawls 
> against the local FS, because the {{FileStatus.getPermission()}} call forces 
> {{DeprecatedRawLocalFileStatus}} to spawn a process to read the real UGI 
> values.
> That is: what is a field lookup or even a no-op on every other FS is a 
> process exec/spawn on the local FS, with all the costs. This gets expensive 
> if you have many files.
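The committed fix routes the lookup through {{NativeIO}} instead. Purely to 
illustrate the cost gap described above, here is a minimal sketch using 
standard JDK calls (not the actual Hadoop code path; the class name is 
hypothetical):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.util.Set;

public class PermissionCostSketch {
  public static void main(String[] args) throws IOException, InterruptedException {
    Path file = Paths.get(args.length > 0 ? args[0] : "/tmp");

    // Roughly what the deprecated path does: fork/exec a shell utility per
    // file and parse its output, which is expensive at listStatus scale.
    Process p = new ProcessBuilder("ls", "-ld", file.toString()).start();
    p.waitFor();

    // What an in-process lookup costs instead: a single stat-style metadata
    // call through the JDK, with no process spawn.
    Set<PosixFilePermission> perms = Files.getPosixFilePermissions(file);
    System.out.println(perms);
  }
}
{code}

The first call pays a full fork/exec per file; the second is a single metadata 
syscall, which is why listings over directories with many files diverge so 
sharply.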






[jira] [Updated] (HADOOP-14600) LocatedFileStatus constructor forces RawLocalFS to exec a process to get the permissions

2017-12-01 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-14600:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.0
   Status: Resolved  (was: Patch Available)

I committed this. Thanks [~myapachejira]

> LocatedFileStatus constructor forces RawLocalFS to exec a process to get the 
> permissions
> 
>
> Key: HADOOP-14600
> URL: https://issues.apache.org/jira/browse/HADOOP-14600
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.3
> Environment: file:// in a dir with many files
>Reporter: Steve Loughran
>Assignee: Ping Liu
> Fix For: 3.1.0
>
> Attachments: HADOOP-14600.001.patch, HADOOP-14600.002.patch, 
> HADOOP-14600.003.patch, HADOOP-14600.004.patch, HADOOP-14600.005.patch, 
> HADOOP-14600.006.patch, HADOOP-14600.007.patch, HADOOP-14600.008.patch, 
> HADOOP-14600.009.patch, TestRawLocalFileSystemContract.java, 
> command_line_test_result__linux.txt, command_line_test_result__windows.txt
>
>
> Reported in SPARK-21137. A {{FileSystem.listStatus}} call really crawls 
> against the local FS, because the {{FileStatus.getPermission()}} call forces 
> {{DeprecatedRawLocalFileStatus}} to spawn a process to read the real UGI 
> values.
> That is: what is a field lookup or even a no-op on every other FS is a 
> process exec/spawn on the local FS, with all the costs. This gets expensive 
> if you have many files.






[jira] [Commented] (HADOOP-15086) NativeAzureFileSystem.rename is not atomic

2017-12-01 Thread Cheng Lian (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16275422#comment-16275422
 ] 

Cheng Lian commented on HADOOP-15086:
-

To be more specific, when multiple threads rename files to the same target 
path, more than one *but not all* of the threads can succeed. This is because 
the check-and-copy in {{NativeAzureFileSystem#rename()}} is not atomic.

The problem here is that the expected semantics of 
{{NativeAzureFileSystem#rename()}} are unclear:

- If the semantics are "error if the destination file already exists", then 
only one thread can succeed.
- If the semantics are "overwrite if the destination file already exists", 
then all threads should succeed.
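A minimal reproducer sketch along these lines (this is not the attached 
RenameReproducer.java; the class name, paths, and setup are assumptions):

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RenameRaceSketch {
  public static void main(String[] args) throws Exception {
    // Assumes fs.defaultFS points at the store under test (e.g. wasb://...).
    FileSystem fs = FileSystem.get(new Configuration());
    Path target = new Path("/tmp/rename-target");
    int threads = 10;
    for (int i = 0; i < threads; i++) {
      fs.create(new Path("/tmp/rename-src-" + i)).close();  // distinct sources
    }
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    AtomicInteger successes = new AtomicInteger();
    List<Future<?>> futures = new ArrayList<>();
    for (int i = 0; i < threads; i++) {
      final Path src = new Path("/tmp/rename-src-" + i);
      futures.add(pool.submit(() -> {
        try {
          if (fs.rename(src, target)) {   // the check-and-copy race lives here
            successes.incrementAndGet();
          }
        } catch (IOException ignored) {
        }
      }));
    }
    for (Future<?> f : futures) {
      f.get();
    }
    pool.shutdown();
    // On an atomic FS at most one rename succeeds; >1 demonstrates the race.
    System.out.println("successful renames: " + successes.get());
  }
}
{code}

Run against HDFS, at most one rename should report success; against 
NativeAzureFileSystem, the non-atomic check-and-copy can let several threads 
through.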

> NativeAzureFileSystem.rename is not atomic
> --
>
> Key: HADOOP-15086
> URL: https://issues.apache.org/jira/browse/HADOOP-15086
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.7.3
>Reporter: Shixiong Zhu
> Attachments: RenameReproducer.java
>
>
> When multiple threads rename files to the same target path, more than one 
> thread can succeed, because the check-and-copy in `rename` is not atomic.
> I would expect it to be atomic, just like HDFS.






[jira] [Commented] (HADOOP-14985) Remove subversion related code from VersionInfoMojo.java

2017-12-01 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16275408#comment-16275408
 ] 

Akira Ajisaka commented on HADOOP-14985:


Thank you for the patch, [~ajayydv]! Would you update the following comment as 
well?
{noformat}
   * Determines which SCM is in use (Subversion, git, or none) and captures
   * output of the SCM command for later parsing.
{noformat}
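For example, once the Subversion branch is removed, one possible rewording 
(a suggestion only, not the committed text) would be:

{noformat}
   * Determines which SCM is in use (git or none) and captures
   * output of the SCM command for later parsing.
{noformat}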

> Remove subversion related code from VersionInfoMojo.java
> 
>
> Key: HADOOP-14985
> URL: https://issues.apache.org/jira/browse/HADOOP-14985
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Akira Ajisaka
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-14985.001.patch
>
>
> When building Apache Hadoop, we can see the following message:
> {noformat}
> [WARNING] [svn, info] failed with error code 1
> {noformat}
> We have migrated the code base from svn to git, so the message is useless.






[jira] [Updated] (HADOOP-15080) Cat-X dependency on org.json via derived json-lib

2017-12-01 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-15080:
---
Summary: Cat-X dependency on org.json via derived json-lib  (was: Cat-X 
transitive dependency on org.json library via json-lib)

> Cat-X dependency on org.json via derived json-lib
> -
>
> Key: HADOOP-15080
> URL: https://issues.apache.org/jira/browse/HADOOP-15080
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 3.0.0-beta1
>Reporter: Chris Douglas
>Priority: Blocker
>
> The OSS SDK has a dependency on json-lib. In LEGAL-245, the org.json library 
> (from which json-lib may be derived) is released under a 
> [category-x|https://www.apache.org/legal/resolved.html#json] license.






[jira] [Updated] (HADOOP-15086) NativeAzureFileSystem.rename is not atomic

2017-12-01 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated HADOOP-15086:
--
Attachment: RenameReproducer.java

Reproducer

> NativeAzureFileSystem.rename is not atomic
> --
>
> Key: HADOOP-15086
> URL: https://issues.apache.org/jira/browse/HADOOP-15086
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.7.3
>Reporter: Shixiong Zhu
> Attachments: RenameReproducer.java
>
>
> When multiple threads rename files to the same target path, more than one 
> thread can succeed, because the check-and-copy in `rename` is not atomic.
> I would expect it to be atomic, just like HDFS.






[jira] [Comment Edited] (HADOOP-15086) NativeAzureFileSystem.rename is not atomic

2017-12-01 Thread Shixiong Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16275198#comment-16275198
 ] 

Shixiong Zhu edited comment on HADOOP-15086 at 12/1/17 11:43 PM:
-

Attached a reproducer


was (Author: zsxwing):
Reproducer

> NativeAzureFileSystem.rename is not atomic
> --
>
> Key: HADOOP-15086
> URL: https://issues.apache.org/jira/browse/HADOOP-15086
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.7.3
>Reporter: Shixiong Zhu
> Attachments: RenameReproducer.java
>
>
> When multiple threads rename files to the same target path, more than one 
> thread can succeed, because the check-and-copy in `rename` is not atomic.
> I would expect it to be atomic, just like HDFS.






[jira] [Created] (HADOOP-15086) NativeAzureFileSystem.rename is not atomic

2017-12-01 Thread Shixiong Zhu (JIRA)
Shixiong Zhu created HADOOP-15086:
-

 Summary: NativeAzureFileSystem.rename is not atomic
 Key: HADOOP-15086
 URL: https://issues.apache.org/jira/browse/HADOOP-15086
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 2.7.3
Reporter: Shixiong Zhu


When multiple threads rename files to the same target path, more than one 
thread can succeed, because the check-and-copy in `rename` is not atomic.






[jira] [Updated] (HADOOP-15086) NativeAzureFileSystem.rename is not atomic

2017-12-01 Thread Shixiong Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated HADOOP-15086:
--
Description: 
When multiple threads rename files to the same target path, more than one 
thread can succeed, because the check-and-copy in `rename` is not atomic.

I would expect it to be atomic, just like HDFS.

  was:When multiple threads rename files to the same target path, more than one 
thread can succeed, because the check-and-copy in `rename` is not atomic.


> NativeAzureFileSystem.rename is not atomic
> --
>
> Key: HADOOP-15086
> URL: https://issues.apache.org/jira/browse/HADOOP-15086
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.7.3
>Reporter: Shixiong Zhu
>
> When multiple threads rename files to the same target path, more than one 
> thread can succeed, because the check-and-copy in `rename` is not atomic.
> I would expect it to be atomic, just like HDFS.






[jira] [Commented] (HADOOP-14985) Remove subversion related code from VersionInfoMojo.java

2017-12-01 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16275186#comment-16275186
 ] 

genericqa commented on HADOOP-14985:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} hadoop-maven-plugins: The patch generated 0 new + 31 
unchanged - 26 fixed = 31 total (was 57) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-maven-plugins in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-14985 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900297/HADOOP-14985.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4791c7c707d1 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 60f95fb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13773/testReport/ |
| Max. process+thread count | 303 (vs. ulimit of 5000) |
| modules | C: hadoop-maven-plugins U: hadoop-maven-plugins |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13773/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |



[jira] [Commented] (HADOOP-14475) Metrics of S3A don't print out when enable it in Hadoop metrics property file

2017-12-01 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16275170#comment-16275170
 ] 

genericqa commented on HADOOP-14475:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 1 
new + 19 unchanged - 0 fixed = 20 total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
40s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-14475 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900294/HADOOP-14475.017.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 000b4aa474eb 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 60f95fb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13772/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13772/testReport/ |
| Max. process+thread count | 324 (vs. ulimit of 5000) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13772/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Updated] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs

2017-12-01 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14788:

Status: Open  (was: Patch Available)

> Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
> --
>
> Key: HADOOP-14788
> URL: https://issues.apache.org/jira/browse/HADOOP-14788
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-14788.001.patch, HADOOP-14788.002.patch, 
> HADOOP-14788.003.patch
>
>
> When {{Credentials readTokenStorageFile}} gets an IOE, it catches & wraps it 
> with the filename, so losing the exception class information.
> Is this needed, or can it pass everything up?
> If it is needed, well, it's a common pattern: wrapping the exception with the 
> path & operation. Maybe it's time to add an IOE version of 
> {{NetworkUtils.wrapException()}} which handles the broader set of IOEs.
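A class-preserving wrapper of the kind being suggested could look like the 
following sketch (the helper name and the reflection approach are assumptions, 
not the actual Hadoop utility):

{code}
import java.io.IOException;

public final class IOEWrapSketch {
  private IOEWrapSketch() {}

  /** Wrap an IOE with path context without losing its concrete class. */
  public static IOException wrapWithPath(String path, IOException e) {
    try {
      // Rebuild the same exception type with an augmented message.
      IOException wrapped = e.getClass()
          .getConstructor(String.class)
          .newInstance(path + ": " + e.getMessage());
      wrapped.initCause(e);
      return wrapped;
    } catch (ReflectiveOperationException noStringCtor) {
      // No public (String) constructor: better to return the original
      // exception than to downgrade it to a plain IOException.
      return e;
    }
  }
}
{code}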






[jira] [Updated] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs

2017-12-01 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14788:

Status: Patch Available  (was: Open)

> Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
> --
>
> Key: HADOOP-14788
> URL: https://issues.apache.org/jira/browse/HADOOP-14788
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-14788.001.patch, HADOOP-14788.002.patch, 
> HADOOP-14788.003.patch
>
>
> When {{Credentials readTokenStorageFile}} gets an IOE, it catches & wraps it 
> with the filename, so losing the exception class information.
> Is this needed, or can it pass everything up?
> If it is needed, well, it's a common pattern: wrapping the exception with the 
> path & operation. Maybe it's time to add an IOE version of 
> {{NetworkUtils.wrapException()}} which handles the broader set of IOEs.






[jira] [Updated] (HADOOP-14985) Remove subversion related code from VersionInfoMojo.java

2017-12-01 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14985:

Attachment: HADOOP-14985.001.patch

> Remove subversion related code from VersionInfoMojo.java
> 
>
> Key: HADOOP-14985
> URL: https://issues.apache.org/jira/browse/HADOOP-14985
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Akira Ajisaka
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-14985.001.patch
>
>
> When building Apache Hadoop, we can see the following message:
> {noformat}
> [WARNING] [svn, info] failed with error code 1
> {noformat}
> We have migrated the code base from svn to git, so the message is useless.






[jira] [Updated] (HADOOP-14985) Remove subversion related code from VersionInfoMojo.java

2017-12-01 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14985:

Status: Patch Available  (was: Open)

> Remove subversion related code from VersionInfoMojo.java
> 
>
> Key: HADOOP-14985
> URL: https://issues.apache.org/jira/browse/HADOOP-14985
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Akira Ajisaka
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-14985.001.patch
>
>
> When building Apache Hadoop, we can see the following message:
> {noformat}
> [WARNING] [svn, info] failed with error code 1
> {noformat}
> We have migrated the code base from svn to git, so the message is useless.






[jira] [Commented] (HADOOP-14964) AliyunOSS: backport Aliyun OSS module to branch-2

2017-12-01 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16275100#comment-16275100
 ] 

Junping Du commented on HADOOP-14964:
-

Two HDFS blockers are resolved, so 2.8.3 doesn't have any other blockers now. 
It looks like the patch here still needs more discussion on the license issue, 
so I think we should go ahead and cut 2.8.3 RC0 and defer this patch to the 
next 2.8 release.

> AliyunOSS: backport Aliyun OSS module to branch-2
> -
>
> Key: HADOOP-14964
> URL: https://issues.apache.org/jira/browse/HADOOP-14964
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Reporter: Genmao Yu
>Assignee: SammiChen
> Fix For: 2.9.1
>
> Attachments: HADOOP-14964-branch-2.000.patch, 
> HADOOP-14964-branch-2.8.000.patch, HADOOP-14964-branch-2.8.001.patch, 
> HADOOP-14964-branch-2.9.001.patch
>
>







[jira] [Updated] (HADOOP-14475) Metrics of S3A don't print out when enable it in Hadoop metrics property file

2017-12-01 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14475:
---
Attachment: HADOOP-14475.017.patch

Let's try this one on for size :) The only change in 016 is the documentation; 
the only change in 017 is replacing the use of synchronized against instances 
of S3AInstrumentation with synchronized(metricsSystemLock), which is static. I 
think this is correct.
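In outline, the 017 locking scheme has the following shape (a sketch only; the 
field and method names here are assumptions, not the patch itself):

{code}
import org.apache.hadoop.metrics2.MetricsSystem;
import org.apache.hadoop.metrics2.impl.MetricsSystemImpl;

class MetricsLockSketch {
  // One static lock so every instance serializes on the same monitor.
  private static final Object METRICS_SYSTEM_LOCK = new Object();
  private static MetricsSystem metricsSystem;    // lazily created, shared
  private static int activeInstances;            // instances still open

  static MetricsSystem getMetricsSystem() {
    synchronized (METRICS_SYSTEM_LOCK) {
      if (metricsSystem == null) {
        metricsSystem = new MetricsSystemImpl();
      }
      activeInstances++;
      return metricsSystem;
    }
  }

  void close() {
    synchronized (METRICS_SYSTEM_LOCK) {
      if (--activeInstances == 0 && metricsSystem != null) {
        metricsSystem.shutdown();                // last one out shuts it down
        metricsSystem = null;
      }
    }
  }
}
{code}

Because the lock, the system reference, and the counter are all static, every 
instance serializes on the same monitor, which is the property patch 017 is 
after.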

> Metrics of S3A don't print out  when enable it in Hadoop metrics property file
> --
>
> Key: HADOOP-14475
> URL: https://issues.apache.org/jira/browse/HADOOP-14475
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: uname -a
> Linux client01 4.4.0-74-generic #95-Ubuntu SMP Wed Apr 12 09:50:34 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
>  cat /etc/issue
> Ubuntu 16.04.2 LTS \n \l
>Reporter: Yonger
>Assignee: Yonger
> Attachments: HADOOP-14475-003.patch, HADOOP-14475.002.patch, 
> HADOOP-14475.005.patch, HADOOP-14475.006.patch, HADOOP-14475.008.patch, 
> HADOOP-14475.009.patch, HADOOP-14475.010.patch, HADOOP-14475.011.patch, 
> HADOOP-14475.012.patch, HADOOP-14475.013.patch, HADOOP-14475.014.patch, 
> HADOOP-14475.015.patch, HADOOP-14475.016.patch, HADOOP-14475.017.patch, 
> HADOOP-14775.007.patch, failsafe-report-s3a-it.html, 
> failsafe-report-s3a-scale.html, failsafe-report-scale.html, 
> failsafe-report-scale.zip, s3a-metrics.patch1, stdout.zip
>
>
> *.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
> #*.sink.file.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #*.sink.influxdb.url=http:/xx
> #*.sink.influxdb.influxdb_port=8086
> #*.sink.influxdb.database=hadoop
> #*.sink.influxdb.influxdb_username=hadoop
> #*.sink.influxdb.influxdb_password=hadoop
> #*.sink.ingluxdb.cluster=c1
> *.period=10
> #namenode.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #S3AFileSystem.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> S3AFileSystem.sink.file.filename=s3afilesystem-metrics.out
> I can't find the output file even when I run an MR job which should use S3.






[jira] [Commented] (HADOOP-15082) add AbstractContractRootDirectoryTest test for mkdir / ; wasb to implement the test

2017-12-01 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274982#comment-16274982
 ] 

genericqa commented on HADOOP-15082:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
59s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 45s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
8s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
10s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15082 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900269/HADOOP-15082-002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 1b68510f66d6 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 53bbef3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |

[jira] [Commented] (HADOOP-14600) LocatedFileStatus constructor forces RawLocalFS to exec a process to get the permissions

2017-12-01 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274962#comment-16274962
 ] 

Chris Douglas commented on HADOOP-14600:


Thanks, [~myapachejira]. Glad to have that cleared up.

+1 on the latest patch. [~ste...@apache.org], unless you have other feedback, 
let's leave further refinements to followup JIRAs and commit this.

One question: this speeds up calls through {{DeprecatedRawLocalFileStatus}}. 
Did you look at refactoring the deprecation logic, to see if this class is 
still necessary? There are multiple checks for the platform and whether the 
native library is loaded, and not only for {{FileStatus}} operations. This is 
likely due to accumulated layers of ad hoc improvements and optimizations in 
{{RawLocalFileSystem}}. At a glance, it looks feasible to cut the number of 
inline checks substantially. What do you think?

> LocatedFileStatus constructor forces RawLocalFS to exec a process to get the 
> permissions
> 
>
> Key: HADOOP-14600
> URL: https://issues.apache.org/jira/browse/HADOOP-14600
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.3
> Environment: file:// in a dir with many files
>Reporter: Steve Loughran
>Assignee: Ping Liu
> Attachments: HADOOP-14600.001.patch, HADOOP-14600.002.patch, 
> HADOOP-14600.003.patch, HADOOP-14600.004.patch, HADOOP-14600.005.patch, 
> HADOOP-14600.006.patch, HADOOP-14600.007.patch, HADOOP-14600.008.patch, 
> HADOOP-14600.009.patch, TestRawLocalFileSystemContract.java, 
> command_line_test_result__linux.txt, command_line_test_result__windows.txt
>
>
> Reported in SPARK-21137. A {{FileSystem.listStatus}} call really crawls 
> against the local FS, because the {{FileStatus.getPermission()}} call forces 
> {{DeprecatedRawLocalFileStatus}} to spawn a process to read the real UGI 
> values.
> That is: what is a field lookup or even a no-op on every other FS is a 
> process exec/spawn on the local FS, with all the costs. This gets expensive 
> if you have many files.






[jira] [Commented] (HADOOP-14475) Metrics of S3A don't print out when enable it in Hadoop metrics property file

2017-12-01 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274914#comment-16274914
 ] 

genericqa commented on HADOOP-14475:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 1 
new + 19 unchanged - 0 fixed = 20 total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
35s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-14475 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900268/HADOOP-14475.016.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0f56aed66a22 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 53bbef3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13770/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13770/testReport/ |
| Max. process+thread count | 301 (vs. ulimit of 5000) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13770/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Commented] (HADOOP-14475) Metrics of S3A don't print out when enable it in Hadoop metrics property file

2017-12-01 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274907#comment-16274907
 ] 

Sean Mackrory commented on HADOOP-14475:


{quote} the test thinks s3guard is off, but it isn't, so the counters are 
wrong.{quote}

Ah that makes sense. Looks like it checks that there actually is a 
metadatastore, though. It's not naively checking a specific config property. So 
that shouldn't be an issue we need to fix.

{quote}The important thing is just to acquire the locks in the same order 
everywhere.{quote}

Well, there's only one lock being acquired here, and it's only in testing and 
initialization / closing, so perf is not a primary concern. That has 
highlighted another goof to me, though: these accesses need to be synchronized 
across all instances, not just the one instance. We can't synchronize against 
the static MetricsSystem instance itself, so I'll just synchronize against a 
static object dedicated to that purpose. I'll attach another patch shortly...

> Metrics of S3A don't print out  when enable it in Hadoop metrics property file
> --
>
> Key: HADOOP-14475
> URL: https://issues.apache.org/jira/browse/HADOOP-14475
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: uname -a
> Linux client01 4.4.0-74-generic #95-Ubuntu SMP Wed Apr 12 09:50:34 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
>  cat /etc/issue
> Ubuntu 16.04.2 LTS \n \l
>Reporter: Yonger
>Assignee: Yonger
> Attachments: HADOOP-14475-003.patch, HADOOP-14475.002.patch, 
> HADOOP-14475.005.patch, HADOOP-14475.006.patch, HADOOP-14475.008.patch, 
> HADOOP-14475.009.patch, HADOOP-14475.010.patch, HADOOP-14475.011.patch, 
> HADOOP-14475.012.patch, HADOOP-14475.013.patch, HADOOP-14475.014.patch, 
> HADOOP-14475.015.patch, HADOOP-14475.016.patch, HADOOP-14775.007.patch, 
> failsafe-report-s3a-it.html, failsafe-report-s3a-scale.html, 
> failsafe-report-scale.html, failsafe-report-scale.zip, s3a-metrics.patch1, 
> stdout.zip
>
>
> *.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
> #*.sink.file.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #*.sink.influxdb.url=http:/xx
> #*.sink.influxdb.influxdb_port=8086
> #*.sink.influxdb.database=hadoop
> #*.sink.influxdb.influxdb_username=hadoop
> #*.sink.influxdb.influxdb_password=hadoop
> #*.sink.ingluxdb.cluster=c1
> *.period=10
> #namenode.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #S3AFileSystem.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> S3AFileSystem.sink.file.filename=s3afilesystem-metrics.out
> I can't find the output file even when I run an MR job which should use S3.






[jira] [Commented] (HADOOP-14475) Metrics of S3A don't print out when enable it in Hadoop metrics property file

2017-12-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274823#comment-16274823
 ] 

Steve Loughran commented on HADOOP-14475:
-

Commented on HADOOP-15079; I have seen this, but I thought it was caused by 
s3guard being off in the -D option yet enabled in your per-bucket settings: 
the test thinks s3guard is off, but it isn't, so the counters are wrong.

W.r.t. reentrant locks: the cost of a sync() inside a sync is low; I don't 
know about the others. The important thing is just to acquire the locks in the 
same order everywhere. After that it's a perf issue, not a deadlock one.
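As a generic illustration of that ordering rule (nothing S3A-specific about 
this sketch):

{code}
// Generic illustration: deadlock freedom by acquiring locks in one fixed order.
class LockOrderSketch {
  private final Object lockA = new Object();
  private final Object lockB = new Object();

  void pathOne() {
    synchronized (lockA) {        // always A first...
      synchronized (lockB) {      // ...then B
        // critical section
      }
    }
  }

  void pathTwo() {
    // Same order as pathOne; taking B then A here could deadlock with pathOne.
    synchronized (lockA) {
      synchronized (lockB) {
        // critical section
      }
    }
  }
}
{code}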

> Metrics of S3A don't print out  when enable it in Hadoop metrics property file
> --
>
> Key: HADOOP-14475
> URL: https://issues.apache.org/jira/browse/HADOOP-14475
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: uname -a
> Linux client01 4.4.0-74-generic #95-Ubuntu SMP Wed Apr 12 09:50:34 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
>  cat /etc/issue
> Ubuntu 16.04.2 LTS \n \l
>Reporter: Yonger
>Assignee: Yonger
> Attachments: HADOOP-14475-003.patch, HADOOP-14475.002.patch, 
> HADOOP-14475.005.patch, HADOOP-14475.006.patch, HADOOP-14475.008.patch, 
> HADOOP-14475.009.patch, HADOOP-14475.010.patch, HADOOP-14475.011.patch, 
> HADOOP-14475.012.patch, HADOOP-14475.013.patch, HADOOP-14475.014.patch, 
> HADOOP-14475.015.patch, HADOOP-14475.016.patch, HADOOP-14775.007.patch, 
> failsafe-report-s3a-it.html, failsafe-report-s3a-scale.html, 
> failsafe-report-scale.html, failsafe-report-scale.zip, s3a-metrics.patch1, 
> stdout.zip
>
>
> *.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
> #*.sink.file.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #*.sink.influxdb.url=http:/xx
> #*.sink.influxdb.influxdb_port=8086
> #*.sink.influxdb.database=hadoop
> #*.sink.influxdb.influxdb_username=hadoop
> #*.sink.influxdb.influxdb_password=hadoop
> #*.sink.ingluxdb.cluster=c1
> *.period=10
> #namenode.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #S3AFileSystem.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> S3AFileSystem.sink.file.filename=s3afilesystem-metrics.out
> I can't find the output file even when I run an MR job which should use S3.






[jira] [Updated] (HADOOP-15082) add AbstractContractRootDirectoryTest test for mkdir / ; wasb to implement the test

2017-12-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15082:

Status: Patch Available  (was: Open)

> add AbstractContractRootDirectoryTest test for mkdir / ; wasb to implement 
> the test
> ---
>
> Key: HADOOP-15082
> URL: https://issues.apache.org/jira/browse/HADOOP-15082
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/azure, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15082-001.patch, HADOOP-15082-002.patch
>
>
> I managed to get a stack trace on an older version of WASB with some code 
> doing a mkdir(new Path("/")); some of the Ranger parentage checks didn't 
> handle that specific case.
> # Add a new root FS contract test for this operation
> # Have WASB implement the test suite as an integration test.
> # If the test fails and shows a problem, fix it.
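A minimal sketch of such a root contract test (assumed names and setup; not 
the actual patch, which adds the case to AbstractContractRootDirectoryTest):

{code}
import static org.junit.Assert.assertTrue;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

public class TestMkdirRootSketch {
  @Test
  public void testMkdirsOnRoot() throws Exception {
    FileSystem fs = FileSystem.get(new Configuration()); // store under test
    Path root = new Path("/");
    // mkdirs on the (always existing) root must be a successful no-op.
    assertTrue("mkdirs(/) failed", fs.mkdirs(root));
    assertTrue("root is not a directory",
        fs.getFileStatus(root).isDirectory());
  }
}
{code}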






[jira] [Updated] (HADOOP-15082) add AbstractContractRootDirectoryTest test for mkdir / ; wasb to implement the test

2017-12-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15082:

Status: Open  (was: Patch Available)

> add AbstractContractRootDirectoryTest test for mkdir / ; wasb to implement 
> the test
> ---
>
> Key: HADOOP-15082
> URL: https://issues.apache.org/jira/browse/HADOOP-15082
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/azure, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15082-001.patch, HADOOP-15082-002.patch
>
>
> I managed to get a stack trace on an older version of WASB with some code 
> doing a mkdir(new Path("/")); some of the Ranger parentage checks didn't 
> handle that specific case.
> # Add a new root FS contract test for this operation
> # Have WASB implement the test suite as an integration test.
> # If the test fails and shows a problem, fix it.






[jira] [Updated] (HADOOP-15082) add AbstractContractRootDirectoryTest test for mkdir / ; wasb to implement the test

2017-12-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15082:

Attachment: HADOOP-15082-002.patch

HADOOP-15082 patch 002: fix the checkstyle warning, and make sure the new root 
test runs in the serialized phase.

Tested: Azure Ireland.

> add AbstractContractRootDirectoryTest test for mkdir / ; wasb to implement 
> the test
> ---
>
> Key: HADOOP-15082
> URL: https://issues.apache.org/jira/browse/HADOOP-15082
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/azure, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15082-001.patch, HADOOP-15082-002.patch
>
>
> I managed to get a stack trace on an older version of WASB with some code 
> doing a mkdir(new Path("/")); some of the Ranger parentage checks didn't 
> handle that specific case.
> # Add a new root FS contract test for this operation
> # Have WASB implement the test suite as an integration test.
> # If the test fails and shows a problem, fix it.






[jira] [Updated] (HADOOP-14475) Metrics of S3A don't print out when enable it in Hadoop metrics property file

2017-12-01 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14475:
---
Attachment: HADOOP-14475.016.patch

Yes, that documentation blurb is out of date (and the "and ID" was superfluous) 
- corrected.

I believe getMetricsSystem() (which potentially creates the instance) and 
incrementing the counters need to be an atomic operation, since close() will 
decide whether or not to delete and null out the instance based on those 
counters. But then I haven't been brainwashed against the use of reentrant 
locks. One man's anti-pattern is a huge selling point for another man's 
language :)

> Metrics of S3A don't print out  when enable it in Hadoop metrics property file
> --
>
> Key: HADOOP-14475
> URL: https://issues.apache.org/jira/browse/HADOOP-14475
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: uname -a
> Linux client01 4.4.0-74-generic #95-Ubuntu SMP Wed Apr 12 09:50:34 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
>  cat /etc/issue
> Ubuntu 16.04.2 LTS \n \l
>Reporter: Yonger
>Assignee: Yonger
> Attachments: HADOOP-14475-003.patch, HADOOP-14475.002.patch, 
> HADOOP-14475.005.patch, HADOOP-14475.006.patch, HADOOP-14475.008.patch, 
> HADOOP-14475.009.patch, HADOOP-14475.010.patch, HADOOP-14475.011.patch, 
> HADOOP-14475.012.patch, HADOOP-14475.013.patch, HADOOP-14475.014.patch, 
> HADOOP-14475.015.patch, HADOOP-14475.016.patch, HADOOP-14775.007.patch, 
> failsafe-report-s3a-it.html, failsafe-report-s3a-scale.html, 
> failsafe-report-scale.html, failsafe-report-scale.zip, s3a-metrics.patch1, 
> stdout.zip
>
>
> *.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
> #*.sink.file.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #*.sink.influxdb.url=http:/xx
> #*.sink.influxdb.influxdb_port=8086
> #*.sink.influxdb.database=hadoop
> #*.sink.influxdb.influxdb_username=hadoop
> #*.sink.influxdb.influxdb_password=hadoop
> #*.sink.ingluxdb.cluster=c1
> *.period=10
> #namenode.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #S3AFileSystem.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> S3AFileSystem.sink.file.filename=s3afilesystem-metrics.out
> I can't find the output file even when I run an MR job which should use S3.






[jira] [Commented] (HADOOP-15079) ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after OutputCommitter patch

2017-12-01 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274728#comment-16274728
 ] 

Sean Mackrory commented on HADOOP-15079:


So the innerMkdirs redundancy causes the test to fail. After that assertion, 
it fails again after a rename: the test originally saw 2 delete operations 
during that rename, but there is now a third. Again, it's the result of the 
fake-directory logic. Partial stack trace of the extra delete operation:

{code}
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1367)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:1626)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:2632)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2597)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1499)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$createEmptyObject$11(S3AFileSystem.java:2682)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:108)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:259)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:255)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:230)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:2680)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:2655)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectoryIfNecessary(S3AFileSystem.java:1788)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.maybeCreateFakeParentDirectory(S3AFileSystem.java:1803)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerDelete(S3AFileSystem.java:1744)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerRename(S3AFileSystem.java:971)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:834){code}
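
In outline, the extra request comes about because writing the empty 
directory-marker object is itself a write, so it triggers the parent-marker 
cleanup and its bulk delete. A simplified, hypothetical sketch of that 
interaction (not the actual S3AFileSystem code):

{code}
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the pattern in the trace above: the marker PUT runs
// the same post-write bookkeeping as any other write, so every
// createFakeDirectory() costs one extra bulk delete request.
class FakeDirectorySketch {
  int deleteRequests;   // stands in for the object_delete_requests counter

  void createFakeDirectory(String dirKey) {
    putEmptyObject(dirKey + "/");   // PUT the zero-byte "directory" marker
    finishedWrite(dirKey + "/");    // post-write bookkeeping runs here too
  }

  void finishedWrite(String key) {
    List<String> parents = parentMarkers(key);
    if (!parents.isEmpty()) {
      deleteRequests++;             // one bulk DeleteObjects call per write
    }
  }

  private void putEmptyObject(String key) { /* PUT to the object store */ }

  /** All ancestor marker keys of a key like "a/b/c/": "a/b/" and "a/". */
  private List<String> parentMarkers(String key) {
    List<String> parents = new ArrayList<>();
    int slash = key.lastIndexOf('/', key.length() - 2);
    while (slash > 0) {
      parents.add(key.substring(0, slash + 1));
      slash = key.lastIndexOf('/', slash - 1);
    }
    return parents;
  }
}
{code}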

> ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after 
> OutputCommitter patch
> ---
>
> Key: HADOOP-15079
> URL: https://issues.apache.org/jira/browse/HADOOP-15079
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Priority: Critical
>
> I see this test failing with "object_delete_requests expected:<1> but 
> was:<2>". I printed stack traces whenever this metric was incremented, and 
> found the root cause to be that innerMkdirs is now causing two calls to 
> delete fake directories when it previously caused only one. It is called once 
> inside createFakeDirectory, and once directly inside innerMkdirs later:
> {code}
> at 
> org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:279)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1366)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:1625)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:2634)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2599)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1498)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$createEmptyObject$11(S3AFileSystem.java:2684)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:108)
> at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:259)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:255)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:230)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:2682)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:2657)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2021)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:1956)
> at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2305)
> at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(A

[jira] [Comment Edited] (HADOOP-14964) AliyunOSS: backport Aliyun OSS module to branch-2

2017-12-01 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274715#comment-16274715
 ] 

Chris Douglas edited comment on HADOOP-14964 at 12/1/17 6:12 PM:
-

The problem is that {{json-lib}} was likely derived from the JSON library that 
has the non-free license. So it may be incorrectly licensed as an 
Apache-licensed (ALv2) dependency. Since the JSON license was reclassified by 
ASF legal, no project can release code that takes it as a dependency. I filed 
LEGAL-349 to get an official opinion, but most likely the Aliyun SDK will need 
to shed this dependency before we can release it from any branch, including 
3.0.0.

Douglas Crockford has been told that the joke in his license is harming open 
source developers, and he doesn't care. Can you look into replacing 
{{json-lib}} with another library?


was (Author: chris.douglas):
The problem is that {{json-lib}} was likely derived from the JSON library that 
has the non-free license. So it is incorrectly licensed as an Apache-licensed 
(ALv2) dependency. Since the JSON license was reclassified by ASF legal, no 
project can release code that takes it as a dependency. I filed LEGAL-349 to 
get an official opinion, but most likely the Aliyun SDK will need to shed this 
dependency before we can release it from any branch, including 3.0.0.

Douglas Crockford has been told that the joke in his license is harming open 
source developers, and he doesn't care. Can you look into replacing 
{{json-lib}} with another library?

> AliyunOSS: backport Aliyun OSS module to branch-2
> -
>
> Key: HADOOP-14964
> URL: https://issues.apache.org/jira/browse/HADOOP-14964
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Reporter: Genmao Yu
>Assignee: SammiChen
> Fix For: 2.9.1
>
> Attachments: HADOOP-14964-branch-2.000.patch, 
> HADOOP-14964-branch-2.8.000.patch, HADOOP-14964-branch-2.8.001.patch, 
> HADOOP-14964-branch-2.9.001.patch
>
>







[jira] [Commented] (HADOOP-14964) AliyunOSS: backport Aliyun OSS module to branch-2

2017-12-01 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274715#comment-16274715
 ] 

Chris Douglas commented on HADOOP-14964:


The problem is that {{json-lib}} was likely derived from the JSON library that 
has the non-free license. So it is incorrectly licensed as an Apache-licensed 
(ALv2) dependency. Since the JSON license was reclassified by ASF legal, no 
project can release code that takes it as a dependency. I filed LEGAL-349 to 
get an official opinion, but most likely the Aliyun SDK will need to shed this 
dependency before we can release it from any branch, including 3.0.0.

Douglas Crockford has been told that the joke in his license is harming open 
source developers, and he doesn't care. Can you look into replacing 
{{json-lib}} with another library?

> AliyunOSS: backport Aliyun OSS module to branch-2
> -
>
> Key: HADOOP-14964
> URL: https://issues.apache.org/jira/browse/HADOOP-14964
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Reporter: Genmao Yu
>Assignee: SammiChen
> Fix For: 2.9.1
>
> Attachments: HADOOP-14964-branch-2.000.patch, 
> HADOOP-14964-branch-2.8.000.patch, HADOOP-14964-branch-2.8.001.patch, 
> HADOOP-14964-branch-2.9.001.patch
>
>







[jira] [Commented] (HADOOP-15072) Upgrade Apache Kerby version to 1.1.0

2017-12-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274644#comment-16274644
 ] 

Steve Loughran commented on HADOOP-15072:
-

What's "includes a GSSAPI module"? Does this replace the one in the JDK?

> Upgrade Apache Kerby version to 1.1.0
> -
>
> Key: HADOOP-15072
> URL: https://issues.apache.org/jira/browse/HADOOP-15072
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Attachments: HADOOP-15072-001.patch
>
>
> Apache Kerby 1.1.0 implements cross-realm support, and also includes a GSSAPI 
> module.






[jira] [Commented] (HADOOP-15079) ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after OutputCommitter patch

2017-12-01 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274630#comment-16274630
 ] 

Sean Mackrory commented on HADOOP-15079:


Contrary to the comment, I think we do gain from this test. Digging into this 
as I described above, I'm counting 2 deletion requests where previously there 
was one, and there's an additional operation in innerRename that I still need 
to get to the bottom of; that may also be an unnecessary and easily fixed 
addition. I think we should fix those and keep the test. If these operation 
counts ever change for a good reason, we can always update the test, but so 
far I don't think that's the case here.
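
To make the stakes concrete, the guard this test provides looks roughly like 
the sketch below. The helpers (getFileSystem(), path(), MetricDiff, 
Statistic.OBJECT_DELETE_REQUESTS) follow the existing S3A test utilities, but 
the exact signatures are assumed, so treat this as illustrative rather than 
the test's real code:

{code}
@Test
public void testRenameDeleteCost() throws Throwable {
  S3AFileSystem fs = getFileSystem();
  Path src = path("testRenameCost/src");
  Path dest = path("testRenameCost/dest");
  ContractTestUtils.touch(fs, src);

  S3ATestUtils.MetricDiff deleteRequests =
      new S3ATestUtils.MetricDiff(fs, Statistic.OBJECT_DELETE_REQUESTS);
  fs.rename(src, dest);

  // Trips today because the fake-directory logic adds a third bulk delete;
  // raising the expected count would hide the regression, so fix the code.
  deleteRequests.assertDiffEquals(2);
}
{code}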

> ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after 
> OutputCommitter patch
> ---
>
> Key: HADOOP-15079
> URL: https://issues.apache.org/jira/browse/HADOOP-15079
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Priority: Critical
>
> I see this test failing with "object_delete_requests expected:<1> but 
> was:<2>". I printed stack traces whenever this metric was incremented, and 
> found the root cause to be that innerMkdirs is now causing two calls to 
> delete fake directories when it previously caused only one. It is called once 
> inside createFakeDirectory, and once directly inside innerMkdirs later:
> {code}
> at 
> org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:279)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1366)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:1625)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:2634)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2599)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1498)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$createEmptyObject$11(S3AFileSystem.java:2684)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:108)
> at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:259)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:255)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:230)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:2682)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:2657)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2021)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:1956)
> at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2305)
> at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
> at 
> org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testFakeDirectoryDeletion(ITestS3AFileOperationCost.java:209)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74
> {code}
> {code}
> at 
> org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrume

[jira] [Commented] (HADOOP-15083) Create base image for running hadoop in docker containers

2017-12-01 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274605#comment-16274605
 ] 

genericqa commented on HADOOP-15083:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HADOOP-15083 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15083 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900238/HADOOP-15083.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13769/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Create base image for running hadoop in docker containers
> -
>
> Key: HADOOP-15083
> URL: https://issues.apache.org/jira/browse/HADOOP-15083
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HADOOP-15083.001.patch
>
>







[jira] [Updated] (HADOOP-15083) Create base image for running hadoop in docker containers

2017-12-01 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HADOOP-15083:
--
Status: Patch Available  (was: Open)

The patch is tested together with HDFS-7240. I propose to commit it to a 
separate docker/runner branch and ask INFRA to register it on the dockerhub 
to create apache/hadoop-runner images.

> Create base image for running hadoop in docker containers
> -
>
> Key: HADOOP-15083
> URL: https://issues.apache.org/jira/browse/HADOOP-15083
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HADOOP-15083.001.patch
>
>







[jira] [Commented] (HADOOP-15083) Create base image for running hadoop in docker containers

2017-12-01 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274593#comment-16274593
 ] 

Elek, Marton commented on HADOOP-15083:
---

This image is already under testing on the HDFS-7240 branch.

To test:

1. Apply the patch to a new empty branch:

{code}
git checkout --orphan docker/runner
git commit --allow-empty -m "HADOOP-15083. New branch for the docker/runner image"
{code}

And now you can apply the attached patch as usual.

2. Build the image

{code}
./build.sh
{code}

Note: with dockerhub automated builds, build.sh won't be called. It is just 
a helper for local development (and the rat check).

3. Test it with HDFS-7240

{code}
git checkout HDFS-7240
mvn install -DskipTests -DskipShade -Pdist -Dmaven.javadoc.skip=true
cd dev-support/compose/ozone
#replace elek/hadoop-runner with apache/hadoop-runner [1]
docker-compose up -d
firefox http://localhost:9874
firefox http://localhost:9876
{code}



[1]: While it's under review, I uploaded the proposed version to the dockerhub 
under elek to make it easier to test.

> Create base image for running hadoop in docker containers
> -
>
> Key: HADOOP-15083
> URL: https://issues.apache.org/jira/browse/HADOOP-15083
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HADOOP-15083.001.patch
>
>







[jira] [Commented] (HADOOP-15085) Output streams closed with IOUtils suppressing write errors

2017-12-01 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274552#comment-16274552
 ] 

Jason Lowe commented on HADOOP-15085:
-

Some additional locations that use IOUtils.closeStream or closeQuietly to 
suppress IOExceptions during close, which could lead to partial/corrupted 
output (the safe patterns are sketched after the list):
* FileContext.Util#copy
* CopyCommands.AppendToFile#processArguments
* TestConfiguration#testMultiByteCharacters
* MiniKMS#copyResource
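
For reference, a hedged sketch of the two safe shapes the issue asks for; the 
stream source, LOG and data here are placeholders, not code from any of the 
files listed above:

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class SafeCloseSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(SafeCloseSketch.class);

  // Preferred: try-with-resources propagates failures from write() and close().
  static void write(FileSystem fs, Path out, byte[] data) throws IOException {
    try (FSDataOutputStream stream = fs.create(out)) {
      stream.write(data);
    }
  }

  // Equivalent explicit form: close inside the try block so a close() failure
  // reaches the caller; the finally clause only mops up after an earlier error.
  static void writeExplicit(FileSystem fs, Path out, byte[] data)
      throws IOException {
    FSDataOutputStream stream = fs.create(out);
    try {
      stream.write(data);
      stream.close();        // an IOException here is no longer swallowed
      stream = null;
    } finally {
      IOUtils.cleanupWithLogger(LOG, stream);
    }
  }
}
{code}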


> Output streams closed with IOUtils suppressing write errors
> ---
>
> Key: HADOOP-15085
> URL: https://issues.apache.org/jira/browse/HADOOP-15085
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jason Lowe
>
> There are a few places in hadoop-common that are closing an output stream 
> with IOUtils.cleanupWithLogger like this:
> {code}
>   try {
> ...write to outStream...
>   } finally {
> IOUtils.cleanupWithLogger(LOG, outStream);
>   }
> {code}
> This suppresses any IOException that occurs during the close() method which 
> could lead to partial/corrupted output without throwing a corresponding 
> exception.  The code should either use try-with-resources or explicitly close 
> the stream within the try block so the exception thrown during close() is 
> properly propagated as exceptions during write operations are.






[jira] [Commented] (HADOOP-14898) Create official Docker images for development and testing features

2017-12-01 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274520#comment-16274520
 ] 

Elek, Marton commented on HADOOP-14898:
---

Some more detailed status on the progress:

 * I uploaded a design document/presentation about the key decision points 
regarding the docker images.
 * I split the work into sub-issues.

I think there are two main use cases here:

 1. Creating a pseudo-cluster directly from the source. This requires just an 
empty image without hadoop, plus a smart start script. It is moved to 
HADOOP-15083 and is under testing on the HDFS-7240 branch; I will add more 
details there. Feel free to test if you are interested.

 2. Creating big images which contain the hadoop distribution, with generic 
docker-compose files. This should be built on top of the image from 1) and 
will be implemented in HADOOP-15084.

> Create official Docker images for development and testing features 
> ---
>
> Key: HADOOP-14898
> URL: https://issues.apache.org/jira/browse/HADOOP-14898
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HADOOP-14898.001.tar.gz, HADOOP-14898.002.tar.gz, 
> HADOOP-14898.003.tgz, docker_design.pdf
>
>
> This is the original mail from the mailing list:
> {code}
> TL;DR: I propose to create official hadoop images and upload them to the 
> dockerhub.
> GOAL/SCOPE: I would like improve the existing documentation with easy-to-use 
> docker based recipes to start hadoop clusters with various configuration.
> The images also could be used to test experimental features. For example 
> ozone could be tested easily with these compose file and configuration:
> https://gist.github.com/elek/1676a97b98f4ba561c9f51fce2ab2ea6
> Or even the configuration could be included in the compose file:
> https://github.com/elek/hadoop/blob/docker-2.8.0/example/docker-compose.yaml
> I would like to create separated example compose files for federation, ha, 
> metrics usage, etc. to make it easier to try out and understand the features.
> CONTEXT: There is an existing Jira 
> https://issues.apache.org/jira/browse/HADOOP-13397
> But it’s about a tool to generate production quality docker images (multiple 
> types, in a flexible way). If no objections, I will create a separated issue 
> to create simplified docker images for rapid prototyping and investigating 
> new features. And register the branch to the dockerhub to create the images 
> automatically.
> MY BACKGROUND: I am working with docker based hadoop/spark clusters quite a 
> while and run them succesfully in different environments (kubernetes, 
> docker-swarm, nomad-based scheduling, etc.) My work is available from here: 
> https://github.com/flokkr but they could handle more complex use cases (eg. 
> instrumenting java processes with btrace, or read/reload configuration from 
> consul).
>  And IMHO in the official hadoop documentation it’s better to suggest to use 
> official apache docker images and not external ones (which could be changed).
> {code}
> The next list will enumerate the key decision points regarding to docker 
> image creating
> A. automated dockerhub build  / jenkins build
> Docker images could be built on the dockerhub (a branch pattern should be 
> defined for a github repository and the location of the Docker files) or 
> could be built on a CI server and pushed.
> The second one is more flexible (it's more easy to create matrix build, for 
> example)
> The first one had the advantage that we can get an additional flag on the 
> dockerhub that the build is automated (and built from the source by the 
> dockerhub).
> The decision is easy as ASF supports the first approach: (see 
> https://issues.apache.org/jira/browse/INFRA-12781?focusedCommentId=15824096&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15824096)
> B. source: binary distribution or source build
> The second question is about creating the docker image. One option is to 
> build the software on the fly during the creation of the docker image the 
> other one is to use the binary releases.
> I suggest to use the second approach as:
> 1. In that case the hadoop:2.7.3 could contain exactly the same hadoop 
> distrubution as the downloadable one
> 2. We don't need to add development tools to the image, the image could be 
> more smaller (which is important as the goal for this image to getting 
> started as fast as possible)
> 3. The docker definition will be more simple (and more easy to maintain)
> Usually this approach is used in other projects (I checked Apache Zeppelin 
> and Apache Nutch)
> C. branch usage
> Other question is the location of the Docker file. It coul

[jira] [Updated] (HADOOP-14898) Create official Docker images for development and testing features

2017-12-01 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HADOOP-14898:
--
Attachment: docker_design.pdf

> Create official Docker images for development and testing features 
> ---
>
> Key: HADOOP-14898
> URL: https://issues.apache.org/jira/browse/HADOOP-14898
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HADOOP-14898.001.tar.gz, HADOOP-14898.002.tar.gz, 
> HADOOP-14898.003.tgz, docker_design.pdf
>
>
> This is the original mail from the mailing list:
> {code}
> TL;DR: I propose to create official hadoop images and upload them to the 
> dockerhub.
> GOAL/SCOPE: I would like improve the existing documentation with easy-to-use 
> docker based recipes to start hadoop clusters with various configuration.
> The images also could be used to test experimental features. For example 
> ozone could be tested easily with these compose file and configuration:
> https://gist.github.com/elek/1676a97b98f4ba561c9f51fce2ab2ea6
> Or even the configuration could be included in the compose file:
> https://github.com/elek/hadoop/blob/docker-2.8.0/example/docker-compose.yaml
> I would like to create separated example compose files for federation, ha, 
> metrics usage, etc. to make it easier to try out and understand the features.
> CONTEXT: There is an existing Jira 
> https://issues.apache.org/jira/browse/HADOOP-13397
> But it’s about a tool to generate production quality docker images (multiple 
> types, in a flexible way). If no objections, I will create a separated issue 
> to create simplified docker images for rapid prototyping and investigating 
> new features. And register the branch to the dockerhub to create the images 
> automatically.
> MY BACKGROUND: I am working with docker based hadoop/spark clusters quite a 
> while and run them succesfully in different environments (kubernetes, 
> docker-swarm, nomad-based scheduling, etc.) My work is available from here: 
> https://github.com/flokkr but they could handle more complex use cases (eg. 
> instrumenting java processes with btrace, or read/reload configuration from 
> consul).
>  And IMHO in the official hadoop documentation it’s better to suggest to use 
> official apache docker images and not external ones (which could be changed).
> {code}
> The next list will enumerate the key decision points regarding to docker 
> image creating
> A. automated dockerhub build  / jenkins build
> Docker images could be built on the dockerhub (a branch pattern should be 
> defined for a github repository and the location of the Docker files) or 
> could be built on a CI server and pushed.
> The second one is more flexible (it's more easy to create matrix build, for 
> example)
> The first one had the advantage that we can get an additional flag on the 
> dockerhub that the build is automated (and built from the source by the 
> dockerhub).
> The decision is easy as ASF supports the first approach: (see 
> https://issues.apache.org/jira/browse/INFRA-12781?focusedCommentId=15824096&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15824096)
> B. source: binary distribution or source build
> The second question is about creating the docker image. One option is to 
> build the software on the fly during the creation of the docker image the 
> other one is to use the binary releases.
> I suggest to use the second approach as:
> 1. In that case the hadoop:2.7.3 could contain exactly the same hadoop 
> distrubution as the downloadable one
> 2. We don't need to add development tools to the image, the image could be 
> more smaller (which is important as the goal for this image to getting 
> started as fast as possible)
> 3. The docker definition will be more simple (and more easy to maintain)
> Usually this approach is used in other projects (I checked Apache Zeppelin 
> and Apache Nutch)
> C. branch usage
> Other question is the location of the Docker file. It could be on the 
> official source-code branches (branch-2, trunk, etc.) or we can create 
> separated branches for the dockerhub (eg. docker/2.7 docker/2.8 docker/3.0)
> For the first approach it's easier to find the docker images, but it's less 
> flexible. For example if we had a Dockerfile for on the source code it should 
> be used for every release (for example the Docker file from the tag 
> release-3.0.0 should be used for the 3.0 hadoop docker image). In that case 
> the release process is much more harder: in case of a Dockerfile error (which 
> could be test on dockerhub only after the taging), a new release should be 
> added after fixing the Dockerfile.
> Another problem is that with using tags it's not possible to improve the 
> Dockerfiles. I can ima

[jira] [Commented] (HADOOP-14898) Create official Docker images for development and testing features

2017-12-01 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274515#comment-16274515
 ] 

Elek, Marton commented on HADOOP-14898:
---

Thank you for the interest, [~miklos.szeg...@cloudera.com]

Unfortunately I don't know how it could be tested against trunk, as:

 * it can't be built by maven, so I don't know if yetus could support it
 * I propose to commit it on a different branch

But I agree with you that it should also be checked with rat, so I added a 
small build.sh to run the rat check and build the docker images, and fixed 
all the license warnings.

{code}
*
Summary
---
Generated at: 2017-12-01T16:34:15+01:00

Notes: 1
Binaries: 0
Archives: 1
Standards: 7

Apache Licensed: 7
Generated Documents: 0

JavaDocs are generated, thus a license header is optional.
Generated files do not require license headers.

0 Unknown Licenses

Archives:

 + /home/elek/projects/hadoopdocker/build/apache-rat.tar.gz
 
*
  Files with Apache License headers will be marked AL
  Binary files (which do not require any license headers) will be marked B
  Compressed archives will be marked A
  Notices, licenses etc. will be marked N
  AL/home/elek/projects/hadoopdocker/Dockerfile
  N /home/elek/projects/hadoopdocker/LICENSE
  AL/home/elek/projects/hadoopdocker/README.md
  AL/home/elek/projects/hadoopdocker/build.sh
  A /home/elek/projects/hadoopdocker/build/apache-rat.tar.gz
  AL/home/elek/projects/hadoopdocker/scripts/.bashrc
  AL/home/elek/projects/hadoopdocker/scripts/envtoconf.py
  AL/home/elek/projects/hadoopdocker/scripts/starter.sh
  AL/home/elek/projects/hadoopdocker/scripts/transformation.py
{code}

> Create official Docker images for development and testing features 
> ---
>
> Key: HADOOP-14898
> URL: https://issues.apache.org/jira/browse/HADOOP-14898
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HADOOP-14898.001.tar.gz, HADOOP-14898.002.tar.gz, 
> HADOOP-14898.003.tgz
>
>
> This is the original mail from the mailing list:
> {code}
> TL;DR: I propose to create official hadoop images and upload them to the 
> dockerhub.
> GOAL/SCOPE: I would like improve the existing documentation with easy-to-use 
> docker based recipes to start hadoop clusters with various configuration.
> The images also could be used to test experimental features. For example 
> ozone could be tested easily with these compose file and configuration:
> https://gist.github.com/elek/1676a97b98f4ba561c9f51fce2ab2ea6
> Or even the configuration could be included in the compose file:
> https://github.com/elek/hadoop/blob/docker-2.8.0/example/docker-compose.yaml
> I would like to create separated example compose files for federation, ha, 
> metrics usage, etc. to make it easier to try out and understand the features.
> CONTEXT: There is an existing Jira 
> https://issues.apache.org/jira/browse/HADOOP-13397
> But it’s about a tool to generate production quality docker images (multiple 
> types, in a flexible way). If no objections, I will create a separated issue 
> to create simplified docker images for rapid prototyping and investigating 
> new features. And register the branch to the dockerhub to create the images 
> automatically.
> MY BACKGROUND: I am working with docker based hadoop/spark clusters quite a 
> while and run them succesfully in different environments (kubernetes, 
> docker-swarm, nomad-based scheduling, etc.) My work is available from here: 
> https://github.com/flokkr but they could handle more complex use cases (eg. 
> instrumenting java processes with btrace, or read/reload configuration from 
> consul).
>  And IMHO in the official hadoop documentation it’s better to suggest to use 
> official apache docker images and not external ones (which could be changed).
> {code}
> The next list will enumerate the key decision points regarding to docker 
> image creating
> A. automated dockerhub build  / jenkins build
> Docker images could be built on the dockerhub (a branch pattern should be 
> defined for a github repository and the location of the Docker files) or 
> could be built on a CI server and pushed.
> The second one is more flexible (it's more easy to create matrix build, for 
> example)
> The first one had the advantage that we can get an additional flag on the 
> dockerhub that the build is automated (and built from the source by the 
> dockerhub).
> The decision is easy as ASF supports the first approach: (see 
> https://issues.apache.org/jira/browse/INFRA-12781?focusedCommentId=15824096&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#com

[jira] [Commented] (HADOOP-15085) Output streams closed with IOUtils suppressing write errors

2017-12-01 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274514#comment-16274514
 ] 

Jason Lowe commented on HADOOP-15085:
-

Some places in hadoop-common code that have this pattern:
* FileUtil#createJarWithClassPath
* MapFile#main
* NativeIO#copyFileUnbuffered
* TestCodec#writeSplitTestFile


> Output streams closed with IOUtils suppressing write errors
> ---
>
> Key: HADOOP-15085
> URL: https://issues.apache.org/jira/browse/HADOOP-15085
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jason Lowe
>
> There are a few places in hadoop-common that are closing an output stream 
> with IOUtils.cleanupWithLogger like this:
> {code}
>   try {
> ...write to outStream...
>   } finally {
> IOUtils.cleanupWithLogger(LOG, outStream);
>   }
> {code}
> This suppresses any IOException that occurs during the close() method which 
> could lead to partial/corrupted output without throwing a corresponding 
> exception.  The code should either use try-with-resources or explicitly close 
> the stream within the try block so the exception thrown during close() is 
> properly propagated as exceptions during write operations are.






[jira] [Created] (HADOOP-15085) Output streams closed with IOUtils suppressing write errors

2017-12-01 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-15085:
---

 Summary: Output streams closed with IOUtils suppressing write 
errors
 Key: HADOOP-15085
 URL: https://issues.apache.org/jira/browse/HADOOP-15085
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jason Lowe


There are a few places in hadoop-common that are closing an output stream with 
IOUtils.cleanupWithLogger like this:
{code}
  try {
...write to outStream...
  } finally {
IOUtils.cleanupWithLogger(LOG, outStream);
  }
{code}
This suppresses any IOException that occurs during the close() method which 
could lead to partial/corrupted output without throwing a corresponding 
exception.  The code should either use try-with-resources or explicitly close 
the stream within the try block so the exception thrown during close() is 
properly propagated as exceptions during write operations are.






[jira] [Updated] (HADOOP-15083) Create base image for running hadoop in docker containers

2017-12-01 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HADOOP-15083:
--
Attachment: HADOOP-15083.001.patch

> Create base image for running hadoop in docker containers
> -
>
> Key: HADOOP-15083
> URL: https://issues.apache.org/jira/browse/HADOOP-15083
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HADOOP-15083.001.patch
>
>







[jira] [Commented] (HADOOP-15082) add AbstractContractRootDirectoryTest test for mkdir / ; wasb to implement the test

2017-12-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274478#comment-16274478
 ] 

Steve Loughran commented on HADOOP-15082:
-

Checkstyle: 
{code}
./hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/contract/ITestAzureRootDirectoryTest.java:22:import
 org.apache.hadoop.fs.contract.AbstractContractOpenTest;:8: Unused import - 
org.apache.hadoop.fs.contract.AbstractContractOpenTest. [UnusedImports]
{code}

> add AbstractContractRootDirectoryTest test for mkdir / ; wasb to implement 
> the test
> ---
>
> Key: HADOOP-15082
> URL: https://issues.apache.org/jira/browse/HADOOP-15082
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/azure, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15082-001.patch
>
>
> I managed to get a stack trace on an older version of WASB with some code 
> doing a mkdir(new Path("/")); some of the ranger parentage checks didn't 
> handle that specific case.
> # Add a new root Fs contract test for this operation
> # Have WASB implement the test suite as an integration test.
> # If the test fails and shows a problem, fix it






[jira] [Assigned] (HADOOP-15082) add AbstractContractRootDirectoryTest test for mkdir / ; wasb to implement the test

2017-12-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-15082:
---

Assignee: Steve Loughran

> add AbstractContractRootDirectoryTest test for mkdir / ; wasb to implement 
> the test
> ---
>
> Key: HADOOP-15082
> URL: https://issues.apache.org/jira/browse/HADOOP-15082
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/azure, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15082-001.patch
>
>
> I managed to get a stack trace on an older version of WASB with some code 
> doing a mkdir(new Path("/")); some of the ranger parentage checks didn't 
> handle that specific case.
> # Add a new root Fs contract test for this operation
> # Have WASB implement the test suite as an integration test.
> # If the test fails and shows a problem, fix it






[jira] [Commented] (HADOOP-15081) org.apache.hadoop.util.JvmPauseMonitor Detected pause in JVM or host machine (eg GC) cause ResourceManager exit

2017-12-01 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274433#comment-16274433
 ] 

Jason Lowe commented on HADOOP-15081:
-

The JvmPauseMonitor does not call System.exit (or proxies to it like 
ExitUtil), so I don't see how JvmPauseMonitor could be responsible for the 
process exiting.  Do you have evidence other than that the log message 
appears towards the end of the logs?  What seems more likely is that there is 
memory pressure to the point that out-of-memory errors are being thrown and 
caught by the YarnUncaughtExceptionHandler.  Check the stderr log to see if 
there's a message about halting the system due to out of memory errors.
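
For anyone unfamiliar with the class: the pause monitor is purely a detector. 
In outline it behaves like the simplified sketch below (a hypothetical class, 
not the real implementation), and nothing in the loop terminates the process:

{code}
// Sleep a fixed interval, measure how much longer the sleep actually took,
// and log a warning if the overshoot is large. Note there is no
// System.exit()/ExitUtil call anywhere.
public class PauseDetectorSketch implements Runnable {
  private static final long SLEEP_MS = 500;
  private static final long WARN_THRESHOLD_MS = 10_000;

  @Override
  public void run() {
    while (!Thread.currentThread().isInterrupted()) {
      long start = System.nanoTime();
      try {
        Thread.sleep(SLEEP_MS);
      } catch (InterruptedException ie) {
        return;
      }
      long extraMs = (System.nanoTime() - start) / 1_000_000L - SLEEP_MS;
      if (extraMs > WARN_THRESHOLD_MS) {
        System.err.println("Detected pause in JVM or host machine (eg GC): "
            + "pause of approximately " + extraMs + "ms");
      }
    }
  }
}
{code}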


> org.apache.hadoop.util.JvmPauseMonitor Detected pause in JVM or host 
> machine (eg GC) cause ResourceManager exit
> ---
>
> Key: HADOOP-15081
> URL: https://issues.apache.org/jira/browse/HADOOP-15081
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.3
>Reporter: liuxiaobin
>
> org.apache.hadoop.util.JvmPauseMonitor
> Detected pause in JVM or host machine (eg GC): pause of approximately 2562ms
> GC pool 'ConcurrentMarkSweep' had collection(s): count=4 time=3040ms
> ResourceManager and NodeManager exit.






[jira] [Commented] (HADOOP-15082) add AbstractContractRootDirectoryTest test for mkdir / ; wasb to implement the test

2017-12-01 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274432#comment-16274432
 ] 

genericqa commented on HADOOP-15082:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 22s{color} | {color:orange} root: The patch generated 1 new + 6 unchanged - 
0 fixed = 7 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
33s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
14s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15082 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900201/HADOOP-15082-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e5e9d84cb57b 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 556aea3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13768/artifact/out/dif

[jira] [Created] (HADOOP-15084) Create docker images for latest stable hadoop2 build

2017-12-01 Thread Elek, Marton (JIRA)
Elek, Marton created HADOOP-15084:
-

 Summary: Create docker images for latest stable hadoop2 build
 Key: HADOOP-15084
 URL: https://issues.apache.org/jira/browse/HADOOP-15084
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Elek, Marton
Assignee: Elek, Marton









[jira] [Created] (HADOOP-15083) Create base image for running hadoop in docker containers

2017-12-01 Thread Elek, Marton (JIRA)
Elek, Marton created HADOOP-15083:
-

 Summary: Create base image for running hadoop in docker containers
 Key: HADOOP-15083
 URL: https://issues.apache.org/jira/browse/HADOOP-15083
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Elek, Marton
Assignee: Elek, Marton









[jira] [Updated] (HADOOP-14898) Create official Docker images for development and testing features

2017-12-01 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HADOOP-14898:
--
Issue Type: New Feature  (was: Improvement)

> Create official Docker images for development and testing features 
> ---
>
> Key: HADOOP-14898
> URL: https://issues.apache.org/jira/browse/HADOOP-14898
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HADOOP-14898.001.tar.gz, HADOOP-14898.002.tar.gz, 
> HADOOP-14898.003.tgz
>
>
> This is the original mail from the mailing list:
> {code}
> TL;DR: I propose to create official hadoop images and upload them to the 
> dockerhub.
> GOAL/SCOPE: I would like improve the existing documentation with easy-to-use 
> docker based recipes to start hadoop clusters with various configuration.
> The images also could be used to test experimental features. For example 
> ozone could be tested easily with these compose file and configuration:
> https://gist.github.com/elek/1676a97b98f4ba561c9f51fce2ab2ea6
> Or even the configuration could be included in the compose file:
> https://github.com/elek/hadoop/blob/docker-2.8.0/example/docker-compose.yaml
> I would like to create separated example compose files for federation, ha, 
> metrics usage, etc. to make it easier to try out and understand the features.
> CONTEXT: There is an existing Jira 
> https://issues.apache.org/jira/browse/HADOOP-13397
> But it’s about a tool to generate production quality docker images (multiple 
> types, in a flexible way). If no objections, I will create a separated issue 
> to create simplified docker images for rapid prototyping and investigating 
> new features. And register the branch to the dockerhub to create the images 
> automatically.
> MY BACKGROUND: I am working with docker based hadoop/spark clusters quite a 
> while and run them succesfully in different environments (kubernetes, 
> docker-swarm, nomad-based scheduling, etc.) My work is available from here: 
> https://github.com/flokkr but they could handle more complex use cases (eg. 
> instrumenting java processes with btrace, or read/reload configuration from 
> consul).
>  And IMHO in the official hadoop documentation it’s better to suggest to use 
> official apache docker images and not external ones (which could be changed).
> {code}
> The next list will enumerate the key decision points regarding to docker 
> image creating
> A. automated dockerhub build  / jenkins build
> Docker images could be built on the dockerhub (a branch pattern should be 
> defined for a github repository and the location of the Docker files) or 
> could be built on a CI server and pushed.
> The second one is more flexible (it's more easy to create matrix build, for 
> example)
> The first one had the advantage that we can get an additional flag on the 
> dockerhub that the build is automated (and built from the source by the 
> dockerhub).
> The decision is easy as ASF supports the first approach: (see 
> https://issues.apache.org/jira/browse/INFRA-12781?focusedCommentId=15824096&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15824096)
> B. source: binary distribution or source build
> The second question is about creating the docker image. One option is to 
> build the software on the fly during the creation of the docker image the 
> other one is to use the binary releases.
> I suggest to use the second approach as:
> 1. In that case the hadoop:2.7.3 could contain exactly the same hadoop 
> distrubution as the downloadable one
> 2. We don't need to add development tools to the image, the image could be 
> more smaller (which is important as the goal for this image to getting 
> started as fast as possible)
> 3. The docker definition will be more simple (and more easy to maintain)
> Usually this approach is used in other projects (I checked Apache Zeppelin 
> and Apache Nutch)
> C. branch usage
> Other question is the location of the Docker file. It could be on the 
> official source-code branches (branch-2, trunk, etc.) or we can create 
> separated branches for the dockerhub (eg. docker/2.7 docker/2.8 docker/3.0)
> For the first approach it's easier to find the docker images, but it's less 
> flexible. For example if we had a Dockerfile for on the source code it should 
> be used for every release (for example the Docker file from the tag 
> release-3.0.0 should be used for the 3.0 hadoop docker image). In that case 
> the release process is much more harder: in case of a Dockerfile error (which 
> could be test on dockerhub only after the taging), a new release should be 
> added after fixing the Dockerfile.
> Another problem is that with using tags it's not possible to improve the 
> Dockerfiles. I can imagine 

[jira] [Updated] (HADOOP-15082) add AbstractContractRootDirectoryTest test for mkdir / ; wasb to implement the test

2017-12-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15082:

Status: Patch Available  (was: Open)

> add AbstractContractRootDirectoryTest test for mkdir / ; wasb to implement 
> the test
> ---
>
> Key: HADOOP-15082
> URL: https://issues.apache.org/jira/browse/HADOOP-15082
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/azure, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15082-001.patch
>
>
> I managed to get a stack trace on an older version of WASB with some code 
> doing a mkdir(new Path("/")); some of the ranger parentage checks didn't 
> handle that specific case.
> # Add a new root Fs contract test for this operation
> # Have WASB implement the test suite as an integration test.
> # If the test fails and shows a problem, fix it






[jira] [Updated] (HADOOP-15082) add AbstractContractRootDirectoryTest test for mkdir / ; wasb to implement the test

2017-12-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15082:

Attachment: HADOOP-15082-001.patch

HADOOP-15082 patch 001: mkdir on root; add a WASB test. The test case mimics 
the sequence which gave me an NPE on an older version of hadoop.

tested: s3, wasb

This test shows that the latest WASB code doesn't exhibit the problem (good)

Here's the stack trace I got earlier:
{code}
java.lang.NullPointerException
    at org.apache.hadoop.fs.azure.NativeAzureFileSystem.getAncestor(NativeAzureFileSystem.java:2404)
    at org.apache.hadoop.fs.azure.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:2436)
    at org.apache.hadoop.fs.azure.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:2422)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1924)
    at com.hortonworks.spark.cloud.integration.Generator.action(Generator.scala:93)
    at com.hortonworks.spark.cloud.MinimalStoreExample$$anonfun$run$1.apply(MinimalStoreExample.scala:52)
    at com.hortonworks.spark.cloud.MinimalStoreExample$$anonfun$run$1.apply(MinimalStoreExample.scala:52)
    at com.hortonworks.spark.cloud.MinimalStoreExample$class.execute(MinimalStoreExample.scala:86)
    at com.hortonworks.spark.cloud.integration.Generator.execute(Generator.scala:34)
    at com.hortonworks.spark.cloud.MinimalStoreExample$class.run(MinimalStoreExample.scala:52)
    at com.hortonworks.spark.cloud.integration.Generator.run(Generator.scala:34)
    at com.hortonworks.spark.cloud.integration.Generator$.main(Generator.scala:183)
    at com.hortonworks.spark.cloud.integration.Generator.main(Generator.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:750)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
{code}
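
To make the intent of the new contract test concrete, here is a minimal sketch of the kind of test case described (hypothetical method body; getFileSystem() and assertIsDirectory() are assumed from the contract-test base class, and the actual patch may differ):

{code}
// Hypothetical sketch, not the actual patch: mkdirs("/") must be a
// harmless no-op on the root directory, not an NPE.
public void testMkdirsOnRoot() throws Throwable {
  FileSystem fs = getFileSystem();   // from the contract-test base class
  Path root = new Path("/");
  assertIsDirectory(root);           // root always exists
  assertTrue("mkdirs(/) should succeed", fs.mkdirs(root));
  assertIsDirectory(root);           // still a directory afterwards
}
{code}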

> add AbstractContractRootDirectoryTest test for mkdir / ; wasb to implement 
> the test
> ---
>
> Key: HADOOP-15082
> URL: https://issues.apache.org/jira/browse/HADOOP-15082
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/azure, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15082-001.patch
>
>
> I managed to get a stack trace on an older version of WASB with some code 
> doing a mkdir(new Path("/")); some of the ranger parentage checks didn't 
> handle that specific case.
> # Add a new root FS contract test for this operation.
> # Have WASB implement the test suite as an integration test.
> # If the test fails, it shows a problem to fix.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14594) ITestS3AFileOperationCost::testFakeDirectoryDeletion to uncomment metric assertions

2017-12-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274315#comment-16274315
 ] 

Steve Loughran commented on HADOOP-14594:
-

Given that the old patch uncomments things, the revert will leave the asserts 
commented out. I'd prefer deleting them entirely. If you supply a patch for 
that, I'll review it.
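
For context, a hedged sketch of the style of metric assertion under discussion (the MetricDiff helper and Statistic constant follow the S3A test utilities, but treat the exact names and signatures as assumptions):

{code}
// Hypothetical sketch: track the delta of a counter across an operation
// and assert on it; these are the asserts that were commented out.
S3ATestUtils.MetricDiff deleteRequests =
    new S3ATestUtils.MetricDiff(fs, Statistic.OBJECT_DELETE_REQUESTS);
fs.mkdirs(new Path("/test/dir"));
deleteRequests.assertDiffEquals("object delete requests", 1);
{code}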

> ITestS3AFileOperationCost::testFakeDirectoryDeletion to uncomment metric 
> assertions
> ---
>
> Key: HADOOP-14594
> URL: https://issues.apache.org/jira/browse/HADOOP-14594
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14594.000.patch, HADOOP-14594.001.patch
>
>
> Per the discussion in [HADOOP-14255] and [HADOOP-13222], we can delete the TODO 
> comment in the tests for metric assertions.
> See the attached patch for more details.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15082) add AbstractContractRootDirectoryTest test for mkdir / ; wasb to implement the test

2017-12-01 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15082:
---

 Summary: add AbstractContractRootDirectoryTest test for mkdir / ; 
wasb to implement the test
 Key: HADOOP-15082
 URL: https://issues.apache.org/jira/browse/HADOOP-15082
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs, fs/azure, test
Affects Versions: 2.9.0
Reporter: Steve Loughran
Priority: Minor


I managed to get a stack trace on an older version of WASB with some code 
doing a mkdir(new Path("/")); some of the ranger parentage checks didn't 
handle that specific case.

# Add a new root FS contract test for this operation.
# Have WASB implement the test suite as an integration test.
# If the test fails, it shows a problem to fix.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-12-01 Thread Lukas Waldmann (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274214#comment-16274214
 ] 

Lukas Waldmann commented on HADOOP-1:
-

Steve, do you know of a page where I can announce this new module so 
people know about it and can try it out?

> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-1
> URL: https://issues.apache.org/jira/browse/HADOOP-1
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-1.10.patch, HADOOP-1.11.patch, 
> HADOOP-1.12.patch, HADOOP-1.13.patch, HADOOP-1.2.patch, 
> HADOOP-1.3.patch, HADOOP-1.4.patch, HADOOP-1.5.patch, 
> HADOOP-1.6.patch, HADOOP-1.7.patch, HADOOP-1.8.patch, 
> HADOOP-1.9.patch, HADOOP-1.patch
>
>
> The current implementations of the FTP and SFTP filesystems have severe 
> limitations and performance issues when dealing with a high number of files. 
> My patch solves those issues and integrates both filesystems in such a way 
> that most of the core functionality is common to both, thereby simplifying 
> maintainability.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support for explicit FTPS (SSL/TLS)
> * Support for connection pooling - a new connection is not created for every 
> single command but is reused from the pool (see the sketch after this list).
> For a huge number of files this shows an order-of-magnitude performance 
> improvement over non-pooled connections.
> * Caching of directory trees. For FTP you always need to list the whole 
> directory whenever you ask for information about a particular file.
> Again, for a huge number of files this shows an order-of-magnitude performance 
> improvement over non-cached connections.
> * Support for keep-alive (NOOP) messages to avoid connection drops
> * Support for Unix-style or regexp wildcard globs - useful for listing 
> particular files across a whole directory tree
> * Support for re-establishing broken FTP data transfers - which can happen 
> surprisingly often
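
A minimal sketch of the connection-pooling idea from the list above (hypothetical names, not the actual classes in this patch):

{code}
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Hypothetical illustration: reuse idle connections instead of opening a
// new one for every FTP/SFTP command.
class ConnectionPool<C> {
  private final Deque<C> idle = new ArrayDeque<>();
  private final Supplier<C> factory;

  ConnectionPool(Supplier<C> factory) {
    this.factory = factory;
  }

  synchronized C borrow() {
    // Reuse an idle connection when available; otherwise open a new one.
    return idle.isEmpty() ? factory.get() : idle.pop();
  }

  synchronized void release(C connection) {
    idle.push(connection);  // keep it open for later reuse
  }
}
{code}

Amortizing the expensive login/TLS handshake across many commands is where the order-of-magnitude improvement for large file counts comes from.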



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15079) ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after OutputCommitter patch

2017-12-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274166#comment-16274166
 ] 

Steve Loughran commented on HADOOP-15079:
-

You file the patch (or I can just revert HADOOP-14594).

> ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after 
> OutputCommitter patch
> ---
>
> Key: HADOOP-15079
> URL: https://issues.apache.org/jira/browse/HADOOP-15079
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Priority: Critical
>
> I see this test failing with "object_delete_requests expected:<1> but 
> was:<2>". I printed stack traces whenever this metric was incremented, and 
> found the root cause to be that innerMkdirs now causes two calls to delete 
> fake directories where it previously caused only one: the delete happens once 
> inside createFakeDirectory, and once directly inside innerMkdirs later:
> {code}
>     at org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
>     at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
>     at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:279)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1366)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:1625)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:2634)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2599)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1498)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$createEmptyObject$11(S3AFileSystem.java:2684)
>     at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:108)
>     at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:259)
>     at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
>     at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:255)
>     at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:230)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:2682)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:2657)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2021)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:1956)
>     at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2305)
>     at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
>     at org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testFakeDirectoryDeletion(ITestS3AFileOperationCost.java:209)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>     at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>     at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>     at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74
> {code}
> {code}
>     at org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
>     at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
>     at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:279)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1366)
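
A simplified, hypothetical illustration of the double delete described above (method names mirror S3AFileSystem, but the bodies are reduced to counting):

{code}
// Hypothetical sketch, not the actual S3AFileSystem code: innerMkdirs ends
// up deleting fake parent directories twice, once via createFakeDirectory
// -> finishedWrite and once directly.
public class DoubleDeleteSketch {
  static int objectDeleteRequests = 0;

  static void deleteUnnecessaryFakeDirectories(String parent) {
    objectDeleteRequests++;  // the real code issues an object delete here
  }

  static void finishedWrite(String key) {
    deleteUnnecessaryFakeDirectories(parent(key));  // first delete
  }

  static void createFakeDirectory(String key) {
    finishedWrite(key);  // putObjectDirect(...) finishes by calling this
  }

  static void innerMkdirs(String path) {
    createFakeDirectory(path);
    deleteUnnecessaryFakeDirectories(parent(path));  // second, redundant delete
  }

  static String parent(String p) {
    int i = p.lastIndexOf('/');
    return i <= 0 ? "/" : p.substring(0, i);
  }

  public static void main(String[] args) {
    innerMkdirs("/test/dir");
    // prints 2, matching the "expected:<1> but was:<2>" failure
    System.out.println("object_delete_requests = " + objectDeleteRequests);
  }
}
{code}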

[jira] [Commented] (HADOOP-15079) ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after OutputCommitter patch

2017-12-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274165#comment-16274165
 ] 

Steve Loughran commented on HADOOP-15079:
-

Correction: no JIRA, just a comment in HADOOP-14594 about it.

> ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after 
> OutputCommitter patch
> ---
>
> Key: HADOOP-15079
> URL: https://issues.apache.org/jira/browse/HADOOP-15079
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Priority: Critical
>
> I see this test failing with "object_delete_requests expected:<1> but 
> was:<2>". I printed stack traces whenever this metric was incremented, and 
> found the root cause to be that innerMkdirs now causes two calls to delete 
> fake directories where it previously caused only one: the delete happens once 
> inside createFakeDirectory, and once directly inside innerMkdirs later:
> {code}
>     at org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
>     at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
>     at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:279)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1366)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:1625)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:2634)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2599)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1498)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$createEmptyObject$11(S3AFileSystem.java:2684)
>     at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:108)
>     at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:259)
>     at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
>     at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:255)
>     at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:230)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:2682)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:2657)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2021)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:1956)
>     at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2305)
>     at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
>     at org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testFakeDirectoryDeletion(ITestS3AFileOperationCost.java:209)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>     at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>     at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>     at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74
> {code}
> {code}
>     at org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
>     at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
>     at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:279)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java

[jira] [Commented] (HADOOP-15079) ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after OutputCommitter patch

2017-12-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274162#comment-16274162
 ] 

Steve Loughran commented on HADOOP-15079:
-

There's already a JIRA to remove that probe, with a patch. The test is trying to 
be clever and skip the assert if S3Guard is up (as it does less work), but gets 
it wrong if you are enabling S3Guard via a bucket-specific option.
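
To illustrate why a probe on the base key gets this wrong, a hedged sketch (the fs.s3a key names follow the usual per-bucket override pattern, but treat them as assumptions here):

{code}
import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch: a per-bucket override takes effect even when the
// base key says S3Guard is off, so a probe that reads only the base key
// skips (or runs) the assert incorrectly.
public class S3GuardProbeSketch {
  static boolean s3guardEnabled(Configuration conf, String bucket) {
    String base = conf.getTrimmed("fs.s3a.metadatastore.impl", "");
    // the per-bucket key wins over the base key when set
    String effective = conf.getTrimmed(
        "fs.s3a.bucket." + bucket + ".metadatastore.impl", base);
    return !effective.isEmpty() && !effective.endsWith("NullMetadataStore");
  }
}
{code}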

> ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after 
> OutputCommitter patch
> ---
>
> Key: HADOOP-15079
> URL: https://issues.apache.org/jira/browse/HADOOP-15079
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Priority: Critical
>
> I see this test failing with "object_delete_requests expected:<1> but 
> was:<2>". I printed stack traces whenever this metric was incremented, and 
> found the root cause to be that innerMkdirs now causes two calls to delete 
> fake directories where it previously caused only one: the delete happens once 
> inside createFakeDirectory, and once directly inside innerMkdirs later:
> {code}
>     at org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
>     at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
>     at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:279)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1366)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:1625)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:2634)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2599)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1498)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$createEmptyObject$11(S3AFileSystem.java:2684)
>     at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:108)
>     at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:259)
>     at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
>     at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:255)
>     at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:230)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:2682)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:2657)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2021)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:1956)
>     at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2305)
>     at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
>     at org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testFakeDirectoryDeletion(ITestS3AFileOperationCost.java:209)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>     at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>     at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>     at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74
> {code}
> {code}
>     at org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
>     at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
>     at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
>     at

[jira] [Commented] (HADOOP-14775) Change junit dependency in parent pom file to junit 5 while maintaining backward compatibility to junit4.

2017-12-01 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274102#comment-16274102
 ] 

Akira Ajisaka commented on HADOOP-14775:


The patch no longer applies to trunk. Would you rebase it?

> Change junit dependency in parent pom file to junit 5 while maintaining 
> backward compatibility to junit4. 
> --
>
> Key: HADOOP-14775
> URL: https://issues.apache.org/jira/browse/HADOOP-14775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>  Labels: junit5
> Attachments: HADOOP-14775.01.patch, HADOOP-14775.02.patch
>
>
> Change the junit dependency in the parent pom file to JUnit 5 while 
> maintaining backward compatibility with JUnit 4.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org