[jira] [Commented] (HADOOP-15506) Upgrading Azure Storage Sdk version and updated corresponding code blocks

2018-05-31 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497579#comment-16497579
 ] 

genericqa commented on HADOOP-15506:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
16s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15506 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926005/HADOOP-15506-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 

[jira] [Commented] (HADOOP-15137) ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster

2018-05-31 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497573#comment-16497573
 ] 

genericqa commented on HADOOP-15137:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
43m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-client-minicluster in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15137 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911154/HADOOP-15137.02.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux a529bcdbc590 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7dd26d5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14711/testReport/ |
| Max. process+thread count | 335 (vs. ulimit of 1) |
| modules | C: hadoop-client-modules/hadoop-client-minicluster U: 
hadoop-client-modules/hadoop-client-minicluster |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14711/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ClassNotFoundException: 
> org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using 
> hadoop-client-minicluster
> --
>
> Key: HADOOP-15137
> URL: https://issues.apache.org/jira/browse/HADOOP-15137
> Project: Hadoop Common
>   

[jira] [Updated] (HADOOP-15471) Hdfs recursive listing operation is very slow

2018-05-31 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HADOOP-15471:
---
Status: Patch Available  (was: Open)

Submitting the patch to trigger jenkins.

> Hdfs recursive listing operation is very slow
> -
>
> Key: HADOOP-15471
> URL: https://issues.apache.org/jira/browse/HADOOP-15471
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.1
> Environment: HCFS file system where HDP 2.6.1 is connected to ECS 
> (Object Store).
>Reporter: Ajay Sachdev
>Assignee: Ajay Sachdev
>Priority: Major
> Fix For: 2.7.1
>
> Attachments: HDFS-13398.001.patch, HDFS-13398.002.patch, 
> HDFS-13398.003.patch, parallelfsPatch
>
>
> The hdfs dfs -ls -R command is sequential in nature and is very slow for an 
> HCFS system. We have seen around 6 minutes for a 40K directory/file structure.
> The proposal is to use a multithreading approach to speed up the recursive 
> list, du and count operations.
> We have tried a ForkJoinPool implementation to improve performance for the 
> recursive listing operation.
> [https://github.com/jasoncwik/hadoop-release/tree/parallel-fs-cli]
> commit id : 
> 82387c8cd76c2e2761bd7f651122f83d45ae8876
> Another implementation uses the Java ExecutorService to run the listing 
> operation in multiple threads in parallel. This has significantly reduced the 
> time from 6 minutes to 40 seconds.
>  
>  
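To make the multithreaded approach above concrete, here is a rough, self-contained sketch of a ForkJoinPool-based recursive listing against the Hadoop FileSystem API. It is not the attached patch; the class name, pool size, and error handling are illustrative assumptions.

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Hypothetical sketch: parallel recursive listing with a ForkJoinPool. */
public class ParallelLister {

  /** Lists one directory and forks a subtask per child directory. */
  static class ListTask extends RecursiveTask<List<FileStatus>> {
    private final FileSystem fs;
    private final Path dir;

    ListTask(FileSystem fs, Path dir) {
      this.fs = fs;
      this.dir = dir;
    }

    @Override
    protected List<FileStatus> compute() {
      List<FileStatus> results = new ArrayList<>();
      List<ListTask> subTasks = new ArrayList<>();
      try {
        for (FileStatus status : fs.listStatus(dir)) {
          results.add(status);
          if (status.isDirectory()) {
            ListTask sub = new ListTask(fs, status.getPath());
            sub.fork();                  // descend into child directories in parallel
            subTasks.add(sub);
          }
        }
      } catch (IOException e) {
        throw new RuntimeException("Failed to list " + dir, e);
      }
      for (ListTask sub : subTasks) {
        results.addAll(sub.join());      // collect results from forked subtasks
      }
      return results;
    }
  }

  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    ForkJoinPool pool = new ForkJoinPool(16);   // thread count is an assumption
    List<FileStatus> all = pool.invoke(new ListTask(fs, new Path(args[0])));
    System.out.println("Entries: " + all.size());
  }
}
{code}

Each directory discovered while listing is forked as its own subtask, so deep or wide trees are walked by many worker threads instead of one.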



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15483) Upgrade jquery to version 3.3.1

2018-05-31 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HADOOP-15483:
---
Attachment: HADOOP-15483.003.patch

> Upgrade jquery to version 3.3.1
> ---
>
> Key: HADOOP-15483
> URL: https://issues.apache.org/jira/browse/HADOOP-15483
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HADOOP-15483.001.patch, HADOOP-15483.002.patch, 
> HADOOP-15483.003.patch
>
>
> This Jira aims to upgrade jquery to version 3.3.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15483) Upgrade jquery to version 3.3.1

2018-05-31 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497563#comment-16497563
 ] 

genericqa commented on HADOOP-15483:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-15483 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15483 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926035/HADOOP-15483.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14712/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Upgrade jquery to version 3.3.1
> ---
>
> Key: HADOOP-15483
> URL: https://issues.apache.org/jira/browse/HADOOP-15483
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HADOOP-15483.001.patch, HADOOP-15483.002.patch
>
>
> This Jira aims to upgrade jquery to version 3.3.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15483) Upgrade jquery to version 3.3.1

2018-05-31 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HADOOP-15483:
---
Attachment: HADOOP-15483.002.patch

> Upgrade jquery to version 3.3.1
> ---
>
> Key: HADOOP-15483
> URL: https://issues.apache.org/jira/browse/HADOOP-15483
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HADOOP-15483.001.patch, HADOOP-15483.002.patch
>
>
> This Jira aims to upgrade jquery to version 3.3.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15137) ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster

2018-05-31 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497528#comment-16497528
 ] 

Bharat Viswanadham commented on HADOOP-15137:
-

[~rohithsharma] branch-3, branch-3.1 and trunk.

> ClassNotFoundException: 
> org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using 
> hadoop-client-minicluster
> --
>
> Key: HADOOP-15137
> URL: https://issues.apache.org/jira/browse/HADOOP-15137
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Jeff Zhang
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-15137.01.patch, HADOOP-15137.02.patch, 
> YARN-7673.00.patch
>
>
> I'd like to use hadoop-client-minicluster for a Hadoop downstream project, but 
> I encounter the following exception when starting the Hadoop minicluster. I 
> checked hadoop-client-minicluster, and it indeed does not have this class. Is 
> this something that is missing when packaging the published jar?
> {code}
> java.lang.NoClassDefFoundError: 
> org/apache/hadoop/yarn/server/api/DistributedSchedulingAMProtocol
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.createResourceManager(MiniYARNCluster.java:851)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:285)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> {code}
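For context, here is a minimal sketch of the kind of downstream minicluster usage that hits the exception quoted above; the class and test names are illustrative, not taken from the patch.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.MiniYARNCluster;

/** Illustrative downstream usage; loading the ResourceManager pulls in
 *  DistributedSchedulingAMProtocol, which is absent from the shaded jar. */
public class MiniClusterSmokeTest {
  public static void main(String[] args) throws Exception {
    Configuration conf = new YarnConfiguration();
    MiniYARNCluster cluster =
        new MiniYARNCluster("smoke-test", 1, 1, 1); // name, NMs, local dirs, log dirs
    cluster.init(conf);   // NoClassDefFoundError surfaces here with the
    cluster.start();      // unpatched hadoop-client-minicluster on the classpath
    cluster.stop();
  }
}
{code}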



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15137) ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster

2018-05-31 Thread Rohith Sharma K S (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497521#comment-16497521
 ] 

Rohith Sharma K S commented on HADOOP-15137:


[~bharatviswa] [~zjffdu] Which branches are affected by this? To which branches 
does this need to be committed? 
[~zjffdu] Would you please verify the fix once? 

> ClassNotFoundException: 
> org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using 
> hadoop-client-minicluster
> --
>
> Key: HADOOP-15137
> URL: https://issues.apache.org/jira/browse/HADOOP-15137
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Jeff Zhang
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-15137.01.patch, HADOOP-15137.02.patch, 
> YARN-7673.00.patch
>
>
> I'd like to use hadoop-client-minicluster for a Hadoop downstream project, but 
> I encounter the following exception when starting the Hadoop minicluster. I 
> checked hadoop-client-minicluster, and it indeed does not have this class. Is 
> this something that is missing when packaging the published jar?
> {code}
> java.lang.NoClassDefFoundError: 
> org/apache/hadoop/yarn/server/api/DistributedSchedulingAMProtocol
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.createResourceManager(MiniYARNCluster.java:851)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:285)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15506) Upgrading Azure Storage Sdk version and updated corresponding code blocks

2018-05-31 Thread Thomas Marquardt (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Marquardt updated HADOOP-15506:
--
Release Note: WASB: Fix Spark process hang at shutdown due to use of 
non-daemon threads by updating Azure Storage Java SDK to 7.0
  Status: Patch Available  (was: Open)

Esfandiar attached HADOOP-15506-001.patch and posted the test results.  This 
SDK has a fix to the block blob output stream to use daemon threads for upload, 
and thereby fixes a process shutdown hang commonly seen when running Spark 
jobs. 
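For background on why the daemon-thread change matters: the JVM only exits once no non-daemon threads remain, so an upload pool built from default threads can keep an otherwise finished Spark driver alive. A generic sketch of the pattern, not the SDK's actual code, with hypothetical names:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

/** Generic illustration: an upload pool whose worker threads are marked as
 *  daemons, so an idle pool no longer blocks JVM shutdown. */
public class DaemonUploadPool {
  public static ExecutorService create(int threads) {
    ThreadFactory factory = runnable -> {
      Thread t = new Thread(runnable, "blob-upload-worker");
      t.setDaemon(true);   // daemon threads do not keep the process alive
      return t;
    };
    return Executors.newFixedThreadPool(threads, factory);
  }
}
{code}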

> Upgrading Azure Storage Sdk version and updated corresponding code blocks
> -
>
> Key: HADOOP-15506
> URL: https://issues.apache.org/jira/browse/HADOOP-15506
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Minor
> Attachments: HADOOP-15506-001.patch
>
>
> - Upgraded Azure Storage Sdk to 7.0.0
> - Fixed code issues and couple of tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15407) Support Windows Azure Storage - Blob file system in Hadoop

2018-05-31 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497445#comment-16497445
 ] 

genericqa commented on HADOOP-15407:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 51 new or modified test 
files. {color} |
|| || || || {color:brown} HADOOP-15407 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 7s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
27s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
13s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 19m  
2s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
32m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
14s{color} | {color:green} HADOOP-15407 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
4m  8s{color} | {color:orange} root: The patch generated 194 new + 0 unchanged 
- 0 fixed = 194 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 19m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
7s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project . {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
55s{color} | {color:red} hadoop-tools/hadoop-azure generated 17 new + 0 
unchanged - 0 fixed = 17 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 24s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}216m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-azure |
|  |  Hard coded reference to an absolute pathname in 
org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.getHomeDirectory()  At 
AzureBlobFileSystem.java:absolute pathname in 
org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.getHomeDirectory()  At 
AzureBlobFileSystem.java:[line 435] |
|  |  Should 

[jira] [Commented] (HADOOP-15507) Add MapReduce counters about EC bytes read

2018-05-31 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497415#comment-16497415
 ] 

genericqa commented on HADOOP-15507:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 42s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 48s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
47s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
11s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}149m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestTrash |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15507 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926009/HADOOP-15507.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f3d1cb52a279 4.4.0-121-generic #145-Ubuntu SMP Fri Apr 13 
13:47:23 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c95b9b5 |
| maven | version: Apache Maven 

[jira] [Commented] (HADOOP-15506) Upgrading Azure Storage Sdk version and updated corresponding code blocks

2018-05-31 Thread Thomas Marquardt (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497337#comment-16497337
 ] 

Thomas Marquardt commented on HADOOP-15506:
---

+1

> Upgrading Azure Storage Sdk version and updated corresponding code blocks
> -
>
> Key: HADOOP-15506
> URL: https://issues.apache.org/jira/browse/HADOOP-15506
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Minor
> Attachments: HADOOP-15506-001.patch
>
>
> - Upgraded Azure Storage Sdk to 7.0.0
> - Fixed code issues and couple of tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14783) [KMS] Add missing configuration properties into kms-default.xml

2018-05-31 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497329#comment-16497329
 ] 

Hudson commented on HADOOP-14783:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14330 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14330/])
HADOOP-14783. [KMS] Add missing configuration properties into (weichiu: rev 
32671d87135f22707ea03c3f17e99d41d82c0a39)
* (edit) hadoop-common-project/hadoop-kms/src/main/resources/kms-default.xml


> [KMS] Add missing configuration properties into kms-default.xml
> ---
>
> Key: HADOOP-14783
> URL: https://issues.apache.org/jira/browse/HADOOP-14783
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Wei-Chiu Chuang
>Assignee: Chetna Chaudhari
>Priority: Minor
>  Labels: newbie++
> Fix For: 3.2.0
>
> Attachments: HADOOP-14783-2.patch, HADOOP-14783.patch
>
>
> A few KMS configs are missing from kms-default.xml
> hadoop.kms.key.authorization.enable
> hadoop.security.kms.encrypted.key.cache.{size,low.watermark,expiry,num.fill.threads}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14783) [KMS] Add missing configuration properties into kms-default.xml

2018-05-31 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-14783:
-
   Resolution: Fixed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Pushed to trunk. Thanks a lot for the contribution!

> [KMS] Add missing configuration properties into kms-default.xml
> ---
>
> Key: HADOOP-14783
> URL: https://issues.apache.org/jira/browse/HADOOP-14783
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Wei-Chiu Chuang
>Assignee: Chetna Chaudhari
>Priority: Minor
>  Labels: newbie++
> Fix For: 3.2.0
>
> Attachments: HADOOP-14783-2.patch, HADOOP-14783.patch
>
>
> A few KMS configs are missing from kms-default.xml
> hadoop.kms.key.authorization.enable
> hadoop.security.kms.encrypted.key.cache.{size,low.watermark,expiry,num.fill.threads}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14783) [KMS] Add missing configuration properties into kms-default.xml

2018-05-31 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497301#comment-16497301
 ] 

Wei-Chiu Chuang commented on HADOOP-14783:
--

+1

> [KMS] Add missing configuration properties into kms-default.xml
> ---
>
> Key: HADOOP-14783
> URL: https://issues.apache.org/jira/browse/HADOOP-14783
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Wei-Chiu Chuang
>Assignee: Chetna Chaudhari
>Priority: Minor
>  Labels: newbie++
> Attachments: HADOOP-14783-2.patch, HADOOP-14783.patch
>
>
> A few KMS configs are missing from kms-default.xml
> hadoop.kms.key.authorization.enable
> hadoop.security.kms.encrypted.key.cache.{size,low.watermark,expiry,num.fill.threads}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15507) Add MapReduce counters about EC bytes read

2018-05-31 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497267#comment-16497267
 ] 

Xiao Chen commented on HADOOP-15507:


Added a screenshot of what this would look like

> Add MapReduce counters about EC bytes read
> --
>
> Key: HADOOP-15507
> URL: https://issues.apache.org/jira/browse/HADOOP-15507
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-15507.01.patch, image-2018-05-31-15-29-45-729.png
>
>
> HDFS has added Erasure Coding support in HDFS-7285. There are HDFS level 
> [ReadStatistics|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ReadStatistics.java]
>  so from DFSClient we can know how much reads are EC/replication.
> In order for users to have a better view of how much of their workload is 
> impacted by EC, we can expose EC read bytes to File System Counters, and to 
> MapReduce's job counters. This way, end users can tell from MR jobs directly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15507) Add MapReduce counters about EC bytes read

2018-05-31 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-15507:
---
Attachment: image-2018-05-31-15-29-45-729.png

> Add MapReduce counters about EC bytes read
> --
>
> Key: HADOOP-15507
> URL: https://issues.apache.org/jira/browse/HADOOP-15507
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-15507.01.patch, image-2018-05-31-15-29-45-729.png
>
>
> HDFS has added Erasure Coding support in HDFS-7285. There are HDFS level 
> [ReadStatistics|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ReadStatistics.java]
>  so from DFSClient we can know how much reads are EC/replication.
> In order for users to have a better view of how much of their workload is 
> impacted by EC, we can expose EC read bytes to File System Counters, and to 
> MapReduce's job counters. This way, end users can tell from MR jobs directly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15507) Add MapReduce counters about EC bytes read

2018-05-31 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-15507:
---
Status: Patch Available  (was: Open)

Updating patch 1 to demonstrate the idea and solicit early feedback

> Add MapReduce counters about EC bytes read
> --
>
> Key: HADOOP-15507
> URL: https://issues.apache.org/jira/browse/HADOOP-15507
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-15507.01.patch
>
>
> HDFS has added Erasure Coding support in HDFS-7285. There are HDFS level 
> [ReadStatistics|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ReadStatistics.java]
>  so from DFSClient we can know how much reads are EC/replication.
> In order for users to have a better view of how much of their workload is 
> impacted by EC, we can expose EC read bytes to File System Counters, and to 
> MapReduce's job counters. This way, end users can tell from MR jobs directly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15507) Add MapReduce counters about EC bytes read

2018-05-31 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-15507:
---
Attachment: HADOOP-15507.01.patch

> Add MapReduce counters about EC bytes read
> --
>
> Key: HADOOP-15507
> URL: https://issues.apache.org/jira/browse/HADOOP-15507
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-15507.01.patch
>
>
> HDFS has added Erasure Coding support in HDFS-7285. There are HDFS level 
> [ReadStatistics|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ReadStatistics.java]
>  so from DFSClient we can know how much reads are EC/replication.
> In order for users to have a better view of how much of their workload is 
> impacted by EC, we can expose EC read bytes to File System Counters, and to 
> MapReduce's job counters. This way, end users can tell from MR jobs directly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15507) Add MapReduce counters about EC bytes read

2018-05-31 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497247#comment-16497247
 ] 

Xiao Chen commented on HADOOP-15507:


(Write counters are calculated at FSDataOutputStream, and it's pretty difficult 
to bring HDFS information from DFSOutputStream up here. So for this Jira the 
proposal is to only do read stats)

> Add MapReduce counters about EC bytes read
> --
>
> Key: HADOOP-15507
> URL: https://issues.apache.org/jira/browse/HADOOP-15507
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
>
> HDFS has added Erasure Coding support in HDFS-7285. There are HDFS level 
> [ReadStatistics|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ReadStatistics.java]
>  so from DFSClient we can know how much reads are EC/replication.
> In order for users to have a better view of how much of their workload is 
> impacted by EC, we can expose EC read bytes to File System Counters, and to 
> MapReduce's job counters. This way, end users can tell from MR jobs directly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15507) Add MapReduce counters about EC bytes read

2018-05-31 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-15507:
--

 Summary: Add MapReduce counters about EC bytes read
 Key: HADOOP-15507
 URL: https://issues.apache.org/jira/browse/HADOOP-15507
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Xiao Chen
Assignee: Xiao Chen


HDFS has added Erasure Coding support in HDFS-7285. There are HDFS level 
[ReadStatistics|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ReadStatistics.java]
 so from DFSClient we can know how much of the reads are EC versus replication.

In order for users to have a better view of how much of their workload is 
impacted by EC, we can expose EC read bytes to File System Counters, and to 
MapReduce's job counters. This way, end users can tell from MR jobs directly.
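As a concrete starting point, the per-stream statistics the HDFS client already tracks can be inspected roughly as follows (a sketch using the existing ReadStatistics API; the EC-specific byte counter this JIRA proposes to surface is indicated only in a comment, since it is what the patch adds):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.ReadStatistics;
import org.apache.hadoop.hdfs.client.HdfsDataInputStream;

/** Sketch: inspecting DFSClient-level read statistics for one stream. */
public class EcReadStatsProbe {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    try (FSDataInputStream in = fs.open(new Path(args[0]))) {
      in.read(new byte[1 << 20]);                    // read some data first
      if (in instanceof HdfsDataInputStream) {
        ReadStatistics stats = ((HdfsDataInputStream) in).getReadStatistics();
        System.out.println("total bytes read: " + stats.getTotalBytesRead());
        System.out.println("local bytes read: " + stats.getTotalLocalBytesRead());
        // The proposal is to propagate an erasure-coded-bytes figure like these
        // into the File System Counters and, from there, the MR job counters.
      }
    }
  }
}
{code}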



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15507) Add MapReduce counters about EC bytes read

2018-05-31 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-15507:
---
Target Version/s: 3.2.0

> Add MapReduce counters about EC bytes read
> --
>
> Key: HADOOP-15507
> URL: https://issues.apache.org/jira/browse/HADOOP-15507
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
>
> HDFS has added Erasure Coding support in HDFS-7285. There are HDFS level 
> [ReadStatistics|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ReadStatistics.java]
>  so from DFSClient we can know how much reads are EC/replication.
> In order for users to have a better view of how much of their workload is 
> impacted by EC, we can expose EC read bytes to File System Counters, and to 
> MapReduce's job counters. This way, end users can tell from MR jobs directly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15506) Upgrading Azure Storage Sdk version and updated corresponding code blocks

2018-05-31 Thread Esfandiar Manii (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497236#comment-16497236
 ] 

Esfandiar Manii commented on HADOOP-15506:
--

{code:java}
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.fs.azure.TestWasbFsck
[INFO] Running org.apache.hadoop.fs.azure.TestShellDecryptionKeyProvider
[INFO] Running org.apache.hadoop.fs.azure.TestBlobOperationDescriptor
[INFO] Running 
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked
[WARNING] Tests run: 2, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 0.157 
s - in org.apache.hadoop.fs.azure.TestShellDecryptionKeyProvider
[WARNING] Tests run: 2, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.736 
s - in org.apache.hadoop.fs.azure.TestWasbFsck
[INFO] Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked
[INFO] Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemConcurrency
[WARNING] Tests run: 43, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 
1.493 s - in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked
[INFO] Running org.apache.hadoop.fs.azure.TestBlobMetadata
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.11 s - 
in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemConcurrency
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.668 s 
- in org.apache.hadoop.fs.azure.TestBlobMetadata
[INFO] Running org.apache.hadoop.fs.azure.TestOutOfBandAzureBlobOperations
[INFO] Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemUploadLogic
[WARNING] Tests run: 3, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 0.058 
s - in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemUploadLogic
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.735 s 
- in org.apache.hadoop.fs.azure.TestOutOfBandAzureBlobOperations
[INFO] Running 
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemBlockCompaction
[INFO] Running org.apache.hadoop.fs.azure.TestClientThrottlingAnalyzer
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.103 s 
- in org.apache.hadoop.fs.azure.TestBlobOperationDescriptor
[INFO] Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.649 s 
- in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck
[INFO] Running 
org.apache.hadoop.fs.azure.metrics.TestNativeAzureFileSystemMetricsSystem
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.596 s 
- in org.apache.hadoop.fs.azure.metrics.TestNativeAzureFileSystemMetricsSystem
[INFO] Running org.apache.hadoop.fs.azure.metrics.TestBandwidthGaugeUpdater
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.287 s 
- in org.apache.hadoop.fs.azure.metrics.TestBandwidthGaugeUpdater
[INFO] Running 
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked
[INFO] Tests run: 50, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.335 s 
- in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked
[INFO] Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemAuthorization
[INFO] Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.206 
s - in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.12 s 
- in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemBlockCompaction
[INFO] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.733 s 
- in org.apache.hadoop.fs.azure.TestClientThrottlingAnalyzer
[INFO] Tests run: 59, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 230.7 s 
- in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemAuthorization
[INFO] 
[INFO] Results:
[INFO] 
[WARNING] Tests run: 241, Failures: 0, Errors: 0, Skipped: 11
[INFO] 
[INFO] 
[INFO] --- maven-surefire-plugin:2.21.0:test (serialized-test) @ hadoop-azure 
---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.fs.azure.metrics.TestRollingWindowAverage
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.197 s 
- in org.apache.hadoop.fs.azure.metrics.TestRollingWindowAverage
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-azure ---
[INFO] Deleting /home/esmanii/hadoop/hadoop-tools/hadoop-azure/target
[INFO] Deleting /home/esmanii/hadoop/hadoop-tools/hadoop-azure (includes = 
[dependency-reduced-pom.xml], excludes = [])
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-azure ---
[INFO] 

[jira] [Updated] (HADOOP-15506) Upgrading Azure Storage Sdk version and updated corresponding code blocks

2018-05-31 Thread Esfandiar Manii (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esfandiar Manii updated HADOOP-15506:
-
Attachment: HADOOP-15506-001.patch

> Upgrading Azure Storage Sdk version and updated corresponding code blocks
> -
>
> Key: HADOOP-15506
> URL: https://issues.apache.org/jira/browse/HADOOP-15506
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Minor
> Attachments: HADOOP-15506-001.patch
>
>
> - Upgraded Azure Storage Sdk to 7.0.0
> - Fixed code issues and couple of tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15506) Upgrading Azure Storage Sdk version and updated corresponding code blocks

2018-05-31 Thread Esfandiar Manii (JIRA)
Esfandiar Manii created HADOOP-15506:


 Summary: Upgrading Azure Storage Sdk version and updated 
corresponding code blocks
 Key: HADOOP-15506
 URL: https://issues.apache.org/jira/browse/HADOOP-15506
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/azure
Affects Versions: 3.2.0
Reporter: Esfandiar Manii
Assignee: Esfandiar Manii


- Upgraded Azure Storage Sdk to 7.0.0
- Fixed code issues and couple of tests




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15407) Support Windows Azure Storage - Blob file system in Hadoop

2018-05-31 Thread Thomas Marquardt (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497218#comment-16497218
 ] 

Thomas Marquardt edited comment on HADOOP-15407 at 5/31/18 9:35 PM:


Submitting HADOOP-15407-HADOOP-15407.006.patch

All tests pass against my storage account in the US:

Tests run: 269, Failures: 0, Errors: 0, Skipped: 11
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
Tests run: 414, Failures: 0, Errors: 0, Skipped: 69
Tests run: 526, Failures: 0, Errors: 0, Skipped: 163


was (Author: tmarquardt):
Submitting HADOOP-15407-HADOOP-15407.006.patch

> Support Windows Azure Storage - Blob file system in Hadoop
> --
>
> Key: HADOOP-15407
> URL: https://issues.apache.org/jira/browse/HADOOP-15407
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Major
> Attachments: HADOOP-15407-001.patch, HADOOP-15407-002.patch, 
> HADOOP-15407-003.patch, HADOOP-15407-004.patch, 
> HADOOP-15407-HADOOP-15407.006.patch
>
>
> *{color:#212121}Description{color}*
>  This JIRA adds a new file system implementation, ABFS, for running Big Data 
> and Analytics workloads against Azure Storage. This is a complete rewrite of 
> the previous WASB driver with a heavy focus on optimizing both performance 
> and cost.
>  {color:#212121} {color}
>  *{color:#212121}High level design{color}*
>  At a high level, the code here extends the FileSystem class to provide an 
> implementation for accessing blobs in Azure Storage. The scheme abfs is used 
> for accessing it over HTTP, and abfss for accessing over HTTPS. The following 
> URI scheme is used to address individual paths:
>  {color:#212121} {color}
>  
> {color:#212121}abfs[s]://<file_system>@<account_name>.dfs.core.windows.net/<path>{color}
>  {color:#212121} {color}
>  {color:#212121}ABFS is intended as a replacement to WASB. WASB is not 
> deprecated but is in pure maintenance mode and customers should upgrade to 
> ABFS once it hits General Availability later in CY18.{color}
>  {color:#212121}Benefits of ABFS include:{color}
>  {color:#212121}· Higher scale (capacity, throughput, and IOPS) for Big 
> Data and Analytics workloads by allowing higher limits on storage 
> accounts{color}
>  {color:#212121}· Removing any ramp up time with Storage backend 
> partitioning; blocks are now automatically sharded across partitions in the 
> Storage backend{color}
> {color:#212121}          .         This avoids the need for using 
> temporary/intermediate files, increasing the cost (and framework complexity 
> around committing jobs/tasks){color}
>  {color:#212121}· Enabling much higher read and write throughput on 
> single files (tens of Gbps by default){color}
>  {color:#212121}· Still retaining all of the Azure Blob features 
> customers are familiar with and expect, and gaining the benefits of future 
> Blob features as well{color}
>  {color:#212121}ABFS incorporates Hadoop Filesystem metrics to monitor the 
> file system throughput and operations. Ambari metrics are not currently 
> implemented for ABFS, but will be available soon.{color}
>  {color:#212121} {color}
>  *{color:#212121}Credits and history{color}*
>  Credit for this work goes to (hope I don't forget anyone): Shane Mainali, 
> {color:#212121}Thomas Marquardt, Zichen Sun, Georgi Chalakov, Esfandiar 
> Manii, Amit Singh, Dana Kaban, Da Zhou, Junhua Gu, Saher Ahwal, Saurabh Pant, 
> and James Baker. {color}
>  {color:#212121} {color}
>  *Test*
>  ABFS has gone through many test procedures including Hadoop file system 
> contract tests, unit testing, functional testing, and manual testing. All the 
> JUnit tests provided with the driver can run in either sequential or parallel 
> fashion in order to reduce the testing time.
>  {color:#212121}Besides unit tests, we have used ABFS as the default file 
> system in Azure HDInsight. Azure HDInsight will very soon offer ABFS as a 
> storage option. (HDFS is also used, but not as the default file system.) Various 
> customer and test workloads have been run against clusters with 
> such configurations for quite some time. Benchmarks such as Tera*, TPC-DS, 
> Spark Streaming and Spark SQL, and others have been run to do scenario, 
> performance, and functional testing. Third parties and customers have also 
> done various testing of ABFS.{color}
>  {color:#212121}The current version reflects the version of the code 
> tested and used in our production environment.{color}
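
For readers new to the addressing scheme above, here is a minimal sketch of how a 
client reaches an ABFS path through the standard Hadoop FileSystem API. It is not 
taken from the patch; the account name, file system name, and configuration key 
are illustrative assumptions, and real deployments would keep credentials in 
core-site.xml or a credential provider rather than in code.

{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AbfsListingExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Illustrative credential setting; the property name assumes the usual
    // per-account key pattern and is not taken from the patch.
    conf.set("fs.azure.account.key.myaccount.dfs.core.windows.net", "<access-key>");

    // abfs:// for HTTP, abfss:// for HTTPS; "myfs" is the file system (container).
    URI root = new URI("abfss://myfs@myaccount.dfs.core.windows.net/");
    try (FileSystem fs = FileSystem.get(root, conf)) {
      for (FileStatus status : fs.listStatus(new Path("/data"))) {
        System.out.println(status.getPath() + " " + status.getLen());
      }
    }
  }
}
{code}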



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Updated] (HADOOP-15407) Support Windows Azure Storage - Blob file system in Hadoop

2018-05-31 Thread Thomas Marquardt (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Marquardt updated HADOOP-15407:
--
Status: Patch Available  (was: Open)

Submitting HADOOP-15407-HADOOP-15407.006.patch

> Support Windows Azure Storage - Blob file system in Hadoop
> --
>
> Key: HADOOP-15407
> URL: https://issues.apache.org/jira/browse/HADOOP-15407
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Major
> Attachments: HADOOP-15407-001.patch, HADOOP-15407-002.patch, 
> HADOOP-15407-003.patch, HADOOP-15407-004.patch, 
> HADOOP-15407-HADOOP-15407.006.patch
>
>
> *{color:#212121}Description{color}*
>  This JIRA adds a new file system implementation, ABFS, for running Big Data 
> and Analytics workloads against Azure Storage. This is a complete rewrite of 
> the previous WASB driver with a heavy focus on optimizing both performance 
> and cost.
>  {color:#212121} {color}
>  *{color:#212121}High level design{color}*
>  At a high level, the code here extends the FileSystem class to provide an 
> implementation for accessing blobs in Azure Storage. The scheme abfs is used 
> for accessing it over HTTP, and abfss for accessing over HTTPS. The following 
> URI scheme is used to address individual paths:
>  {color:#212121} {color}
>  
> {color:#212121}abfs[s]://<file_system>@<account_name>.dfs.core.windows.net/<path>{color}
>  {color:#212121} {color}
>  {color:#212121}ABFS is intended as a replacement for WASB. WASB is not 
> deprecated but is in pure maintenance mode, and customers should upgrade to 
> ABFS once it hits General Availability later in CY18.{color}
>  {color:#212121}Benefits of ABFS include:{color}
>  {color:#212121}· Higher scale (capacity, throughput, and IOPS) for Big 
> Data and Analytics workloads by allowing higher limits on storage 
> accounts{color}
>  {color:#212121}· Removing any ramp up time with Storage backend 
> partitioning; blocks are now automatically sharded across partitions in the 
> Storage backend{color}
>  {color:#212121}This avoids the need for temporary/intermediate files, which 
> would otherwise increase the cost (and framework complexity) of committing 
> jobs/tasks.{color}
>  {color:#212121}· Enabling much higher read and write throughput on 
> single files (tens of Gbps by default){color}
>  {color:#212121}· Still retaining all of the Azure Blob features 
> customers are familiar with and expect, and gaining the benefits of future 
> Blob features as well{color}
>  {color:#212121}ABFS incorporates Hadoop Filesystem metrics to monitor the 
> file system throughput and operations. Ambari metrics are not currently 
> implemented for ABFS, but will be available soon.{color}
>  {color:#212121} {color}
>  *{color:#212121}Credits and history{color}*
>  Credit for this work goes to (hope I don't forget anyone): Shane Mainali, 
> {color:#212121}Thomas Marquardt, Zichen Sun, Georgi Chalakov, Esfandiar 
> Manii, Amit Singh, Dana Kaban, Da Zhou, Junhua Gu, Saher Ahwal, Saurabh Pant, 
> and James Baker. {color}
>  {color:#212121} {color}
>  *Test*
>  ABFS has gone through many test procedures including Hadoop file system 
> contract tests, unit testing, functional testing, and manual testing. All the 
> JUnit tests provided with the driver can run in either sequential or parallel 
> fashion in order to reduce the testing time.
>  {color:#212121}Besides unit tests, we have used ABFS as the default file 
> system in Azure HDInsight. Azure HDInsight will very soon offer ABFS as a 
> storage option. (HDFS is also used, but not as the default file system.) Various 
> customer and test workloads have been run against clusters with 
> such configurations for quite some time. Benchmarks such as Tera*, TPC-DS, 
> Spark Streaming and Spark SQL, and others have been run to do scenario, 
> performance, and functional testing. Third parties and customers have also 
> done various testing of ABFS.{color}
>  {color:#212121}The current version reflects the version of the code 
> tested and used in our production environment.{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15407) Support Windows Azure Storage - Blob file system in Hadoop

2018-05-31 Thread Thomas Marquardt (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Marquardt updated HADOOP-15407:
--
Attachment: HADOOP-15407-HADOOP-15407.006.patch

> Support Windows Azure Storage - Blob file system in Hadoop
> --
>
> Key: HADOOP-15407
> URL: https://issues.apache.org/jira/browse/HADOOP-15407
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Major
> Attachments: HADOOP-15407-001.patch, HADOOP-15407-002.patch, 
> HADOOP-15407-003.patch, HADOOP-15407-004.patch, 
> HADOOP-15407-HADOOP-15407.006.patch
>
>
> *{color:#212121}Description{color}*
>  This JIRA adds a new file system implementation, ABFS, for running Big Data 
> and Analytics workloads against Azure Storage. This is a complete rewrite of 
> the previous WASB driver with a heavy focus on optimizing both performance 
> and cost.
>  {color:#212121} {color}
>  *{color:#212121}High level design{color}*
>  At a high level, the code here extends the FileSystem class to provide an 
> implementation for accessing blobs in Azure Storage. The scheme abfs is used 
> for accessing it over HTTP, and abfss for accessing over HTTPS. The following 
> URI scheme is used to address individual paths:
>  {color:#212121} {color}
>  
> {color:#212121}abfs[s]://<file_system>@<account_name>.dfs.core.windows.net/<path>{color}
>  {color:#212121} {color}
>  {color:#212121}ABFS is intended as a replacement for WASB. WASB is not 
> deprecated but is in pure maintenance mode, and customers should upgrade to 
> ABFS once it hits General Availability later in CY18.{color}
>  {color:#212121}Benefits of ABFS include:{color}
>  {color:#212121}· Higher scale (capacity, throughput, and IOPS) for Big 
> Data and Analytics workloads by allowing higher limits on storage 
> accounts{color}
>  {color:#212121}· Removing any ramp up time with Storage backend 
> partitioning; blocks are now automatically sharded across partitions in the 
> Storage backend{color}
>  {color:#212121}This avoids the need for temporary/intermediate files, which 
> would otherwise increase the cost (and framework complexity) of committing 
> jobs/tasks.{color}
>  {color:#212121}· Enabling much higher read and write throughput on 
> single files (tens of Gbps by default){color}
>  {color:#212121}· Still retaining all of the Azure Blob features 
> customers are familiar with and expect, and gaining the benefits of future 
> Blob features as well{color}
>  {color:#212121}ABFS incorporates Hadoop Filesystem metrics to monitor the 
> file system throughput and operations. Ambari metrics are not currently 
> implemented for ABFS, but will be available soon.{color}
>  {color:#212121} {color}
>  *{color:#212121}Credits and history{color}*
>  Credit for this work goes to (hope I don't forget anyone): Shane Mainali, 
> {color:#212121}Thomas Marquardt, Zichen Sun, Georgi Chalakov, Esfandiar 
> Manii, Amit Singh, Dana Kaban, Da Zhou, Junhua Gu, Saher Ahwal, Saurabh Pant, 
> and James Baker. {color}
>  {color:#212121} {color}
>  *Test*
>  ABFS has gone through many test procedures including Hadoop file system 
> contract tests, unit testing, functional testing, and manual testing. All the 
> JUnit tests provided with the driver can run in either sequential or parallel 
> fashion in order to reduce the testing time.
>  {color:#212121}Besides unit tests, we have used ABFS as the default file 
> system in Azure HDInsight. Azure HDInsight will very soon offer ABFS as a 
> storage option. (HDFS is also used, but not as the default file system.) Various 
> customer and test workloads have been run against clusters with 
> such configurations for quite some time. Benchmarks such as Tera*, TPC-DS, 
> Spark Streaming and Spark SQL, and others have been run to do scenario, 
> performance, and functional testing. Third parties and customers have also 
> done various testing of ABFS.{color}
>  {color:#212121}The current version reflects the version of the code 
> tested and used in our production environment.{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15504) Upgrade Maven and Maven Wagon versions

2018-05-31 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497205#comment-16497205
 ] 

genericqa commented on HADOOP-15504:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 18m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
32m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 20m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 58s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}164m 44s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
54s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}354m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun |
|   | hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageSchema 
|
|   | 

[jira] [Updated] (HADOOP-15446) WASB: PageBlobInputStream.skip breaks HBASE replication

2018-05-31 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-15446:
---
Fix Version/s: 3.1.1

> WASB: PageBlobInputStream.skip breaks HBASE replication
> ---
>
> Key: HADOOP-15446
> URL: https://issues.apache.org/jira/browse/HADOOP-15446
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.2
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1
>
> Attachments: HADOOP-15446-001.patch, HADOOP-15446-002.patch, 
> HADOOP-15446-003.patch, HADOOP-15446-branch-2.001.patch
>
>
> Page Blobs are primarily used by HBASE.  HBASE replication, which apparently 
> has not been used with WASB until recently, performs non-sequential reads on 
> log files using PageBlobInputStream.  There are bugs in this stream 
> implementation which prevent skip and seek from working properly, and 
> eventually the stream state becomes corrupt and unusable.
> I believe this bug affects all releases of WASB/HADOOP.  It appears to be a 
> day-0 bug in PageBlobInputStream.  There were similar bugs opened in the past 
> (HADOOP-15042) but the issue was not properly fixed, and no test coverage was 
> added.
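
To make the failure mode concrete, below is a minimal sketch (not the patch's 
test; the path is hypothetical) of the non-sequential read pattern that exposes 
this class of bug: after a skip or a seek, the stream's reported position and the 
bytes it subsequently returns must stay consistent.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SkipSeekCheck {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Hypothetical page-blob-backed log file.
    Path log = new Path("/hbase/oldWALs/sample.log");

    try (FSDataInputStream in = fs.open(log)) {
      byte[] header = new byte[128];
      in.readFully(header);                 // sequential read of a header

      long skipped = in.skip(4096);         // non-sequential jump, as replication does
      long expected = header.length + skipped;
      if (in.getPos() != expected) {
        System.err.println("position drifted: " + in.getPos() + " != " + expected);
      }

      in.seek(expected + 512);              // explicit seek past the skipped region
      System.out.println("now at " + in.getPos());
    }
  }
}
{code}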



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15490) Multiple declaration of maven-enforcer-plugin found in pom.xml

2018-05-31 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496970#comment-16496970
 ] 

Hudson commented on HADOOP-15490:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14326 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14326/])
HADOOP-15490:Multiple declaration of maven-enforcer-plugin found in (bharat: 
rev a58acd9080ab609db197438d2e5ff9152c91898c)
* (edit) pom.xml


> Multiple declaration of maven-enforcer-plugin found in pom.xml
> --
>
> Key: HADOOP-15490
> URL: https://issues.apache.org/jira/browse/HADOOP-15490
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HADOOP-15490.000.patch
>
>
> Multiple declaration of {{maven-enforcer-plugin}} in {{pom.xml}} is causing 
> the below warning during build.
> {noformat}
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-project:pom:3.2.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-enforcer-plugin @ 
> org.apache.hadoop:hadoop-main:3.2.0-SNAPSHOT, 
> /Users/nvadivelu/codebase/apache/hadoop/pom.xml, line 431, column 15
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-main:pom:3.2.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-enforcer-plugin @ line 431, column 15
> [WARNING]
> [WARNING] It is highly recommended to fix these problems because they 
> threaten the stability of your build.
> [WARNING]
> [WARNING] For this reason, future Maven versions might no longer support 
> building such malformed projects.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15490) Multiple declaration of maven-enforcer-plugin found in pom.xml

2018-05-31 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496938#comment-16496938
 ] 

Bharat Viswanadham commented on HADOOP-15490:
-

Thank You [~nandakumar131] for the contribution. I have committed this to trunk.

 

> Multiple declaration of maven-enforcer-plugin found in pom.xml
> --
>
> Key: HADOOP-15490
> URL: https://issues.apache.org/jira/browse/HADOOP-15490
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HADOOP-15490.000.patch
>
>
> Multiple declaration of {{maven-enforcer-plugin}} in {{pom.xml}} is causing 
> the below warning during build.
> {noformat}
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-project:pom:3.2.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-enforcer-plugin @ 
> org.apache.hadoop:hadoop-main:3.2.0-SNAPSHOT, 
> /Users/nvadivelu/codebase/apache/hadoop/pom.xml, line 431, column 15
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-main:pom:3.2.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-enforcer-plugin @ line 431, column 15
> [WARNING]
> [WARNING] It is highly recommended to fix these problems because they 
> threaten the stability of your build.
> [WARNING]
> [WARNING] For this reason, future Maven versions might no longer support 
> building such malformed projects.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15490) Multiple declaration of maven-enforcer-plugin found in pom.xml

2018-05-31 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-15490:

   Resolution: Fixed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

> Multiple declaration of maven-enforcer-plugin found in pom.xml
> --
>
> Key: HADOOP-15490
> URL: https://issues.apache.org/jira/browse/HADOOP-15490
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HADOOP-15490.000.patch
>
>
> Multiple declaration of {{maven-enforcer-plugin}} in {{pom.xml}} is causing 
> the below warning during build.
> {noformat}
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-project:pom:3.2.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-enforcer-plugin @ 
> org.apache.hadoop:hadoop-main:3.2.0-SNAPSHOT, 
> /Users/nvadivelu/codebase/apache/hadoop/pom.xml, line 431, column 15
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-main:pom:3.2.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-enforcer-plugin @ line 431, column 15
> [WARNING]
> [WARNING] It is highly recommended to fix these problems because they 
> threaten the stability of your build.
> [WARNING]
> [WARNING] For this reason, future Maven versions might no longer support 
> building such malformed projects.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15490) Multiple declaration of maven-enforcer-plugin found in pom.xml

2018-05-31 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496922#comment-16496922
 ] 

Bharat Viswanadham edited comment on HADOOP-15490 at 5/31/18 6:01 PM:
--

Hi [~nandakumar131]

Thank You for reporting and providing the patch.

I built the code using the patch, and the warnings no longer appear during 
the build.

LGTM +1.

The ASF license warning is not related to this patch.

 

Will commit this shortly.

 


was (Author: bharatviswa):
Hi [~nandakumar131]

Thank You for reporting and providing the patch.

I built the code using the patch, and the warnings no longer appear during 
the build.

LGTM +1.

 

Will commit this shortly.

 

> Multiple declaration of maven-enforcer-plugin found in pom.xml
> --
>
> Key: HADOOP-15490
> URL: https://issues.apache.org/jira/browse/HADOOP-15490
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Minor
> Attachments: HADOOP-15490.000.patch
>
>
> Multiple declaration of {{maven-enforcer-plugin}} in {{pom.xml}} is causing 
> the below warning during build.
> {noformat}
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-project:pom:3.2.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-enforcer-plugin @ 
> org.apache.hadoop:hadoop-main:3.2.0-SNAPSHOT, 
> /Users/nvadivelu/codebase/apache/hadoop/pom.xml, line 431, column 15
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-main:pom:3.2.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-enforcer-plugin @ line 431, column 15
> [WARNING]
> [WARNING] It is highly recommended to fix these problems because they 
> threaten the stability of your build.
> [WARNING]
> [WARNING] For this reason, future Maven versions might no longer support 
> building such malformed projects.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15490) Multiple declaration of maven-enforcer-plugin found in pom.xml

2018-05-31 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496922#comment-16496922
 ] 

Bharat Viswanadham commented on HADOOP-15490:
-

Hi [~nandakumar131]

Thank You for reporting and providing the patch.

I built the code using the patch, and the warnings no longer appear during 
the build.

LGTM +1.

 

Will commit this shortly.

 

> Multiple declaration of maven-enforcer-plugin found in pom.xml
> --
>
> Key: HADOOP-15490
> URL: https://issues.apache.org/jira/browse/HADOOP-15490
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Minor
> Attachments: HADOOP-15490.000.patch
>
>
> Multiple declaration of {{maven-enforcer-plugin}} in {{pom.xml}} is causing 
> the below warning during build.
> {noformat}
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-project:pom:3.2.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-enforcer-plugin @ 
> org.apache.hadoop:hadoop-main:3.2.0-SNAPSHOT, 
> /Users/nvadivelu/codebase/apache/hadoop/pom.xml, line 431, column 15
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-main:pom:3.2.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-enforcer-plugin @ line 431, column 15
> [WARNING]
> [WARNING] It is highly recommended to fix these problems because they 
> threaten the stability of your build.
> [WARNING]
> [WARNING] For this reason, future Maven versions might no longer support 
> building such malformed projects.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15471) Hdfs recursive listing operation is very slow

2018-05-31 Thread Ajay Sachdev (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496792#comment-16496792
 ] 

Ajay Sachdev commented on HADOOP-15471:
---

Hello Mukul/Yiqun/Rishabh,

Please let me know if you have any comments on the latest patch.

Appreciate your help!

Ajay

> Hdfs recursive listing operation is very slow
> -
>
> Key: HADOOP-15471
> URL: https://issues.apache.org/jira/browse/HADOOP-15471
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.1
> Environment: HCFS file system where HDP 2.6.1 is connected to ECS 
> (Object Store).
>Reporter: Ajay Sachdev
>Assignee: Ajay Sachdev
>Priority: Major
> Fix For: 2.7.1
>
> Attachments: HDFS-13398.001.patch, HDFS-13398.002.patch, 
> HDFS-13398.003.patch, parallelfsPatch
>
>
> The hdfs dfs -ls -R command is sequential in nature and is very slow for an 
> HCFS system. We have seen around 6 minutes for a 40K directory/file structure.
> The proposal is to use a multithreading approach to speed up the recursive list, 
> du, and count operations.
> We have tried a ForkJoinPool implementation to improve the performance of the 
> recursive listing operation.
> [https://github.com/jasoncwik/hadoop-release/tree/parallel-fs-cli]
> commit id : 
> 82387c8cd76c2e2761bd7f651122f83d45ae8876
> Another implementation uses the Java ExecutorService to run the listing 
> operation in multiple threads in parallel. This has significantly reduced the 
> time from 6 minutes to 40 seconds.
>  
>  
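
As an illustration of the approach described above (a sketch only, not the 
attached patch), a recursive count can be parallelized by forking one task per 
subdirectory; the same shape applies to list and du:

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ParallelCount {

  /** Counts all entries under a directory, forking one subtask per subdirectory. */
  static class CountTask extends RecursiveTask<Long> {
    private final FileSystem fs;
    private final Path dir;

    CountTask(FileSystem fs, Path dir) {
      this.fs = fs;
      this.dir = dir;
    }

    @Override
    protected Long compute() {
      long count = 0;
      List<CountTask> subtasks = new ArrayList<>();
      try {
        for (FileStatus status : fs.listStatus(dir)) {
          count++;
          if (status.isDirectory()) {
            CountTask sub = new CountTask(fs, status.getPath());
            sub.fork();                     // descend into subdirectories in parallel
            subtasks.add(sub);
          }
        }
      } catch (IOException e) {
        throw new RuntimeException("listStatus failed for " + dir, e);
      }
      for (CountTask sub : subtasks) {
        count += sub.join();
      }
      return count;
    }
  }

  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path root = new Path(args.length > 0 ? args[0] : "/");
    long total = new ForkJoinPool(16).invoke(new CountTask(fs, root));
    System.out.println("entries under " + root + ": " + total);
  }
}
{code}

A fixed-size ExecutorService with an explicit pending-task counter works just as 
well; ForkJoinPool is shown here only because work-stealing absorbs the uneven 
fan-out of deep directory trees without extra bookkeeping.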



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15504) Upgrade Maven and Maven Wagon versions

2018-05-31 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-15504:
---
Fix Version/s: 3.2.0
  Component/s: build

> Upgrade Maven and Maven Wagon versions
> --
>
> Key: HADOOP-15504
> URL: https://issues.apache.org/jira/browse/HADOOP-15504
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15504.001.patch
>
>
> I'm not even sure that Hadoop's combination of the relevant dependencies is 
> vulnerable (even if they are, this is a relatively minor vulnerability), but 
> this is at least showing up as an issue in automated vulnerability scans. 
> Details can be found here [https://maven.apache.org/security.html] 
> (CVE-2013-0253, CVE-2012-6153). Essentially the combination of maven 3.0.4 
> (we use 3.0, and I guess that maps to 3.0.4?) and older versions of wagon 
> plugin don't use SSL properly (note that we neither use the WebDAV provider 
> nor a 2.x version of the SSH plugin, which is why I suspect that the 
> vulnerability does not affect Hadoop).
> I know some dependencies can be especially troublesome to upgrade - I suspect 
> that Maven's critical role in our build might make this risky - so if anyone 
> has ideas for how to more completely test this than a full build, please 
> chime in.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15504) Upgrade Maven and Maven Wagon versions

2018-05-31 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-15504:
---
Status: Patch Available  (was: Open)

> Upgrade Maven and Maven Wagon versions
> --
>
> Key: HADOOP-15504
> URL: https://issues.apache.org/jira/browse/HADOOP-15504
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Attachments: HADOOP-15504.001.patch
>
>
> I'm not even sure that Hadoop's combination of the relevant dependencies is 
> vulnerable (even if they are, this is a relatively minor vulnerability), but 
> this is at least showing up as an issue in automated vulnerability scans. 
> Details can be found here [https://maven.apache.org/security.html] 
> (CVE-2013-0253, CVE-2012-6153). Essentially the combination of maven 3.0.4 
> (we use 3.0, and I guess that maps to 3.0.4?) and older versions of wagon 
> plugin don't use SSL properly (note that we neither use the WebDAV provider 
> nor a 2.x version of the SSH plugin, which is why I suspect that the 
> vulnerability does not affect Hadoop).
> I know some dependencies can be especially troublesome to upgrade - I suspect 
> that Maven's critical role in our build might make this risky - so if anyone 
> has ideas for how to more completely test this than a full build, please 
> chime in.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15504) Upgrade Maven and Maven Wagon versions

2018-05-31 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496702#comment-16496702
 ] 

Sean Mackrory commented on HADOOP-15504:


Attaching a patch that does the minimum updates required by those 2 CVEs. We 
could also update to the latest of each - I tried that too, and the POM changes 
required (specifically the exclusions) are identical. But there's a risk of 
more unknown changes coming in and possibly requiring an otherwise unnecessary 
update of the Maven version required to build Hadoop.

 

I've run 'mvn site:site', 'mvn install -DskipTests -DskipShade', and 'mvn 
install -DskipShade' and there seem to be no problems. Happy to test more stuff 
if there's something this impacts that I'm missing.

> Upgrade Maven and Maven Wagon versions
> --
>
> Key: HADOOP-15504
> URL: https://issues.apache.org/jira/browse/HADOOP-15504
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Attachments: HADOOP-15504.001.patch
>
>
> I'm not even sure that Hadoop's combination of the relevant dependencies is 
> vulnerable (even if they are, this is a relatively minor vulnerability), but 
> this is at least showing up as an issue in automated vulnerability scans. 
> Details can be found here [https://maven.apache.org/security.html] 
> (CVE-2013-0253, CVE-2012-6153). Essentially the combination of maven 3.0.4 
> (we use 3.0, and I guess that maps to 3.0.4?) and older versions of wagon 
> plugin don't use SSL properly (note that we neither use the WebDAV provider 
> nor a 2.x version of the SSH plugin, which is why I suspect that the 
> vulnerability does not affect Hadoop).
> I know some dependencies can be especially troublesome to upgrade - I suspect 
> that Maven's critical role in our build might make this risky - so if anyone 
> has ideas for how to more completely test this than a full build, please 
> chime in.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15504) Upgrade Maven and Maven Wagon versions

2018-05-31 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-15504:
---
Attachment: HADOOP-15504.001.patch

> Upgrade Maven and Maven Wagon versions
> --
>
> Key: HADOOP-15504
> URL: https://issues.apache.org/jira/browse/HADOOP-15504
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Attachments: HADOOP-15504.001.patch
>
>
> I'm not even sure that Hadoop's combination of the relevant dependencies is 
> vulnerable (even if they are, this is a relatively minor vulnerability), but 
> this is at least showing up as an issue in automated vulnerability scans. 
> Details can be found here [https://maven.apache.org/security.html] 
> (CVE-2013-0253, CVE-2012-6153). Essentially the combination of maven 3.0.4 
> (we use 3.0, and I guess that maps to 3.0.4?) and older versions of wagon 
> plugin don't use SSL properly (note that we neither use the WebDAV provider 
> nor a 2.x version of the SSH plugin, which is why I suspect that the 
> vulnerability does not affect Hadoop).
> I know some dependencies can be especially troublesome to upgrade - I suspect 
> that Maven's critical role in our build might make this risky - so if anyone 
> has ideas for how to more completely test this than a full build, please 
> chime in.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15504) Upgrade Maven and Maven Wagon versions

2018-05-31 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496659#comment-16496659
 ] 

Sean Mackrory commented on HADOOP-15504:


[~ajisakaa] I'm only referring to the versions of things like maven-core, etc. that 
we depend on, not the Maven version used to run the build. To get past the 
security issues involved, we need to update to 3.0.5 or higher. I would assume 
that, unless there's a bug, staying at or below 3.3 has no impact on which 
versions you can use to run the build.

> Upgrade Maven and Maven Wagon versions
> --
>
> Key: HADOOP-15504
> URL: https://issues.apache.org/jira/browse/HADOOP-15504
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
>
> I'm not even sure that Hadoop's combination of the relevant dependencies is 
> vulnerable (even if they are, this is a relatively minor vulnerability), but 
> this is at least showing up as an issue in automated vulnerability scans. 
> Details can be found here [https://maven.apache.org/security.html] 
> (CVE-2013-0253, CVE-2012-6153). Essentially the combination of maven 3.0.4 
> (we use 3.0, and I guess that maps to 3.0.4?) and older versions of wagon 
> plugin don't use SSL properly (note that we neither use the WebDAV provider 
> nor a 2.x version of the SSH plugin, which is why I suspect that the 
> vulnerability does not affect Hadoop).
> I know some dependencies can be especially troublesome to upgrade - I suspect 
> that Maven's critical role in our build might make this risky - so if anyone 
> has ideas for how to more completely test this than a full build, please 
> chime in.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15307) Improve NFS error handling: Unsupported verifier flavorAUTH_SYS

2018-05-31 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496658#comment-16496658
 ] 

Wei-Chiu Chuang commented on HADOOP-15307:
--

I'm reviewing the patch. IIUC according to 
[RFC-1831|https://tools.ietf.org/html/rfc1831] APPENDIX A: SYSTEM 
AUTHENTICATION, AUTH_SYS should also carry the following data structure:

{noformat}
  struct authsys_parms {
 unsigned int stamp;
 string machinename<255>;
 unsigned int uid;
 unsigned int gid;
 unsigned int gids<16>;
  };
{noformat}
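
For reference, a minimal sketch of how those fields are laid out on the wire 
under XDR rules (4-byte big-endian integers, the machine name length-prefixed and 
padded to a 4-byte boundary, the gids array count-prefixed). It uses a plain 
ByteBuffer rather than Hadoop's own RPC helpers and is illustrative only:

{code:java}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

/** Decodes an XDR-encoded authsys_parms credential body (RFC 1831, Appendix A). */
public class AuthSysParms {
  int stamp;
  String machineName;
  int uid;
  int gid;
  int[] gids;

  static AuthSysParms read(ByteBuffer xdr) {   // ByteBuffer is big-endian by default
    AuthSysParms p = new AuthSysParms();
    p.stamp = xdr.getInt();

    int nameLen = xdr.getInt();                // string<255>: length, bytes, pad to 4
    byte[] name = new byte[nameLen];
    xdr.get(name);
    xdr.position(xdr.position() + ((4 - nameLen % 4) % 4));
    p.machineName = new String(name, StandardCharsets.US_ASCII);

    p.uid = xdr.getInt();
    p.gid = xdr.getInt();

    int nGids = xdr.getInt();                  // gids<16>: count followed by that many ints
    p.gids = new int[nGids];
    for (int i = 0; i < nGids; i++) {
      p.gids[i] = xdr.getInt();
    }
    return p;
  }
}
{code}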

> Improve NFS error handling: Unsupported verifier flavorAUTH_SYS
> ---
>
> Key: HADOOP-15307
> URL: https://issues.apache.org/jira/browse/HADOOP-15307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
> Environment: CentOS 7.4, CDH5.13.1, Kerberized Hadoop cluster
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
>  Labels: newbie
> Attachments: HADOOP-15307.001.patch, HADOOP-15307.002.patch
>
>
> When the NFS gateway starts, if the portmapper request is denied by rpcbind 
> for any reason (in our case, /etc/hosts.allow did not include localhost), the 
> NFS gateway fails with the following obscure exception:
> {noformat}
> 2018-03-05 12:49:31,976 INFO org.apache.hadoop.oncrpc.SimpleUdpServer: 
> Started listening to UDP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,988 INFO org.apache.hadoop.oncrpc.SimpleTcpServer: 
> Started listening to TCP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,993 TRACE org.apache.hadoop.oncrpc.RpcCall: 
> Xid:692394656, messageType:RPC_CALL, rpcVersion:2, program:10, version:2, 
> procedure:1, credential:(AuthFlavor:AUTH_NONE), 
> verifier:(AuthFlavor:AUTH_NONE)
> 2018-03-05 12:49:31,998 FATAL org.apache.hadoop.mount.MountdBase: Failed to 
> start the server. Cause:
> java.lang.UnsupportedOperationException: Unsupported verifier flavorAUTH_SYS
> at 
> org.apache.hadoop.oncrpc.security.Verifier.readFlavorAndVerifier(Verifier.java:45)
> at org.apache.hadoop.oncrpc.RpcDeniedReply.read(RpcDeniedReply.java:50)
> at org.apache.hadoop.oncrpc.RpcReply.read(RpcReply.java:67)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:71)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:101)
> at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:83)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:60)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> 2018-03-05 12:49:32,007 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1{noformat}
>  Reading the code comment for class Verifier, I think this bug has existed since 
> its inception:
> {code:java}
> /**
>  * Base class for verifier. Currently our authentication only supports 3 types
>  * of auth flavors: {@link RpcAuthInfo.AuthFlavor#AUTH_NONE}, {@link 
> RpcAuthInfo.AuthFlavor#AUTH_SYS},
>  * and {@link RpcAuthInfo.AuthFlavor#RPCSEC_GSS}. Thus for verifier we only 
> need to handle
>  * AUTH_NONE and RPCSEC_GSS
>  */
> public abstract class Verifier extends RpcAuthInfo {{code}
> The verifier should also handle AUTH_SYS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15504) Upgrade Maven and Maven Wagon versions

2018-05-31 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496338#comment-16496338
 ] 

Akira Ajisaka commented on HADOOP-15504:


Now Apache Maven 3.3+ is required to build Apache Hadoop. Do you want to update 
the minimum version to 3.4 or 3.5?

> Upgrade Maven and Maven Wagon versions
> --
>
> Key: HADOOP-15504
> URL: https://issues.apache.org/jira/browse/HADOOP-15504
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
>
> I'm not even sure that Hadoop's combination of the relevant dependencies is 
> vulnerable (even if they are, this is a relatively minor vulnerability), but 
> this is at least showing up as an issue in automated vulnerability scans. 
> Details can be found here [https://maven.apache.org/security.html] 
> (CVE-2013-0253, CVE-2012-6153). Essentially the combination of maven 3.0.4 
> (we use 3.0, and I guess that maps to 3.0.4?) and older versions of wagon 
> plugin don't use SSL properly (note that we neither use the WebDAV provider 
> nor a 2.x version of the SSH plugin, which is why I suspect that the 
> vulnerability does not affect Hadoop).
> I know some dependencies can be especially troublesome to upgrade - I suspect 
> that Maven's critical role in our build might make this risky - so if anyone 
> has ideas for how to more completely test this than a full build, please 
> chime in.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org