[GitHub] [hadoop] hadoop-yetus commented on issue #739: HDDS-1432. Ozone client list command truncates response without any indication

2019-04-13 Thread GitBox
hadoop-yetus commented on issue #739: HDDS-1432. Ozone client list command 
truncates response without any indication
URL: https://github.com/apache/hadoop/pull/739#issuecomment-482921537
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 39 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1041 | trunk passed |
   | +1 | compile | 115 | trunk passed |
   | +1 | checkstyle | 51 | trunk passed |
   | +1 | mvnsite | 69 | trunk passed |
   | +1 | shadedclient | 747 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 43 | trunk passed |
   | +1 | javadoc | 48 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for patch |
   | +1 | mvninstall | 68 | the patch passed |
   | +1 | compile | 105 | the patch passed |
   | +1 | javac | 105 | the patch passed |
   | +1 | checkstyle | 25 | the patch passed |
   | +1 | mvnsite | 56 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 751 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 49 | the patch passed |
   | +1 | javadoc | 40 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 43 | ozone-manager in the patch passed. |
   | -1 | unit | 1426 | integration-test in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 4836 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-739/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/739 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 5d11738c06c5 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b2cdf80 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-739/1/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-739/1/testReport/ |
   | Max. process+thread count | 4259 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-739/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #738: [YARN-9479] Use Objects.equals(String, String) to avoid possible NullPointerException

2019-04-13 Thread GitBox
hadoop-yetus commented on issue #738: [YARN-9479] Use 
Objects.equals(String,String) to avoid possible NullPointerException
URL: https://github.com/apache/hadoop/pull/738#issuecomment-482919322
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1050 | trunk passed |
   | +1 | compile | 43 | trunk passed |
   | +1 | checkstyle | 40 | trunk passed |
   | +1 | mvnsite | 47 | trunk passed |
   | +1 | shadedclient | 676 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | findbugs | 72 | 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 2 extant Findbugs warnings. |
   | +1 | javadoc | 27 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 40 | the patch passed |
   | +1 | compile | 39 | the patch passed |
   | +1 | javac | 39 | the patch passed |
   | -0 | checkstyle | 31 | 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 14 unchanged - 0 fixed = 16 total (was 14) |
   | +1 | mvnsite | 43 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 673 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 79 | the patch passed |
   | +1 | javadoc | 26 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 4663 | hadoop-yarn-server-resourcemanager in the patch failed. 
|
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 7633 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestOpportunisticContainerAllocatorAMService 
|
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-738/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/738 |
   | JIRA Issue | YARN-9479 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 90a22cc098c3 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b2cdf80 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-738/2/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-738/2/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-738/2/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-738/2/testReport/ |
   | Max. process+thread count | 899 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-738/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] swagle opened a new pull request #739: HDDS-1432. Ozone client list command truncates response without any indication

2019-04-13 Thread GitBox
swagle opened a new pull request #739: HDDS-1432. Ozone client list command 
truncates response without any indication
URL: https://github.com/apache/hadoop/pull/739
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #738: [YARN-9479] Use Objects.equals(String, String) to avoid possible NullPointerException

2019-04-13 Thread GitBox
hadoop-yetus commented on issue #738: [YARN-9479] Use 
Objects.equals(String,String) to avoid possible NullPointerException
URL: https://github.com/apache/hadoop/pull/738#issuecomment-482915009
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1203 | trunk passed |
   | +1 | compile | 47 | trunk passed |
   | +1 | checkstyle | 40 | trunk passed |
   | +1 | mvnsite | 51 | trunk passed |
   | +1 | shadedclient | 815 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | findbugs | 75 | 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 2 extant Findbugs warnings. |
   | +1 | javadoc | 30 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 25 | hadoop-yarn-server-resourcemanager in the patch 
failed. |
   | -1 | compile | 25 | hadoop-yarn-server-resourcemanager in the patch 
failed. |
   | -1 | javac | 25 | hadoop-yarn-server-resourcemanager in the patch failed. |
   | -0 | checkstyle | 35 | 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 14 unchanged - 0 fixed = 16 total (was 14) |
   | -1 | mvnsite | 27 | hadoop-yarn-server-resourcemanager in the patch 
failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | -1 | shadedclient | 229 | patch has errors when building and testing our 
client artifacts. |
   | -1 | findbugs | 25 | hadoop-yarn-server-resourcemanager in the patch 
failed. |
   | +1 | javadoc | 27 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 27 | hadoop-yarn-server-resourcemanager in the patch failed. |
   | +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
   | | | 2798 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-738/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/738 |
   | JIRA Issue | YARN-9479 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 7dc3bda83038 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b2cdf80 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-738/1/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-738/1/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-738/1/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-738/1/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-738/1/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-738/1/artifact/out/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-738/1/artifact/out/patch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-738/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-738/1/testReport/ |
   | Max. process+thread count | 339 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-

[GitHub] [hadoop] bd2019us opened a new pull request #738: [YARN-9479] Use Objects.equals(String, String) to avoid possible NullPointerException

2019-04-13 Thread GitBox
bd2019us opened a new pull request #738: [YARN-9479] Use 
Objects.equals(String,String) to avoid possible NullPointerException
URL: https://github.com/apache/hadoop/pull/738
 
 
   Hello,
   I found that the String "queueName" carries a potential risk of 
NullPointerException: it is used immediately after initialization with no 
null check. One recommended API is Objects.equals(String, String), which 
avoids this exception.
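
A minimal, self-contained sketch of the null-safe pattern the comment recommends (the variable names are illustrative, not taken from the patch):

```java
import java.util.Objects;

public class NullSafeCompare {
    public static void main(String[] args) {
        String queueName = null;   // e.g. initialization left it null
        String target = "default";

        // queueName.equals(target) would throw NullPointerException here;
        // Objects.equals tolerates null on either side.
        System.out.println(Objects.equals(queueName, target)); // false
        System.out.println(Objects.equals(null, null));        // true
        System.out.println(Objects.equals("default", target)); // true
    }
}
```

Objects.equals(a, b) returns true when both arguments are null, false when exactly one is null, and otherwise delegates to a.equals(b), which is why the null check can be dropped at the call site.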





[jira] [Commented] (HADOOP-16249) Add Flink to CallerContext LimitedPrivate scope

2019-04-13 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16817138#comment-16817138
 ] 

Steve Loughran commented on HADOOP-16249:
-----------------------------------------

Given how broad that use is, it clearly is needed. Making it public+evolving 
makes sense.

> Add Flink to CallerContext LimitedPrivate scope
> -----------------------------------------------
>
> Key: HADOOP-16249
> URL: https://issues.apache.org/jira/browse/HADOOP-16249
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Kenneth Yang
>Priority: Minor
> Attachments: HADOOP-16249.000.patch
>
>
> A lot of Flink applications run on Hadoop. Flink will invoke Hadoop caller 
> context APIs to set up its caller contexts in HDFS/YARN, so Hadoop should add 
> Flink as one of the users in the LimitedPrivate scope.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2019-04-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16817085#comment-16817085
 ] 

Hadoop QA commented on HADOOP-15124:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
14s{color} | {color:green} root generated 0 new + 1493 unchanged - 3 fixed = 
1493 total (was 1496) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
1s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
53s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}215m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-15124 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12965814/HADOOP-15124.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ba1e60b05271 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 
18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b2cdf80 |
| maven | version: Apache Maven 3.3.9 |
| 

[jira] [Commented] (HADOOP-16249) Add Flink to CallerContext LimitedPrivate scope

2019-04-13 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16817069#comment-16817069
 ] 

Wei-Chiu Chuang commented on HADOOP-16249:
------------------------------------------

I am not familiar with Flink or with CallerContext, but if such a versatile 
set of downstream applications relies on this class, we should probably open 
it up and make it a public class.

> Add Flink to CallerContext LimitedPrivate scope
> -----------------------------------------------
>
> Key: HADOOP-16249
> URL: https://issues.apache.org/jira/browse/HADOOP-16249
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Kenneth Yang
>Priority: Minor
> Attachments: HADOOP-16249.000.patch
>
>
> A lot of Flink applications run on Hadoop. Flink will invoke Hadoop caller 
> context APIs to set up its caller contexts in HDFS/YARN, so Hadoop should add 
> Flink as one of the users in the LimitedPrivate scope.






[jira] [Commented] (HADOOP-16246) Unbounded thread pool maximum pool size in S3AFileSystem TransferManager

2019-04-13 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16817051#comment-16817051
 ] 

Steve Loughran commented on HADOOP-16246:
-----------------------------------------

BTW, w.r.t. thread size, that xfer manager doesn't just do bandwidth-consuming 
uploads; it also issues COPY requests, where having multiple threads copying 
individual parts of a larger file is a good thing: it consumes no system 
resources other than HTTPS connections.

[~gregakinman]: what operation was your app trying to do when it failed?

> Unbounded thread pool maximum pool size in S3AFileSystem TransferManager
> ------------------------------------------------------------------------
>
> Key: HADOOP-16246
> URL: https://issues.apache.org/jira/browse/HADOOP-16246
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Greg Kinman
>Priority: Major
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> I have something running in production that is running up against {{ulimit}} 
> trying to create {{s3a-transfer-unbounded}} threads.
> Relevant background: https://issues.apache.org/jira/browse/HADOOP-13826.
> Before that change, the thread pool used in the {{TransferManager}} had both 
> a reasonably small maximum pool size and work queue capacity.
> After that change, the thread pool has both a maximum pool size and work 
> queue capacity of {{Integer.MAX_VALUE}}.
> This seems like a pretty bad idea, because now we have, practically speaking, 
> no bound on the number of threads that might get created. I understand the 
> change was made in response to experiencing deadlocks and at the warning of 
> the documentation, which I will repeat here:
> {quote}It is not recommended to use a single threaded executor or a thread 
> pool with a bounded work queue as control tasks may submit subtasks that 
> can't complete until all sub tasks complete. Using an incorrectly configured 
> thread pool may cause a deadlock (I.E. the work queue is filled with control 
> tasks that can't finish until subtasks complete but subtasks can't execute 
> because the queue is filled).
> {quote}
> The documentation only warns against having a bounded _work queue_, not 
> against having a bounded _maximum pool size_. And this seems fine, as having 
> an unbounded work queue sounds ok. Having an unbounded maximum pool size, 
> however, does not.
> I will also note that this constructor is now deprecated and suggests using 
> {{TransferManagerBuilder}} instead, which by default creates a fixed thread 
> pool of size 10: 
> [https://github.com/aws/aws-sdk-java/blob/1.11.534/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/transfer/internal/TransferManagerUtils.java#L59].
> I suggest we make a small change here and keep the maximum pool size at 
> {{maxThreads}}, which defaults to 10, while keeping the work queue as is 
> (unbounded).
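
The change the reporter proposes — keep the work queue unbounded but cap the maximum pool size — can be sketched with a plain ThreadPoolExecutor. This is an illustration of the configuration only, not the actual S3AFileSystem code; the maxThreads value of 10 matches the default mentioned above:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolUnboundedQueue {
    public static void main(String[] args) throws InterruptedException {
        int maxThreads = 10;

        // core == max == maxThreads bounds thread creation; the default
        // LinkedBlockingQueue capacity is Integer.MAX_VALUE, so excess
        // tasks wait in the queue instead of spawning new threads.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            maxThreads, maxThreads, 60L, TimeUnit.SECONDS,
            new LinkedBlockingQueue<Runnable>());

        for (int i = 0; i < 1000; i++) {
            pool.submit(() -> { /* simulated transfer part */ });
        }

        // No matter how many tasks were submitted, at most maxThreads
        // threads ever existed at once.
        System.out.println(pool.getLargestPoolSize() <= maxThreads);

        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
    }
}
```

This is exactly the combination the AWS documentation quoted above does not warn against: the queue stays unbounded (so control tasks can always enqueue subtasks without deadlock), while the thread count stops growing at maxThreads.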






[jira] [Commented] (HADOOP-16246) Unbounded thread pool maximum pool size in S3AFileSystem TransferManager

2019-04-13 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16817049#comment-16817049
 ] 

Steve Loughran commented on HADOOP-16246:
-----------------------------------------

It was the AWS transfer manager which created problems here. We'd have to 
review the lib to see if it is now safe to use with a bounded pool.

I could imagine making it possible to set an upper limit on that 
no-longer-unbounded pool, just to catch thread overload. Then you could turn it 
on to see if deadlocks were still surfacing. Though, as usual, one more config 
option == one more way to get the system misconfigured.

> Unbounded thread pool maximum pool size in S3AFileSystem TransferManager
> ------------------------------------------------------------------------
>
> Key: HADOOP-16246
> URL: https://issues.apache.org/jira/browse/HADOOP-16246
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Greg Kinman
>Priority: Major
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> I have something running in production that is running up against {{ulimit}} 
> trying to create {{s3a-transfer-unbounded}} threads.
> Relevant background: https://issues.apache.org/jira/browse/HADOOP-13826.
> Before that change, the thread pool used in the {{TransferManager}} had both 
> a reasonably small maximum pool size and work queue capacity.
> After that change, the thread pool has both a maximum pool size and work 
> queue capacity of {{Integer.MAX_VALUE}}.
> This seems like a pretty bad idea, because now we have, practically speaking, 
> no bound on the number of threads that might get created. I understand the 
> change was made in response to experiencing deadlocks and at the warning of 
> the documentation, which I will repeat here:
> {quote}It is not recommended to use a single threaded executor or a thread 
> pool with a bounded work queue as control tasks may submit subtasks that 
> can't complete until all sub tasks complete. Using an incorrectly configured 
> thread pool may cause a deadlock (I.E. the work queue is filled with control 
> tasks that can't finish until subtasks complete but subtasks can't execute 
> because the queue is filled).
> {quote}
> The documentation only warns against having a bounded _work queue_, not 
> against having a bounded _maximum pool size_. And this seems fine, as having 
> an unbounded work queue sounds ok. Having an unbounded maximum pool size, 
> however, does not.
> I will also note that this constructor is now deprecated and suggests using 
> {{TransferManagerBuilder}} instead, which by default creates a fixed thread 
> pool of size 10: 
> [https://github.com/aws/aws-sdk-java/blob/1.11.534/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/transfer/internal/TransferManagerUtils.java#L59].
> I suggest we make a small change here and keep the maximum pool size at 
> {{maxThreads}}, which defaults to 10, while keeping the work queue as is 
> (unbounded).






[jira] [Commented] (HADOOP-16252) Use configurable dynamo table name prefix in S3Guard tests

2019-04-13 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16817045#comment-16817045
 ] 

Steve Loughran commented on HADOOP-16252:
-----------------------------------------

Makes sense.
Are the tests still actually creating tables? I think we might want to turn 
that option off, at least until the creation of on-demand tables is supported. 
I fear test runs running up bills if the test suite is interrupted.

> Use configurable dynamo table name prefix in S3Guard tests
> ----------------------------------------------------------
>
> Key: HADOOP-16252
> URL: https://issues.apache.org/jira/browse/HADOOP-16252
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Ben Roling
>Priority: Major
>
> Table names are hardcoded into tests for S3Guard with DynamoDB.  This makes 
> it awkward to set up a least-privilege type AWS IAM user or role that can 
> successfully execute the full test suite.  You either have to know all the 
> specific hardcoded table names and give the user Dynamo read/write access to 
> those by name or just give blanket read/write access to all Dynamo tables in 
> the account.
> I propose the tests use a configuration property to specify a prefix for the 
> table names used.  Then the full test suite can be run by a user that is 
> given read/write access to all tables with names starting with the configured 
> prefix.
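
With a configurable prefix, the least-privilege grant collapses to a single wildcard. A hedged sketch of what such an IAM policy could look like (the `s3guard-test-` prefix and the region/account wildcards are invented for illustration, not taken from the proposal):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "S3GuardTestTablesByPrefix",
      "Effect": "Allow",
      "Action": ["dynamodb:*"],
      "Resource": "arn:aws:dynamodb:*:*:table/s3guard-test-*"
    }
  ]
}
```

Any table the test suite creates under the configured prefix is covered, without enumerating hardcoded names or granting blanket access to every table in the account.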






[GitHub] [hadoop] bgaborg commented on issue #606: HADOOP-16190. S3A copyFile operation to include source versionID or etag in the copy request

2019-04-13 Thread GitBox
bgaborg commented on issue #606: HADOOP-16190. S3A copyFile operation to 
include source versionID or etag in the copy request
URL: https://github.com/apache/hadoop/pull/606#issuecomment-482850170
 
 
   I don't see any integration tests added to test this feature, just a unit 
test for the Invoker. What is the reason for this?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[GitHub] [hadoop] bgaborg commented on a change in pull request #606: HADOOP-16190. S3A copyFile operation to include source versionID or etag in the copy request

2019-04-13 Thread GitBox
bgaborg commented on a change in pull request #606: HADOOP-16190. S3A copyFile 
operation to include source versionID or etag in the copy request
URL: https://github.com/apache/hadoop/pull/606#discussion_r275126182
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -2878,27 +2880,41 @@ private void copyFile(String srcKey, String dstKey, 
long size)
   }
 };
 
-once("copyFile(" + srcKey + ", " + dstKey + ")", srcKey,
-() -> {
-  ObjectMetadata srcom = getObjectMetadata(srcKey);
-  ObjectMetadata dstom = cloneObjectMetadata(srcom);
-  setOptionalObjectMetadata(dstom);
-  CopyObjectRequest copyObjectRequest =
-  new CopyObjectRequest(bucket, srcKey, bucket, dstKey);
-  setOptionalCopyObjectRequestParameters(copyObjectRequest);
-  copyObjectRequest.setCannedAccessControlList(cannedACL);
-  copyObjectRequest.setNewObjectMetadata(dstom);
-  Copy copy = transfers.copy(copyObjectRequest);
-  copy.addProgressListener(progressListener);
-  try {
-copy.waitForCopyResult();
-incrementWriteOperations();
-instrumentation.filesCopied(1, size);
-  } catch (InterruptedException e) {
-throw new InterruptedIOException("Interrupted copying " + srcKey
-+ " to " + dstKey + ", cancelling");
-  }
-});
+try {
+  return once("copyFile(" + srcKey + ", " + dstKey + ")", srcKey,
+  () -> {
+ObjectMetadata srcom = getObjectMetadata(srcKey);
+ObjectMetadata dstom = cloneObjectMetadata(srcom);
+setOptionalObjectMetadata(dstom);
+CopyObjectRequest copyObjectRequest =
+new CopyObjectRequest(bucket, srcKey, bucket, dstKey);
+setOptionalCopyObjectRequestParameters(copyObjectRequest);
+copyObjectRequest.setCannedAccessControlList(cannedACL);
+copyObjectRequest.setNewObjectMetadata(dstom);
+String id = srcom.getVersionId();
+if (id != null) {
+  copyObjectRequest.setSourceVersionId(id);
+} else if (isNotEmpty(srcom.getETag())) {
+  copyObjectRequest.withMatchingETagConstraint(srcom.getETag());
+}
+Copy copy = transfers.copy(copyObjectRequest);
+copy.addProgressListener(progressListener);
+try {
+  CopyResult r = copy.waitForCopyResult();
+  incrementWriteOperations();
+  instrumentation.filesCopied(1, size);
+  return r;
+} catch (InterruptedException e) {
+  throw (IOException) new InterruptedIOException(
 
 Review comment:
  Why do you cast `InterruptedIOException` to `IOException`? If you cast it, 
then this method won't declare that it throws `InterruptedIOException`, just 
`IOException`, so it can be removed from its signature.
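To illustrate the point under discussion: the cast only widens the *static* type of the expression, so the object thrown at runtime is still an `InterruptedIOException` and can still be caught as one. A small self-contained sketch (the class and method names are hypothetical, not part of the patch):

```java
import java.io.IOException;
import java.io.InterruptedIOException;

public class CastDemo {

    // Throws an InterruptedIOException whose static type is widened to
    // IOException by the cast, mirroring the construct in the patch.
    static void fail() throws IOException {
        throw (IOException) new InterruptedIOException("interrupted copy");
    }

    // Returns true if the thrown object is still an InterruptedIOException
    // at runtime despite the cast.
    static boolean runtimeTypeIsInterrupted() {
        try {
            fail();
        } catch (InterruptedIOException e) {
            return true;   // the more specific handler still matches
        } catch (IOException e) {
            return false;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(runtimeTypeIsInterrupted()); // prints "true"
    }
}
```

So the cast changes which checked exception the compiler sees at the throw site, but not what callers observe at runtime.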





[jira] [Updated] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2019-04-13 Thread Igor Dvorzhak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Dvorzhak updated HADOOP-15124:
---
Attachment: HADOOP-15124.001.patch
Status: Patch Available  (was: Open)

> Slow FileSystem.Statistics counters implementation
> --
>
> Key: HADOOP-15124
> URL: https://issues.apache.org/jira/browse/HADOOP-15124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.1.0, 3.0.0, 2.7.5, 2.8.3, 2.9.0
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
>  Labels: common, filesystem, fs, statistics
> Attachments: HADOOP-15124.001.patch
>
>
> While profiling a 1TB TeraGen job on a Hadoop 2.8.2 cluster (Google Dataproc, 
> 2 workers, GCS connector) I saw that FileSystem.Statistics code paths 
> accounted for 5.58% of wall time and 26.5% of CPU time of the total execution 
> time.
> After switching the FileSystem.Statistics implementation to LongAdder, the 
> consumed wall time decreased to 0.006% and CPU time to 0.104% of total 
> execution time.
> Total job runtime decreased from 66 minutes to 61 minutes.
> These results are not conclusive, because I didn't run the benchmark multiple 
> times to average the results, but regardless of the performance gains, 
> switching to LongAdder simplifies the code and reduces its complexity.
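A minimal sketch of the kind of LongAdder-backed counter the issue proposes (this is an illustration of the technique, not the actual Hadoop patch; the class and field names are assumptions):

```java
import java.util.concurrent.atomic.LongAdder;

// Sketch of a LongAdder-backed statistics counter: under heavy multi-threaded
// updates, LongAdder stripes the value across cells to avoid the contention
// that a single shared counter suffers, at the cost of a slower sum() on read.
public class StatisticsSketch {

    private final LongAdder bytesRead = new LongAdder();

    void incrementBytesRead(long n) {
        bytesRead.add(n);       // lock-free; contended updates hit separate cells
    }

    long getBytesRead() {
        return bytesRead.sum(); // sums the cells; reads are assumed rare vs writes
    }

    public static void main(String[] args) throws InterruptedException {
        StatisticsSketch stats = new StatisticsSketch();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < 4; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    stats.incrementBytesRead(1);
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println(stats.getBytesRead()); // prints 4000
    }
}
```

The design trade-off matches a statistics use case well: writes happen on every filesystem operation, while reads happen only when counters are reported.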






[jira] [Updated] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2019-04-13 Thread Igor Dvorzhak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Dvorzhak updated HADOOP-15124:
---
Attachment: (was: HADOOP-15124.001.patch)

> Slow FileSystem.Statistics counters implementation
> --
>
> Key: HADOOP-15124
> URL: https://issues.apache.org/jira/browse/HADOOP-15124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 2.9.0, 2.8.3, 2.7.5, 3.0.0, 3.1.0
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
>  Labels: common, filesystem, fs, statistics
>






[jira] [Updated] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2019-04-13 Thread Igor Dvorzhak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Dvorzhak updated HADOOP-15124:
---
Status: Open  (was: Patch Available)

> Slow FileSystem.Statistics counters implementation
> --
>
> Key: HADOOP-15124
> URL: https://issues.apache.org/jira/browse/HADOOP-15124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.1.0, 3.0.0, 2.7.5, 2.8.3, 2.9.0
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
>  Labels: common, filesystem, fs, statistics
>






[jira] [Comment Edited] (HADOOP-16244) Hadoop Support ISA-L Compress/Decompress

2019-04-13 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16816995#comment-16816995
 ] 

Wei-Chiu Chuang edited comment on HADOOP-16244 at 4/13/19 4:07 PM:
---

Thanks for bringing it up. I was not aware of the ISA-L compress/decompress 
library. As for high-performance compression libraries, ZStandard seems quite 
promising these days. 


was (Author: jojochuang):
Thanks for bringing it up. I was not aware of the ISA-L compress/decompress 
library. As for high-performance compression libraries, ZSTD seems quite 
promising these days. 

> Hadoop Support ISA-L Compress/Decompress
> 
>
> Key: HADOOP-16244
> URL: https://issues.apache.org/jira/browse/HADOOP-16244
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build, common, fs, native
>Reporter: zhong yingqun
>Priority: Major
>
> In Hadoop 3.0.0+, the default RS codec also has a native implementation that 
> leverages the Intel ISA-L library to improve codec performance. As we know, 
> ISA-L has a set of highly optimized functions, including gzip-based 
> compression/decompression. Could ISA-L be integrated with Hadoop to provide 
> high-performance compression and decompression? However, the Hadoop community 
> does not seem to support ISA-L compression and decompression so far. Is there 
> some problem with ISA-L compression support? 
> Can someone share experience in this area? Thanks a lot!






[jira] [Commented] (HADOOP-16244) Hadoop Support ISA-L Compress/Decompress

2019-04-13 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16816995#comment-16816995
 ] 

Wei-Chiu Chuang commented on HADOOP-16244:
--

Thanks for bringing it up. I was not aware of the ISA-L compress/decompress 
library. As for high-performance compression libraries, ZSTD seems quite 
promising these days. 

> Hadoop Support ISA-L Compress/Decompress
> 
>
> Key: HADOOP-16244
> URL: https://issues.apache.org/jira/browse/HADOOP-16244
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build, common, fs, native
>Reporter: zhong yingqun
>Priority: Major
>






[GitHub] [hadoop] arp7 merged pull request #737: HDDS-1198. Rename chill mode to safe mode. Contributed by Siddharth Wagle.

2019-04-13 Thread GitBox
arp7 merged pull request #737: HDDS-1198. Rename chill mode to safe mode. 
Contributed by Siddharth Wagle.
URL: https://github.com/apache/hadoop/pull/737
 
 
   





[jira] [Commented] (HADOOP-16247) NPE in FsUrlConnection

2019-04-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16816881#comment-16816881
 ] 

Hadoop QA commented on HADOOP-16247:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m  6s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestFsUrlConnectionPath |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16247 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12965800/HADOOP-16247-002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b7243441e542 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 
18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1943db5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16152/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16152/testReport/ |
| Max. process+thread count | 1426 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16152/console |
| Powered by | Apache Yetus

[jira] [Commented] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2019-04-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16816865#comment-16816865
 ] 

Hadoop QA commented on HADOOP-15124:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
36s{color} | {color:green} root generated 0 new + 1493 unchanged - 3 fixed = 
1493 total (was 1496) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 33s{color} | {color:orange} root: The patch generated 3 new + 120 unchanged 
- 0 fixed = 123 total (was 120) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
48s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}220m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-15124 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12965792/HADOOP-15124.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ba37fd0bc5

[jira] [Updated] (HADOOP-16247) NPE in FsUrlConnection

2019-04-13 Thread Karthik Palanisamy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HADOOP-16247:

Attachment: HADOOP-16247-002.patch

> NPE in FsUrlConnection
> --
>
> Key: HADOOP-16247
> URL: https://issues.apache.org/jira/browse/HADOOP-16247
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.1.2
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
> Attachments: HADOOP-16247-001.patch, HADOOP-16247-002.patch
>
>
> FsUrlConnection doesn't handle relativePath correctly after the change 
> [HADOOP-15217|https://issues.apache.org/jira/browse/HADOOP-15217]
> {code}
> Exception in thread "main" java.lang.NullPointerException
>  at org.apache.hadoop.fs.Path.isUriPathAbsolute(Path.java:385)
>  at org.apache.hadoop.fs.Path.isAbsolute(Path.java:395)
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:87)
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:636)
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:930)
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
>  at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454)
>  at 
> org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.(ChecksumFileSystem.java:146)
>  at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:347)
>  at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:899)
>  at org.apache.hadoop.fs.FsUrlConnection.connect(FsUrlConnection.java:62)
>  at 
> org.apache.hadoop.fs.FsUrlConnection.getInputStream(FsUrlConnection.java:71)
>  at java.net.URL.openStream(URL.java:1045)
>  at UrlProblem.testRelativePath(UrlProblem.java:33)
>  at UrlProblem.main(UrlProblem.java:19)
> {code}






[jira] [Commented] (HADOOP-16158) DistCp to support checksum validation when copy blocks in parallel

2019-04-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16816858#comment-16816858
 ] 

Hadoop QA commented on HADOOP-16158:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-tools/hadoop-distcp: The patch generated 0 
new + 218 unchanged - 2 fixed = 218 total (was 220) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
39s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16158 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12965795/HADOOP-16158-005.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4e7dc3f1d43c 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 
18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1943db5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16151/testReport/ |
| Max. process+thread count | 306 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16151/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> DistCp to support checksum validation when copy blocks in paral