[jira] [Updated] (HDFS-12506) Ozone: ListBucket is too slow

2017-09-20 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12506:
---
Status: Patch Available  (was: Open)

> Ozone: ListBucket is too slow
> -
>
> Key: HDFS-12506
> URL: https://issues.apache.org/jira/browse/HDFS-12506
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Blocker
>  Labels: ozoneMerge, performance
> Attachments: HDFS-12506-HDFS-7240.001.patch
>
>
> Generated 3 million keys in Ozone, then ran the {{listBucket}} command to get 
> a list of buckets under a volume:
> {code}
> bin/hdfs oz -listBucket http://15oz1.fyre.ibm.com:9864/vol-0-15143 -user wwei
> {code}
> This call took over *15 seconds* to finish. The problem is caused by the 
> inflexible structure of the KSM DB. Right now {{ksm.db}} stores keys like the 
> following:
> {code}
> /v1/b1
> /v1/b1/k1
> /v1/b1/k2
> /v1/b1/k3
> /v1/b2
> /v1/b2/k1
> /v1/b2/k2
> /v1/b2/k3
> /v1/b3
> /v1/b4
> {code}
> Keys are sorted in natural order, so when we list the buckets under a volume, 
> e.g. /v1, we need to seek to /v1 and then iterate and filter keys, which ends 
> up scanning all keys under volume /v1. The problem with this design is that 
> we have no efficient way to locate all the buckets without scanning the keys.
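
For illustration, the sketch below shows why bucket listing degrades into a 
scan of every key in the volume when bucket and key entries share one sorted 
key space. It uses a {{TreeMap}} as a stand-in for the sorted KSM key store, 
and the filtering logic is an assumption for this example, not the actual KSM 
code.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class ListBucketScan {
  public static void main(String[] args) {
    // Stand-in for the sorted KSM key space (the real store is a sorted KV DB).
    TreeMap<String, byte[]> db = new TreeMap<>();
    for (String k : new String[] {"/v1/b1", "/v1/b1/k1", "/v1/b1/k2",
        "/v1/b1/k3", "/v1/b2", "/v1/b2/k1", "/v1/b2/k2", "/v1/b2/k3",
        "/v1/b3", "/v1/b4"}) {
      db.put(k, new byte[0]);
    }

    // Listing buckets under /v1: seek to the volume prefix, then iterate and
    // filter. Every key of every bucket in the volume is visited even though
    // only four entries are buckets.
    List<String> buckets = new ArrayList<>();
    for (String key : db.tailMap("/v1/", true).keySet()) {
      if (!key.startsWith("/v1/")) {
        break;                           // left the volume's key range
      }
      String rest = key.substring("/v1/".length());
      if (!rest.contains("/")) {         // "/vol/bucket" has no further '/'
        buckets.add(key);                // bucket entry
      }
      // "/vol/bucket/key" entries are skipped, but still had to be scanned.
    }
    System.out.println(buckets);         // [/v1/b1, /v1/b2, /v1/b3, /v1/b4]
  }
}
{code}

With millions of keys in the store, the loop visits every key entry in the 
volume just to return a handful of buckets, which is consistent with the slow 
listBucket call reported above.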



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12506) Ozone: ListBucket is too slow

2017-09-20 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12506:
---
Attachment: (was: HDFS-12506-HDFS-7240.001.patch)




[jira] [Updated] (HDFS-12506) Ozone: ListBucket is too slow

2017-09-20 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12506:
---
Attachment: HDFS-12506-HDFS-7240.001.patch




[jira] [Updated] (HDFS-12506) Ozone: ListBucket is too slow

2017-09-20 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12506:
---
Status: Open  (was: Patch Available)




[jira] [Commented] (HDFS-12486) GetConf to get journalnodeslist

2017-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174301#comment-16174301
 ] 

Hadoop QA commented on HDFS-12486:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
19s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 32s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12486 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888198/HDFS-12486.06.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 93b2ff57ea42 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 53047f9 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21264/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21264/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | 

[jira] [Commented] (HDFS-12523) Thread pools in ErasureCodingWorker do not shutdown

2017-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174299#comment-16174299
 ] 

Hadoop QA commented on HDFS-12523:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 154 unchanged - 0 fixed = 156 total (was 154) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}113m  
5s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}140m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12523 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888195/HDFS-12523.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0afe0295c121 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 53047f9 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21262/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21262/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21262/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Thread pools in ErasureCodingWorker do not shutdown
> ---
>
> Key: HDFS-12523
> URL: https://issues.apache.org/jira/browse/HDFS-12523
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects 

[jira] [Updated] (HDFS-12506) Ozone: ListBucket is too slow

2017-09-20 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12506:
---
Status: Patch Available  (was: Open)




[jira] [Commented] (HDFS-12458) TestReencryptionWithKMS fails regularly

2017-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174293#comment-16174293
 ] 

Hadoop QA commented on HDFS-12458:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12458 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888197/HDFS-12458.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux af481a0a4dd3 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 53047f9 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21261/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21261/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21261/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestReencryptionWithKMS fails regularly
> ---
>
> Key: HDFS-12458
> URL: https://issues.apache.org/jira/browse/HDFS-12458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, test
>Affects Versions: 

[jira] [Updated] (HDFS-12506) Ozone: ListBucket is too slow

2017-09-20 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12506:
---
Attachment: HDFS-12506-HDFS-7240.001.patch




[jira] [Commented] (HDFS-12496) Make QuorumJournalManager timeout properties configurable

2017-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174291#comment-16174291
 ] 

Hadoop QA commented on HDFS-12496:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}129m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12496 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888196/HDFS-12496.06.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux f94da36c389d 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 53047f9 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21263/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21263/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21263/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was 

[jira] [Commented] (HDFS-7337) Configurable and pluggable Erasure Codec and schema

2017-09-20 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174250#comment-16174250
 ] 

Andrew Wang commented on HDFS-7337:
---

I think we've resolved the scope targeted for beta1; shall we close this 
umbrella JIRA and move out the remaining subtasks?

> Configurable and pluggable Erasure Codec and schema
> ---
>
> Key: HDFS-7337
> URL: https://issues.apache.org/jira/browse/HDFS-7337
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: erasure-coding
>Reporter: Zhe Zhang
>Priority: Critical
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-7337-prototype-v1.patch, 
> HDFS-7337-prototype-v2.zip, HDFS-7337-prototype-v3.zip, 
> PluggableErasureCodec.pdf, PluggableErasureCodec-v2.pdf, 
> PluggableErasureCodec-v3.pdf, PluggableErasureCodec v4.pdf
>
>
> According to HDFS-7285 and the design, this issue considers supporting 
> multiple erasure codecs via a pluggable approach. It allows defining and 
> configuring multiple codec schemas with different coding algorithms and 
> parameters. The resulting codec schemas can then be specified for different 
> file folders via a command-line tool. While designing and implementing such 
> a pluggable framework, a concrete default codec (Reed-Solomon) should also 
> be implemented to prove that the framework is useful and workable. A 
> separate JIRA could be opened for the RS codec implementation.
> Note that HDFS-7353 will focus on the very low-level codec API and 
> implementation, to make concrete vendor libraries transparent to the upper 
> layer. This JIRA focuses on the high-level parts that interact with 
> configuration, schemas, etc.
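
To make the intended shape of the framework concrete, here is a minimal 
hypothetical sketch of a pluggable codec registry keyed by codec name, with 
schemas carrying the coding parameters. All names and types below are 
illustrative assumptions, not the actual Hadoop erasure-coding API.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical schema: a codec name plus its coding parameters.
final class CodecSchema {
  final String codecName;                // e.g. "rs" for Reed-Solomon
  final int dataUnits;                   // k
  final int parityUnits;                 // m
  final Map<String, String> extraOptions;

  CodecSchema(String codecName, int dataUnits, int parityUnits,
      Map<String, String> extraOptions) {
    this.codecName = codecName;
    this.dataUnits = dataUnits;
    this.parityUnits = parityUnits;
    this.extraOptions = extraOptions;
  }
}

// Hypothetical plugin contract; the concrete coder type is elided.
interface ErasureCoderFactory {
  Object createEncoder(CodecSchema schema);
  Object createDecoder(CodecSchema schema);
}

// Registry: plugins (e.g. a vendor-optimized RS implementation) register
// under a name loaded from configuration; callers look it up by schema.
final class CodecRegistrySketch {
  private final Map<String, ErasureCoderFactory> factories =
      new ConcurrentHashMap<>();

  void register(String codecName, ErasureCoderFactory factory) {
    factories.put(codecName, factory);
  }

  ErasureCoderFactory factoryFor(CodecSchema schema) {
    ErasureCoderFactory f = factories.get(schema.codecName);
    if (f == null) {
      throw new IllegalArgumentException(
          "No codec registered for " + schema.codecName);
    }
    return f;
  }
}
{code}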






[jira] [Updated] (HDFS-12452) TestDataNodeVolumeFailureReporting fails in trunk Jenkins runs

2017-09-20 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12452:
---
Labels: flaky-test  (was: )

> TestDataNodeVolumeFailureReporting fails in trunk Jenkins runs
> --
>
> Key: HDFS-12452
> URL: https://issues.apache.org/jira/browse/HDFS-12452
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Priority: Critical
>  Labels: flaky-test
>
> TestDataNodeVolumeFailureReporting#testSuccessiveVolumeFailures fails 
> frequently in Jenkins runs but it passes locally on my dev machine.
> e.g. 
> https://builds.apache.org/job/PreCommit-HDFS-Build/21134/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailureReporting/testSuccessiveVolumeFailures/
> {code}
> Error Message
> test timed out after 12 milliseconds
> Stacktrace
> java.lang.Exception: test timed out after 12 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:761)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting.testSuccessiveVolumeFailures(TestDataNodeVolumeFailureReporting.java:189)
> {code}






[jira] [Updated] (HDFS-12453) TestDataNodeHotSwapVolumes fails in trunk Jenkins runs

2017-09-20 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12453:
---
Labels: flaky-test  (was: )

> TestDataNodeHotSwapVolumes fails in trunk Jenkins runs
> --
>
> Key: HDFS-12453
> URL: https://issues.apache.org/jira/browse/HDFS-12453
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Priority: Critical
>  Labels: flaky-test
> Attachments: TestLogs.txt
>
>
> TestDataNodeHotSwapVolumes fails occasionally with the following error (see 
> comment). Ran it ~10 times locally and it passed every time.






[jira] [Commented] (HDFS-12507) javadoc: error - class file for org.apache.http.annotation.ThreadSafe not found

2017-09-20 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174230#comment-16174230
 ] 

Andrew Wang commented on HDFS-12507:


I've been able to build okay too, and grepping for "org.apache.http.annotation" 
doesn't reveal anything in the Hadoop source tree either.

> javadoc: error - class file for org.apache.http.annotation.ThreadSafe not 
> found
> ---
>
> Key: HDFS-12507
> URL: https://issues.apache.org/jira/browse/HDFS-12507
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Mukul Kumar Singh
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:2.10.4:jar (module-javadocs) on 
> project hadoop-hdfs-client: MavenReportException: Error while generating 
> Javadoc: 
> [ERROR] Exit code: 1 - 
> /Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java:694:
>  warning - Tag @link: reference not found: StripingCell
> [ERROR] javadoc: error - class file for org.apache.http.annotation.ThreadSafe 
> not found
> [ERROR] 
> [ERROR] Command line was: 
> /Library/Java/JavaVirtualMachines/jdk1.8.0_144.jdk/Contents/Home/jre/../bin/javadoc
>  -J-Xmx768m @options @packages
> [ERROR] 
> [ERROR] Refer to the generated Javadoc files in 
> '/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs-client/target/api' 
> dir.
> {code}
> To reproduce the error above, run
> {code}
> mvn package -Pdist -DskipTests -DskipDocs -Dtar
> {code}






[jira] [Commented] (HDFS-12506) Ozone: ListBucket is too slow

2017-09-20 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174215#comment-16174215
 ] 

Weiwei Yang commented on HDFS-12506:


Hi [~xyao], [~nandakumar131]

Thanks for contributing your ideas. This approach should work, but there is 
one thing I need to point out: with such a prefix definition, we will need to 
prevent users from creating volumes with names like "#volumeName" or buckets 
with names like "#bucketName", because that will cause problems.

If a user adds a volume *#v1* and a bucket is then added under it,

/#v1/b1

this will confuse KSM into thinking this is a volume named *#v1/b1*.
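
A minimal sketch of the kind of name validation this implies is shown below. 
The reserved '#' marker is taken from the discussion above; the check itself 
is an assumption for illustration, not existing KSM code.

{code}
public class NameValidation {
  // Assumed reserved marker used in internal DB key encoding.
  private static final char RESERVED_MARKER = '#';

  static void checkName(String kind, String name) {
    if (name.indexOf(RESERVED_MARKER) >= 0) {
      throw new IllegalArgumentException(kind + " name '" + name
          + "' must not contain '" + RESERVED_MARKER
          + "'; it is reserved for internal key encoding");
    }
  }

  public static void main(String[] args) {
    checkName("Volume", "v1");           // accepted
    checkName("Bucket", "b1");           // accepted
    try {
      checkName("Volume", "#v1");        // rejected: would create keys like /#v1/b1
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
    }
  }
}
{code}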






[jira] [Updated] (HDFS-12486) GetConf to get journalnodeslist

2017-09-20 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12486:
--
Attachment: HDFS-12486.06.patch

> GetConf to get journalnodeslist
> ---
>
> Key: HDFS-12486
> URL: https://issues.apache.org/jira/browse/HDFS-12486
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12486.01.patch, HDFS-12486.02.patch, 
> HDFS-12486.03.patch, HDFS-12486.04.patch, HDFS-12486.05.patch, 
> HDFS-12486.06.patch
>
>
> GetConf command to list journal nodes.






[jira] [Commented] (HDFS-12486) GetConf to get journalnodeslist

2017-09-20 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174203#comment-16174203
 ] 

Bharat Viswanadham commented on HDFS-12486:
---

Attached patch v06 to fix checkstyle issues.




[jira] [Updated] (HDFS-12458) TestReencryptionWithKMS fails regularly

2017-09-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-12458:
-
Attachment: HDFS-12458.02.patch

Patch 2 to fix checkstyle.

To add some confidence, I ran this with HDFS-12518 on dist-test and went from 
a previous 30%+ failure rate (blush) to no failures out of 100 test runs. 
(Sorry, I should have done this when doing the original fix.) 
http://dist-test.cloudera.org/job?job_id=hadoop.xiao.1505953659.16755

> TestReencryptionWithKMS fails regularly
> ---
>
> Key: HDFS-12458
> URL: https://issues.apache.org/jira/browse/HDFS-12458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, test
>Affects Versions: 3.0.0-beta1
>Reporter: Konstantin Shvachko
>Assignee: Xiao Chen
> Attachments: HDFS-12458.01.patch, HDFS-12458.02.patch
>
>
> {{TestReencryptionWithKMS}} fails pretty often on Jenkins. Should fix it.






[jira] [Updated] (HDFS-12496) Make QuorumJournalManager timeout properties configurable

2017-09-20 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12496:
--
Attachment: HDFS-12496.06.patch

Fixed two checkstyle warnings.

> Make QuorumJournalManager timeout properties configurable
> -
>
> Key: HDFS-12496
> URL: https://issues.apache.org/jira/browse/HDFS-12496
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12496.01.patch, HDFS-12496.02.patch, 
> HDFS-12496.03.patch, HDFS-12496.04.patch, HDFS-12496.05.patch, 
> HDFS-12496.06.patch
>
>
> Make QuorumJournalManager timeout properties configurable using a common key. 






[jira] [Updated] (HDFS-12523) Thread pools in ErasureCodingWorker do not shutdown

2017-09-20 Thread Huafeng Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huafeng Wang updated HDFS-12523:

Status: Patch Available  (was: Open)

> Thread pools in ErasureCodingWorker do not shutdown
> ---
>
> Key: HDFS-12523
> URL: https://issues.apache.org/jira/browse/HDFS-12523
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Huafeng Wang
> Attachments: HDFS-12523.001.patch
>
>
> There is no code path in {{ErasureCodingWorker}} to shut down its two thread 
> pools: {{stripedReconstructionPool}} and {{stripedReadPool}}.
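
For reference, below is a generic shutdown helper of the kind such a fix might 
add. This is a sketch under assumptions only, not the code contained in 
HDFS-12523.001.patch.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

final class PoolShutdown {
  static void shutdownQuietly(ExecutorService pool, long timeoutSeconds) {
    if (pool == null) {
      return;
    }
    pool.shutdown();                       // stop accepting new tasks
    try {
      if (!pool.awaitTermination(timeoutSeconds, TimeUnit.SECONDS)) {
        pool.shutdownNow();                // cancel anything still running
      }
    } catch (InterruptedException e) {
      pool.shutdownNow();
      Thread.currentThread().interrupt();  // preserve the interrupt status
    }
  }

  public static void main(String[] args) {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    shutdownQuietly(pool, 5);              // usage example
  }
}
{code}

A caller on the DataNode side would invoke such a helper for both 
{{stripedReconstructionPool}} and {{stripedReadPool}} when the worker stops.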






[jira] [Updated] (HDFS-12523) Thread pools in ErasureCodingWorker do not shutdown

2017-09-20 Thread Huafeng Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huafeng Wang updated HDFS-12523:

Attachment: HDFS-12523.001.patch




[jira] [Comment Edited] (HDFS-12518) Re-encryption should handle task cancellation and progress better

2017-09-20 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174184#comment-16174184
 ] 

Xiao Chen edited comment on HDFS-12518 at 9/21/17 3:11 AM:
---

Patch 1:
- Added the fix in {{ReencryptionUpdater}} to handle a canceled future better.
- Modified the way {{submissions}} in {{ReencryptionHandler}} is managed: on 
-cancel and a valid new -start, the zone submission tracker is reset. This 
means that even if a re-encryption somehow ends up in an erroneous state, it 
can still be reset from the crypto CLI.
- Fixed / improved a few related places: protect {{submissions}} with the 
handler's object lock, a logging fix, and a missed assert.
- Enhanced {{TestReencryption}} to ensure the fault injector behaves correctly 
when multi-threaded (it had a 1% failure rate on dist-test).


was (Author: xiaochen):
Patch 1:
- Added the fix in {{ReencryptionUpdater}} to handle canceled future better
- Modified the way {{submissions}} in {{ReencryptionHandler}} is managed: on 
-cancel and valid new -start, the zone submission tracker is reset. This means 
even if somehow an re-encryption is ended up in an erroneous state, it can 
still be reset from crypto CLI.
- Fixed / improved a few related places: protect {{submissions}} with the 
handler's object lock, logging fix, missed assert.
- Enhanced {{TestReencryption}} to ensure fault injector behave correctly when 
multi-threaded (had a 1% failure rate on dist-test

> Re-encryption should handle task cancellation and progress better
> -
>
> Key: HDFS-12518
> URL: https://issues.apache.org/jira/browse/HDFS-12518
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 3.0.0-beta1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-12518.01.patch
>
>
> Re-encryption should handle task cancellation and progress tracking better in 
> general.
> In a recent internal report, a canceled re-encryption could lead to the 
> progress of the zone being 'Processing' forever. Sending a new cancel command 
> would make it complete, but new re-encryptions for the same zone wouldn't 
> work because the canceled future is not removed.
> This jira proposes to fix that, and to enhance the current handling so that 
> a new command starts from a clean state.
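
As an illustration of the cleanup described above, here is a hedged sketch of 
pruning canceled futures from a per-zone tracking map. The class, field, and 
method names are hypothetical, not the actual {{ReencryptionHandler}} code.

{code}
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Future;

// Hypothetical tracker: zone id -> in-flight re-encryption task. The point is
// that canceled futures must be dropped so a later -start sees a clean state.
final class ZoneTaskTracker {
  private final Map<Long, Future<?>> submissions = new ConcurrentHashMap<>();

  void track(long zoneId, Future<?> task) {
    submissions.put(zoneId, task);
  }

  /** Drop entries whose future was canceled (or has otherwise completed). */
  void pruneCanceled() {
    for (Iterator<Map.Entry<Long, Future<?>>> it =
        submissions.entrySet().iterator(); it.hasNext();) {
      Future<?> f = it.next().getValue();
      if (f.isCancelled() || f.isDone()) {
        it.remove();
      }
    }
  }

  /** A new -start for a zone is valid only once no stale entry remains. */
  boolean canStart(long zoneId) {
    pruneCanceled();
    return !submissions.containsKey(zoneId);
  }
}
{code}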






[jira] [Updated] (HDFS-12518) Re-encryption should handle task cancellation and progress better

2017-09-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-12518:
-
Status: Patch Available  (was: Open)




[jira] [Updated] (HDFS-12518) Re-encryption should handle task cancellation and progress better

2017-09-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-12518:
-
Attachment: HDFS-12518.01.patch

Patch 1:
- Added the fix in {{ReencryptionUpdater}} to handle a canceled future better.
- Modified the way {{submissions}} in {{ReencryptionHandler}} is managed: on 
-cancel and a valid new -start, the zone submission tracker is reset. This 
means that even if a re-encryption somehow ends up in an erroneous state, it 
can still be reset from the crypto CLI.
- Fixed / improved a few related places: protect {{submissions}} with the 
handler's object lock, a logging fix, and a missed assert.
- Enhanced {{TestReencryption}} to ensure the fault injector behaves correctly 
when multi-threaded (it had a 1% failure rate on dist-test).




[jira] [Assigned] (HDFS-12523) Thread pools in ErasureCodingWorker do not shutdown

2017-09-20 Thread Huafeng Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huafeng Wang reassigned HDFS-12523:
---

Assignee: Huafeng Wang




[jira] [Commented] (HDFS-12496) Make QuorumJournalManager timeout properties configurable

2017-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174172#comment-16174172
 ] 

Hadoop QA commented on HDFS-12496:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 42s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 427 unchanged - 0 fixed = 429 total (was 427) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}194m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
51s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}223m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReservedRawPaths |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.TestHDFSFileSystemContract |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 |
|   | 

[jira] [Commented] (HDFS-12513) Ozone: Create UI page to show Ozone configs by tags

2017-09-20 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174170#comment-16174170
 ] 

Weiwei Yang commented on HDFS-12513:


Hi [~ajayydv]

I'd like to know how you plan to design the page: will configs be displayed by 
tag using labels, panels, or some other UI layout? The conf servlet is a very 
basic implementation that doesn't have any HTML or JavaScript elements. It 
would be good if you could share a mockup so we can see how much work is 
involved. Thanks.

> Ozone: Create UI page to show Ozone configs by tags
> ---
>
> Key: HDFS-12513
> URL: https://issues.apache.org/jira/browse/HDFS-12513
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
>
> Create UI page to show Ozone configs by tags



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12486) GetConf to get journalnodeslist

2017-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174168#comment-16174168
 ] 

Hadoop QA commented on HDFS-12486:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-hdfs-project: The patch generated 4 new + 
103 unchanged - 0 fixed = 107 total (was 103) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
11s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 52s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}136m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestPread |
|   | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12486 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888174/HDFS-12486.05.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9fd2e8c75ce0 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a12f09b |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21259/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
 |
| unit | 

[jira] [Commented] (HDFS-12513) Ozone: Create UI page to show Ozone configs by tags

2017-09-20 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174152#comment-16174152
 ] 

Ajay Kumar commented on HDFS-12513:
---

[~cheersyang], thanks for moving this under the Ozone JIRA. I am planning to 
use the conf servlet to return a new HTML page. Any thoughts?
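
A rough illustration of one way such a page could work is below. This is only a 
sketch under assumed names: the servlet class, the {{tag}} request parameter, 
and the substring-based filter are placeholders, not the planned implementation.
{code}
// Sketch only: render configuration key/value pairs as an HTML table,
// optionally filtered by a "tag" request parameter. The filter below is a
// placeholder; a real implementation would resolve properties by their tags.
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Map;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.hadoop.conf.Configuration;

public class TagConfServlet extends HttpServlet {
  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
      throws IOException {
    String tag = req.getParameter("tag");       // e.g. OZONE, PERFORMANCE, KSM
    Configuration conf = new Configuration();
    resp.setContentType("text/html");
    PrintWriter out = resp.getWriter();
    out.println("<html><body><h2>Configs for tag: " + tag + "</h2><table>");
    for (Map.Entry<String, String> e : conf) {  // Configuration is iterable
      if (tag == null || e.getKey().toLowerCase().contains(tag.toLowerCase())) {
        out.println("<tr><td>" + e.getKey() + "</td><td>" + e.getValue()
            + "</td></tr>");
      }
    }
    out.println("</table></body></html>");
  }
}
{code}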

> Ozone: Create UI page to show Ozone configs by tags
> ---
>
> Key: HDFS-12513
> URL: https://issues.apache.org/jira/browse/HDFS-12513
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
>
> Create UI page to show Ozone configs by tags



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval

2017-09-20 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174133#comment-16174133
 ] 

Konstantin Shvachko commented on HDFS-11576:


Thanks for pinging me, Brahma. Sorry for the slow response.
My main concern is the new config parameter. I think we should make the 
{{BLOCK_RECOVERY_TIMEOUT_MULTIPLIER}} a constant, not configurable.
If I understand [~lukmajercak] correctly, it was made configurable only for 
testing. We can address this by introducing a method 
{code}
static long getBlockRecoveryTimeout() {
  return TimeUnit.SECONDS.toMillis(heartbeatIntervalSecs * 
BLOCK_RECOVERY_TIMEOUT_MULTIPLIER);
}
{code}
And either
# Make it visible for testing, or
# Create a test utility mocking this method, so that one could change the 
timeout for tests.

Both ways work for me.
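
A minimal sketch of how the constant plus a test-only override could look (the 
class name, field names, and multiplier value below are illustrative only, not 
the actual HDFS code):
{code}
// Illustrative sketch only, not the actual HDFS classes. The multiplier value
// is a placeholder, and the test-only setter stands in for "make it visible
// for testing".
import java.util.concurrent.TimeUnit;
import com.google.common.annotations.VisibleForTesting;

class BlockRecoveryTimeout {
  static final long BLOCK_RECOVERY_TIMEOUT_MULTIPLIER = 30;

  private final long heartbeatIntervalSecs;
  // Overridden only from tests; <= 0 means "use the computed value".
  private volatile long testTimeoutMs = -1;

  BlockRecoveryTimeout(long heartbeatIntervalSecs) {
    this.heartbeatIntervalSecs = heartbeatIntervalSecs;
  }

  long getBlockRecoveryTimeout() {
    return testTimeoutMs > 0 ? testTimeoutMs
        : TimeUnit.SECONDS.toMillis(
            heartbeatIntervalSecs * BLOCK_RECOVERY_TIMEOUT_MULTIPLIER);
  }

  @VisibleForTesting
  void setBlockRecoveryTimeoutForTesting(long timeoutMs) {
    this.testTimeoutMs = timeoutMs;
  }
}
{code}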

Minor things:
# It would be good to add a log message stating that block recovery has been 
started but is still not complete, unless I missed such a message; I don't see 
one in {{internalReleaseLease()}}. 
# {{PendingRecoveryBlocks.getTime()}} seems redundant. Static import should 
achieve the same.
# In {{testRecoveryTimeout()}}, the member {{callRealMethod}} should be final, 
otherwise you won't be able to backport it to branch-2*. I would also rename it 
to {{realMethodCalled}}.
# And I don't understand adding the new protected 
{{SleepAnswer.callRealMethod()}} if you can just override the entire 
{{SleepAnswer.answer()}} in your test.

> Block recovery will fail indefinitely if recovery time > heartbeat interval
> ---
>
> Key: HDFS-11576
> URL: https://issues.apache.org/jira/browse/HDFS-11576
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, namenode
>Affects Versions: 2.7.1, 2.7.2, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Critical
> Attachments: HDFS-11576.001.patch, HDFS-11576.002.patch, 
> HDFS-11576.003.patch, HDFS-11576.004.patch, HDFS-11576.005.patch, 
> HDFS-11576.006.patch, HDFS-11576.007.patch, HDFS-11576.008.patch, 
> HDFS-11576.009.patch, HDFS-11576.010.patch, HDFS-11576.011.patch, 
> HDFS-11576.repro.patch
>
>
> Block recovery will fail indefinitely if the time to recover a block is 
> always longer than the heartbeat interval. Scenario:
> 1. DN sends heartbeat 
> 2. NN sends a recovery command to DN, recoveryID=X
> 3. DN starts recovery
> 4. DN sends another heartbeat
> 5. NN sends a recovery command to DN, recoveryID=X+1
> 6. DN calls commitBlockSyncronization after succeeding with first recovery to 
> NN, which fails because X < X+1
> ... 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12506) Ozone: ListBucket is too slow

2017-09-20 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reassigned HDFS-12506:
--

Assignee: Weiwei Yang

> Ozone: ListBucket is too slow
> -
>
> Key: HDFS-12506
> URL: https://issues.apache.org/jira/browse/HDFS-12506
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Blocker
>  Labels: ozoneMerge, performance
>
> Generated 3 million keys in ozone, and run {{listBucket}} command to get a 
> list of buckets under a volume,
> {code}
> bin/hdfs oz -listBucket http://15oz1.fyre.ibm.com:9864/vol-0-15143 -user wwei
> {code}
> this call spent over *15 seconds* to finish. The problem was caused by the 
> inflexible structure of KSM DB. Right now {{ksm.db}} stores keys like 
> following
> {code}
> /v1/b1
> /v1/b1/k1
> /v1/b1/k2
> /v1/b1/k3
> /v1/b2
> /v1/b2/k1
> /v1/b2/k2
> /v1/b2/k3
> /v1/b3
> /v1/b4
> {code}
> keys are sorted in nature order so when we do list buckets under a volume e.g 
> /v1, we need to seek to /v1 point and start to iterate and filter keys, this 
> ends up with scanning all keys under volume /v1. The problem with this design 
> is we don't have an efficient approach to locate all buckets without scanning 
> the keys.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12521) Ozone: SCM should read all Container info into memory when booting up

2017-09-20 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12521:
---
Labels: performance  (was: )

> Ozone: SCM should read all Container info into memory when booting up
> -
>
> Key: HDFS-12521
> URL: https://issues.apache.org/jira/browse/HDFS-12521
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>  Labels: performance
>
> When SCM boots up, it should read all containers into memory. This is a 
> performance optimization that allows delays on the SCM side. This JIRA tracks 
> that issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.

2017-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174085#comment-16174085
 ] 

Hadoop QA commented on HDFS-12386:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 43s{color} 
| {color:red} hadoop-hdfs-project generated 1 new + 450 unchanged - 0 fixed = 
451 total (was 450) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
418 unchanged - 0 fixed = 419 total (was 418) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
45s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
18s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.hdfs.web.JsonUtilClient.toFsServerDefaults(Map)  At 
JsonUtilClient.java:then immediately reboxed in 
org.apache.hadoop.hdfs.web.JsonUtilClient.toFsServerDefaults(Map)  At 
JsonUtilClient.java:[line 666] |
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12386 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888149/HDFS-12386-3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 011d6f454daf 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDFS-12517) Ozone: mvn package is failing with out skipshade

2017-09-20 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174082#comment-16174082
 ] 

Bharat Viswanadham commented on HDFS-12517:
---

Updated the patch to resolve the issue.
Tested locally; the ozone branch builds.

> Ozone: mvn package is failing with out skipshade
> 
>
> Key: HDFS-12517
> URL: https://issues.apache.org/jira/browse/HDFS-12517
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>  Labels: ozoneMerge
> Attachments: HDFS-12517-HDFS-7240.01.patch, 
> HDFS-12517-HDFS-7240.02.patch, ozone-build.txt
>
>
> The ozone branch build is failing without the skipshade option.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12517) Ozone: mvn package is failing with out skipshade

2017-09-20 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12517:
--
Attachment: ozone-build.txt

> Ozone: mvn package is failing with out skipshade
> 
>
> Key: HDFS-12517
> URL: https://issues.apache.org/jira/browse/HDFS-12517
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>  Labels: ozoneMerge
> Attachments: HDFS-12517-HDFS-7240.01.patch, 
> HDFS-12517-HDFS-7240.02.patch, ozone-build.txt
>
>
> The ozone branch build is failing without the skipshade option.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12517) Ozone: mvn package is failing with out skipshade

2017-09-20 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12517:
--
Attachment: HDFS-12517-HDFS-7240.02.patch

> Ozone: mvn package is failing with out skipshade
> 
>
> Key: HDFS-12517
> URL: https://issues.apache.org/jira/browse/HDFS-12517
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>  Labels: ozoneMerge
> Attachments: HDFS-12517-HDFS-7240.01.patch, 
> HDFS-12517-HDFS-7240.02.patch
>
>
> The ozone branch build is failing without the skipshade option.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12523) Thread pools in ErasureCodingWorker do not shutdown

2017-09-20 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-12523:


 Summary: Thread pools in ErasureCodingWorker do not shutdown
 Key: HDFS-12523
 URL: https://issues.apache.org/jira/browse/HDFS-12523
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Affects Versions: 3.0.0-alpha4
Reporter: Lei (Eddy) Xu


There is no code path in {{ErasureCodingWorker}} to shutdown its two thread 
pools: {{stripedReconstructionPool}} and {{stripedReadPool}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12486) GetConf to get journalnodeslist

2017-09-20 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174072#comment-16174072
 ] 

Bharat Viswanadham commented on HDFS-12486:
---

Test failures are not related to this patch.
I ran the tests locally and they pass.

> GetConf to get journalnodeslist
> ---
>
> Key: HDFS-12486
> URL: https://issues.apache.org/jira/browse/HDFS-12486
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12486.01.patch, HDFS-12486.02.patch, 
> HDFS-12486.03.patch, HDFS-12486.04.patch, HDFS-12486.05.patch
>
>
> GetConf command to list journal nodes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12511) Add tags to ozone config

2017-09-20 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12511:

Issue Type: Sub-task  (was: New Feature)
Parent: HDFS-7240

> Add tags to ozone config
> 
>
> Key: HDFS-12511
> URL: https://issues.apache.org/jira/browse/HDFS-12511
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
> Attachments: HDFS-12511-HDFS-7240.01.patch
>
>
> Add tags to ozone config:
> Example:
> {code} 
> <property>
>   <name>ozone.ksm.handler.count.key</name>
>   <value>200</value>
>   <tag>OZONE,PERFORMANCE,KSM</tag>
>   <description>
>     The number of RPC handler threads for each KSM service endpoint.
>   </description>
> </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12511) Ozone: Add tags to config

2017-09-20 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12511:

Summary: Ozone: Add tags to config  (was: Add tags to ozone config)

> Ozone: Add tags to config
> -
>
> Key: HDFS-12511
> URL: https://issues.apache.org/jira/browse/HDFS-12511
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
> Attachments: HDFS-12511-HDFS-7240.01.patch
>
>
> Add tags to ozone config:
> Example:
> {code} 
> <property>
>   <name>ozone.ksm.handler.count.key</name>
>   <value>200</value>
>   <tag>OZONE,PERFORMANCE,KSM</tag>
>   <description>
>     The number of RPC handler threads for each KSM service endpoint.
>   </description>
> </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12458) TestReencryptionWithKMS fails regularly

2017-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174057#comment-16174057
 ] 

Hadoop QA commented on HDFS-12458:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12458 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888145/HDFS-12458.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 73eeaa928819 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a12f09b |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21254/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21254/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21254/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21254/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestReencryptionWithKMS fails regularly
> ---
>
> Key: HDFS-12458
> URL: https://issues.apache.org/jira/browse/HDFS-12458
> Project: 

[jira] [Comment Edited] (HDFS-12486) GetConf to get journalnodeslist

2017-09-20 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174025#comment-16174025
 ] 

Bharat Viswanadham edited comment on HDFS-12486 at 9/21/17 12:34 AM:
-

[~hanishakoneru] Thanks for your comment and the offline discussion.
1. The previous code patch also uses InetSocketAddress. I think it is okay here 
to continue with Util#getAddressesList and reuse the existing function.
2. The check is added so that if a user calls hdfs getConf -journalnodes and 
QJM is not set up, an empty value is returned to the user. This also helps 
start-dfs.sh and stop-dfs.sh (when dfs.namenode.shared.edits.dir is set to a 
shared storage location), now that the start-dfs.sh and stop-dfs.sh logic has 
changed in HDFS-12375. 

Updated the patch v05 to address the other review comments and checkstyle 
issues.


was (Author: bharatviswa):
[~hanishakoneru] Thanks for your comment and the offline discussion.
1. The previous code patch also uses InetSocketAddress. I think it is okay here 
to continue with Util#getAddressesList and reuse the existing function.
2. The check is added so that if a user calls hdfs getConf -journalnodes and 
QJM is not set up, an empty value is returned to the user. This also helps 
start-dfs.sh and stop-dfs.sh (when dfs.namenode.shared.edits.dir is set to a 
shared storage location), now that the start-dfs.sh and stop-dfs.sh logic has 
changed in HDFS-12375. 

Updated the patch to address the other review comments and checkstyle issues.

> GetConf to get journalnodeslist
> ---
>
> Key: HDFS-12486
> URL: https://issues.apache.org/jira/browse/HDFS-12486
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12486.01.patch, HDFS-12486.02.patch, 
> HDFS-12486.03.patch, HDFS-12486.04.patch, HDFS-12486.05.patch
>
>
> GetConf command to list journal nodes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12486) GetConf to get journalnodeslist

2017-09-20 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12486:
--
Attachment: HDFS-12486.05.patch

> GetConf to get journalnodeslist
> ---
>
> Key: HDFS-12486
> URL: https://issues.apache.org/jira/browse/HDFS-12486
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12486.01.patch, HDFS-12486.02.patch, 
> HDFS-12486.03.patch, HDFS-12486.04.patch, HDFS-12486.05.patch
>
>
> GetConf command to list journal nodes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12486) GetConf to get journalnodeslist

2017-09-20 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174025#comment-16174025
 ] 

Bharat Viswanadham edited comment on HDFS-12486 at 9/21/17 12:32 AM:
-

[~hanishakoneru] Thanks for your comment and the offline discussion.
1. The previous code patch also uses InetSocketAddress. I think it is okay here 
to continue with Util#getAddressesList and reuse the existing function.
2. The check is added so that if a user calls hdfs getConf -journalnodes and 
QJM is not set up, an empty value is returned to the user. This also helps 
start-dfs.sh and stop-dfs.sh (when dfs.namenode.shared.edits.dir is set to a 
shared storage location), now that the start-dfs.sh and stop-dfs.sh logic has 
changed in HDFS-12375. 

Updated the patch to address the other review comments and checkstyle issues.


was (Author: bharatviswa):
[~hanishakoneru] Thanks for your comment and the offline discussion.
1. The previous code patch also uses InetSocketAddress. I think it is okay here 
to continue with Util#getAddressesList and reuse the existing function.
2. The check is added so that if a user calls hdfs getConf -journalnodes and 
QJM is not set up, an empty value is returned to the user. This also helps 
start-dfs.sh and stop-dfs.sh (when dfs.namenode.shared.edits.dir is set to a 
shared storage location), now that the start-dfs.sh and stop-dfs.sh logic has 
changed in HDFS-12375. 

Addressed the other review comments.

> GetConf to get journalnodeslist
> ---
>
> Key: HDFS-12486
> URL: https://issues.apache.org/jira/browse/HDFS-12486
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12486.01.patch, HDFS-12486.02.patch, 
> HDFS-12486.03.patch, HDFS-12486.04.patch, HDFS-12486.05.patch
>
>
> GetConf command to list journal nodes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12511) Add tags to ozone config

2017-09-20 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12511:
--
Attachment: HDFS-12511-HDFS-7240.01.patch

> Add tags to ozone config
> 
>
> Key: HDFS-12511
> URL: https://issues.apache.org/jira/browse/HDFS-12511
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
> Attachments: HDFS-12511-HDFS-7240.01.patch
>
>
> Add tags to ozone config:
> Example:
> {code} 
> <property>
>   <name>ozone.ksm.handler.count.key</name>
>   <value>200</value>
>   <tag>OZONE,PERFORMANCE,KSM</tag>
>   <description>
>     The number of RPC handler threads for each KSM service endpoint.
>   </description>
> </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12511) Add tags to ozone config

2017-09-20 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12511:
--
Attachment: HDFS-12511.01.patch

[~anu], [~xyao], could you please have a look when possible?

> Add tags to ozone config
> 
>
> Key: HDFS-12511
> URL: https://issues.apache.org/jira/browse/HDFS-12511
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
>
> Add tags to ozone config:
> Example:
> {code} 
> <property>
>   <name>ozone.ksm.handler.count.key</name>
>   <value>200</value>
>   <tag>OZONE,PERFORMANCE,KSM</tag>
>   <description>
>     The number of RPC handler threads for each KSM service endpoint.
>   </description>
> </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12511) Add tags to ozone config

2017-09-20 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12511:
--
Attachment: (was: HDFS-12511.01.patch)

> Add tags to ozone config
> 
>
> Key: HDFS-12511
> URL: https://issues.apache.org/jira/browse/HDFS-12511
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
>
> Add tags to ozone config:
> Example:
> {code} 
> <property>
>   <name>ozone.ksm.handler.count.key</name>
>   <value>200</value>
>   <tag>OZONE,PERFORMANCE,KSM</tag>
>   <description>
>     The number of RPC handler threads for each KSM service endpoint.
>   </description>
> </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12496) Make QuorumJournalManager timeout properties configurable

2017-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174036#comment-16174036
 ] 

Hadoop QA commented on HDFS-12496:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 35s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}117m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.namenode.TestReencryption |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12496 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888141/HDFS-12496.04.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux b2cc3ecaf800 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a12f09b |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21253/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21253/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21253/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Make QuorumJournalManager timeout properties configurable
> -
>
> 

[jira] [Created] (HDFS-12522) Ozone: Remove the Priority Queues used in the Container State Manager

2017-09-20 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-12522:
---

 Summary: Ozone: Remove the Priority Queues used in the Container 
State Manager
 Key: HDFS-12522
 URL: https://issues.apache.org/jira/browse/HDFS-12522
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Anu Engineer


During code review of HDFS-12387, it was suggested that we remove the priority 
queues that were used in ContainerStateManager. This JIRA tracks that issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12521) Ozone: SCM should read all Container info into memory when booting up

2017-09-20 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-12521:
---

 Summary: Ozone: SCM should read all Container info into memory 
when booting up
 Key: HDFS-12521
 URL: https://issues.apache.org/jira/browse/HDFS-12521
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Anu Engineer


When SCM boots up, it should read all containers into memory. This is a 
performance optimization that allows delays on the SCM side. This JIRA tracks 
that issue.
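
A generic sketch of the boot-time load described above; none of the names below 
are the actual SCM classes, and the container store is simply assumed to be 
iterable key/value records:
{code}
// Sketch only: load every persisted container record into an in-memory map at
// startup so later lookups are served from memory instead of the DB.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class ContainerCache {
  private final Map<String, byte[]> containers = new ConcurrentHashMap<>();

  /** Called once when SCM boots up. */
  void loadAll(Iterable<Map.Entry<String, byte[]>> containerStore) {
    for (Map.Entry<String, byte[]> entry : containerStore) {
      containers.put(entry.getKey(), entry.getValue());
    }
  }

  byte[] get(String containerName) {
    return containers.get(containerName);  // in-memory lookup, no DB hit
  }
}
{code}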



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12520) Ozone : Add an API to get Open Container by Owner, Replication Type and Replication Count

2017-09-20 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-12520:
---

 Summary: Ozone : Add an API to get Open Container by Owner, 
Replication Type and Replication Count
 Key: HDFS-12520
 URL: https://issues.apache.org/jira/browse/HDFS-12520
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Anu Engineer


During the code review of HDFS-12387 [~xyao] mentioned that it is a good idea 
to have this API. This JIRA tracks that issue.
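
One possible shape for such an API; everything below is a placeholder sketch, 
not the actual SCM interface or type names:
{code}
// Placeholder sketch of the lookup described above.
import java.io.IOException;

interface OpenContainerLookup {
  enum ReplicationType { STAND_ALONE, RATIS, CHAINED }

  /**
   * Returns the name of an open container owned by {@code owner} with the
   * given replication type and replication count, or null if none exists.
   */
  String getOpenContainer(String owner, ReplicationType type,
      int replicationCount) throws IOException;
}
{code}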



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12486) GetConf to get journalnodeslist

2017-09-20 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174025#comment-16174025
 ] 

Bharat Viswanadham commented on HDFS-12486:
---

[~hanishakoneru] Thanks for your comment and the offline discussion.
1. The previous code patch also uses InetSocketAddress. I think it is okay here 
to continue with Util#getAddressesList and reuse the existing function.
2. The check is added so that if a user calls hdfs getConf -journalnodes and 
QJM is not set up, an empty value is returned to the user. This also helps 
start-dfs.sh and stop-dfs.sh (when dfs.namenode.shared.edits.dir is set to a 
shared storage location), now that the start-dfs.sh and stop-dfs.sh logic has 
changed in HDFS-12375 (see the sketch below). 

Addressed the other review comments.
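
A rough sketch of the check described in point 2 is below. 
{{DFS_NAMENODE_SHARED_EDITS_DIR_KEY}} is an existing {{DFSConfigKeys}} constant 
and the {{Util#getAddressesList}} call mirrors the approach mentioned above, 
but the exact signature used here is an assumption.
{code}
// Sketch only: if QJM is not configured, report an empty journal node list
// instead of failing, so start-dfs.sh / stop-dfs.sh get nothing to act on.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.URI;
import java.util.Collections;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.server.common.Util;

final class JournalNodeList {
  static List<InetSocketAddress> get(Configuration conf) throws IOException {
    String sharedEditsDir =
        conf.get(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY);
    if (sharedEditsDir == null || !sharedEditsDir.startsWith("qjournal://")) {
      // QJM not set up (or plain shared storage is configured): nothing to list.
      return Collections.emptyList();
    }
    return Util.getAddressesList(URI.create(sharedEditsDir));
  }
}
{code}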

> GetConf to get journalnodeslist
> ---
>
> Key: HDFS-12486
> URL: https://issues.apache.org/jira/browse/HDFS-12486
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12486.01.patch, HDFS-12486.02.patch, 
> HDFS-12486.03.patch, HDFS-12486.04.patch
>
>
> GetConf command to list journal nodes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12519) Ozone: Add a Lease Manager to SCM

2017-09-20 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-12519:
---

 Summary: Ozone: Add a Lease Manager to SCM
 Key: HDFS-12519
 URL: https://issues.apache.org/jira/browse/HDFS-12519
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Anu Engineer
Assignee: Anu Engineer


Many objects, including containers and pipelines, can time out during the 
creation process. We need a way to track these timeouts. This Lease Manager 
allows SCM to hold a lease on these objects and to time out while waiting for 
them to be created.
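
A minimal, generic sketch of the lease idea (names and structure are 
assumptions, not the SCM implementation): acquire a lease when creation of an 
object starts, release it on success, and treat an expired lease as a creation 
timeout.
{code}
// Sketch only: track creation start times and expose an expiry check.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class LeaseManager<T> {
  private final long leaseTimeoutMs;
  private final Map<T, Long> leases = new ConcurrentHashMap<>();

  LeaseManager(long leaseTimeoutMs) {
    this.leaseTimeoutMs = leaseTimeoutMs;
  }

  void acquire(T resource) {
    leases.put(resource, System.currentTimeMillis());  // creation started
  }

  void release(T resource) {
    leases.remove(resource);                           // creation finished
  }

  boolean hasExpired(T resource) {
    Long start = leases.get(resource);
    return start != null
        && System.currentTimeMillis() - start > leaseTimeoutMs;
  }
}
{code}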



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12517) Ozone: mvn package is failing with out skipshade

2017-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174023#comment-16174023
 ] 

Hadoop QA commented on HDFS-12517:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
5s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 16m 
11s{color} | {color:red} root in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
4s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-hdfs-client in HDFS-7240 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
46s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-client-minicluster in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-client-check-test-invariants in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12517 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888147/HDFS-12517-HDFS-7240.01.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  compile  
javac  javadoc  mvninstall  xml  |
| uname | Linux 3eac2c85abb6 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 244e7a5 |
| Default Java | 1.8.0_144 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21256/artifact/patchprocess/branch-mvninstall-root.txt
 |
| shellcheck | v0.4.6 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21256/artifact/patchprocess/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21256/artifact/patchprocess/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-12502) nntop should support a category based on FilesInGetListingOps

2017-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174013#comment-16174013
 ] 

Hadoop QA commented on HDFS-12502:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12502 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888137/HDFS-12502.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f4d68e6aac5f 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a12f09b |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21252/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21252/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21252/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21252/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> nntop should support a category based on FilesInGetListingOps
> 

[jira] [Commented] (HDFS-12496) Make QuorumJournalManager timeout properties configurable

2017-09-20 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174001#comment-16174001
 ] 

Arpit Agarwal commented on HDFS-12496:
--

+1 for the v5 patch, pending Jenkins. Thanks [~ajayydv].

> Make QuorumJournalManager timeout properties configurable
> -
>
> Key: HDFS-12496
> URL: https://issues.apache.org/jira/browse/HDFS-12496
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12496.01.patch, HDFS-12496.02.patch, 
> HDFS-12496.03.patch, HDFS-12496.04.patch, HDFS-12496.05.patch
>
>
> Make QuorumJournalManager timeout properties configurable using a common key. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12486) GetConf to get journalnodeslist

2017-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173997#comment-16173997
 ] 

Hadoop QA commented on HDFS-12486:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-hdfs-project: The patch generated 24 new 
+ 103 unchanged - 0 fixed = 127 total (was 103) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
28s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}170m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}209m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.web.TestWebHDFSXAttr |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages |
|   | hadoop.hdfs.security.token.block.TestBlockToken |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.TestFileCreationEmpty |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | 

[jira] [Commented] (HDFS-12499) dfs.namenode.shared.edits.dir property is currently namenode specific key

2017-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173996#comment-16173996
 ] 

Hadoop QA commented on HDFS-12499:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 20m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 169 unchanged - 2 fixed = 169 total (was 171) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}158m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12499 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888125/HDFS-12499.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9577808e0180 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a12f09b |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21251/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21251/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21251/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> dfs.namenode.shared.edits.dir property is currently namenode specific key
> -
>
> Key: HDFS-12499
> URL: https://issues.apache.org/jira/browse/HDFS-12499
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>

[jira] [Commented] (HDFS-12503) Ozone: some UX improvements to oz_debug

2017-09-20 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173994#comment-16173994
 ] 

Weiwei Yang commented on HDFS-12503:


Hi [~vagarychen], when you get a chance, could you help review this patch? 
Thanks a lot.

> Ozone: some UX improvements to oz_debug
> ---
>
> Key: HDFS-12503
> URL: https://issues.apache.org/jira/browse/HDFS-12503
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12503-HDFS-7240.001.patch, 
> HDFS-12503-HDFS-7240.002.patch
>
>
> I tried to use {{oz_debug}} to dump KSM DB for offline analysis, found a few 
> problems need to be fixed in order to make this tool easier to use. I know 
> this is a debug tool for admins, but it's still necessary to improve the UX 
> so new users (like me) can figure out how to use it without reading more docs.
> # Support *--help* argument. --help is the general arg for all hdfs scripts 
> to print usage.
> # When specify output path {{-o}}, we need to add a description to let user 
> know the path needs to be a file (instead of a dir). If the path is specified 
> as a dir, it will end up with a funny error {{unable to open the database 
> file (out of memory)}}, which is pretty misleading. And it will be helpful to 
> add a check to make sure the specified path is not an existing dir.
> # SQLCLI currently swallows exception
> # We should remove {{levelDB}} words from the command output as we are by 
> default using rocksDB



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12518) Re-encryption should handle task cancellation and progress better

2017-09-20 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-12518:


 Summary: Re-encryption should handle task cancellation and 
progress better
 Key: HDFS-12518
 URL: https://issues.apache.org/jira/browse/HDFS-12518
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: encryption
Affects Versions: 3.0.0-beta1
Reporter: Xiao Chen
Assignee: Xiao Chen


Re-encryption should handle task cancellation and progress tracking better in 
general.

In a recent internal report, a canceled re-encryption could lead to the 
progress of the zone being 'Processing' forever. Sending a new cancel command 
would make it complete, but new re-encryptions for the same zone wouldn't work 
because the canceled future is not removed.

This jira proposes to fix that, and to enhance the current handling so that a 
new command starts from a clean state.
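For illustration only, here is a minimal, hypothetical sketch of the intended "clean state" behaviour: the per-zone future is removed as part of cancellation so a later re-encryption of the same zone can be submitted again. The class and method names below are invented for the example and are not the actual ReencryptionHandler code.
{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ZoneTaskTracker {
  private final Map<Long, Future<?>> zoneTasks = new ConcurrentHashMap<>();
  private final ExecutorService executor = Executors.newSingleThreadExecutor();

  // Reject a new submission only while a live task exists for the zone.
  public synchronized void submit(long zoneId, Runnable task) {
    Future<?> prev = zoneTasks.get(zoneId);
    if (prev != null && !prev.isDone()) {
      throw new IllegalStateException("Zone " + zoneId + " is already being processed");
    }
    zoneTasks.put(zoneId, executor.submit(task));
  }

  // Remove the future as part of cancellation, so a later submit()
  // for the same zone starts from a clean state.
  public synchronized void cancel(long zoneId) {
    Future<?> canceled = zoneTasks.remove(zoneId);
    if (canceled != null) {
      canceled.cancel(true);
    }
  }
}
{code}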



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12504) Ozone: Improve SQLCLI performance

2017-09-20 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12504:
---
Labels: performance  (was: )

> Ozone: Improve SQLCLI performance
> -
>
> Key: HDFS-12504
> URL: https://issues.apache.org/jira/browse/HDFS-12504
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>  Labels: performance
>
> In my test, my {{ksm.db}} has *3017660* entries with total size of *128mb*, 
> SQLCLI tool runs over *2 hours* but still not finish exporting the DB. This 
> is because it iterates each entry and inserts that to another sqllite DB 
> file, which is not efficient. We need to improve this to be running more 
> efficiently on large DB files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12506) Ozone: ListBucket is too slow

2017-09-20 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12506:
---
Labels: ozoneMerge performance  (was: ozoneMerge)

> Ozone: ListBucket is too slow
> -
>
> Key: HDFS-12506
> URL: https://issues.apache.org/jira/browse/HDFS-12506
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Priority: Blocker
>  Labels: ozoneMerge, performance
>
> Generated 3 million keys in ozone, and run {{listBucket}} command to get a 
> list of buckets under a volume,
> {code}
> bin/hdfs oz -listBucket http://15oz1.fyre.ibm.com:9864/vol-0-15143 -user wwei
> {code}
> this call spent over *15 seconds* to finish. The problem was caused by the 
> inflexible structure of KSM DB. Right now {{ksm.db}} stores keys like 
> following
> {code}
> /v1/b1
> /v1/b1/k1
> /v1/b1/k2
> /v1/b1/k3
> /v1/b2
> /v1/b2/k1
> /v1/b2/k2
> /v1/b2/k3
> /v1/b3
> /v1/b4
> {code}
> keys are sorted in nature order so when we do list buckets under a volume e.g 
> /v1, we need to seek to /v1 point and start to iterate and filter keys, this 
> ends up with scanning all keys under volume /v1. The problem with this design 
> is we don't have an efficient approach to locate all buckets without scanning 
> the keys.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12496) Make QuorumJournalManager timeout properties configurable

2017-09-20 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12496:
--
Attachment: HDFS-12496.05.patch

Changed {{DFS_QJM_OP_TIMEOUT,DFS_QJM_OP_TIMEOUT_DEFAULT}} to 
{{DFS_QJM_OPERATIONS_TIMEOUT_DEFAULT}}

> Make QuorumJournalManager timeout properties configurable
> -
>
> Key: HDFS-12496
> URL: https://issues.apache.org/jira/browse/HDFS-12496
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12496.01.patch, HDFS-12496.02.patch, 
> HDFS-12496.03.patch, HDFS-12496.04.patch, HDFS-12496.05.patch
>
>
> Make QuorumJournalManager timeout properties configurable using a common key. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12496) Make QuorumJournalManager timeout properties configurable

2017-09-20 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173983#comment-16173983
 ] 

Ajay Kumar edited comment on HDFS-12496 at 9/20/17 11:12 PM:
-

Changed {{DFS_QJM_OP_TIMEOUT,DFS_QJM_OP_TIMEOUT_DEFAULT}} to 
{{DFS_QJM_OPERATIONS_TIMEOUT,DFS_QJM_OPERATIONS_TIMEOUT_DEFAULT}}


was (Author: ajayydv):
Changed {{DFS_QJM_OP_TIMEOUT,DFS_QJM_OP_TIMEOUT_DEFAULT}} to 
{{DFS_QJM_OPERATIONS_TIMEOUT_DEFAULT}}

> Make QuorumJournalManager timeout properties configurable
> -
>
> Key: HDFS-12496
> URL: https://issues.apache.org/jira/browse/HDFS-12496
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12496.01.patch, HDFS-12496.02.patch, 
> HDFS-12496.03.patch, HDFS-12496.04.patch, HDFS-12496.05.patch
>
>
> Make QuorumJournalManager timeout properties configurable using a common key. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12503) Ozone: some UX improvements to oz_debug

2017-09-20 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173978#comment-16173978
 ] 

Weiwei Yang commented on HDFS-12503:


With this patch, the user now gets a more helpful usage message from the tool:

{code}
bin/hdfs oz_debug --help
usage: hdfs oz_debug -p <dbPath> -o <outPath>
 -h,--help       display help message
 -o,--outPath    specify output DB file path
 -p,--dbPath     specify DB path
{code}

I also modified the description of {{outPath}} to make it more accurate, and 
added a check to make sure {{outPath}} is valid. Please kindly review, thanks.
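For reference, a minimal sketch of the kind of check described above; the method name and error message are assumptions for illustration, not the actual SQLCLI code:
{code}
import java.io.File;
import java.io.IOException;

public class OutPathCheck {
  // Fail fast with a clear message instead of the misleading
  // "unable to open the database file (out of memory)" error.
  static void validateOutPath(String outPath) throws IOException {
    File out = new File(outPath);
    if (out.isDirectory()) {
      throw new IOException(outPath
          + " is an existing directory; the output path must be a file");
    }
  }

  public static void main(String[] args) throws IOException {
    validateOutPath(args.length > 0 ? args[0] : "ksm.out.db");
    System.out.println("output path looks OK");
  }
}
{code}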

> Ozone: some UX improvements to oz_debug
> ---
>
> Key: HDFS-12503
> URL: https://issues.apache.org/jira/browse/HDFS-12503
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12503-HDFS-7240.001.patch, 
> HDFS-12503-HDFS-7240.002.patch
>
>
> I tried to use {{oz_debug}} to dump KSM DB for offline analysis, found a few 
> problems need to be fixed in order to make this tool easier to use. I know 
> this is a debug tool for admins, but it's still necessary to improve the UX 
> so new users (like me) can figure out how to use it without reading more docs.
> # Support *--help* argument. --help is the general arg for all hdfs scripts 
> to print usage.
> # When specify output path {{-o}}, we need to add a description to let user 
> know the path needs to be a file (instead of a dir). If the path is specified 
> as a dir, it will end up with a funny error {{unable to open the database 
> file (out of memory)}}, which is pretty misleading. And it will be helpful to 
> add a check to make sure the specified path is not an existing dir.
> # SQLCLI currently swallows exception
> # We should remove {{levelDB}} words from the command output as we are by 
> default using rocksDB



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12496) Make QuorumJournalManager timeout properties configurable

2017-09-20 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173976#comment-16173976
 ] 

Ajay Kumar commented on HDFS-12496:
---

[~arpitagarwal], thanks for the offline discussion about the unit test. Removing 
the unit test in patch v4 since it is hard to test reliably without some 
refactoring, and QuorumCall testing is already covered by other UTs.

> Make QuorumJournalManager timeout properties configurable
> -
>
> Key: HDFS-12496
> URL: https://issues.apache.org/jira/browse/HDFS-12496
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12496.01.patch, HDFS-12496.02.patch, 
> HDFS-12496.03.patch, HDFS-12496.04.patch
>
>
> Make QuorumJournalManager timeout properties configurable using a common key. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12503) Ozone: some UX improvements to oz_debug

2017-09-20 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12503:
---
Attachment: HDFS-12503-HDFS-7240.002.patch

> Ozone: some UX improvements to oz_debug
> ---
>
> Key: HDFS-12503
> URL: https://issues.apache.org/jira/browse/HDFS-12503
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12503-HDFS-7240.001.patch, 
> HDFS-12503-HDFS-7240.002.patch
>
>
> I tried to use {{oz_debug}} to dump KSM DB for offline analysis, found a few 
> problems need to be fixed in order to make this tool easier to use. I know 
> this is a debug tool for admins, but it's still necessary to improve the UX 
> so new users (like me) can figure out how to use it without reading more docs.
> # Support *--help* argument. --help is the general arg for all hdfs scripts 
> to print usage.
> # When specify output path {{-o}}, we need to add a description to let user 
> know the path needs to be a file (instead of a dir). If the path is specified 
> as a dir, it will end up with a funny error {{unable to open the database 
> file (out of memory)}}, which is pretty misleading. And it will be helpful to 
> add a check to make sure the specified path is not an existing dir.
> # SQLCLI currently swallows exception
> # We should remove {{levelDB}} words from the command output as we are by 
> default using rocksDB



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12425) Ozone: OzoneFileSystem: OzoneFileystem read/write/create/open/getFileInfo APIs

2017-09-20 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173971#comment-16173971
 ] 

Xiaoyu Yao commented on HDFS-12425:
---

Thanks [~msingh] for working on this. The patch looks good to me overall. Here 
are some comments.

1. OzoneFileSystem.java
LINE 151/168/269/321, NIT: change the log level from debug to trace; also 
suggest using the parameterized syntax to reduce logging overhead.
 
Line 183: the try/catch is not being used to catch any exception that we plan 
to handle; should we remove it?
 
Line 254: suggest using the parameterized syntax to reduce logging overhead.
 
Line 268-273: can you add documentation somewhere mentioning how ozfs 
differentiates a directory from a file by the trailing "/"?
Maybe define an OZONE_URI_SEPARATOR constant for "/", as I've seen "/" used in 
many places, if one does not already exist. The other choice is to use URI#resolve
to handle this without worrying about the "/".

OzoneInputStream.java
1. Line 70/81: suggest using URI to handle this. 

2. Line 53/92: can we use a stream (Bucket#readKey) instead of a local 
file (Bucket#getKey) here for better perf?
This will affect other API implementations based on RandomAccessFile.

OzoneOutputStream.java
Similar to the input stream, can we avoid having the stream backed by a local 
file that writes to the bucket on close()?

 
TestOzoneFileInterfaces.java
Line 113: inputStream needs to be closed to avoid leaking. Consider using 
try-with-resources to achieve that easily.
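To make the two recurring suggestions concrete, here is a small self-contained illustration of the parameterized logging syntax and try-with-resources. The logger, key name, and byte stream below are placeholders for the example and are not taken from the patch:
{code}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ReviewSuggestionsDemo {
  private static final Logger LOG =
      LoggerFactory.getLogger(ReviewSuggestionsDemo.class);

  public static void main(String[] args) throws IOException {
    String key = "volume/bucket/key1";
    // Parameterized syntax: the message is only built if the level is enabled,
    // so there is no string concatenation cost on hot paths.
    LOG.trace("Opening key {} for read", key);

    // try-with-resources: the stream is closed even if read() throws.
    try (InputStream in = new ByteArrayInputStream(new byte[] {42})) {
      LOG.info("First byte of {}: {}", key, in.read());
    }
  }
}
{code}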




> Ozone: OzoneFileSystem: OzoneFileystem read/write/create/open/getFileInfo APIs
> --
>
> Key: HDFS-12425
> URL: https://issues.apache.org/jira/browse/HDFS-12425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>  Labels: ozoneMerge
> Fix For: HDFS-7240
>
> Attachments: HDFS-12425-HDFS-7240.001.patch, 
> HDFS-12425-HDFS-7240.002.patch, HDFS-12425-HDFS-7240.003.patch
>
>
> This jira will add create/open and read/write APIs for OzoneFileSystem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12475) Ozone : add document for using Datanode http address

2017-09-20 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173970#comment-16173970
 ] 

Weiwei Yang commented on HDFS-12475:


Thanks [~vagarychen], that makes sense.

> Ozone : add document for using Datanode http address
> 
>
> Key: HDFS-12475
> URL: https://issues.apache.org/jira/browse/HDFS-12475
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chen Liang
>Assignee: Lokesh Jain
>  Labels: ozoneDoc
>
> Currently Ozone's REST API uses the port 9864, all commands mentioned in 
> OzoneCommandShell.md use the address localhost:9864.
> This port was used by Datanode http server, which is now shared by Ozone. 
> Changing this config means user should be using the value of this setting 
> rather than localhost:9864 as in doc. The value is controlled by the config 
> key {{dfs.datanode.http.address}}. We should document this information in 
> {{OzoneCommandShell.md}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12515) Ozone: mvn package compilation fails on HDFS-7240

2017-09-20 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12515:

Status: Open  (was: Patch Available)

> Ozone: mvn package compilation fails on HDFS-7240
> -
>
> Key: HDFS-12515
> URL: https://issues.apache.org/jira/browse/HDFS-12515
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Blocker
>  Labels: ozoneMerge
> Fix For: HDFS-7240
>
> Attachments: HDFS-12515-HDFS-7240.001.patch
>
>
> Creation of a package on ozone(HDFS-7240) fails
> {{mvn clean package -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true}}
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-mapreduce-examples: Compilation failure: Compilation 
> failure: 
> [ERROR] 
> /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java:[50,17]
>  package org.slf4j does not exist
> [ERROR] 
> /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java:[51,17]
>  package org.slf4j does not exist
> [ERROR] 
> /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java:[69,24]
>  cannot find symbol
> [ERROR]   symbol:   class Logger
> [ERROR]   location: class org.apache.hadoop.examples.terasort.TeraGen
> [ERROR] 
> /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/BaileyBorweinPlouffe.java:[53,17]
>  package org.slf4j does not exist
> [ERROR] 
> /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/BaileyBorweinPlouffe.java:[86,24]
>  cannot find symbol
> [ERROR]   symbol:   class Logger
> [ERROR]   location: class org.apache.hadoop.examples.BaileyBorweinPlouffe
> [ERROR] 
> /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/DBCountPageView.java:[51,17]
>  package org.slf4j does not exist
> [ERROR] 
> /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/DBCountPageView.java:[80,24]
>  cannot find symbol
> [ERROR]   symbol:   class Logger
> [ERROR]   location: class org.apache.hadoop.examples.DBCountPageView
> [ERROR] 
> /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/DistSum.java:[57,17]
>  package org.slf4j does not exist
> [ERROR] 
> /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/DistSum.java:[69,24]
>  cannot find symbol
> [ERROR]   symbol:   class Logger
> [ERROR]   location: class org.apache.hadoop.examples.pi.DistSum
> [ERROR] 
> /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/dancing/DancingLinks.java:[23,17]
>  package org.slf4j does not exist
> [ERROR] 
> /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/dancing/DancingLinks.java:[38,24]
>  cannot find symbol
> [ERROR]   symbol:   class Logger
> [ERROR]   location: class 
> org.apache.hadoop.examples.dancing.DancingLinks
> [ERROR] 
> /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraSort.java:[40,17]
>  package org.slf4j does not exist
> [ERROR] 
> /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraSort.java:[50,24]
>  cannot find symbol
> [ERROR]   symbol:   class Logger
> [ERROR]   location: class org.apache.hadoop.examples.terasort.TeraSort
> [ERROR] 
> /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraOutputFormat.java:[40,17]
>  package org.slf4j does not exist
> [ERROR] 
> 

[jira] [Commented] (HDFS-12513) Ozone: Create UI page to show Ozone configs by tags

2017-09-20 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173966#comment-16173966
 ] 

Weiwei Yang commented on HDFS-12513:


Hi Ajay

I assume you wanted to create this as a sub-task in the ozone branch, so I moved 
this to HDFS-7240. Please let me know if this is not the intention. Another 
thing: we already have a general conf servlet to display configs, so how do you 
plan to implement this? Will this be a separate new page?

> Ozone: Create UI page to show Ozone configs by tags
> ---
>
> Key: HDFS-12513
> URL: https://issues.apache.org/jira/browse/HDFS-12513
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
>
> Create UI page to show Ozone configs by tags



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12513) Ozone: Create UI page to show Ozone configs by tags

2017-09-20 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12513:
---
Summary: Ozone: Create UI page to show Ozone configs by tags  (was: Create 
UI page to show Ozone configs by tags)

> Ozone: Create UI page to show Ozone configs by tags
> ---
>
> Key: HDFS-12513
> URL: https://issues.apache.org/jira/browse/HDFS-12513
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
>
> Create UI page to show Ozone configs by tags



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12513) Create UI page to show Ozone configs by tags

2017-09-20 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12513:
---
Issue Type: Sub-task  (was: New Feature)
Parent: HDFS-7240

> Create UI page to show Ozone configs by tags
> 
>
> Key: HDFS-12513
> URL: https://issues.apache.org/jira/browse/HDFS-12513
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
>
> Create UI page to show Ozone configs by tags



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12507) javadoc: error - class file for org.apache.http.annotation.ThreadSafe not found

2017-09-20 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173963#comment-16173963
 ] 

Ray Chiang commented on HDFS-12507:
---

Just a quick FYI, I was running on trunk with git commit 
a12f09ba3c4a3aa4c4558090c5e1b7bcaebe3b94 at HEAD.

> javadoc: error - class file for org.apache.http.annotation.ThreadSafe not 
> found
> ---
>
> Key: HDFS-12507
> URL: https://issues.apache.org/jira/browse/HDFS-12507
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Mukul Kumar Singh
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:2.10.4:jar (module-javadocs) on 
> project hadoop-hdfs-client: MavenReportException: Error while generating 
> Javadoc: 
> [ERROR] Exit code: 1 - 
> /Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java:694:
>  warning - Tag @link: reference not found: StripingCell
> [ERROR] javadoc: error - class file for org.apache.http.annotation.ThreadSafe 
> not found
> [ERROR] 
> [ERROR] Command line was: 
> /Library/Java/JavaVirtualMachines/jdk1.8.0_144.jdk/Contents/Home/jre/../bin/javadoc
>  -J-Xmx768m @options @packages
> [ERROR] 
> [ERROR] Refer to the generated Javadoc files in 
> '/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs-client/target/api' 
> dir.
> {code}
> To reproduce the error above, run
> {code}
> mvn package -Pdist -DskipTests -DskipDocs -Dtar
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12516) Suppress the fsnamesystem lock warning on nn startup

2017-09-20 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-12516:
-
Description: 
If FsNameSystemLock is held for more than 1 second then we log a stack trace. We 
can suppress this fsnamesystem lock warning on NameNode startup.
{code}
17/09/20 21:41:39 INFO namenode.FSNamesystem: FSNamesystem write lock held for 
7159 ms via
java.lang.Thread.getStackTrace(Thread.java:1552)
org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1659)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1074)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:703)
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:688)
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:752)
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:992)
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:976)
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1701)
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769)
Number of suppressed write-lock reports: 0
Longest write-lock held interval: 7159
{code}


  was:
If FsNameSystemLock is held for more than 10 seconds then we log stacktrace . 
We can suppress this fsnamesystem lock warning on NameNode startup.
{code}
17/09/20 21:41:39 INFO namenode.FSNamesystem: FSNamesystem write lock held for 
7159 ms via
java.lang.Thread.getStackTrace(Thread.java:1552)
org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1659)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1074)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:703)
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:688)
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:752)
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:992)
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:976)
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1701)
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769)
Number of suppressed write-lock reports: 0
Longest write-lock held interval: 7159
{code}



> Suppress the fsnamesystem lock warning on nn startup
> 
>
> Key: HDFS-12516
> URL: https://issues.apache.org/jira/browse/HDFS-12516
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>
> If FsNameSystemLock is held for more than 1 second then we log stacktrace . 
> We can suppress this fsnamesystem lock warning on NameNode startup.
> {code}
> 17/09/20 21:41:39 INFO namenode.FSNamesystem: FSNamesystem write lock held 
> for 7159 ms via
> java.lang.Thread.getStackTrace(Thread.java:1552)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1659)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1074)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:703)
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:688)
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:752)
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:992)
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:976)
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1701)
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769)
> Number of suppressed write-lock reports: 0
> Longest write-lock held interval: 7159
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12517) Ozone: mvn package is failing with out skipshade

2017-09-20 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12517:

Affects Version/s: HDFS-7240

> Ozone: mvn package is failing with out skipshade
> 
>
> Key: HDFS-12517
> URL: https://issues.apache.org/jira/browse/HDFS-12517
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>  Labels: ozoneMerge
> Attachments: HDFS-12517-HDFS-7240.01.patch
>
>
> ozone branch build is failing with out skipshade option



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12517) Ozone: mvn package is failing with out skipshade

2017-09-20 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12517:

Labels: ozoneMerge  (was: )

> Ozone: mvn package is failing with out skipshade
> 
>
> Key: HDFS-12517
> URL: https://issues.apache.org/jira/browse/HDFS-12517
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>  Labels: ozoneMerge
> Attachments: HDFS-12517-HDFS-7240.01.patch
>
>
> ozone branch build is failing with out skipshade option



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12517) Ozone: mvn package is failing with out skipshade

2017-09-20 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12517:

Component/s: ozone

> Ozone: mvn package is failing with out skipshade
> 
>
> Key: HDFS-12517
> URL: https://issues.apache.org/jira/browse/HDFS-12517
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>  Labels: ozoneMerge
> Attachments: HDFS-12517-HDFS-7240.01.patch
>
>
> ozone branch build is failing with out skipshade option



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12507) javadoc: error - class file for org.apache.http.annotation.ThreadSafe not found

2017-09-20 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173957#comment-16173957
 ] 

Ray Chiang commented on HDFS-12507:
---

So, I've tried this command on a couple of trees, plus some variations on the 
command, but I just keep getting "BUILD SUCCESS". Is anyone else running into 
this?

> javadoc: error - class file for org.apache.http.annotation.ThreadSafe not 
> found
> ---
>
> Key: HDFS-12507
> URL: https://issues.apache.org/jira/browse/HDFS-12507
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Mukul Kumar Singh
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:2.10.4:jar (module-javadocs) on 
> project hadoop-hdfs-client: MavenReportException: Error while generating 
> Javadoc: 
> [ERROR] Exit code: 1 - 
> /Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java:694:
>  warning - Tag @link: reference not found: StripingCell
> [ERROR] javadoc: error - class file for org.apache.http.annotation.ThreadSafe 
> not found
> [ERROR] 
> [ERROR] Command line was: 
> /Library/Java/JavaVirtualMachines/jdk1.8.0_144.jdk/Contents/Home/jre/../bin/javadoc
>  -J-Xmx768m @options @packages
> [ERROR] 
> [ERROR] Refer to the generated Javadoc files in 
> '/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs-client/target/api' 
> dir.
> {code}
> To reproduce the error above, run
> {code}
> mvn package -Pdist -DskipTests -DskipDocs -Dtar
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12517) Ozone: mvn package is failing with out skipshade

2017-09-20 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173956#comment-16173956
 ] 

Anu Engineer commented on HDFS-12517:
-

[~bharatviswa] Thanks for fixing this issue. +1, pending Jenkins.

> Ozone: mvn package is failing with out skipshade
> 
>
> Key: HDFS-12517
> URL: https://issues.apache.org/jira/browse/HDFS-12517
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12517-HDFS-7240.01.patch
>
>
> ozone branch build is failing with out skipshade option



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12517) Ozone: mvn package is failing with out skipshade

2017-09-20 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12517:

Summary: Ozone: mvn package is failing with out skipshade  (was: mvn 
package is failing with out skipshade)

> Ozone: mvn package is failing with out skipshade
> 
>
> Key: HDFS-12517
> URL: https://issues.apache.org/jira/browse/HDFS-12517
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12517-HDFS-7240.01.patch
>
>
> ozone branch build is failing with out skipshade option



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.

2017-09-20 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12386:
--
Attachment: HDFS-12386-3.patch

{{TestWebHDFS}} was a valid test failure.
It was the result of a dependency on a static variable that the previous test 
left behind.
Added a test method to reset the value.

{{TestNameNodeMetadataConsistency}} was just a flaky/slow test.
It passed the few times I ran it.

The previous patch also introduced one new findbugs warning, but I feel it is 
required.
I would be open to hearing if we can somehow fix it.

Attaching a new patch.
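As an aside, a minimal, hypothetical illustration of the "reset static state between tests" idea; the class and field names are invented and are not from TestWebHDFS:
{code}
import org.junit.After;
import org.junit.Assert;
import org.junit.Test;

public class StaticResetExample {
  static class SharedCache {
    static String cachedValue;   // static state shared across tests
  }

  @After
  public void resetStatics() {
    // Without this reset, a value left behind by one test would
    // leak into the next test in the same JVM.
    SharedCache.cachedValue = null;
  }

  @Test
  public void testStartsClean() {
    Assert.assertNull(SharedCache.cachedValue);
    SharedCache.cachedValue = "set-by-this-test";
  }
}
{code}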

> Add fsserver defaults call to WebhdfsFileSystem.
> 
>
> Key: HDFS-12386
> URL: https://issues.apache.org/jira/browse/HDFS-12386
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Minor
> Attachments: HDFS-12386-1.patch, HDFS-12386-2.patch, 
> HDFS-12386-3.patch, HDFS-12386.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.

2017-09-20 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12386:
--
Status: Patch Available  (was: Open)

> Add fsserver defaults call to WebhdfsFileSystem.
> 
>
> Key: HDFS-12386
> URL: https://issues.apache.org/jira/browse/HDFS-12386
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Minor
> Attachments: HDFS-12386-1.patch, HDFS-12386-2.patch, 
> HDFS-12386-3.patch, HDFS-12386.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11858) JN httpServerUri should be set to hostname when using Default Http Address

2017-09-20 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-11858:
--
Attachment: HDFS-11858.001.patch

The patch does the following:
Whenever the JN http address is not explicitly set and the default address 
(0.0.0.0:8480) is used, the JN determines the hostname via the DNS class. This 
hostname is set as the JN httpServerUri.
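A rough, illustrative sketch of the idea only, not the actual patch: the real change uses Hadoop's DNS utility class as described above, while plain InetAddress is used here to keep the example self-contained, and the method name is invented.
{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class JournalNodeHttpUriSketch {
  static String resolveAdvertisedHost(String configuredHost)
      throws UnknownHostException {
    if ("0.0.0.0".equals(configuredHost)) {
      // Wildcard bind address: advertise the machine's hostname instead,
      // so other nodes receive a routable address in the response protos.
      return InetAddress.getLocalHost().getHostName();
    }
    return configuredHost;
  }

  public static void main(String[] args) throws UnknownHostException {
    System.out.println("http://" + resolveAdvertisedHost("0.0.0.0") + ":8480");
  }
}
{code}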

> JN httpServerUri should be set to hostname when using Default Http Address
> --
>
> Key: HDFS-11858
> URL: https://issues.apache.org/jira/browse/HDFS-11858
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-11858.001.patch
>
>
> Currently, when JN uses the default http address (0.0.0.0:8480), it sets the 
> httpServerURI to 0.0.0.0:8480 as well. This value is passed as fromUrl 
> address in GetEditLogManifestResponseProto and GetJournalStateResponseProto.
> When using the default http address, we should change the JN's httpServerURI 
> to use the actual hostname.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12517) mvn package is failing with out skipshade

2017-09-20 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12517:
--
Attachment: HDFS-12517-HDFS-7240.01.patch

> mvn package is failing with out skipshade
> -
>
> Key: HDFS-12517
> URL: https://issues.apache.org/jira/browse/HDFS-12517
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12517-HDFS-7240.01.patch
>
>
> ozone branch build is failing with out skipshade option



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12517) mvn package is failing with out skipshade

2017-09-20 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12517:
--
Status: Patch Available  (was: In Progress)

> mvn package is failing with out skipshade
> -
>
> Key: HDFS-12517
> URL: https://issues.apache.org/jira/browse/HDFS-12517
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12517-HDFS-7240.01.patch
>
>
> ozone branch build is failing with out skipshade option



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.

2017-09-20 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12386:
--
Status: Open  (was: Patch Available)

Cancelling patch for addressing test failures.

> Add fsserver defaults call to WebhdfsFileSystem.
> 
>
> Key: HDFS-12386
> URL: https://issues.apache.org/jira/browse/HDFS-12386
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Minor
> Attachments: HDFS-12386-1.patch, HDFS-12386-2.patch, HDFS-12386.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-12517) mvn package is failing with out skipshade

2017-09-20 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-12517 started by Bharat Viswanadham.
-
> mvn package is failing with out skipshade
> -
>
> Key: HDFS-12517
> URL: https://issues.apache.org/jira/browse/HDFS-12517
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> ozone branch build is failing with out skipshade option



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12517) mvn package is failing with out skipshade

2017-09-20 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDFS-12517:
-

 Summary: mvn package is failing with out skipshade
 Key: HDFS-12517
 URL: https://issues.apache.org/jira/browse/HDFS-12517
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


ozone branch build is failing with out skipshade option





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12291) [SPS]: Provide a mechanism to recursively iterate and satisfy storage policy of all the files under the given dir

2017-09-20 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173944#comment-16173944
 ] 

Uma Maheswara Rao G commented on HDFS-12291:


Thank you [~surendrasingh] for the nice work. A few comments below:

# .
I think currently both traversers are throttled in their own way. So, to 
unify the convention, how about having specific names for the extended classes?
ThrottledFSTreeTraverser —> ReencryptionPendingInodeIdCollector
FileInodeIdCollector —> StorageMovementPendingInodeIdCollector ?
# .
Could you define it with a size? I think the max would be the queue limit size.
{code}
+private List currentBatch = new ArrayList<>();
{code}
# .
A few other test cases are needed: 
1. Delete the directory while the current dir is in progress. Probably an 
artificial pause in the throttle will help to test this case?
2. Empty directory - should clean the Xattr.
3. A directory with a directory-only sub tree - i.e. no files under this sub 
tree - it should collect nothing and remove the Xattrs at the end.
4. A call on a non-existent directory - should not happen, as it will fail to 
add the Xattr initially.
# .
This needs a documentation update.

[~xiaochen] do you want to take a stab at this latest patch? Thanks for your 
help.
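
To make the suggestion in comment 2 concrete, here is a minimal, hypothetical 
sketch (the element type and the queue-limit constant are assumptions, not the 
actual patch code):
{code}
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch only: size the batch list to the queue limit up front so
// the backing array is not repeatedly re-grown while collecting inode ids.
class BatchSizingSketch {
  // Assumed queue limit; the real value would come from configuration.
  private static final int MAX_QUEUE_LIMIT = 1000;

  // The element type (inode ids as Long) is an assumption; the generic type in
  // the quoted snippet was lost in the mail rendering.
  private final List<Long> currentBatch = new ArrayList<>(MAX_QUEUE_LIMIT);
}
{code}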

> [SPS]: Provide a mechanism to recursively iterate and satisfy storage policy 
> of all the files under the given dir
> -
>
> Key: HDFS-12291
> URL: https://issues.apache.org/jira/browse/HDFS-12291
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Rakesh R
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-12291-HDFS-10285-01.patch, 
> HDFS-12291-HDFS-10285-02.patch, HDFS-12291-HDFS-10285-03.patch, 
> HDFS-12291-HDFS-10285-04.patch
>
>
> For the given source path directory, SPS presently considers only the files 
> immediately under the directory (only one level of scanning) for satisfying 
> the policy. It does NOT do recursive directory scanning and schedule SPS 
> tasks to satisfy the storage policy of all the files down to the leaf nodes. 
> The idea of this jira is to discuss and implement an efficient recursive 
> directory iteration mechanism that satisfies the storage policy for all the 
> files under the given directory.
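
As a rough illustration of the kind of recursive, batched iteration being 
proposed (a generic sketch, not the SPS patch itself; the batch limit and the 
{{submit}} helper are hypothetical):
{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

// Hypothetical sketch: walk the whole subtree, collect regular files, and hand
// them off in bounded batches so the pending queue never grows without limit.
public class RecursiveBatchTraversalSketch {
  private static final int BATCH_LIMIT = 1000; // assumed batch/queue limit

  public static void main(String[] args) throws IOException {
    Path root = Paths.get(args.length > 0 ? args[0] : ".");
    List<Path> batch = new ArrayList<>(BATCH_LIMIT);
    try (Stream<Path> paths = Files.walk(root)) {
      paths.filter(Files::isRegularFile).forEach(file -> {
        batch.add(file);
        if (batch.size() >= BATCH_LIMIT) {
          submit(batch);   // hand the full batch to the (hypothetical) scheduler
          batch.clear();
        }
      });
    }
    if (!batch.isEmpty()) {
      submit(batch);       // flush the final partial batch
    }
  }

  private static void submit(List<Path> batch) {
    System.out.println("submitting " + batch.size() + " files");
  }
}
{code}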



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12495) TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks fails intermittently

2017-09-20 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HDFS-12495:
---
Status: Open  (was: Patch Available)

> TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks fails intermittently
> --
>
> Key: HDFS-12495
> URL: https://issues.apache.org/jira/browse/HDFS-12495
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-beta1, 2.8.2
>Reporter: Eric Badger
>Assignee: Eric Badger
>  Labels: flaky-test
> Attachments: HDFS-12495.001.patch
>
>
> {noformat}
> java.net.BindException: Problem binding to [localhost:36701] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:546)
>   at org.apache.hadoop.ipc.Server$Listener.(Server.java:955)
>   at org.apache.hadoop.ipc.Server.(Server.java:2655)
>   at org.apache.hadoop.ipc.RPC$Server.(RPC.java:968)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:367)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:810)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:954)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1314)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:481)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2611)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2499)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2546)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2152)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks(TestPendingInvalidateBlock.java:175)
> {noformat}
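
For context on the bind failure above, a tiny stand-alone sketch of the 
ephemeral-port approach that avoids this class of race (illustrative only; it 
is not the MiniDFSCluster fix itself):
{code}
import java.io.IOException;
import java.net.ServerSocket;

// Illustrative only: binding to port 0 lets the OS pick a free ephemeral port,
// which avoids "Address already in use" when a previously used fixed port is
// still held (e.g. in TIME_WAIT) across a restart.
public class EphemeralPortSketch {
  public static void main(String[] args) throws IOException {
    try (ServerSocket socket = new ServerSocket(0)) { // 0 = any free port
      System.out.println("bound to ephemeral port " + socket.getLocalPort());
    }
  }
}
{code}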



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12495) TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks fails intermittently

2017-09-20 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HDFS-12495:
---
Status: Patch Available  (was: Open)

Not sure why Jenkins isn't running. Cancelling and resubmitting the patch.

> TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks fails intermittently
> --
>
> Key: HDFS-12495
> URL: https://issues.apache.org/jira/browse/HDFS-12495
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-beta1, 2.8.2
>Reporter: Eric Badger
>Assignee: Eric Badger
>  Labels: flaky-test
> Attachments: HDFS-12495.001.patch
>
>
> {noformat}
> java.net.BindException: Problem binding to [localhost:36701] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:546)
>   at org.apache.hadoop.ipc.Server$Listener.(Server.java:955)
>   at org.apache.hadoop.ipc.Server.(Server.java:2655)
>   at org.apache.hadoop.ipc.RPC$Server.(RPC.java:968)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:367)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:810)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:954)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1314)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:481)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2611)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2499)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2546)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2152)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks(TestPendingInvalidateBlock.java:175)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12458) TestReencryptionWithKMS fails regularly

2017-09-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-12458:
-
Attachment: HDFS-12458.01.patch

The direct cause of this failure is that the updater was not paused, making it 
possible for a task to slip in and get processed. This is fixed by pausing the 
updater before sending a re-encrypt command.

I was also looking at the re-encryption tests in general, and patch 1 includes 
some improvements (since they're still related to the same test class, IMO we 
can do them in this jira):
- wait for the mini cluster to come out of safemode, in addition to {{waitActive}}
- remove the change to the re-encryption sleep interval, and notify directly
- {{TestReencryptionHandler}} is also failing at times in our internal infra. 
It turns out the handler can finish within a millisecond, so the elapsed-time 
check should use greater-or-equal instead of strictly greater than (see the 
sketch below).
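
A minimal, hypothetical illustration of the greater-or-equal point (not the 
actual test code):
{code}
import static org.junit.Assert.assertTrue;

// Hypothetical illustration: if the handler finishes within the same
// millisecond, a strictly-greater-than check on the elapsed time fails
// spuriously, so the assertion should allow equality.
public class ElapsedTimeCheckSketch {
  public static void main(String[] args) {
    long start = System.currentTimeMillis();
    // ... work that may complete in under a millisecond ...
    long elapsed = System.currentTimeMillis() - start;
    assertTrue("elapsed time must be non-negative", elapsed >= 0); // not elapsed > 0
  }
}
{code}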

> TestReencryptionWithKMS fails regularly
> ---
>
> Key: HDFS-12458
> URL: https://issues.apache.org/jira/browse/HDFS-12458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, test
>Affects Versions: 3.0.0-beta1
>Reporter: Konstantin Shvachko
>Assignee: Xiao Chen
> Attachments: HDFS-12458.01.patch
>
>
> {{TestReencryptionWithKMS}} fails pretty often on Jenkins. Should fix it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12458) TestReencryptionWithKMS fails regularly

2017-09-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-12458:
-
Status: Patch Available  (was: Open)

> TestReencryptionWithKMS fails regularly
> ---
>
> Key: HDFS-12458
> URL: https://issues.apache.org/jira/browse/HDFS-12458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, test
>Affects Versions: 3.0.0-beta1
>Reporter: Konstantin Shvachko
>Assignee: Xiao Chen
> Attachments: HDFS-12458.01.patch
>
>
> {{TestReencryptionWithKMS}} fails pretty often on Jenkins. Should fix it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12496) Make QuorumJournalManager timeout properties configurable

2017-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173937#comment-16173937
 ] 

Hadoop QA commented on HDFS-12496:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 440 unchanged - 0 fixed = 441 total (was 440) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}126m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.client.TestQuorumJournalManager |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12496 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888119/HDFS-12496.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux dd6167be967e 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a12f09b |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21250/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21250/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21250/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21250/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   

[jira] [Moved] (HDFS-12516) Suppress the fsnamesystem lock warning on nn startup

2017-09-20 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar moved HADOOP-14889 to HDFS-12516:


Key: HDFS-12516  (was: HADOOP-14889)
Project: Hadoop HDFS  (was: Hadoop Common)

> Suppress the fsnamesystem lock warning on nn startup
> 
>
> Key: HDFS-12516
> URL: https://issues.apache.org/jira/browse/HDFS-12516
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>
> If the FSNamesystem lock is held for more than 10 seconds, we log a stack 
> trace. We can suppress this FSNamesystem lock warning during NameNode startup.
> {code}
> 17/09/20 21:41:39 INFO namenode.FSNamesystem: FSNamesystem write lock held 
> for 7159 ms via
> java.lang.Thread.getStackTrace(Thread.java:1552)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1659)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1074)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:703)
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:688)
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:752)
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:992)
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:976)
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1701)
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769)
> Number of suppressed write-lock reports: 0
> Longest write-lock held interval: 7159
> {code}
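
One possible shape of the suppression, sketched purely as an illustration (the 
flag, threshold value, and method names are assumptions, not the actual 
FSNamesystem code):
{code}
// Hypothetical sketch: skip the long-held-write-lock warning while the
// NameNode is still starting up (e.g. loading the fsimage), and report only
// after startup has completed.
class WriteLockReportSketch {
  private volatile boolean startupInProgress = true;
  private static final long THRESHOLD_MS = 10_000L; // assumed reporting threshold

  void maybeWarnOnUnlock(long heldForMs) {
    if (heldForMs > THRESHOLD_MS && !startupInProgress) {
      System.err.println("FSNamesystem write lock held for " + heldForMs + " ms");
    }
  }

  void markStartupComplete() {
    startupInProgress = false;
  }
}
{code}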



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12458) TestReencryptionWithKMS fails regularly

2017-09-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-12458:
-
Component/s: (was: kms)
 encryption

> TestReencryptionWithKMS fails regularly
> ---
>
> Key: HDFS-12458
> URL: https://issues.apache.org/jira/browse/HDFS-12458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, test
>Affects Versions: 3.0.0-beta1
>Reporter: Konstantin Shvachko
>Assignee: Xiao Chen
>
> {{TestReencryptionWithKMS}} fails pretty often on Jenkins. Should fix it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12458) TestReencryptionWithKMS fails regularly

2017-09-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-12458:
-
Affects Version/s: (was: 3.0.0)
   3.0.0-beta1

> TestReencryptionWithKMS fails regularly
> ---
>
> Key: HDFS-12458
> URL: https://issues.apache.org/jira/browse/HDFS-12458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, test
>Affects Versions: 3.0.0-beta1
>Reporter: Konstantin Shvachko
>Assignee: Xiao Chen
>
> {{TestReencryptionWithKMS}} fails pretty often on Jenkins. Should fix it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12458) TestReencryptionWithKMS fails regularly

2017-09-20 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173915#comment-16173915
 ] 

Xiao Chen commented on HDFS-12458:
--

Thank you [~shv] for reporting.
I was out and missed this jira; I will work on it soon.

> TestReencryptionWithKMS fails regularly
> ---
>
> Key: HDFS-12458
> URL: https://issues.apache.org/jira/browse/HDFS-12458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: kms, test
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Xiao Chen
>
> {{TestReencryptionWithKMS}} fails pretty often on Jenkins. Should fix it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12458) TestReencryptionWithKMS fails regularly

2017-09-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen reassigned HDFS-12458:


Assignee: Xiao Chen

> TestReencryptionWithKMS fails regularly
> ---
>
> Key: HDFS-12458
> URL: https://issues.apache.org/jira/browse/HDFS-12458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: kms, test
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Xiao Chen
>
> {{TestReencryptionWithKMS}} fails pretty often on Jenkins. Should fix it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


