[jira] [Commented] (HDFS-11062) Ozone:SCM: Explore if we can remove nullcommand

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944555#comment-15944555
 ] 

Hadoop QA commented on HDFS-11062:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
33s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 66m  
2s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11062 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860796/HDFS-11062-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux ed8623c0de3d 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 8f4d8c4 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18866/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18866/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone:SCM: Explore if we can remove nullcommand
> ---
>
> Key: HDFS-11062
> URL: https://issues.apache.org/jira/browse/HDFS-11062
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Yuanbo Liu
> Fix For: HDFS-7240
>
> Attachments: 

[jira] [Commented] (HDFS-11577) Combine the old and the new chooseRandom for better performance

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944527#comment-15944527
 ] 

Hadoop QA commented on HDFS-11577:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 57s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestRPCWaitForProxy |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11577 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860785/HDFS-11577.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2c465bc9962a 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9bae672 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18863/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18863/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18863/testReport/ |
| modules | 

[jira] [Commented] (HDFS-10629) Federation Router

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944510#comment-15944510
 ] 

Hadoop QA commented on HDFS-10629:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
9s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
36s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 403 unchanged - 0 fixed = 405 total (was 403) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
14s{color} | {color:green} The patch generated 0 new + 98 unchanged - 1 fixed = 
98 total (was 99) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
21s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  
org.apache.hadoop.hdfs.server.federation.router.Router.initAndStartRouter(Configuration,
 boolean) invokes System.exit(...), which shuts down the entire virtual machine 
 At Router.java:shuts down the entire virtual machine  At Router.java:[line 
130] |
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.server.datanode.checker.TestThrottledAsyncCheckerTimeout |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10629 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860788/HDFS-10629-HDFS-10467-012.patch
 |
| Optional Tests 

[jira] [Commented] (HDFS-10881) Federation State Store Driver API

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944503#comment-15944503
 ] 

Hadoop QA commented on HDFS-10881:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
59s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 64m 
34s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10881 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860794/HDFS-10881-HDFS-10467-011.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 651ba52a4514 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10467 / 6c399a8 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18865/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18865/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18865/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Federation State Store Driver API
> -
>
> Key: HDFS-10881
> URL: https://issues.apache.org/jira/browse/HDFS-10881
> Project: Hadoop HDFS
>  Issue 

[jira] [Commented] (HDFS-11486) Client close() should not fail fast if the last block is being decommissioned

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944502#comment-15944502
 ] 

Hadoop QA commented on HDFS-11486:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
11s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 46m 
46s{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_121. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_121 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5af2af1 |
| JIRA Issue | HDFS-11486 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860783/HDFS-11486-branch-2.8.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1f2c980adf74 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2.8 / b218676 |
| 

[jira] [Updated] (HDFS-11062) Ozone:SCM: Explore if we can remove nullcommand

2017-03-27 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-11062:
--
Status: Patch Available  (was: Open)

> Ozone:SCM: Explore if we can remove nullcommand
> ---
>
> Key: HDFS-11062
> URL: https://issues.apache.org/jira/browse/HDFS-11062
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Yuanbo Liu
> Fix For: HDFS-7240
>
> Attachments: HDFS-11062-HDFS-7240.001.patch
>
>
> In the SCM protocol we have a nullCommand that gets returned as the default 
> case. Explore if we can remove this.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11062) Ozone:SCM: Explore if we can remove nullcommand

2017-03-27 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-11062:
--
Attachment: HDFS-11062-HDFS-7240.001.patch

Uploading the v1 patch.

> Ozone:SCM: Explore if we can remove nullcommand
> ---
>
> Key: HDFS-11062
> URL: https://issues.apache.org/jira/browse/HDFS-11062
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Yuanbo Liu
> Fix For: HDFS-7240
>
> Attachments: HDFS-11062-HDFS-7240.001.patch
>
>
> In the SCM protocol we have a nullCommand that gets returned as the default 
> case. Explore if we can remove this.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11567) Ozone: Support update container

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944450#comment-15944450
 ] 

Hadoop QA commented on HDFS-11567:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
56s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.cblock.TestCBlockServer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11567 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860777/HDFS-11567-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1484cd4e7b80 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 8f4d8c4 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18860/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18860/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18860/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Support update container
> ---
>
> Key: HDFS-11567
> URL: https://issues.apache.org/jira/browse/HDFS-11567
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> 

[jira] [Commented] (HDFS-11551) Handle SlowDiskReport from DataNode at the NameNode

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1590#comment-1590
 ] 

Hadoop QA commented on HDFS-11551:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project: The patch generated 12 new 
+ 44 unchanged - 0 fixed = 56 total (was 44) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
8s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Suspicious comparison of Double references in 
org.apache.hadoop.hdfs.server.blockmanagement.SlowDiskTracker$DiskLatency.areLatenciesEqual(Map)
  At SlowDiskTracker.java:in 
org.apache.hadoop.hdfs.server.blockmanagement.SlowDiskTracker$DiskLatency.areLatenciesEqual(Map)
  At SlowDiskTracker.java:[line 213] |
| Failed junit tests | 
hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestSafeModeWithStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11551 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860774/HDFS-11551.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1f4c0626d9b7 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HDFS-10881) Federation State Store Driver API

2017-03-27 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-10881:
---
Attachment: HDFS-10881-HDFS-10467-011.patch

Removing the {{Federation}} prefix from the class names.

> Federation State Store Driver API
> -
>
> Key: HDFS-10881
> URL: https://issues.apache.org/jira/browse/HDFS-10881
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Jason Kace
>Assignee: Jason Kace
> Attachments: HDFS-10881-HDFS-10467-001.patch, 
> HDFS-10881-HDFS-10467-002.patch, HDFS-10881-HDFS-10467-003.patch, 
> HDFS-10881-HDFS-10467-004.patch, HDFS-10881-HDFS-10467-005.patch, 
> HDFS-10881-HDFS-10467-006.patch, HDFS-10881-HDFS-10467-007.patch, 
> HDFS-10881-HDFS-10467-008.patch, HDFS-10881-HDFS-10467-009.patch, 
> HDFS-10881-HDFS-10467-010.patch, HDFS-10881-HDFS-10467-011.patch
>
>
> The API interfaces and minimal classes required to support a state store data 
> backend such as ZooKeeper or a file system.
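
The driver abstraction described above might look roughly like the following. 
This is a hedged sketch only: every method name here is an assumption for 
illustration, not the API from the attached patch.
{code}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.conf.Configuration;

// Hedged sketch of a pluggable state store driver; all names are
// assumptions, not the patch's actual API.
public interface StateStoreDriver {
  /** Connect to the backend (a ZooKeeper ensemble, a directory, ...). */
  boolean init(Configuration conf);

  /** Read all records of one type from the backend. */
  <T> List<T> get(Class<T> recordClass) throws IOException;

  /** Insert or update a record; returns true on success. */
  <T> boolean put(T record, boolean allowUpdate) throws IOException;

  /** Release the connection to the backend. */
  void close() throws IOException;
}
{code}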



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10971) Distcp should not copy replication factor if source file is erasure coded

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944423#comment-15944423
 ] 

Hadoop QA commented on HDFS-10971:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-tools/hadoop-distcp: The patch generated 0 
new + 211 unchanged - 1 fixed = 211 total (was 212) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
49s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10971 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860780/HDFS-10971.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7f13ced21481 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9bae672 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18861/testReport/ |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18861/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Distcp should not copy replication factor if source file is erasure coded
> -
>
> Key: HDFS-10971
> URL: https://issues.apache.org/jira/browse/HDFS-10971
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: distcp
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: 

[jira] [Updated] (HDFS-10629) Federation Router

2017-03-27 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-10629:
---
Attachment: HDFS-10629-HDFS-10467-012.patch

Renamed {{FederationStateStoreService}} to {{StateStoreService}}.

Tackled most of [~chris.douglas]'s comments:
* Fixed {{FederationNamenodeServiceState}}
* {{RouterConfigBuilder}} is very basic right now, but we will add new config 
settings once we add services to the {{Router}}
* The {{restartRpcServer()}} functionality was only for internal debugging, so 
I removed it
* Clarified {{RemoteLocationContext#getDest()}}
* {{ReflectionUtils#newInstance}} doesn't support parameter passing, and this 
is a pretty common pattern for our use case. We already use {{getClass()}}, 
and I'm not sure adding a simple constructor-creation helper adds much
* Made the returned list unmodifiable in {{PathLocation}} (see the sketch 
below)
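
A minimal sketch of the unmodifiable-list pattern referenced above; the field 
and accessor names are illustrative assumptions, not the actual 
{{PathLocation}} code:
{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class PathLocation {
  // Hypothetical field; stands in for the real destination list.
  private final List<String> destinations = new ArrayList<>();

  // Callers get a read-only view: any mutation attempt on the returned
  // list throws UnsupportedOperationException, so the internal state
  // cannot be changed from outside.
  public List<String> getDestinations() {
    return Collections.unmodifiableList(destinations);
  }
}
{code}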


> Federation Router
> -
>
> Key: HDFS-10629
> URL: https://issues.apache.org/jira/browse/HDFS-10629
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10629.000.patch, HDFS-10629.001.patch, 
> HDFS-10629-HDFS-10467-002.patch, HDFS-10629-HDFS-10467-003.patch, 
> HDFS-10629-HDFS-10467-004.patch, HDFS-10629-HDFS-10467-005.patch, 
> HDFS-10629-HDFS-10467-006.patch, HDFS-10629-HDFS-10467-007.patch, 
> HDFS-10629-HDFS-10467-008.patch, HDFS-10629-HDFS-10467-009.patch, 
> HDFS-10629-HDFS-10467-010.patch, HDFS-10629-HDFS-10467-011.patch, 
> HDFS-10629-HDFS-10467-012.patch, routerlatency.png
>
>
> Component that routes calls from the clients to the right Namespace.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11577) Combine the old and the new chooseRandom for better performance

2017-03-27 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11577:
-
Attachment: HDFS-11577.003.patch

Thanks [~vagarychen] for updating the patch.
{code}
+  if (n == null) {
+    LOG.debug("No node to choose?");
+    // this means there is simply no node to choose from
+    return null;
+  }
{code}
{{LOG.debug("No node to choose?");}} is not correct as written; the call 
should be guarded with {{LOG.isDebugEnabled()}} when the message has no 
parameters.
I attached a new patch to fix this, and I will commit the latest patch at the 
end of the day.
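
For reference, a minimal sketch of the guarded form being requested here; it 
mirrors the quoted hunk, with the surrounding method left out:
{code}
// Guarding the call skips the logging work entirely when debug is off,
// which is the convention for non-parameterized debug messages.
if (LOG.isDebugEnabled()) {
  LOG.debug("No node to choose?");
}
// this means there is simply no node to choose from
return null;
{code}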

> Combine the old and the new chooseRandom for better performance
> ---
>
> Key: HDFS-11577
> URL: https://issues.apache.org/jira/browse/HDFS-11577
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11577.001.patch, HDFS-11577.002.patch, 
> HDFS-11577.003.patch
>
>
> As discussed in HDFS-11535, this JIRA adds a new function combining both the 
> new and the old chooseRandom methods for better performance.
> More specifically, when choosing a random node with a storage type 
> requirement, the combined method first tries the old method of blindly 
> picking a random node. If this node satisfies the requirement, it is 
> returned. Otherwise, the new chooseRandom is called, which guarantees to 
> find an eligible node in one call (if there is one at all).
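
A hedged sketch of the combined strategy described above; the method and 
helper names are illustrative assumptions, not the actual NetworkTopology 
code from the patch:
{code}
// Illustrative only: names are assumptions, not the patch's API.
Node chooseRandomWithStorageType(String scope, StorageType type) {
  // Fast path (old method): blind random pick, cheap and usually
  // sufficient when most nodes carry the requested storage type.
  Node n = chooseRandom(scope);
  if (n != null && hasStorageType(n, type)) {
    return n;
  }
  // Slow path (new method): storage-type-aware selection, guaranteed to
  // find an eligible node in one call if any exists; null otherwise.
  return chooseRandomStorageTypeAware(scope, type);
}
{code}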



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11486) Client close() should not fail fast if the last block is being decommissioned

2017-03-27 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-11486:

Attachment: HDFS-11486-branch-2.8.003.patch

Attaching a backport patch for the QA build on branch-2.8.

> Client close() should not fail fast if the last block is being decommissioned
> -
>
> Key: HDFS-11486
> URL: https://issues.apache.org/jira/browse/HDFS-11486
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDF-11486.test.patch, HDFS-11486.001.patch, 
> HDFS-11486.002.patch, HDFS-11486.003.patch, HDFS-11486-branch-2.8.003.patch, 
> HDFS-11486.test-inmaintenance.patch
>
>
> If a DFS client closes a file while the last block is being decommissioned, 
> the close() may fail if the decommission of the block does not complete in a 
> few seconds.
> When a DataNode is being decommissioned, the NameNode marks the DN's state 
> as DECOMMISSION_INPROGRESS, and blocks with replicas on these DataNodes 
> become under-replicated immediately. A close() call which attempts to 
> complete the last open block will fail if the number of live replicas is 
> below the minimum replication factor, because too many replicas reside on 
> the decommissioning DataNodes.
> The client internally retries completing the last open block up to 5 times 
> by default, which takes roughly 12 seconds. After that, close() throws an 
> exception like the following, which is typically not handled properly.
> {noformat}
> java.io.IOException: Unable to close file because the last 
> blockBP-33575088-10.0.0.200-1488410554081:blk_1073741827_1003 does not have 
> enough number of replicas.
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:864)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:827)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:793)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at 
> org.apache.hadoop.hdfs.TestDecommission.testCloseWhileDecommission(TestDecommission.java:708)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}
> Once the exception is thrown, the client usually does not attempt to close 
> again, so the file remains in open state, and the last block remains in under 
> replicated state.
> Subsequently, the administrator runs the recoverLease tool to salvage the 
> file, but the attempt fails because the block remains under-replicated. It 
> is not clear why the block is never re-replicated, though. Administrators 
> then assume the file is corrupt, because it still shows as open via fsck 
> -openforwrite and its modification time is hours old.
> In summary, I do not think close() should fail because the last block is 
> being decommissioned. The block has a sufficient number of replicas; it is 
> just that some of them are being decommissioned. Decommissioning should be 
> transparent to clients.
> This issue seems to be more prominent on very large clusters with the 
> minimum replication factor set to 2.
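
For context on the salvage path above, a hedged sketch of driving lease 
recovery through the public {{DistributedFileSystem#recoverLease()}} API, 
which is what the recoverLease tool does; the retry count and sleep are 
arbitrary assumptions, and per this report the recovery can still stall while 
the block stays under-replicated:
{code}
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public final class LeaseSalvager {
  // Hypothetical helper: poll recoverLease() until the file is closed.
  public static boolean recoverOpenFile(DistributedFileSystem dfs, Path p)
      throws IOException, InterruptedException {
    for (int attempt = 0; attempt < 10; attempt++) {
      // recoverLease() returns true once the file has been closed.
      if (dfs.recoverLease(p)) {
        return true;
      }
      Thread.sleep(5000L); // block recovery is asynchronous; poll
    }
    return false;
  }
}
{code}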



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11582) Block Storage : add SCSI target access daemon

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944397#comment-15944397
 ] 

Hadoop QA commented on HDFS-11582:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
0s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
29s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
32s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
39s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 
has 90 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
11s{color} | {color:green} The patch generated 0 new + 98 unchanged - 1 fixed = 
98 total (was 99) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
22s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.common.TestEndPoint |
|   | 
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11582 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860764/HDFS-11582-HDFS-7240.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  shellcheck  shelldocs  |
| uname | Linux 886dde9e85d2 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 

[jira] [Updated] (HDFS-10971) Distcp should not copy replication factor if source file is erasure coded

2017-03-27 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-10971:
--
Status: Patch Available  (was: In Progress)

> Distcp should not copy replication factor if source file is erasure coded
> -
>
> Key: HDFS-10971
> URL: https://issues.apache.org/jira/browse/HDFS-10971
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: distcp
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-10971.01.patch, HDFS-10971.testcase.patch
>
>
> The current erasure coding implementation uses the replication factor field 
> to store the erasure coding policy.
> Distcp copies the source file's replication factor to the destination if 
> {{-pr}} is specified. However, if the source file is EC, the replication 
> factor (which encodes the EC policy) should not be replicated to the 
> destination file. When an HdfsFileStatus is converted to a FileStatus, the 
> replication factor is set to 0 if it's an EC file.
> In fact, I will attach a test case that shows that trying to replicate the 
> replication factor of an EC file results in an IOException: "Requested 
> replication factor of 0 is less than the required minimum of 1 for 
> /tmp/dst/dest2"



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10971) Distcp should not copy replication factor if source file is erasure coded

2017-03-27 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-10971:
--
Attachment: HDFS-10971.01.patch

[~zhz]
bq. For the initial 3.0 release I prefer a simpler strategy of only supporting 
-pr when both the source and destination directories are replicated.
This model sounds good. Made changes to {{DistCpUtils#preserve()}} to apply 
replication changes for the preserve operation only if both the src and dst 
files are not erasure coded. No changes needed in {{RetriableFileCopyCommand}} 
as the backend already takes care of doing the right thing when creating EC 
files.

[~andrew.wang], can you please take a look at the patch?
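
(For illustration: a minimal sketch of the guard described above, assuming the 
{{FileStatus#isErasureCoded()}} predicate available in Hadoop 3; names here 
are illustrative, not the exact patch code.)

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class PreserveReplicationSketch {
  /**
   * Copy the source replication factor to the target only when neither
   * file is erasure coded; for EC files the replication field actually
   * encodes the EC policy and must not be propagated.
   */
  static void preserveReplication(FileSystem targetFS, Path target,
      FileStatus src, FileStatus dst) throws IOException {
    if (!src.isErasureCoded() && !dst.isErasureCoded()
        && src.getReplication() != dst.getReplication()) {
      targetFS.setReplication(target, src.getReplication());
    }
  }
}
{code}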

> Distcp should not copy replication factor if source file is erasure coded
> -
>
> Key: HDFS-10971
> URL: https://issues.apache.org/jira/browse/HDFS-10971
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: distcp
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-10971.01.patch, HDFS-10971.testcase.patch
>
>
> The current erasure coding implementation uses the replication factor field 
> to store the erasure coding policy.
> Distcp copies the source file's replication factor to the destination if 
> {{-pr}} is specified. However, if the source file is EC, the replication 
> factor (which encodes the EC policy) should not be replicated to the 
> destination file. When an HdfsFileStatus is converted to a FileStatus, the 
> replication factor is set to 0 if it's an EC file.
> In fact, I will attach a test case that shows that trying to replicate the 
> replication factor of an EC file results in an IOException: "Requested 
> replication factor of 0 is less than the required minimum of 1 for 
> /tmp/dst/dest2"



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11567) Ozone: Support update container

2017-03-27 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944374#comment-15944374
 ] 

Weiwei Yang commented on HDFS-11567:


Hi [~anu]

Thank you for the review. I have uploaded the v2 patch to improve the error 
message, and as a follow-up I filed HDFS-11585 to support force update.

Thanks

> Ozone: Support update container
> ---
>
> Key: HDFS-11567
> URL: https://issues.apache.org/jira/browse/HDFS-11567
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-11567-HDFS-7240.001.patch, 
> HDFS-11567-HDFS-7240.002.patch
>
>
> Add support to update a container. A container has a set of state; that 
> state includes information like SHA256 hashes, the container's metadata, and 
> a set of key-value pairs. This API allows us to update or change those 
> values for an existing container. It is also critical if we want to force a 
> rewrite of the container data on the datanode: we could read the data and 
> write it back for a disk update, which would allow us to repair the 
> container's metadata if that is really needed.
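
(For illustration: a hypothetical shape of the update API described above; the 
actual interface and protobuf messages in the patch may differ.)

{code}
import java.io.IOException;
import java.util.Map;

/** Hypothetical shape of the update-container API described above. */
interface ContainerManager {
  /**
   * Replaces the mutable state (e.g. metadata key-value pairs) of an
   * existing container. A forced variant (see HDFS-11585) would bypass
   * the closed/corrupt-metadata checks so the on-disk state can be
   * rewritten and repaired.
   */
  void updateContainer(String containerName, Map<String, String> metadata,
      boolean forceUpdate) throws IOException;
}
{code}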



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11585) Ozone: Support force update a container

2017-03-27 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-11585:
--

 Summary: Ozone: Support force update a container
 Key: HDFS-11585
 URL: https://issues.apache.org/jira/browse/HDFS-11585
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Weiwei Yang
Assignee: Weiwei Yang


HDFS-11567 added support for updating a container, but in the following cases

# the container is closed
# the container meta file has been mistakenly removed on disk or corrupted

a container cannot be gracefully updated. It is useful to support a forced 
update when a container gets into such a state; that gives us the chance to 
repair its metadata.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11567) Ozone: Support update container

2017-03-27 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-11567:
---
Attachment: HDFS-11567-HDFS-7240.002.patch

> Ozone: Support update container
> ---
>
> Key: HDFS-11567
> URL: https://issues.apache.org/jira/browse/HDFS-11567
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-11567-HDFS-7240.001.patch, 
> HDFS-11567-HDFS-7240.002.patch
>
>
> Add support to update a container. A container has a set of state; that 
> state includes information like SHA256 hashes, the container's metadata, and 
> a set of key-value pairs. This API allows us to update or change those 
> values for an existing container. It is also critical if we want to force a 
> rewrite of the container data on the datanode: we could read the data and 
> write it back for a disk update, which would allow us to repair the 
> container's metadata if that is really needed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944357#comment-15944357
 ] 

Hadoop QA commented on HDFS-11576:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
19s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  3s{color} | {color:orange} root: The patch generated 11 new + 779 unchanged 
- 1 fixed = 790 total (was 780) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
3s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
27s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}144m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Write to static field 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blockRecoveryTimeoutInterval
 from instance method new 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(Namesystem, boolean, 
Configuration)  At BlockManager.java:from instance method new 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(Namesystem, boolean, 
Configuration)  At BlockManager.java:[line 594] |
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestAppendSnapshotTruncate |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11576 |
| JIRA Patch URL | 

[jira] [Updated] (HDFS-11548) Ozone: SCM: Add node pool management API

2017-03-27 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11548:

  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: HDFS-7240
Target Version/s: HDFS-7240
  Status: Resolved  (was: Patch Available)

[~xyao] Thank you for the contribution. I have committed this to the feature 
branch.


> Ozone: SCM: Add node pool management API
> 
>
> Key: HDFS-11548
> URL: https://issues.apache.org/jira/browse/HDFS-11548
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: HDFS-7240
>
> Attachments: HDFS-11548-HDFS-7240.001.patch, 
> HDFS-11548-HDFS-7240.002.patch, HDFS-11548-HDFS-7240.003.patch, 
> HDFS-11548-HDFS-7240.004.patch
>
>
> The idea is to group registered nodes into pools of fixed size (say 24 nodes 
> per pool) so that container allocation and reporting can all be handled 
> independently on a per-pool basis by SCM.
> The initial patch will implement the following Node Pool API:
> 1) add a node to a node pool
> 2) remove a node from a pool
> 3) get the pool name that a node belongs to
> 4) get all the pool names
> 5) get all nodes of a pool
> For the initial integration with SCM container allocation, all nodes can be 
> placed in a single default pool upon registration. We will provide a CLI to 
> manage multiple pools, and support for a pool definition file, later.
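
(For illustration: the five operations above could look roughly like the 
following hypothetical interface; the committed API may differ.)

{code}
import java.util.List;

/** Hypothetical sketch of the Node Pool API listed above. */
interface NodePoolManager {
  void addNode(String poolName, String nodeId);    // 1) add to a pool
  void removeNode(String poolName, String nodeId); // 2) remove from a pool
  String getNodePool(String nodeId);               // 3) pool of a node
  List<String> getNodePools();                     // 4) all pool names
  List<String> getNodes(String poolName);          // 5) all nodes of a pool
}
{code}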



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11575) Supporting HDFS NFS gateway with Federated HDFS

2017-03-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944335#comment-15944335
 ] 

Andrew Wang commented on HDFS-11575:


Hi Mukul, thanks for posting this patch,

I'm wondering how important this is, given that the client can emulate this by 
setting up multiple NFS mounts. Since each HDFS instance is separate (e.g. no 
rename), I think separate mounts also match well semantically.

> Supporting HDFS NFS gateway with Federated HDFS
> ---
>
> Key: HDFS-11575
> URL: https://issues.apache.org/jira/browse/HDFS-11575
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11575.001.patch, SupportingNFSwithFederatedHDFS.pdf
>
>
> Currently the HDFS NFS gateway only supports HDFS as the underlying 
> filesystem.
> Federated HDFS with ViewFS helps improve the scalability of the NameNodes. 
> However, NFS is not supported with ViewFS.
> With this change, ViewFS using HDFS as the underlying filesystem can be 
> exported using NFS. The ViewFS mount table will be used to determine the 
> exports that need to be supported.
> Some important points:
> 1) This patch only supports HDFS as the underlying filesystem for ViewFS.
> 2) This patch adds support for more than one export point in the NFS gateway.
> 3) The root filesystem of ViewFS will not be mountable through the NFS 
> gateway with ViewFS; this is not the case for the NFS gateway with HDFS.
> 4) A filehandle, apart from the existing fields, will also contain an 
> identifier for the NameNode; this will be used to map file operations to the 
> correct NameNode.
> Please see the attached PDF document, which explains the design and the 
> solution.
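
(For illustration: a minimal sketch of the extended file handle from point 4; 
field names are assumptions, not from the patch.)

{code}
/** File handle that also identifies the backing NameNode (point 4 above). */
final class FederatedFileHandle {
  final int namenodeId; // maps to a NameNode via the ViewFS mount table
  final long fileId;    // inode id within that NameNode's namespace

  FederatedFileHandle(int namenodeId, long fileId) {
    this.namenodeId = namenodeId;
    this.fileId = fileId;
  }
}
{code}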



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11551) Handle SlowDiskReport from DataNode at the NameNode

2017-03-27 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-11551:
--
Attachment: HDFS-11551.006.patch

Patch v06 uses a different approach for generating slow disk reports. A daemon 
runs every 30 mins (the default of DFS_DATANODE_OUTLIERS_REPORT_INTERVAL_KEY) 
to generate the top N slow disks from all the reports received from DataNodes. 
This daemon also cleans up stale reports.
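
(For illustration: a minimal sketch of such a reporting daemon, with 
illustrative names; the actual patch may structure this differently, and 
stale-report cleanup is elided here.)

{code}
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;

/** On a fixed interval, recomputes the top-N slowest disks from the
 *  latest per-disk latencies reported by DataNodes. */
class SlowDiskTracker {
  private final Map<String, Double> latencyByDisk = new ConcurrentHashMap<>();
  private volatile List<String> topNSlowDisks = Collections.emptyList();

  SlowDiskTracker(int n, long intervalMs) {
    Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(() ->
        topNSlowDisks = latencyByDisk.entrySet().stream()
            .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
            .limit(n)
            .map(Map.Entry::getKey)
            .collect(Collectors.toList()),
        intervalMs, intervalMs, TimeUnit.MILLISECONDS);
  }

  void report(String disk, double latencyMs) {
    latencyByDisk.put(disk, latencyMs); // latest report wins
  }

  List<String> getTopNSlowDisks() { return topNSlowDisks; }
}
{code}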

> Handle SlowDiskReport from DataNode at the NameNode
> ---
>
> Key: HDFS-11551
> URL: https://issues.apache.org/jira/browse/HDFS-11551
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-11551.001.patch, HDFS-11551.002.patch, 
> HDFS-11551.003.patch, HDFS-11551.004.patch, HDFS-11551.005.patch, 
> HDFS-11551.006.patch
>
>
> DataNodes send slow disk reports via heartbeats. Handle these reports at the 
> NameNode to find the topN slow disks.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11486) Client close() should not fail fast if the last block is being decommissioned

2017-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944309#comment-15944309
 ] 

Hudson commented on HDFS-11486:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11476 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11476/])
HDFS-11486. Client close() should not fail fast if the last block is 
(iwasakims: rev 64ea62c1ccc05d9b0a0030beafa60ddd31c38952)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java


> Client close() should not fail fast if the last block is being decommissioned
> -
>
> Key: HDFS-11486
> URL: https://issues.apache.org/jira/browse/HDFS-11486
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDF-11486.test.patch, HDFS-11486.001.patch, 
> HDFS-11486.002.patch, HDFS-11486.003.patch, 
> HDFS-11486.test-inmaintenance.patch
>
>
> If a DFS client closes a file while the last block is being decommissioned, 
> the close() may fail if the decommission of the block does not complete 
> within a few seconds.
> When a DataNode is being decommissioned, the NameNode marks the DN's state 
> as DECOMMISSION_INPROGRESS, and blocks with replicas on these DataNodes 
> become under-replicated immediately. A close() call that attempts to 
> complete the last open block will fail if the number of live replicas is 
> below the minimum replication factor, due to too many replicas residing on 
> those DataNodes.
> The client internally retries completing the last open block up to 5 times 
> by default, which is roughly 12 seconds. After that, close() throws an 
> exception like the following, which is typically not handled properly.
> {noformat}
> java.io.IOException: Unable to close file because the last 
> blockBP-33575088-10.0.0.200-1488410554081:blk_1073741827_1003 does not have 
> enough number of replicas.
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:864)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:827)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:793)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at 
> org.apache.hadoop.hdfs.TestDecommission.testCloseWhileDecommission(TestDecommission.java:708)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}
> Once the exception is thrown, the client usually does not attempt to close 
> again, so the file remains open, and the last block remains under-replicated.
> Subsequently, an administrator runs the recoverLease tool to salvage the 
> file, but the attempt fails because the block remains under-replicated. It 
> is not clear why the block is never replicated, though. Administrators then 
> think the file has become corrupt, because it remains open according to fsck 
> -openforwrite while its modification time is hours old.
> In summary, I do not think close() should fail because the last block is 
> being decommissioned. The block has a sufficient number of replicas; it's 
> just that some replicas are being decommissioned. Decommissioning should be 
> transparent to clients.
> This issue seems to be more prominent on a very large scale cluster with the 
> min replication factor set to 2.
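
(For illustration: a minimal sketch of the bounded completion loop described 
above, with hypothetical names; the real client logic lives in 
DFSOutputStream#completeFile, shown in the stack trace.)

{code}
import java.io.IOException;

class CompleteFileSketch {
  interface NameNode {
    boolean complete(String src); // true once enough live replicas exist
  }

  static void completeFile(NameNode nn, String src) throws IOException {
    long sleepMs = 400;
    for (int retriesLeft = 5; ; retriesLeft--) { // ~12s total by default
      if (nn.complete(src)) {
        return;
      }
      if (retriesLeft == 0) {
        throw new IOException("Unable to close file because the last block"
            + " does not have enough number of replicas.");
      }
      try {
        Thread.sleep(sleepMs);
        sleepMs *= 2; // back off before asking the NameNode again
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
        throw new IOException("Interrupted while closing " + src, ie);
      }
    }
  }
}
{code}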



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11384) Add option for balancer to disperse getBlocks calls to avoid NameNode's rpc.CallQueueLength spike

2017-03-27 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944307#comment-15944307
 ] 

Konstantin Shvachko commented on HDFS-11384:


Took me some time to refresh my memory of the details of balancing. So here is 
my understanding of what is happening:
* {{dispatchBlockMoves()}} spawns a thread for each {{Source}}, which 
represents all storages of the same type on a DN. Each thread then executes 
{{dispatchBlocks()}}.
* {{dispatchBlocks()}} first tries to schedule block transfers for already 
selected (source:target) DN pairs, and if there are no more pairs it calls 
{{getBlockList()}}, which contacts the NN to obtain the next portion of blocks 
from the source DN to be moved out.
* The problem of too many RPC calls happens at the beginning of the Balancer 
iteration, when there are no scheduled pairs yet, so all the threads call 
{{getBlockList()}} and go to the NameNode simultaneously. So we need to 
disperse only the initial burst of RPCs at the start of an iteration, as 
subsequent {{getBlocks()}} calls are already dispersed fine.

I see two ways to fix this:
# Add a parameter to {{getBlockList(long delay)}}, where {{delay}} is a random 
time within a reasonable interval, which the Balancer should wait for before 
sending the {{getBlocks()}} RPC to the NN. The delay is only applied once, and 
set to 0 once applied. This looks rather straightforward to me.
# Allocate a reasonable throughput of {{getBlocks()}} RPCs to the NN, and 
delay calls if the quota is exceeded. This is similar to [~benoyantony]'s 
proposal, but allows us to precisely specify how much of the NN's RPC 
bandwidth is allocated to the Balancer.

[~zhaoyunjiong] I understand you wanted a simple fix without making too many 
changes, but this looks like a real problem to me, and we should fix it in a 
more generic manner. I am fine if you wish to implement option #1 here as an 
initial step. Ultimately we should target solution #2, which could be done in 
another jira.
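
(For illustration: a minimal sketch of option #1, with hypothetical names; the 
delay is drawn once per source and applied only before the first call.)

{code}
import java.util.concurrent.ThreadLocalRandom;

/** Each dispatcher thread sleeps a random, one-time delay before its
 *  first getBlocks() RPC, spreading the initial burst at the start of
 *  a Balancer iteration. */
class DispersedBlockFetcher {
  private long delayMs;

  DispersedBlockFetcher(long maxDelayMs) {
    this.delayMs = ThreadLocalRandom.current().nextLong(maxDelayMs + 1);
  }

  void getBlockList() throws InterruptedException {
    if (delayMs > 0) {
      Thread.sleep(delayMs); // applied once, then reset to 0
      delayMs = 0;
    }
    // ... issue the getBlocks() RPC to the NameNode here ...
  }
}
{code}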

> Add option for balancer to disperse getBlocks calls to avoid NameNode's 
> rpc.CallQueueLength spike
> -
>
> Key: HDFS-11384
> URL: https://issues.apache.org/jira/browse/HDFS-11384
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Affects Versions: 2.7.3
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
> Attachments: balancer.day.png, balancer.week.png, 
> HDFS-11384.001.patch, HDFS-11384.002.patch
>
>
> Running the balancer on a Hadoop cluster with more than 3000 DataNodes 
> causes the NameNode's rpc.CallQueueLength to spike. We observed that this 
> situation could cause HBase cluster failures due to RegionServer WAL 
> timeouts.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-10971) Distcp should not copy replication factor if source file is erasure coded

2017-03-27 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-10971 started by Manoj Govindassamy.
-
> Distcp should not copy replication factor if source file is erasure coded
> -
>
> Key: HDFS-10971
> URL: https://issues.apache.org/jira/browse/HDFS-10971
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: distcp
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-10971.testcase.patch
>
>
> The current erasure coding implementation uses the replication factor field 
> to store the erasure coding policy.
> Distcp copies the source file's replication factor to the destination if 
> {{-pr}} is specified. However, if the source file is EC, the replication 
> factor (which encodes the EC policy) should not be replicated to the 
> destination file. When an HdfsFileStatus is converted to a FileStatus, the 
> replication factor is set to 0 if it's an EC file.
> In fact, I will attach a test case that shows that trying to replicate the 
> replication factor of an EC file results in an IOException: "Requested 
> replication factor of 0 is less than the required minimum of 1 for 
> /tmp/dst/dest2"



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10629) Federation Router

2017-03-27 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944302#comment-15944302
 ] 

Chris Douglas commented on HDFS-10629:
--

Very minor feedback on v11:
* {{FederationNamenodeServiceState}} could include all the 
{{HAServiceProtocol.HAServiceState}} values, so it can statically check that 
they're exhaustive if new states are added
* Will later patches add to {{RouterConfigBuilder}}? If not, it seems vestigial.
* Is the {{Router}} using the {{CompositeService}} abstraction as intended? It 
looks like the pattern adds some set of services via {{addService}}, then 
follows the lifecycle for a generic service (including starting/stopping 
services in order). The patch also combines the {{initService}} and 
{{startService}} stages into a single {{initAndStartRouter}} method, called 
from {{createRouter}}. The RPC server calls {{addService}} when restarted, 
after stopping then re-adding the RPC service.
* {{RemoteLocationContext#getDest}} javadoc is unclear. What is a context 
string?
* {{FederationUtil#newInstance}} could be replaced by a combination of 
{{Configuration#getClass}} and {{ReflectionUtils#newInstance}} using 
{{Configurable}}.
* Some of the APIs returning collections/lists could avoid unintended 
dependencies by wrapping the results in {{Collections#unmodifiableList}}, etc. 
before returning them to callers; a minimal sketch follows below.
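
(For illustration: a minimal sketch of the defensive wrapping, using a 
hypothetical registry class.)

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/** Callers receive a read-only view and cannot mutate the internal
 *  list behind the router's back. */
class MembershipStore {
  private final List<String> registrations = new ArrayList<>();

  List<String> getRegistrations() {
    return Collections.unmodifiableList(registrations);
  }
}
{code}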

Overall +1 for committing this to the branch.

> Federation Router
> -
>
> Key: HDFS-10629
> URL: https://issues.apache.org/jira/browse/HDFS-10629
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10629.000.patch, HDFS-10629.001.patch, 
> HDFS-10629-HDFS-10467-002.patch, HDFS-10629-HDFS-10467-003.patch, 
> HDFS-10629-HDFS-10467-004.patch, HDFS-10629-HDFS-10467-005.patch, 
> HDFS-10629-HDFS-10467-006.patch, HDFS-10629-HDFS-10467-007.patch, 
> HDFS-10629-HDFS-10467-008.patch, HDFS-10629-HDFS-10467-009.patch, 
> HDFS-10629-HDFS-10467-010.patch, HDFS-10629-HDFS-10467-011.patch, 
> routerlatency.png
>
>
> Component that routes calls from the clients to the right Namespace.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11582) Block Storage : add SCSI target access daemon

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944300#comment-15944300
 ] 

Hadoop QA commented on HDFS-11582:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
0s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
47s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 
has 90 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
11s{color} | {color:green} The patch generated 0 new + 98 unchanged - 1 fixed = 
98 total (was 99) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
59s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.client.TestQuorumJournalManager |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.cblock.TestCBlockCLI |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11582 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860716/HDFS-11582-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  shellcheck  shelldocs  |
| uname | 

[jira] [Commented] (HDFS-7967) Reduce the performance impact of the balancer

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944292#comment-15944292
 ] 

Hadoop QA commented on HDFS-7967:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
56s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 5 new + 458 unchanged - 13 fixed = 463 total (was 471) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 51m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}133m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_121 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
| JDK v1.7.0_121 Failed junit tests | 
hadoop.hdfs.TestSecureEncryptionZoneWithKMS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5af2af1 |
| JIRA Issue | HDFS-7967 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12847070/HDFS-7967.branch-2.8.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 674e9983a57d 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality 

[jira] [Comment Edited] (HDFS-11558) BPServiceActor thread name is too long

2017-03-27 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943889#comment-15943889
 ] 

Arpit Agarwal edited comment on HDFS-11558 at 3/28/17 12:17 AM:


Posted v2. Thanks all for the reviews. Since an actor is instantiated per 
active or standby NameNode, whose address is always available through the 
conf, do we need to assemble it into the actor thread name? [~arpitagarwal] 
Thanks.

With v2 patch, thread name looks like:
{noformat}
2017-03-27 12:11:12,548 [ heartbeating] INFO
{noformat}

{noformat}
2017-03-27 12:11:12,584 [BP-2084616792--1490641870531 heartbeating]
{noformat}



was (Author: xiaobingo):
Posted v2. Thanks all for reviews. Since actor is instantiated per active or 
standby namenode, address of which is always available through conf. Do we need 
assemble it into actor thread name? [~arpitagarwal] Thanks.

With v2 patch, thread name looks like:
{noformat}
2017-03-27 12:11:12,548 [ heartbeating] INFO
{noformat}

{noformat}
2017-03-27 12:11:12,584 [BP-2084616792-10.22.6.77-1490641870531 heartbeating]
{noformat}


> BPServiceActor thread name is too long
> --
>
> Key: HDFS-11558
> URL: https://issues.apache.org/jira/browse/HDFS-11558
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, 
> HDFS-11558.002.patch
>
>
> Currently, the thread name looks like
> {code}
> 2017-03-20 18:32:22,022 [DataNode: 
> [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0,
>  
> [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]]
>   heartbeating to localhost/127.0.0.1:51772] INFO  ...
> {code}
> which contains the full path for each storage dir.  It is unnecessarily long.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11486) Client close() should not fail fast if the last block is being decommissioned

2017-03-27 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944280#comment-15944280
 ] 

Masatake Iwasaki commented on HDFS-11486:
-

Thanks for the update, [~jojochuang]. +1, committing this.

> Client close() should not fail fast if the last block is being decommissioned
> -
>
> Key: HDFS-11486
> URL: https://issues.apache.org/jira/browse/HDFS-11486
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDF-11486.test.patch, HDFS-11486.001.patch, 
> HDFS-11486.002.patch, HDFS-11486.003.patch, 
> HDFS-11486.test-inmaintenance.patch
>
>
> If a DFS client closes a file while the last block is being decommissioned, 
> the close() may fail if the decommission of the block does not complete 
> within a few seconds.
> When a DataNode is being decommissioned, the NameNode marks the DN's state 
> as DECOMMISSION_INPROGRESS, and blocks with replicas on these DataNodes 
> become under-replicated immediately. A close() call that attempts to 
> complete the last open block will fail if the number of live replicas is 
> below the minimum replication factor, due to too many replicas residing on 
> those DataNodes.
> The client internally retries completing the last open block up to 5 times 
> by default, which is roughly 12 seconds. After that, close() throws an 
> exception like the following, which is typically not handled properly.
> {noformat}
> java.io.IOException: Unable to close file because the last 
> blockBP-33575088-10.0.0.200-1488410554081:blk_1073741827_1003 does not have 
> enough number of replicas.
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:864)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:827)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:793)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at 
> org.apache.hadoop.hdfs.TestDecommission.testCloseWhileDecommission(TestDecommission.java:708)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}
> Once the exception is thrown, the client usually does not attempt to close 
> again, so the file remains open, and the last block remains under-replicated.
> Subsequently, an administrator runs the recoverLease tool to salvage the 
> file, but the attempt fails because the block remains under-replicated. It 
> is not clear why the block is never replicated, though. Administrators then 
> think the file has become corrupt, because it remains open according to fsck 
> -openforwrite while its modification time is hours old.
> In summary, I do not think close() should fail because the last block is 
> being decommissioned. The block has a sufficient number of replicas; it's 
> just that some replicas are being decommissioned. Decommissioning should be 
> transparent to clients.
> This issue seems to be more prominent on a very large scale cluster with the 
> min replication factor set to 2.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11548) Ozone: SCM: Add node pool management API

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944278#comment-15944278
 ] 

Hadoop QA commented on HDFS-11548:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
57s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}152m 34s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
58s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}202m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.cblock.TestCBlockCLI |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.cblock.TestCBlockServer |
|   | hadoop.ozone.container.common.impl.TestContainerPersistence |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
| Timed out junit tests | org.apache.hadoop.ozone.scm.node.TestNodeManager |
|   | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11548 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860720/HDFS-11548-HDFS-7240.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 039399e78f94 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / ed14373 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18853/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18853/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18853/console 

[jira] [Commented] (HDFS-11531) Expose hedged read metrics via libHDFS API

2017-03-27 Thread Colin P. McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944272#comment-15944272
 ] 

Colin P. McCabe commented on HDFS-11531:


bq. It looks like Hadoop QA does not track native tests.

Yeah, this has been a problem for a while. Hopefully it at least runs them now 
(the cmake plugin helped with that).

{code}
+if (m) free(m);
{code}
You don't need this because {{free}} checks if the pointer is {{NULL}}.

{code}
+LIBHDFS_EXTERNAL
+int hdfsGetHedgedReadMetrics(hdfsFS fs, struct hdfsHedgedReadMetrics 
**metrics);
{code}
Why not just return the {{hdfsHedgedReadMetrics}} pointer?  The error is in 
{{errno}} anyway on a failure.

Looks good aside from that.

> Expose hedged read metrics via libHDFS API
> --
>
> Key: HDFS-11531
> URL: https://issues.apache.org/jira/browse/HDFS-11531
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Mukil
>Assignee: Sailesh Mukil
> Attachments: HDFS-11531.000.patch, HDFS-11531.001.patch
>
>
> It would be good to expose the DFSHedgedReadMetrics via a libHDFS API for 
> applications to retrieve.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11582) Block Storage : add SCSI target access daemon

2017-03-27 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11582:
--
Attachment: HDFS-11582-HDFS-7240.003.patch

Thanks [~anu] for the review and the comments! addressed in v003 patch.

bq. who ensures that we don't have a race condition, that is, two volumeNames 
from 2 different CBlock servers racing here? I understand that it is a pretty 
far-fetched scenario and I was wondering if the jscsi target server does any 
kind of concurrency control.

This was written to make sure that mounting a volume twice or more would not 
reset the in-memory metadata. But just like you mentioned, the jscsi code, or 
even the system iscsi layer, might already handle this scenario. This code is 
more of a safety check that does not assume the race will be handled elsewhere.
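
(For illustration: a minimal sketch of such an idempotent mount guard, with 
illustrative names; the actual CBlock code may differ.)

{code}
import java.util.concurrent.ConcurrentHashMap;

/** The first mount of a volume initializes its in-memory metadata;
 *  repeated or concurrent mounts reuse it instead of resetting it. */
class VolumeRegistry {
  static final class VolumeState {
    final String name;
    VolumeState(String name) { this.name = name; }
  }

  private final ConcurrentHashMap<String, VolumeState> volumes =
      new ConcurrentHashMap<>();

  VolumeState mount(String volumeName) {
    // computeIfAbsent is atomic: two racing mounts of the same volume
    // name cannot both initialize (and thereby reset) its metadata.
    return volumes.computeIfAbsent(volumeName, VolumeState::new);
  }
}
{code}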

> Block Storage : add SCSI target access daemon
> -
>
> Key: HDFS-11582
> URL: https://issues.apache.org/jira/browse/HDFS-11582
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11582-HDFS-7240.001.patch, 
> HDFS-11582-HDFS-7240.002.patch, HDFS-11582-HDFS-7240.003.patch
>
>
> This JIRA adds the daemon process that exposes SCSI target access. More 
> specifically, with this daemon process running, any OS with SCSI support can 
> talk to this daemon process and treat CBlock volumes as SCSI targets; in 
> this way the user can mount a volume in the usual POSIX manner.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11529) libHDFS still does not return appropriate error information in many cases

2017-03-27 Thread Colin P. McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944265#comment-15944265
 ] 

Colin P. McCabe commented on HDFS-11529:


Nice improvement.

{{printExceptionAndFreeV}}: This function is intended to print exceptions and 
then free them.  If you are overloading it to set thread-local data, you should 
change the name to reflect that.  Something like {{handleExceptionAndFree}} 
would work.  You also need to document this information in the function 
doxygen, found in {{exception.h}}.

It seems to me that the thread-local exception should be set regardless of 
whether {{noPrint}} is true or not.  {{noPrint}} was intended to avoid spammy 
logging for things we expected to happen, but not to skip setting the error 
return.  The thread-local storage is essentially an out-of-band way of 
returning more error data, so I don't see why it should be affected by 
{{noPrint}}.

{code}
/**
 * Get the last exception root cause that happened in the context of the
 * current thread, i.e. the thread that called into libHDFS.
 *
 * The pointer returned by this function is guaranteed to be valid until
 * the next call into libHDFS by the current thread.
 * Users of this function should not free the pointer.
 *
 * @return   The root cause as a C-string.
 */
LIBHDFS_EXTERNAL
char* hdfsGetLastExceptionRootCause();
{code}
You need to document what a {{NULL}} return means here.

{{getJNIEnv}} should free and zero out these thread-local pointers.  Otherwise 
the exception text from one call may bleed into another, since there are still 
some code paths that don't set the thread-local error status.

It is not related to your patch, but I just noticed that {{hdfsGetHosts}} 
doesn't set {{errno}} on failure.  Do you mind fixing that?

> libHDFS still does not return appropriate error information in many cases
> -
>
> Key: HDFS-11529
> URL: https://issues.apache.org/jira/browse/HDFS-11529
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Mukil
>Assignee: Sailesh Mukil
>Priority: Critical
>  Labels: errorhandling, libhdfs
> Attachments: HDFS-11529.000.patch, HDFS-11529.001.patch
>
>
> libHDFS uses a table to compare exceptions against and returns a 
> corresponding error code to the application in case of an error.
> However, this table is manually populated, and updating it is often 
> forgotten when new exceptions are added.
> This causes libHDFS to return EINTERNAL (or Unknown Error(255)) whenever 
> these exceptions are hit. These are some examples of exceptions that have 
> been observed on an Error(255):
> org.apache.hadoop.ipc.StandbyException (Operation category WRITE is not 
> supported in state standby)
> java.io.EOFException: Cannot seek after EOF
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> It is of course not possible to have an error code for each and every type of 
> exception, so one suggestion for addressing this is to add a call such as 
> hdfsGetLastException() that would return the last exception that a libHDFS 
> thread encountered. This way, an application may choose to call 
> hdfsGetLastException() if it receives EINTERNAL.
> We can make use of thread-local storage to store this information. This also 
> makes sure that the current functionality is preserved.
> This is a follow-up from HDFS-4997.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9705) Refine the behaviour of getFileChecksum when length = 0

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944224#comment-15944224
 ] 

Hadoop QA commented on HDFS-9705:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
49s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 6s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 32s{color} | {color:orange} hadoop-hdfs-project: The patch generated 4 new + 
116 unchanged - 4 fixed = 120 total (was 120) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
2s{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 51m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}154m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_121 Failed junit tests | 
hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs |
|   | hadoop.tracing.TestTraceAdmin |
|   | 
hadoop.hdfs.server.datanode.metrics.TestDataNodeOutlierDetectionViaMetrics |
|   | 

[jira] [Updated] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval

2017-03-27 Thread Lukas Majercak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-11576:
--
Attachment: HDFS-11576.001.patch

> Block recovery will fail indefinitely if recovery time > heartbeat interval
> ---
>
> Key: HDFS-11576
> URL: https://issues.apache.org/jira/browse/HDFS-11576
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, namenode
>Affects Versions: 2.7.1, 2.7.2, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Critical
> Attachments: HDFS-11576.001.patch, HDFS-11576.repro.patch
>
>
> Block recovery will fail indefinitely if the time to recover a block is 
> always longer than the heartbeat interval. Scenario:
> 1. DN sends heartbeat 
> 2. NN sends a recovery command to DN, recoveryID=X
> 3. DN starts recovery
> 4. DN sends another heartbeat
> 5. NN sends a recovery command to DN, recoveryID=X+1
> 6. DN calls commitBlockSynchronization with the NN after succeeding with the 
> first recovery, which fails because X < X+1
> ... 
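To make the failing check concrete, here is a hedged Java sketch of the 
recovery-ID comparison described above, with illustrative values and a 
paraphrased error message rather than the NameNode's exact code:
{code}
// The DN reports the recovery it completed (id X) while the NN has already
// issued a newer recovery (id X+1) on the next heartbeat, so the commit is
// rejected on every attempt and the cycle repeats.
long issuedRecoveryId = 101L;   // newest id handed out by the NameNode
long reportedRecoveryId = 100L; // id of the recovery the DataNode finished
if (reportedRecoveryId < issuedRecoveryId) {
  throw new java.io.IOException("Recovery id " + reportedRecoveryId
      + " is older than the current recovery id " + issuedRecoveryId);
}
{code}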



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11582) Block Storage : add SCSI target access daemon

2017-03-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944192#comment-15944192
 ] 

Anu Engineer commented on HDFS-11582:
-

[~vagarychen] Thanks for adding this. This makes CBlock start to look real. 
Some minor comments.

* File: {{CBlockClientServerProtocolClientSideTranslatorPB.java}}
I am struggling with this name. It has lots of words, but no real meaning pops 
out at me. Can we please rename it?

* Class comment: 
{{* The client side of CBlockClientServerProtcol}}, maybe add some more 
information here about what this class really does.

* Function: {{mountVolume}}
-- Should we check that userName and volumeName are not null?

* Should we have an else part that logs an error and throws an exception?
{{if (containerID.hasPipeline()) // it should always have a pipeline only 
except for tests.}}

* Just a curiosity question. You can ignore this since it is very hypothetical.
{{if (!targets.containsKey(volumeKey))}}, we have this code in 
{{isValidTargetName}}; who ensures that we don't have a race condition, i.e. 
that two volumeNames from two different CBlock servers are not racing here? I 
understand that it is a pretty far-fetched scenario, and I was wondering if the 
jscsi target server does any kind of concurrency control.

* File: {{SCSITargetDaemon.java}}
{code}
ContainerOperationClient.setContainerSizeB(
    containerSizeGB * 1024 * 1024 * 1024L);
{code}
replace with 
{code}
ContainerOperationClient.setContainerSizeB(containerSizeGB * OzoneConsts.GB);
{code}

* Debug statements?
These look like statements that came out of debugging. Could you please remove 
them?
{code}
ozoneConf.setBoolean(OzoneConfigKeys.OZONE_ENABLED, true);
ozoneConf.setBoolean(OzoneConfigKeys.OZONE_TRACE_ENABLED_KEY, true);
ozoneConf.set(OzoneConfigKeys.OZONE_HANDLER_TYPE_KEY, "distributed")
{code}
I understand that these are required for CBlock to work, but shouldn't we just 
assert, or log an error and exit, instead of setting these values ourselves?
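For illustration, a fail-fast version could look like the following sketch; 
the logger and exception choice here are assumptions, not the patch's code:
{code}
// Verify the required settings and bail out early instead of silently
// forcing them from inside the daemon.
if (!ozoneConf.getBoolean(OzoneConfigKeys.OZONE_ENABLED, false)) {
  LOG.error("CBlock SCSI target requires {} = true; refusing to start.",
      OzoneConfigKeys.OZONE_ENABLED);
  throw new IllegalStateException("Ozone is not enabled in the configuration");
}
{code}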





> Block Storage : add SCSI target access daemon
> -
>
> Key: HDFS-11582
> URL: https://issues.apache.org/jira/browse/HDFS-11582
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11582-HDFS-7240.001.patch, 
> HDFS-11582-HDFS-7240.002.patch
>
>
> This JIRA adds the daemon process that exposes SCSI target access. More 
> specifically, with this daemon process running, any OS with SCSI support can 
> talk to this daemon process and treat CBlock volumes as SCSI targets; in this 
> way the user can mount the volume in the usual POSIX manner.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval

2017-03-27 Thread Lukas Majercak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-11576:
--
Status: Open  (was: Patch Available)

> Block recovery will fail indefinitely if recovery time > heartbeat interval
> ---
>
> Key: HDFS-11576
> URL: https://issues.apache.org/jira/browse/HDFS-11576
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, namenode
>Affects Versions: 3.0.0-alpha2, 3.0.0-alpha1, 2.7.3, 2.7.2, 2.7.1
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Critical
> Attachments: HDFS-11576.001.patch, HDFS-11576.repro.patch
>
>
> Block recovery will fail indefinitely if the time to recover a block is 
> always longer than the heartbeat interval. Scenario:
> 1. DN sends heartbeat 
> 2. NN sends a recovery command to DN, recoveryID=X
> 3. DN starts recovery
> 4. DN sends another heartbeat
> 5. NN sends a recovery command to DN, recoveryID=X+1
> 6. DN calls commitBlockSynchronization with the NN after succeeding with the 
> first recovery, which fails because X < X+1
> ... 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval

2017-03-27 Thread Lukas Majercak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-11576:
--
Status: Patch Available  (was: Open)

> Block recovery will fail indefinitely if recovery time > heartbeat interval
> ---
>
> Key: HDFS-11576
> URL: https://issues.apache.org/jira/browse/HDFS-11576
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, namenode
>Affects Versions: 3.0.0-alpha2, 3.0.0-alpha1, 2.7.3, 2.7.2, 2.7.1
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Critical
> Attachments: HDFS-11576.001.patch, HDFS-11576.repro.patch
>
>
> Block recovery will fail indefinitely if the time to recover a block is 
> always longer than the heartbeat interval. Scenario:
> 1. DN sends heartbeat 
> 2. NN sends a recovery command to DN, recoveryID=X
> 3. DN starts recovery
> 4. DN sends another heartbeat
> 5. NN sends a recovery command to DN, recoveryID=X+1
> 6. DN calls commitBlockSynchronization with the NN after succeeding with the 
> first recovery, which fails because X < X+1
> ... 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11529) libHDFS still does not return appropriate error information in many cases

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944176#comment-15944176
 ] 

Hadoop QA commented on HDFS-11529:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11529 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860751/HDFS-11529.001.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux d80ca79e40bd 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cd014d5 |
| Default Java | 1.8.0_121 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18855/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18855/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libHDFS still does not return appropriate error information in many cases
> -
>
> Key: HDFS-11529
> URL: https://issues.apache.org/jira/browse/HDFS-11529
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Mukil
>Assignee: Sailesh Mukil
>Priority: Critical
>  Labels: errorhandling, libhdfs
> Attachments: HDFS-11529.000.patch, HDFS-11529.001.patch
>
>
> libHDFS uses a table to compare exceptions against and returns a 
> corresponding error code to the application in case of an error.
> However, this table is manually populated and is often forgotten when new 
> exceptions are added.
> This causes libHDFS to return EINTERNAL (or Unknown Error(255)) whenever 
> these exceptions are hit. These are some examples of exceptions that have 
> been observed on an Error(255):
> org.apache.hadoop.ipc.StandbyException (Operation category WRITE is not 
> supported in state standby)
> java.io.EOFException: Cannot seek after EOF
> 

[jira] [Updated] (HDFS-11582) Block Storage : add SCSI target access daemon

2017-03-27 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11582:

Status: Patch Available  (was: Open)

> Block Storage : add SCSI target access daemon
> -
>
> Key: HDFS-11582
> URL: https://issues.apache.org/jira/browse/HDFS-11582
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11582-HDFS-7240.001.patch, 
> HDFS-11582-HDFS-7240.002.patch
>
>
> This JIRA adds the daemon process that exposes SCSI target access. More 
> specifically, with this daemon process running, any OS with SCSI support can 
> talk to this daemon process and treat CBlock volumes as SCSI targets; in this 
> way the user can mount the volume in the usual POSIX manner.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11355) Block Storage : merge configuration into ozone configuration classes

2017-03-27 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11355:

  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: HDFS-7240
Target Version/s: HDFS-7240
  Status: Resolved  (was: Patch Available)

[~vagarychen] Thanks for the contribution. I have committed this to the feature 
branch.

> Block Storage : merge configuration into ozone configuration classes
> 
>
> Key: HDFS-11355
> URL: https://issues.apache.org/jira/browse/HDFS-11355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: HDFS-7240
>
> Attachments: HDFS-11355-HDFS-7240.001.patch, 
> HDFS-11355-HDFS-7240.002.patch, HDFS-11355-HDFS-7240.003.patch, 
> HDFS-11355-HDFS-7240.004.patch, HDFS-11355-HDFS-7240.005.patch
>
>
> Currently Block Storage has {{CBlockConfiguration}} as its configuration 
> class, while it also requires {{OzoneConfiguration}} settings. Having both of 
> these is redundant and makes settings error-prone. This JIRA merges the 
> former into the latter so that only one configuration class is needed.
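As a usage illustration of the merged result, all settings now live on a 
single object; the CBlock key name below is an assumption made for this 
sketch, not necessarily a key defined in the patch:
{code}
// One configuration class for both Ozone and CBlock settings.
OzoneConfiguration conf = new OzoneConfiguration();
conf.setBoolean(OzoneConfigKeys.OZONE_ENABLED, true);
// Formerly a CBlockConfiguration setting; the key name is illustrative.
conf.set("dfs.cblock.servicerpc-address", "127.0.0.1:9810");
{code}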



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11548) Ozone: SCM: Add node pool management API

2017-03-27 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11548:

Summary: Ozone: SCM: Add node pool management API  (was: OZone: SCM: Add 
node pool management API)

> Ozone: SCM: Add node pool management API
> 
>
> Key: HDFS-11548
> URL: https://issues.apache.org/jira/browse/HDFS-11548
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11548-HDFS-7240.001.patch, 
> HDFS-11548-HDFS-7240.002.patch, HDFS-11548-HDFS-7240.003.patch, 
> HDFS-11548-HDFS-7240.004.patch
>
>
> The idea is to group registered nodes into pools of a fixed size (say 24 
> nodes per pool) so that container allocation and reporting can all be handled 
> independently on a per-pool basis by SCM.
> The initial patch will implement the following Node Pool API:
> 1) add a node to a node pool 
> 2) remove a node from a pool 
> 3) get the pool name that a node belongs to 
> 4) get all the pool names 
> 5) get all nodes of a pool
> The integration with SCM container allocation can start with all nodes in a 
> single default pool upon registration. We will provide a CLI to manage 
> multiple pools, and support for a pool definition file, later.
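As a reading aid, the five operations above suggest an interface roughly like 
the following sketch; the names are hypothetical, and the patch's actual types 
may differ:
{code}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hdfs.protocol.DatanodeID;

// Hypothetical shape of the node pool API listed in the description.
public interface NodePoolManager {
  void addNode(String poolName, DatanodeID node) throws IOException;    // 1)
  void removeNode(String poolName, DatanodeID node) throws IOException; // 2)
  String getNodePool(DatanodeID node) throws IOException;               // 3)
  List<String> getNodePools();                                          // 4)
  List<DatanodeID> getNodes(String poolName);                           // 5)
}
{code}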



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11529) libHDFS still does not return appropriate error information in many cases

2017-03-27 Thread Sailesh Mukil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sailesh Mukil updated HDFS-11529:
-
Status: Patch Available  (was: Open)

Thanks for the comments, [~dhecht]; I've addressed them in the next patch.

> libHDFS still does not return appropriate error information in many cases
> -
>
> Key: HDFS-11529
> URL: https://issues.apache.org/jira/browse/HDFS-11529
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Mukil
>Assignee: Sailesh Mukil
>Priority: Critical
>  Labels: errorhandling, libhdfs
> Attachments: HDFS-11529.000.patch, HDFS-11529.001.patch
>
>
> libHDFS uses a table to compare exceptions against and returns a 
> corresponding error code to the application in case of an error.
> However, this table is manually populated and is often forgotten when new 
> exceptions are added.
> This causes libHDFS to return EINTERNAL (or Unknown Error(255)) whenever 
> these exceptions are hit. These are some examples of exceptions that have 
> been observed on an Error(255):
> org.apache.hadoop.ipc.StandbyException (Operation category WRITE is not 
> supported in state standby)
> java.io.EOFException: Cannot seek after EOF
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> It is of course not possible to have an error code for each and every type of 
> exception, so one suggestion for addressing this is to add a call such as 
> hdfsGetLastException() that would return the last exception that a libHDFS 
> thread encountered. This way, an application may choose to call 
> hdfsGetLastException() if it receives EINTERNAL.
> We can make use of thread-local storage to store this information. This also 
> makes sure that the current functionality is preserved.
> This is a follow-up from HDFS-4997.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11529) libHDFS still does not return appropriate error information in many cases

2017-03-27 Thread Sailesh Mukil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sailesh Mukil updated HDFS-11529:
-
Status: Open  (was: Patch Available)

> libHDFS still does not return appropriate error information in many cases
> -
>
> Key: HDFS-11529
> URL: https://issues.apache.org/jira/browse/HDFS-11529
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Mukil
>Assignee: Sailesh Mukil
>Priority: Critical
>  Labels: errorhandling, libhdfs
> Attachments: HDFS-11529.000.patch, HDFS-11529.001.patch
>
>
> libHDFS uses a table to compare exceptions against and returns a 
> corresponding error code to the application in case of an error.
> However, this table is manually populated and is often forgotten when new 
> exceptions are added.
> This causes libHDFS to return EINTERNAL (or Unknown Error(255)) whenever 
> these exceptions are hit. These are some examples of exceptions that have 
> been observed on an Error(255):
> org.apache.hadoop.ipc.StandbyException (Operation category WRITE is not 
> supported in state standby)
> java.io.EOFException: Cannot seek after EOF
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> It is of course not possible to have an error code for each and every type of 
> exception, so one suggestion for addressing this is to add a call such as 
> hdfsGetLastException() that would return the last exception that a libHDFS 
> thread encountered. This way, an application may choose to call 
> hdfsGetLastException() if it receives EINTERNAL.
> We can make use of thread-local storage to store this information. This also 
> makes sure that the current functionality is preserved.
> This is a follow-up from HDFS-4997.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11529) libHDFS still does not return appropriate error information in many cases

2017-03-27 Thread Sailesh Mukil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sailesh Mukil updated HDFS-11529:
-
Attachment: HDFS-11529.001.patch

> libHDFS still does not return appropriate error information in many cases
> -
>
> Key: HDFS-11529
> URL: https://issues.apache.org/jira/browse/HDFS-11529
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Mukil
>Assignee: Sailesh Mukil
>Priority: Critical
>  Labels: errorhandling, libhdfs
> Attachments: HDFS-11529.000.patch, HDFS-11529.001.patch
>
>
> libHDFS uses a table to compare exceptions against and returns a 
> corresponding error code to the application in case of an error.
> However, this table is manually populated and is often forgotten when new 
> exceptions are added.
> This causes libHDFS to return EINTERNAL (or Unknown Error(255)) whenever 
> these exceptions are hit. These are some examples of exceptions that have 
> been observed on an Error(255):
> org.apache.hadoop.ipc.StandbyException (Operation category WRITE is not 
> supported in state standby)
> java.io.EOFException: Cannot seek after EOF
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> It is of course not possible to have an error code for each and every type of 
> exception, so one suggestion for addressing this is to add a call such as 
> hdfsGetLastException() that would return the last exception that a libHDFS 
> thread encountered. This way, an application may choose to call 
> hdfsGetLastException() if it receives EINTERNAL.
> We can make use of thread-local storage to store this information. This also 
> makes sure that the current functionality is preserved.
> This is a follow-up from HDFS-4997.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11355) Block Storage : merge configuration into ozone configuration classes

2017-03-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944134#comment-15944134
 ] 

Anu Engineer commented on HDFS-11355:
-

+1, I will commit this shortly. 

> Block Storage : merge configuration into ozone configuration classes
> 
>
> Key: HDFS-11355
> URL: https://issues.apache.org/jira/browse/HDFS-11355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11355-HDFS-7240.001.patch, 
> HDFS-11355-HDFS-7240.002.patch, HDFS-11355-HDFS-7240.003.patch, 
> HDFS-11355-HDFS-7240.004.patch, HDFS-11355-HDFS-7240.005.patch
>
>
> Currently Block Storage has {{CBlockConfiguration}} as its configuration 
> class, while it also requires {{OzoneConfiguration}} settings. Having both of 
> these is redundant and makes settings error-prone. This JIRA merges the 
> former into the latter so that only one configuration class is needed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11567) Ozone: Support update container

2017-03-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944130#comment-15944130
 ] 

Anu Engineer commented on HDFS-11567:
-

Thanks for taking care of this feature. It looks really good. 

A couple of minor comments:

Function: {{updateContainer}}
* Better error message
{code}
} catch (NoSuchAlgorithmException e) {
  throw new StorageContainerException("failed to create container",
      NO_SUCH_ALGORITHM);
}
{code}
-- replace with 
StorageContainerException("Unable to create Message Digest; usually this is a 
Java configuration issue.", NO_SUCH_ALGORITHM);

* File another JIRA to force the update or support a force flag
{code}
if (!orgData.isOpen()) {
  throw new StorageContainerException(
      "Update a closed container is not allowed. Name: " + containerName,
      UNSUPPORTED_REQUEST);
}
{code}
This will allow us to repair the metadata from the command line if needed. 

* Add a force flag?
{code}
if (!containerFile.exists() || !containerFile.canWrite()) {
  throw new StorageContainerException(
      "Container file not exists or corrupted. Name: " + containerName,
      CONTAINER_INTERNAL_ERROR);
}
{code}
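For illustration, the force-flag idea might look like the following sketch; 
{{forceUpdate}} is a hypothetical parameter, not the patch's actual signature:
{code}
// Bypass the closed-container check only when a repair is explicitly
// requested, e.g. from a command-line metadata repair tool.
if (!orgData.isOpen() && !forceUpdate) {
  throw new StorageContainerException(
      "Update a closed container is not allowed. Name: " + containerName,
      UNSUPPORTED_REQUEST);
}
{code}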

> Ozone: Support update container
> ---
>
> Key: HDFS-11567
> URL: https://issues.apache.org/jira/browse/HDFS-11567
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-11567-HDFS-7240.001.patch
>
>
> Add support to update a container. A container has a set of states; those 
> states include information like SHA256 hashes, the metadata of the container, 
> a set of key-value pairs, etc. This API allows us to update or change those 
> values for an existing container. This API is also critical if we want to 
> force a rewrite of the container data on the datanode. We could read the data 
> and write it back for a disk update, which would allow us to repair the 
> container metadata if it is really needed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11548) OZone: SCM: Add node pool management API

2017-03-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944114#comment-15944114
 ] 

Anu Engineer commented on HDFS-11548:
-

[~xyao] Thanks for the update. +1, pending Jenkins.

> OZone: SCM: Add node pool management API
> 
>
> Key: HDFS-11548
> URL: https://issues.apache.org/jira/browse/HDFS-11548
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11548-HDFS-7240.001.patch, 
> HDFS-11548-HDFS-7240.002.patch, HDFS-11548-HDFS-7240.003.patch, 
> HDFS-11548-HDFS-7240.004.patch
>
>
> The idea is to group registered nodes into pools of a fixed size (say 24 
> nodes per pool) so that container allocation and reporting can all be handled 
> independently on a per-pool basis by SCM.
> The initial patch will implement the following Node Pool API:
> 1) add a node to a node pool 
> 2) remove a node from a pool 
> 3) get the pool name that a node belongs to 
> 4) get all the pool names 
> 5) get all nodes of a pool
> The integration with SCM container allocation can start with all nodes in a 
> single default pool upon registration. We will provide a CLI to manage 
> multiple pools, and support for a pool definition file, later.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11567) Ozone: Support update container

2017-03-27 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944109#comment-15944109
 ] 

Lei (Eddy) Xu commented on HDFS-11567:
--

Thanks for updating the JIRA. It looks good, [~anu].


> Ozone: Support update container
> ---
>
> Key: HDFS-11567
> URL: https://issues.apache.org/jira/browse/HDFS-11567
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-11567-HDFS-7240.001.patch
>
>
> Add support to update a container. A container has a set of states; those 
> states include information like SHA256 hashes, the metadata of the container, 
> a set of key-value pairs, etc. This API allows us to update or change those 
> values for an existing container. This API is also critical if we want to 
> force a rewrite of the container data on the datanode. We could read the data 
> and write it back for a disk update, which would allow us to repair the 
> container metadata if it is really needed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11567) Ozone: Support update container

2017-03-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944107#comment-15944107
 ] 

Anu Engineer commented on HDFS-11567:
-

[~eddyxu] Thanks for your comment. I took the liberty of updating the 
description. Please let me know if that makes sense or whether I should add 
more info.


> Ozone: Support update container
> ---
>
> Key: HDFS-11567
> URL: https://issues.apache.org/jira/browse/HDFS-11567
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-11567-HDFS-7240.001.patch
>
>
> Add support to update a container. A container has a set of states; those 
> states include information like SHA256 hashes, the metadata of the container, 
> a set of key-value pairs, etc. This API allows us to update or change those 
> values for an existing container. This API is also critical if we want to 
> force a rewrite of the container data on the datanode. We could read the data 
> and write it back for a disk update, which would allow us to repair the 
> container metadata if it is really needed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8631) WebHDFS : Support get/setQuota

2017-03-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944105#comment-15944105
 ] 

Andrew Wang commented on HDFS-8631:
---

Hi [~surendrasingh],

I did some digging into the history, as the asymmetry between 
FileSystem#getQuota and HdfsAdmin#setQuota seemed intentional. Sure enough, 
there's a discussion on HDFS-3000.

It sounds like we want to expose the HDFS-specific functionality in HdfsAdmin 
via a REST interface. However, we need to do this without expanding the 
FileSystem interface, since the HdfsAdmin functionality is HDFS-specific. So, 
we need to define a WebHdfsAdmin separate from WebHdfsFileSystem. The 
server-side functionality can still be implemented with the same NN and DN 
servlets.

We also need to document these WebHdfsAdmin APIs specially, perhaps in a new 
page that separates them from the FileSystem APIs.
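A hedged sketch of the shape this could take follows; {{WebHdfsAdmin}} and the 
REST parameters shown are assumptions drawn from this discussion, not an 
existing API:
{code}
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.fs.Path;

// Hypothetical HDFS-specific admin client, deliberately kept out of the
// generic FileSystem interface.
public class WebHdfsAdmin {
  private final URI nnUri;

  public WebHdfsAdmin(URI nnUri) {
    this.nnUri = nnUri;
  }

  // Would issue something like
  //   PUT /webhdfs/v1/<path>?op=SETQUOTA&namespacequota=<n>
  // against the same NN servlet that backs WebHdfsFileSystem.
  public void setQuota(Path src, long nsQuota, long ssQuota)
      throws IOException {
    // REST call elided in this sketch.
  }
}
{code}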

> WebHDFS : Support get/setQuota
> --
>
> Key: HDFS-8631
> URL: https://issues.apache.org/jira/browse/HDFS-8631
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.7.2
>Reporter: nijel
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8631-001.patch, HDFS-8631-002.patch, 
> HDFS-8631-003.patch, HDFS-8631-004.patch, HDFS-8631-005.patch, 
> HDFS-8631-006.patch
>
>
> The user is able to do quota management from the filesystem object. The same 
> operation can be allowed through the REST API.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11567) Ozone: Support update container

2017-03-27 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11567:

Description: Add support to update a container. A container has a set of 
states; those states include information like SHA256 hashes, the metadata of 
the container, a set of key-value pairs, etc. This API allows us to update or 
change those values for an existing container. This API is also critical if we 
want to force a rewrite of the container data on the datanode. We could read 
the data and write it back for a disk update, which would allow us to repair 
the container metadata if it is really needed.  (was: Add support to update a 
container.)

> Ozone: Support update container
> ---
>
> Key: HDFS-11567
> URL: https://issues.apache.org/jira/browse/HDFS-11567
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-11567-HDFS-7240.001.patch
>
>
> Add support to update a container. A container has a set of states; those 
> states include information like SHA256 hashes, the metadata of the container, 
> a set of key-value pairs, etc. This API allows us to update or change those 
> values for an existing container. This API is also critical if we want to 
> force a rewrite of the container data on the datanode. We could read the data 
> and write it back for a disk update, which would allow us to repair the 
> container metadata if it is really needed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11567) Ozone: Support update container

2017-03-27 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944093#comment-15944093
 ] 

Lei (Eddy) Xu commented on HDFS-11567:
--

Hey [~cheersyang], would you mind updating the description of this JIRA to be 
more concrete about what the goal of this patch is?

Thanks a lot.

> Ozone: Support update container
> ---
>
> Key: HDFS-11567
> URL: https://issues.apache.org/jira/browse/HDFS-11567
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-11567-HDFS-7240.001.patch
>
>
> Add support to update a container.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11170) Add builder-based create API to FileSystem

2017-03-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11170:
---
   Resolution: Fixed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Thanks again Sammi, committed the branch-2 patch to branch-2.

> Add builder-based create API to FileSystem
> --
>
> Key: HDFS-11170
> URL: https://issues.apache.org/jira/browse/HDFS-11170
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HDFS-11170-00.patch, HDFS-11170-01.patch, 
> HDFS-11170-02.patch, HDFS-11170-03.patch, HDFS-11170-04.patch, 
> HDFS-11170-05.patch, HDFS-11170-06.patch, HDFS-11170-07.patch, 
> HDFS-11170-08.patch, HDFS-11170-branch-2.001.patch
>
>
> The FileSystem class supports multiple create functions to help users create 
> files. Some create functions have many parameters, and it's hard for users to 
> remember these parameters and their order exactly. This task is to add 
> builder-based create functions to help users create files more easily.
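A usage sketch of the builder-based create API follows; the parameter values 
are illustrative, and {{fs}} is assumed to be an existing FileSystem instance:
{code}
FSDataOutputStream out = fs.createFile(new Path("/tmp/demo"))
    .overwrite(true)
    .replication((short) 3)
    .blockSize(128L * 1024 * 1024)
    .build();
out.close();
{code}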



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7337) Configurable and pluggable Erasure Codec and schema

2017-03-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944070#comment-15944070
 ] 

Andrew Wang commented on HDFS-7337:
---

Hi Kai, glad the suggestion was helpful,

bq. Do you think we should allow removal of schemas/policies by this XML 
means? IMO, the XML file is only for new entries. An extra CLI command could be 
provided to do removal. When doing removal, the codec/schema/policy name would 
be used to distinguish and reference the entry to remove? No update is 
supported, since admins can remove and then add.

Agree, sounds good to me. Thanks again for driving this!

> Configurable and pluggable Erasure Codec and schema
> ---
>
> Key: HDFS-7337
> URL: https://issues.apache.org/jira/browse/HDFS-7337
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: erasure-coding
>Reporter: Zhe Zhang
>Assignee: Kai Zheng
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-7337-prototype-v1.patch, 
> HDFS-7337-prototype-v2.zip, HDFS-7337-prototype-v3.zip, 
> PluggableErasureCodec.pdf, PluggableErasureCodec-v2.pdf, 
> PluggableErasureCodec-v3.pdf
>
>
> According to HDFS-7285 and the design, this considers supporting multiple 
> Erasure Codecs via a pluggable approach. It allows defining and configuring 
> multiple codec schemas with different coding algorithms and parameters. The 
> resultant codec schemas can be utilized and specified via the command tool 
> for different file folders. While designing and implementing such a pluggable 
> framework, it also implements a concrete codec by default (Reed-Solomon) to 
> prove the framework is useful and workable. A separate JIRA could be opened 
> for the RS codec implementation.
> Note HDFS-7353 will focus on the very low-level codec API and implementation 
> to make concrete vendor libraries transparent to the upper layer. This JIRA 
> focuses on high-level stuff that interacts with configuration, schema, etc.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11170) Add builder-based create API to FileSystem

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944060#comment-15944060
 ] 

Hadoop QA commented on HDFS-11170:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
3s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
37s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
25s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
45s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
12s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
9s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
1s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
4s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
1s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 27s{color} | {color:orange} root: The patch generated 1 new + 236 unchanged 
- 1 fixed = 237 total (was 237) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
28s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}223m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_121 Failed junit tests | 

[jira] [Commented] (HDFS-11577) Combine the old and the new chooseRandom for better performance

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944052#comment-15944052
 ] 

Hadoop QA commented on HDFS-11577:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  0s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m  2s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}146m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.sftp.TestSFTPFileSystem |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11577 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860690/HDFS-11577.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f8fa1cbd295c 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / db2adf3 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18849/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18849/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18849/testReport/ |
| modules | C: 

[jira] [Updated] (HDFS-11584) Fix flaky test TestErasureCodeBenchmarkThroughput.testECReadWrite - failed creating new block streams

2017-03-27 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11584:
--
Attachment: TestECCodeBenchmark.fail.log

> Fix flaky test TestErasureCodeBenchmarkThroughput.testECReadWrite - failed 
> creating new block streams
> -
>
> Key: HDFS-11584
> URL: https://issues.apache.org/jira/browse/HDFS-11584
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
> Attachments: TestECCodeBenchmark.fail.log
>
>
> TestErasureCodeBenchmarkThroughput.testECReadWrite has been failing 
> intermittently. Attached are logs from a recent precheckin failure run.
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11907/testReport/org.apache.hadoop.hdfs/TestErasureCodeBenchmarkThroughput/testECReadWrite/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11888/testReport/org.apache.hadoop.hdfs/TestDFSStripedOutputStreamWithFailure110/testAddBlockWhenNoSufficientDataBlockNumOfNodes/



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11584) Fix flaky test TestErasureCodeBenchmarkThroughput.testECReadWrite - failed creating new block streams

2017-03-27 Thread Manoj Govindassamy (JIRA)
Manoj Govindassamy created HDFS-11584:
-

 Summary: Fix flaky test 
TestErasureCodeBenchmarkThroughput.testECReadWrite - failed creating new block 
streams
 Key: HDFS-11584
 URL: https://issues.apache.org/jira/browse/HDFS-11584
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Affects Versions: 3.0.0-alpha1
Reporter: Manoj Govindassamy
 Attachments: TestECCodeBenchmark.fail.log

TestErasureCodeBenchmarkThroughput.testECReadWrite has been failing 
intermittently. Attached logs from the recent precheckin failure run.

https://builds.apache.org/job/PreCommit-HADOOP-Build/11907/testReport/org.apache.hadoop.hdfs/TestErasureCodeBenchmarkThroughput/testECReadWrite/

https://builds.apache.org/job/PreCommit-HADOOP-Build/11888/testReport/org.apache.hadoop.hdfs/TestDFSStripedOutputStreamWithFailure110/testAddBlockWhenNoSufficientDataBlockNumOfNodes/



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11548) OZone: SCM: Add node pool management API

2017-03-27 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11548:
--
Attachment: HDFS-11548-HDFS-7240.004.patch

Thanks [~anu] for the review. Patch 004 fixed the remaining issues.

> OZone: SCM: Add node pool management API
> 
>
> Key: HDFS-11548
> URL: https://issues.apache.org/jira/browse/HDFS-11548
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11548-HDFS-7240.001.patch, 
> HDFS-11548-HDFS-7240.002.patch, HDFS-11548-HDFS-7240.003.patch, 
> HDFS-11548-HDFS-7240.004.patch
>
>
> The idea is to group registered nodes into pools of fixed size (say 24 nodes 
> per pool) so that container allocation and reporting can all be handled 
> independently on a per-pool basis by SCM.  
> The initial patch will implement a Node Pool API that can (see the interface 
> sketch after this description): 
> 1) add a node to a node pool 
> 2) remove a node from a pool 
> 3) get the pool name that a node belongs to 
> 4) get all the pool names 
> 5) get all nodes of a pool
> For the initial integration with SCM container allocation, all nodes can be 
> placed in a single default pool upon registration. We will provide a CLI to 
> manage multiple pools, and support for a pool definition file, later. 
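
An interface sketch of the five operations above; only 
{{addNode(String, DatanodeID)}} appears verbatim in the review thread, so the 
other names and signatures are assumptions:
{code}
import java.util.List;
import org.apache.hadoop.hdfs.protocol.DatanodeID;

/** Sketch only, not the actual SCM interface. */
public interface NodePoolManager {
  void addNode(String pool, DatanodeID node);     // 1) add a node to a pool
  void removeNode(String pool, DatanodeID node);  // 2) remove a node from a pool
  String getNodePool(DatanodeID node);            // 3) pool a node belongs to
  List<String> getNodePools();                    // 4) all pool names
  List<DatanodeID> getNodes(String pool);         // 5) all nodes of a pool
}
{code}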



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9705) Refine the behaviour of getFileChecksum when length = 0

2017-03-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9705:
--
Status: Patch Available  (was: Reopened)

> Refine the behaviour of getFileChecksum when length = 0
> ---
>
> Key: HDFS-9705
> URL: https://issues.apache.org/jira/browse/HDFS-9705
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: SammiChen
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-9705-branch-2.001.patch, HDFS-9705-v1.patch, 
> HDFS-9705-v2.patch, HDFS-9705-v3.patch, HDFS-9705-v4.patch, 
> HDFS-9705-v5.patch, HDFS-9705-v6.patch, HDFS-9705-v7.patch
>
>
> {{FileSystem#getFileChecksum}} may accept {{length}} parameter and 0 is a 
> valid value. Currently it will return {{null}} when length is 0, in the 
> following code block:
> {code}
> //compute file MD5
> final MD5Hash fileMD5 = MD5Hash.digest(md5out.getData());
> switch (crcType) {
> case CRC32:
>   return new MD5MD5CRC32GzipFileChecksum(bytesPerCRC,
>   crcPerBlock, fileMD5);
> case CRC32C:
>   return new MD5MD5CRC32CastagnoliFileChecksum(bytesPerCRC,
>   crcPerBlock, fileMD5);
> default:
>   // If there is no block allocated for the file,
>   // return one with the magic entry that matches what previous
>   // hdfs versions return.
>   if (locatedblocks.size() == 0) {
> return new MD5MD5CRC32GzipFileChecksum(0, 0, fileMD5);
>   }
>   // we should never get here since the validity was checked
>   // when getCrcType() was called above.
>   return null;
> }
> {code}
> The comment says "we should never get here since the validity was checked", 
> but it does get here. Since we're using the MD5-MD5-X approach, empty content 
> is actually a valid case, with the MD5 value 
> {{d41d8cd98f00b204e9800998ecf8427e}}, so we suggest returning a reasonable 
> value other than null. At least some useful information could then be seen in 
> the returned value, such as values from the block checksum header.
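
For reference, the empty-content MD5 value cited above can be verified with 
plain JDK code (no HDFS involved):
{code}
import java.security.MessageDigest;

public class EmptyContentMd5 {
  public static void main(String[] args) throws Exception {
    // MD5 over zero bytes of input.
    byte[] digest = MessageDigest.getInstance("MD5").digest(new byte[0]);
    StringBuilder hex = new StringBuilder();
    for (byte b : digest) {
      hex.append(String.format("%02x", b));
    }
    System.out.println(hex);  // prints d41d8cd98f00b204e9800998ecf8427e
  }
}
{code}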



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11486) Client close() should not fail fast if the last block is being decommissioned

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943984#comment-15943984
 ] 

Hadoop QA commented on HDFS-11486:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11486 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860699/HDFS-11486.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4025590d67ab 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / db2adf3 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18850/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18850/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18850/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Client close() should not fail fast if the last block is being decommissioned
> -
>
> Key: HDFS-11486
> URL: https://issues.apache.org/jira/browse/HDFS-11486
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>

[jira] [Commented] (HDFS-11583) Parent spans not initialized to NullScope for every DFSPacket

2017-03-27 Thread Karan Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943979#comment-15943979
 ] 

Karan Mehta commented on HDFS-11583:


Since a single {{DFSQueue}} of a {{DataStreamer}} thread can encounter packets 
that may or may not have tracing enabled, a simple fix would be to reinitialize 
the tracing scope to {{NullScope.INSTANCE}} on every loop iteration, and only 
start a new {{dataStreamer}} scope when the {{parents}} field of the 
{{DFSPacket}} is non-empty.
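
A hedged sketch of that loop shape, assuming the HTrace 3 API 
({{Trace.startSpan}}, {{Sampler.ALWAYS}}, {{NullScope.INSTANCE}}) and a 
{{getTraceParents()}} accessor on {{DFSPacket}}; the real {{DataStreamer}} 
wait/ack logic is elided:
{code}
// Sketch only -- simplified loop body, not the actual DataStreamer code.
TraceScope scope;
while (!streamerClosed) {
  DFSPacket one = dataQueue.getFirst();  // real code waits and polls
  // Reset on every iteration so an untraced packet cannot reuse the
  // previous packet's scope and emit orphan writeTo spans.
  scope = NullScope.INSTANCE;
  if (one.getTraceParents().length > 0) {
    scope = Trace.startSpan("dataStreamer", Sampler.ALWAYS);
  }
  try {
    // ... send the packet; the writeTo child span is created here ...
  } finally {
    scope.close();
  }
}
{code}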

> Parent spans not initialized to NullScope for every DFSPacket
> -
>
> Key: HDFS-11583
> URL: https://issues.apache.org/jira/browse/HDFS-11583
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tracing
>Reporter: Karan Mehta
>
> The issue was found while working with PHOENIX-3752.
> Each packet received by the {{run()}} method of the {{DataStreamer}} class 
> uses the {{parents}} field of the {{DFSPacket}} to create a new 
> {{dataStreamer}} span, which in turn creates a {{writeTo}} span as its child. 
> The {{parents}} field is set when the packet is added to the {{dataQueue}}, 
> with the value taken from the {{ThreadLocal}}; this is how HTrace handles 
> spans. 
> A {{TraceScope}} is created and initialized to {{NullScope}} before the loop, 
> which runs until the stream is closed. 
> Consider the following scenario: the {{dataQueue}} contains multiple packets, 
> only the first of which has tracing enabled. The scope is initialized to the 
> {{dataStreamer}} scope and a {{writeTo}} span is created as its child, which 
> gets closed once the packet is sent out to a remote datanode. Before the 
> {{writeTo}} span is started, the {{dataStreamer}} scope is detached, so 
> calling the close method on it does nothing at the end of the loop. 
> The second iteration will then use the stale value of the {{scope}} variable 
> with a DFSPacket on which tracing is not enabled. This results in orphan 
> {{writeTo}} spans being delivered to the {{SpanReceiver}} registered in the 
> trace framework, potentially generating an unlimited number of spans. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11582) Block Storage : add SCSI target access daemon

2017-03-27 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11582:
--
Attachment: HDFS-11582-HDFS-7240.002.patch

Thanks [~msingh] for the comments! Posted the v002 patch to fix these.

> Block Storage : add SCSI target access daemon
> -
>
> Key: HDFS-11582
> URL: https://issues.apache.org/jira/browse/HDFS-11582
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11582-HDFS-7240.001.patch, 
> HDFS-11582-HDFS-7240.002.patch
>
>
> This JIRA adds the daemon process that exposes SCSI target access. More 
> specifically, with this daemon process running, any OS with SCSI support can 
> talk to it and treat CBlock volumes as SCSI targets, so the user can mount a 
> volume in the usual POSIX manner.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8498) Blocks can be committed with wrong size

2017-03-27 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943948#comment-15943948
 ] 

Daryn Sharp commented on HDFS-8498:
---

As the original filer, I'll note that hardening the client wasn't the intended 
fix.

The intention was to fix the NN ignoring the IBR-reported size and state when 
the block is on the expected storage; currently it only updates the GS.
{code}
 for (int i = 0; i < replicas.length; i++) {
DatanodeStorageInfo expected =
replicas[i].getExpectedStorageLocation();
if (expected == storage) {
  replicas[i].setGenerationStamp(reportedBlock.getGenerationStamp());
  return;
} else if (expected != null && expected.getDatanodeDescriptor() ==
{code}

The NN wouldn't get into this bad state if it kept track of reported sizes and 
did some sanity checking.
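
Illustrative only: the kind of sanity check the comment asks for, as a 
standalone predicate (the name and placement are hypothetical, not actual 
NameNode code):
{code}
/**
 * A committed/finalized replica should never shrink, so a reported size
 * below the recorded size signals a faulty client or datanode.
 */
static boolean reportedSizeLooksSane(long recordedBytes, long reportedBytes) {
  return reportedBytes >= recordedBytes;
}
{code}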

> Blocks can be committed with wrong size
> ---
>
> Key: HDFS-8498
> URL: https://issues.apache.org/jira/browse/HDFS-8498
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.5.0
>Reporter: Daryn Sharp
>Assignee: Jing Zhao
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-alpha3, 2.8.1
>
> Attachments: HDFS-8498.000.patch, HDFS-8498.001.patch, 
> HDFS-8498.branch-2.001.patch, HDFS-8498.branch-2.7.001.patch, 
> HDFS-8498.branch-2.patch
>
>
> When an IBR for a UC block arrives, the NN updates the expected location's 
> block and replica state _only_ if it's on an unexpected storage for an 
> expected DN.  If it's for an expected storage, only the genstamp is updated.  
> When the block is committed, and the expected locations are verified, only 
> the genstamp is checked.  The size is not checked but it wasn't updated in 
> the expected locations anyway.
> A faulty client may misreport the size when committing the block, leaving the 
> block effectively corrupted.  If the NN issues replications, each received 
> IBR is considered corrupt, so the NN invalidates the block and immediately 
> issues another replication.  The NN eventually realizes all the original 
> replicas are corrupt after full BRs are received from the original DNs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11558) BPServiceActor thread name is too long

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943931#comment-15943931
 ] 

Hadoop QA commented on HDFS-11558:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
59s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
58s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 49 unchanged - 0 fixed = 50 total (was 49) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  1m  
3s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m  1s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11558 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860706/HDFS-11558.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 55d835e28e25 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / db2adf3 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18851/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18851/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18851/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18851/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18851/artifact/patchprocess/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| findbugs | 

[jira] [Commented] (HDFS-11548) OZone: SCM: Add node pool management API

2017-03-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943933#comment-15943933
 ] 

Anu Engineer commented on HDFS-11548:
-

Thank you for updating the patch. Some very minor comments below.

* Could we please rewrite this pattern? 
{code}
 try {
  lock.writeLock().lock();
{code}

* Replace this pattern with 
{code}
lock.writeLock().lock();
try {
...
} finally {
lock.writeLock().unlock();
}
{code}
That seems to be the Java idiom; see 
https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/Lock.html 
and the runnable sketch after this list.

* nit: Preconditions.checkNotNull(pool);
{code}
public void addNode(final String pool, final DatanodeID node) 
{code}
Same feedback in removeNode; checkNotNull on node as well.

* Hard coded nodeCount in {{testDefaultNodePoolReload}}
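
A small, self-contained illustration of the locking idiom combined with the 
{{checkNotNull}} nit; {{NodePoolManagerImpl}} and its map-based pool store are 
hypothetical stand-ins, not the patch's actual code:
{code}
import static com.google.common.base.Preconditions.checkNotNull;

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import org.apache.hadoop.hdfs.protocol.DatanodeID;

public class NodePoolManagerImpl {
  private final ReadWriteLock lock = new ReentrantReadWriteLock();
  private final Map<String, Set<DatanodeID>> pools = new HashMap<>();

  public void addNode(final String pool, final DatanodeID node) {
    checkNotNull(pool);
    checkNotNull(node);
    // Acquire the lock before the try block, so a failure in lock()
    // itself never reaches an unlock() it did not pair with.
    lock.writeLock().lock();
    try {
      pools.computeIfAbsent(pool, k -> new HashSet<>()).add(node);
    } finally {
      lock.writeLock().unlock();
    }
  }
}
{code}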


> OZone: SCM: Add node pool management API
> 
>
> Key: HDFS-11548
> URL: https://issues.apache.org/jira/browse/HDFS-11548
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11548-HDFS-7240.001.patch, 
> HDFS-11548-HDFS-7240.002.patch, HDFS-11548-HDFS-7240.003.patch
>
>
> The idea is to group registered nodes into pools of fixed size (say 24 nodes 
> per pool) so that container allocation and reporting can all be handled 
> independently on a per-pool basis by SCM.  
> The initial patch will implement a Node Pool API that can: 
> 1) add a node to a node pool 
> 2) remove a node from a pool 
> 3) get the pool name that a node belongs to 
> 4) get all the pool names 
> 5) get all nodes of a pool
> For the initial integration with SCM container allocation, all nodes can be 
> placed in a single default pool upon registration. We will provide a CLI to 
> manage multiple pools, and support for a pool definition file, later. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11558) BPServiceActor thread name is too long

2017-03-27 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943889#comment-15943889
 ] 

Xiaobing Zhou edited comment on HDFS-11558 at 3/27/17 7:20 PM:
---

Posted v2. Thanks all for the reviews. Since the actor is instantiated per 
active or standby namenode, whose address is always available through conf, do 
we need to assemble it into the actor thread name? [~arpitagarwal] Thanks.

With the v2 patch, the thread name looks like:
{noformat}
2017-03-27 12:11:12,548 [ heartbeating] INFO
{noformat}

{noformat}
2017-03-27 12:11:12,584 [BP-2084616792-10.22.6.77-1490641870531 heartbeating]
{noformat}



was (Author: xiaobingo):
Posted v2. Thanks all for the reviews. Since the actor is instantiated per 
active or standby namenode, whose address is always available through conf, do 
we need to assemble it into the actor thread name? [~arpitagarwal] Thanks.

With the v2 patch, the thread name looks like:
{noformat}
2017-03-27 12:11:12,548 [ heartbeating] INFO
{noformat}

{noformat}
2017-03-27 12:11:12,584 [BP-2084616792-10.22.6.77-1490641870531 heartbeating]
{noformat}


> BPServiceActor thread name is too long
> --
>
> Key: HDFS-11558
> URL: https://issues.apache.org/jira/browse/HDFS-11558
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, 
> HDFS-11558.002.patch
>
>
> Currently, the thread name looks like
> {code}
> 2017-03-20 18:32:22,022 [DataNode: 
> [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0,
>  
> [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]]
>   heartbeating to localhost/127.0.0.1:51772] INFO  ...
> {code}
> which contains the full path for each storage dir.  It is unnecessarily long.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11558) BPServiceActor thread name is too long

2017-03-27 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943889#comment-15943889
 ] 

Xiaobing Zhou commented on HDFS-11558:
--

Posted v2. Thanks all for the reviews. Since the actor is instantiated per 
active or standby namenode, whose address is always available through conf, do 
we need to assemble it into the actor thread name? [~arpitagarwal] Thanks.

With the v2 patch, the thread name looks like:
{noformat}
2017-03-27 12:11:12,548 [ heartbeating] INFO
{noformat}

{noformat}
2017-03-27 12:11:12,584 [BP-2084616792-10.22.6.77-1490641870531 heartbeating]
{noformat}


> BPServiceActor thread name is too long
> --
>
> Key: HDFS-11558
> URL: https://issues.apache.org/jira/browse/HDFS-11558
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, 
> HDFS-11558.002.patch
>
>
> Currently, the thread name looks like
> {code}
> 2017-03-20 18:32:22,022 [DataNode: 
> [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0,
>  
> [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]]
>   heartbeating to localhost/127.0.0.1:51772] INFO  ...
> {code}
> which contains the full path for each storage dir.  It is unnecessarily long.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11558) BPServiceActor thread name is too long

2017-03-27 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-11558:
-
Attachment: HDFS-11558.002.patch

> BPServiceActor thread name is too long
> --
>
> Key: HDFS-11558
> URL: https://issues.apache.org/jira/browse/HDFS-11558
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, 
> HDFS-11558.002.patch
>
>
> Currently, the thread name looks like
> {code}
> 2017-03-20 18:32:22,022 [DataNode: 
> [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0,
>  
> [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]]
>   heartbeating to localhost/127.0.0.1:51772] INFO  ...
> {code}
> which contains the full path for each storage dir.  It is unnecessarily long.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11548) OZone: SCM: Add node pool management API

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943843#comment-15943843
 ] 

Hadoop QA commented on HDFS-11548:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
46s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.cblock.TestCBlockServerPersistence |
|   | hadoop.cblock.TestCBlockServer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11548 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860681/HDFS-11548-HDFS-7240.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3d4bb98a3b77 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / ed14373 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18847/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18847/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18847/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> OZone: SCM: Add node pool management API
> 
>
> Key: HDFS-11548
> URL: https://issues.apache.org/jira/browse/HDFS-11548
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>  

[jira] [Comment Edited] (HDFS-11582) Block Storage : add SCSI target access daemon

2017-03-27 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943814#comment-15943814
 ] 

Mukul Kumar Singh edited comment on HDFS-11582 at 3/27/17 6:47 PM:
---

Hi Chen,

Overall the patch looks good; however, I have the following comments.

1) Should we include "gb" in the key?
{code}
+  public static final String DFS_CBLOCK_CONTAINER_SIZE_GB_KEY =
+  "dfs.cblock.container.size";
{code}

2) CBlockTargetServer.java, CBlockManagerHandler.java and other files would 
need a license update




was (Author: msingh):
1) Should we include "gb" in the key?
{code}
+  public static final String DFS_CBLOCK_CONTAINER_SIZE_GB_KEY =
+  "dfs.cblock.container.size";
{code}

2) CBlockTargetServer.java, CBlockManagerHandler.java and other files need a 
license update



> Block Storage : add SCSI target access daemon
> -
>
> Key: HDFS-11582
> URL: https://issues.apache.org/jira/browse/HDFS-11582
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11582-HDFS-7240.001.patch
>
>
> This JIRA adds the daemon process that exposes SCSI target access. More 
> specifically, with this daemon process running, any OS with SCSI support can 
> talk to it and treat CBlock volumes as SCSI targets, so the user can mount a 
> volume in the usual POSIX manner.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11486) Client close() should not fail fast if the last block is being decommissioned

2017-03-27 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-11486:
---
Attachment: HDFS-11486.003.patch

Hello [~iwasakims], thanks for your comments. Sorry it took so long to get back 
to this patch.
Uploaded the v03 patch to address your comment.

> Client close() should not fail fast if the last block is being decommissioned
> -
>
> Key: HDFS-11486
> URL: https://issues.apache.org/jira/browse/HDFS-11486
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDF-11486.test.patch, HDFS-11486.001.patch, 
> HDFS-11486.002.patch, HDFS-11486.003.patch, 
> HDFS-11486.test-inmaintenance.patch
>
>
> If a DFS client closes a file while the last block is being decommissioned, 
> the close() may fail if the decommission of the block does not complete in a 
> few seconds.
> When a DataNode is being decommissioned, the NameNode marks the DN's state as 
> DECOMMISSION_INPROGRESS, and blocks with replicas on these DataNodes become 
> under-replicated immediately. A close() call which attempts to complete the 
> last open block will fail if the number of live replicas is below the minimal 
> replication factor, due to too many replicas residing on the decommissioning 
> DataNodes.
> The client internally retries completing the last open block up to 5 times by 
> default, which takes roughly 12 seconds. After that, close() throws an 
> exception like the following, which is typically not handled properly.
> {noformat}
> java.io.IOException: Unable to close file because the last 
> blockBP-33575088-10.0.0.200-1488410554081:blk_1073741827_1003 does not have 
> enough number of replicas.
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:864)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:827)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:793)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at 
> org.apache.hadoop.hdfs.TestDecommission.testCloseWhileDecommission(TestDecommission.java:708)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}
> Once the exception is thrown, the client usually does not attempt to close 
> again, so the file remains in open state, and the last block remains in under 
> replicated state.
> Subsequently, an administrator runs the recoverLease tool to salvage the 
> file, but the attempt fails because the block remains under-replicated. It is 
> not clear why the block is never replicated, though. Administrators then 
> assume the file is corrupt, because it still shows as open via fsck 
> -openforwrite and its modification time is hours old.
> In summary, I do not think close() should fail because the last block is 
> being decommissioned. The block has sufficient number replicas, and it's just 
> that some replicas are being decommissioned. Decomm should be transparent to 
> clients.
> This issue seems to be more prominent on a very large scale cluster, with min 
> replication factor set to 2.
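
As a stopgap under the current behavior, the client-side retry count can be 
raised. A hedged sketch, assuming the standard 
{{dfs.client.block.write.locateFollowingBlock.retries}} key (default 5), which 
also governs the completeFile retries described above:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

Configuration conf = new Configuration();
// Let close() ride over a longer decommission window before giving up.
conf.setInt("dfs.client.block.write.locateFollowingBlock.retries", 10);
FileSystem fs = FileSystem.get(conf);
{code}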



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11537) Block Storage : add cache layer

2017-03-27 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943821#comment-15943821
 ] 

Chen Liang commented on HDFS-11537:
---

The failed tests were caused by an "address already in use" error. The tests 
passed in local runs, so the failures seem specific to this particular Jenkins 
run. The checkstyle warnings were about the setter method, as mentioned above.

> Block Storage : add cache layer
> ---
>
> Key: HDFS-11537
> URL: https://issues.apache.org/jira/browse/HDFS-11537
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11537-HDFS-7240.004.patch, 
> HDFS-11537-HDFS-7240.005.patch, HDFS-11537-HDSF-7240.001.patch, 
> HDFS-11537-HDSF-7240.002.patch, HDFS-11537-HDSF-7240.003.patch
>
>
> This JIRA adds the cache layer. Specifically, this JIRA implements the cache 
> interface in HDFS-11361 and adds the code that actually talks to containers. 
> The upper layer can simply view the storage as a cache with simple put and 
> get interface, while in the backend the get and put are actually talking to 
> containers. This is a critical part to the cblock performance. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11582) Block Storage : add SCSI target access daemon

2017-03-27 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943814#comment-15943814
 ] 

Mukul Kumar Singh commented on HDFS-11582:
--

1) Should we include "gb" in the key?
{code}
+  public static final String DFS_CBLOCK_CONTAINER_SIZE_GB_KEY =
+  "dfs.cblock.container.size";
{code}

2) CBlockTargetServer.java, CBlockManagerHandler.java and other files need a 
license update



> Block Storage : add SCSI target access daemon
> -
>
> Key: HDFS-11582
> URL: https://issues.apache.org/jira/browse/HDFS-11582
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11582-HDFS-7240.001.patch
>
>
> This JIRA adds the daemon process that exposes SCSI target access. More 
> specifically, with this daemon process running, any OS with SCSI support can 
> talk to it and treat CBlock volumes as SCSI targets, so the user can mount a 
> volume in the usual POSIX manner.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-7967) Reduce the performance impact of the balancer

2017-03-27 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-7967:
-
Target Version/s: 2.8.1  (was: 2.8.0)

> Reduce the performance impact of the balancer
> -
>
> Key: HDFS-7967
> URL: https://issues.apache.org/jira/browse/HDFS-7967
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-7967.branch-2.001.patch, 
> HDFS-7967.branch-2.002.patch, HDFS-7967.branch-2-1.patch, 
> HDFS-7967.branch-2.8.001.patch, HDFS-7967.branch-2.8.002.patch, 
> HDFS-7967.branch-2.8.003.patch, HDFS-7967.branch-2.8-1.patch, 
> HDFS-7967-branch-2.8.patch, HDFS-7967-branch-2.patch
>
>
> The balancer needs to query for blocks to move from overly full DNs.  The 
> block lookup is extremely inefficient.  An iterator of the node's blocks is 
> created from the iterators of its storages' blocks.  A random number is 
> chosen corresponding to how many blocks will be skipped via the iterator.  
> Each skip requires costly scanning of triplets.
> The current design also only considers node imbalances while ignoring 
> imbalances within a node's storages.  A more efficient and intelligent 
> design may eliminate the costly skipping of blocks via round-robin selection 
> of blocks from the storages based on remaining capacity.
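
One possible reading of that selection scheme, as a standalone sketch (generic 
types, not actual balancer classes): cycle across per-storage iterators so no 
blocks are skipped via triplet scans.
{code}
import java.util.Iterator;
import java.util.List;

/** Sketch only: round-robin over per-storage block iterators. */
final class RoundRobinBlocks<B> {
  private final List<Iterator<B>> storages;
  private int next = 0;

  RoundRobinBlocks(List<Iterator<B>> storages) {
    this.storages = storages;
  }

  /** Returns the next block, cycling across storages; null when all are empty. */
  B next() {
    for (int tried = 0; tried < storages.size(); tried++) {
      Iterator<B> it = storages.get(next);
      next = (next + 1) % storages.size();
      if (it.hasNext()) {
        return it.next();
      }
    }
    return null;
  }
}
{code}
Weighting the rotation by remaining capacity, as the description suggests, 
could be layered on top of this loop.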



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11557) Empty directories may be recursively deleted without being listable

2017-03-27 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943800#comment-15943800
 ] 

Chen Liang commented on HDFS-11557:
---

Hi [~dmtucker],

This issue looks interesting. Have you got any updates on it? Do you mind if I 
work on it?

> Empty directories may be recursively deleted without being listable
> ---
>
> Key: HDFS-11557
> URL: https://issues.apache.org/jira/browse/HDFS-11557
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.3
>Reporter: David Tucker
>
> To reproduce, create a directory without read and/or execute permissions 
> (e.g. 0666, 0333, or 0222), then call delete on it with can_recurse=True. 
> Note that the delete succeeds even though the client is unable to check for 
> emptiness and, therefore, cannot otherwise know that any/all children are 
> deletable.
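
A hedged repro sketch using the Hadoop FileSystem API (the path and 
configuration are assumptions; run as a non-superuser, since the HDFS superuser 
bypasses permission checks):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class RecursiveDeleteRepro {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path dir = new Path("/tmp/unlistable");
    fs.mkdirs(dir);
    // Drop the read and execute bits so the directory cannot be listed.
    fs.setPermission(dir, new FsPermission((short) 0222));
    // The recursive delete still succeeds, even though emptiness
    // cannot be checked by this client.
    boolean deleted = fs.delete(dir, true /* recursive */);
    System.out.println("deleted = " + deleted);
  }
}
{code}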



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11583) Parent spans not initialized to NullScope for every DFSPacket

2017-03-27 Thread Karan Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karan Mehta updated HDFS-11583:
---
Summary: Parent spans not initialized to NullScope for every DFSPacket  
(was: Parent spans not initialized for every DFSPacket)

> Parent spans not initialized to NullScope for every DFSPacket
> -
>
> Key: HDFS-11583
> URL: https://issues.apache.org/jira/browse/HDFS-11583
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tracing
>Reporter: Karan Mehta
>
> The issue was found while working with PHOENIX-3752.
> Each packet received by the {{run()}} method of the {{DataStreamer}} class 
> uses the {{parents}} field of the {{DFSPacket}} to create a new 
> {{dataStreamer}} span, which in turn creates a {{writeTo}} span as its child. 
> The {{parents}} field is set when the packet is added to the {{dataQueue}}, 
> with the value taken from the {{ThreadLocal}}; this is how HTrace handles 
> spans. 
> A {{TraceScope}} is created and initialized to {{NullScope}} before the loop, 
> which runs until the stream is closed. 
> Consider the following scenario: the {{dataQueue}} contains multiple packets, 
> only the first of which has tracing enabled. The scope is initialized to the 
> {{dataStreamer}} scope and a {{writeTo}} span is created as its child, which 
> gets closed once the packet is sent out to a remote datanode. Before the 
> {{writeTo}} span is started, the {{dataStreamer}} scope is detached, so 
> calling the close method on it does nothing at the end of the loop. 
> The second iteration will then use the stale value of the {{scope}} variable 
> with a DFSPacket on which tracing is not enabled. This results in orphan 
> {{writeTo}} spans being delivered to the {{SpanReceiver}} registered in the 
> trace framework, potentially generating an unlimited number of spans. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11582) Block Storage : add SCSI target access daemon

2017-03-27 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11582:
--
Attachment: HDFS-11582-HDFS-7240.001.patch

> Block Storage : add SCSI target access daemon
> -
>
> Key: HDFS-11582
> URL: https://issues.apache.org/jira/browse/HDFS-11582
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11582-HDFS-7240.001.patch
>
>
> This JIRA adds the daemon process that exposes SCSI target access. More 
> specifically, with this daemon process running, any OS with SCSI support can 
> talk to it and treat CBlock volumes as SCSI targets, so the user can mount a 
> volume in the usual POSIX manner.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11583) Parent spans not initialized for every DFSPacket

2017-03-27 Thread Karan Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karan Mehta updated HDFS-11583:
---
Component/s: tracing

> Parent spans not initialized for every DFSPacket
> 
>
> Key: HDFS-11583
> URL: https://issues.apache.org/jira/browse/HDFS-11583
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tracing
>Reporter: Karan Mehta
>
> The issue was found while working with PHOENIX-3752.
> Each packet received by the {{run()}} method of the {{DataStreamer}} class 
> uses the {{parents}} field of the {{DFSPacket}} to create a new 
> {{dataStreamer}} span, which in turn creates a {{writeTo}} span as its child. 
> The {{parents}} field is set when the packet is added to the {{dataQueue}}, 
> with the value taken from the {{ThreadLocal}}; this is how HTrace handles 
> spans. 
> A {{TraceScope}} is created and initialized to {{NullScope}} before the loop, 
> which runs until the stream is closed. 
> Consider the following scenario: the {{dataQueue}} contains multiple packets, 
> only the first of which has tracing enabled. The scope is initialized to the 
> {{dataStreamer}} scope and a {{writeTo}} span is created as its child, which 
> gets closed once the packet is sent out to a remote datanode. Before the 
> {{writeTo}} span is started, the {{dataStreamer}} scope is detached, so 
> calling the close method on it does nothing at the end of the loop. 
> The second iteration will then use the stale value of the {{scope}} variable 
> with a DFSPacket on which tracing is not enabled. This results in orphan 
> {{writeTo}} spans being delivered to the {{SpanReceiver}} registered in the 
> trace framework, potentially generating an unlimited number of spans. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11583) Parent spans not initialized for every DFSPacket

2017-03-27 Thread Karan Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karan Mehta updated HDFS-11583:
---
Tags:   (was: tracing)

> Parent spans not initialized for every DFSPacket
> 
>
> Key: HDFS-11583
> URL: https://issues.apache.org/jira/browse/HDFS-11583
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tracing
>Reporter: Karan Mehta
>
> The issue was found while working with PHOENIX-3752.
> Each packet received by the {{run()}} method of the {{DataStreamer}} class 
> uses the {{parents}} field of the {{DFSPacket}} to create a new 
> {{dataStreamer}} span, which in turn creates a {{writeTo}} span as its child. 
> The {{parents}} field is set when the packet is added to the {{dataQueue}}, 
> with the value taken from the {{ThreadLocal}}; this is how HTrace handles 
> spans. 
> A {{TraceScope}} is created and initialized to {{NullScope}} before the loop, 
> which runs until the stream is closed. 
> Consider the following scenario: the {{dataQueue}} contains multiple packets, 
> only the first of which has tracing enabled. The scope is initialized to the 
> {{dataStreamer}} scope and a {{writeTo}} span is created as its child, which 
> gets closed once the packet is sent out to a remote datanode. Before the 
> {{writeTo}} span is started, the {{dataStreamer}} scope is detached, so 
> calling the close method on it does nothing at the end of the loop. 
> The second iteration will then use the stale value of the {{scope}} variable 
> with a DFSPacket on which tracing is not enabled. This results in orphan 
> {{writeTo}} spans being delivered to the {{SpanReceiver}} registered in the 
> trace framework, potentially generating an unlimited number of spans. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11567) Ozone: Support update container

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943778#comment-15943778
 ] 

Hadoop QA commented on HDFS-11567:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.cblock.TestCBlockCLI |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11567 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860676/HDFS-11567-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6e087472b491 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / ed14373 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18846/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18846/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18846/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Support update container
> ---
>
> Key: HDFS-11567
> URL: https://issues.apache.org/jira/browse/HDFS-11567

[jira] [Created] (HDFS-11583) Parent spans not initialized for every DFSPacket

2017-03-27 Thread Karan Mehta (JIRA)
Karan Mehta created HDFS-11583:
--

 Summary: Parent spans not initialized for every DFSPacket
 Key: HDFS-11583
 URL: https://issues.apache.org/jira/browse/HDFS-11583
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Karan Mehta


The issue was found while working with PHOENIX-3752.

Each packet received by the {{run()}} method of the {{DataStreamer}} class
uses the {{parents}} field of the {{DFSPacket}} to create a new
{{dataStreamer}} span, which in turn creates a {{writeTo}} span as its child.
The {{parents}} field is set when the packet is added to the {{dataQueue}},
with its value taken from the {{ThreadLocal}}; this is how HTrace handles
spans.
A {{TraceScope}} is created and initialized to {{NullScope}} before the loop,
which runs until the stream is closed.

Consider the following scenario: the {{dataQueue}} contains multiple packets,
only the first of which has tracing enabled. The scope is initialized to the
{{dataStreamer}} scope, and a {{writeTo}} span is created as its child, which
is closed once the packet is sent out to a remote datanode. Before the
{{writeTo}} span is started, the {{dataStreamer}} scope is detached, so
calling close on it at the end of the loop does nothing.

The second iteration then uses the stale value of the {{scope}} variable with
a {{DFSPacket}} on which tracing is not enabled. This results in orphan
{{writeTo}} spans being delivered to the {{SpanReceiver}} registered in the
trace framework, and it may result in an unlimited number of spans being
generated and sent to the receiver.
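For illustration, a minimal, self-contained Java sketch of this stale-scope
pattern is below; the {{Scope}} type and all names are simplified stand-ins,
not the real HTrace or {{DataStreamer}} APIs:

{code}
import java.util.ArrayDeque;
import java.util.Queue;

// Stand-in types only; NOT the real HTrace/DataStreamer code.
public class StaleScopeDemo {
  interface Scope extends AutoCloseable { void close(); }

  static final Scope NULL_SCOPE = () -> { };   // no-op scope

  static Scope startSpan(String name) {
    System.out.println("start span: " + name);
    return () -> System.out.println("close span: " + name);
  }

  static class Packet {
    final boolean traced;
    Packet(boolean traced) { this.traced = traced; }
  }

  static void writeTo(Packet p, Scope parent) {
    // In DataStreamer a child writeTo span is created under 'parent' here;
    // a stale parent produces the orphan writeTo spans described above.
    System.out.println("writeTo under parent " + parent);
  }

  public static void main(String[] args) {
    Queue<Packet> dataQueue = new ArrayDeque<>();
    dataQueue.add(new Packet(true));    // tracing enabled
    dataQueue.add(new Packet(false));   // tracing not enabled

    Scope scope = NULL_SCOPE;           // created once, before the loop
    while (!dataQueue.isEmpty()) {
      Packet p = dataQueue.poll();
      if (p.traced) {
        scope = startSpan("dataStreamer");   // reassigned only when traced
      }
      // Second iteration: 'scope' still holds the first packet's
      // (detached) dataStreamer scope -- the stale value described above.
      writeTo(p, scope);
      scope.close();   // a no-op if the scope was detached
    }
  }
}
{code}

Running this shows the second, untraced packet writing under the first
packet's scope, which is the orphan-span behavior described.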






[jira] [Commented] (HDFS-11558) BPServiceActor thread name is too long

2017-03-27 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943771#comment-15943771
 ] 

Arpit Agarwal commented on HDFS-11558:
--

+1 for [~szetszwo]'s suggestion and for retaining the NN IP address and port. 
That could be very useful if a specific actor thread is blocked.

> BPServiceActor thread name is too long
> --
>
> Key: HDFS-11558
> URL: https://issues.apache.org/jira/browse/HDFS-11558
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch
>
>
> Currently, the thread name looks like
> {code}
> 2017-03-20 18:32:22,022 [DataNode: 
> [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0,
>  
> [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]]
>   heartbeating to localhost/127.0.0.1:51772] INFO  ...
> {code}
> which contains the full path for each storage dir.  It is unnecessarily long.






[jira] [Created] (HDFS-11582) Block Storage : add SCSI target access daemon

2017-03-27 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11582:
-

 Summary: Block Storage : add SCSI target access daemon
 Key: HDFS-11582
 URL: https://issues.apache.org/jira/browse/HDFS-11582
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Liang
Assignee: Chen Liang


This JIRA adds the daemon process that exposes SCSI target access. More
specifically, with this daemon process running, any OS with SCSI support can
talk to it and treat CBlock volumes as SCSI targets; this way, the user can
mount a volume in the usual POSIX manner.






[jira] [Updated] (HDFS-11577) Combine the old and the new chooseRandom for better performance

2017-03-27 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11577:
--
Attachment: HDFS-11577.002.patch

Thanks [~linyiqun] for the review! Posted the v002 patch to include the two debug logs.

> Combine the old and the new chooseRandom for better performance
> ---
>
> Key: HDFS-11577
> URL: https://issues.apache.org/jira/browse/HDFS-11577
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11577.001.patch, HDFS-11577.002.patch
>
>
> As discussed in HDFS-11535, this JIRA adds a new function combining the new 
> and the old chooseRandom methods for better performance.
> More specifically, when choosing a random node with a storage type 
> requirement, the combined method first tries the old method of blindly 
> picking a random node. If this node satisfies the requirement, it is 
> returned. Otherwise, the new chooseRandom is called, which is guaranteed to 
> find an eligible node in one call (if one exists at all).
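For illustration, a hypothetical sketch of the combined strategy described
above; the class and method names are illustrative, not the actual
NetworkTopology code:

{code}
import java.util.List;
import java.util.Random;
import java.util.stream.Collectors;

// Illustrative sketch only; names do not match the real topology classes.
class CombinedChooseRandom {
  enum StorageType { DISK, SSD, ARCHIVE }
  interface Node { boolean hasStorageType(StorageType t); }

  private final Random random = new Random();

  Node chooseRandom(List<Node> nodes, StorageType required) {
    if (nodes.isEmpty()) {
      return null;
    }
    // Old method first: blindly pick a random node and check it.
    Node candidate = nodes.get(random.nextInt(nodes.size()));
    if (candidate.hasStorageType(required)) {
      return candidate;                      // fast path: one cheap pick
    }
    // New method as fallback: choose only among eligible nodes, so a single
    // call succeeds whenever an eligible node exists at all.
    List<Node> eligible = nodes.stream()
        .filter(n -> n.hasStorageType(required))
        .collect(Collectors.toList());
    return eligible.isEmpty() ? null
        : eligible.get(random.nextInt(eligible.size()));
  }
}
{code}

The fast path keeps the common case cheap, while the fallback bounds the worst
case at one extra filtered pass.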






[jira] [Updated] (HDFS-11170) Add builder-based create API to FileSystem

2017-03-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11170:
---
Status: Patch Available  (was: Reopened)

> Add builder-based create API to FileSystem
> --
>
> Key: HDFS-11170
> URL: https://issues.apache.org/jira/browse/HDFS-11170
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11170-00.patch, HDFS-11170-01.patch, 
> HDFS-11170-02.patch, HDFS-11170-03.patch, HDFS-11170-04.patch, 
> HDFS-11170-05.patch, HDFS-11170-06.patch, HDFS-11170-07.patch, 
> HDFS-11170-08.patch, HDFS-11170-branch-2.001.patch
>
>
> The FileSystem class supports multiple create functions to help users create 
> files. Some of these functions have many parameters, and it is hard for 
> users to remember the parameters and their order. This task adds 
> builder-based create functions to help users create files more easily.
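For illustration, a sketch of what a builder-based call could look like next
to a positional-argument overload; the builder method names here are
illustrative of the pattern and may differ from the final API:

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

class CreateFileSketch {
  static FSDataOutputStream create(FileSystem fs, Path path, FsPermission perm)
      throws IOException {
    // Builder style: every option is named and order-independent, unlike
    // fs.create(path, perm, overwrite, bufferSize, replication, blockSize, ...)
    // where the argument order is easy to mix up.
    return fs.createFile(path)
        .permission(perm)
        .overwrite(true)
        .bufferSize(4096)
        .replication((short) 3)
        .blockSize(128L * 1024 * 1024)
        .build();
  }
}
{code}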






[jira] [Created] (HDFS-11581) Ozone: Support force delete a container

2017-03-27 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-11581:
--

 Summary: Ozone: Support force delete a container
 Key: HDFS-11581
 URL: https://issues.apache.org/jira/browse/HDFS-11581
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Weiwei Yang


On some occasions, we may want to forcibly delete a container regardless of
whether the deletion conditions are satisfied, e.g. whether the container is
empty. This way we can make a best effort to clean up containers. Note that
only a CLOSED container can be force deleted.






[jira] [Updated] (HDFS-11548) Ozone: SCM: Add node pool management API

2017-03-27 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11548:
--
Attachment: HDFS-11548-HDFS-7240.003.patch

Fix the Jenkins issue.

> Ozone: SCM: Add node pool management API
> 
>
> Key: HDFS-11548
> URL: https://issues.apache.org/jira/browse/HDFS-11548
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11548-HDFS-7240.001.patch, 
> HDFS-11548-HDFS-7240.002.patch, HDFS-11548-HDFS-7240.003.patch
>
>
> The idea is to group registered nodes into pools of fixed size (say 24 nodes 
> per pool) so that container allocation and reporting can be handled 
> independently on a per-pool basis by SCM.
> The initial patch will implement a Node Pool API that can:
> 1) add a node to a node pool 
> 2) remove a node from a pool 
> 3) get the name of the pool that a node belongs to 
> 4) get all the pool names 
> 5) get all nodes of a pool
> For integration with SCM container allocation, all nodes can go into a 
> single default pool upon registration. We will provide a CLI to manage 
> multiple pools, and support for a pool definition file, later.
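For illustration, a hypothetical Java interface covering the five operations
listed in the description; the names are illustrative, not the actual patch
API:

{code}
import java.util.List;
import java.util.Set;

// Illustrative sketch of the five node pool operations; not the patch's API.
interface NodePoolManager {
  void addNode(String poolName, DatanodeID node);     // 1) add a node to a pool
  void removeNode(String poolName, DatanodeID node);  // 2) remove a node
  String getNodePool(DatanodeID node);                // 3) pool of a node
  Set<String> getNodePools();                         // 4) all pool names
  List<DatanodeID> getNodes(String poolName);         // 5) all nodes of a pool
}

// Minimal stand-in for the datanode identifier type.
class DatanodeID {
  final String uuid;
  DatanodeID(String uuid) { this.uuid = uuid; }
}
{code}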






[jira] [Updated] (HDFS-11567) Ozone: Support update container

2017-03-27 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-11567:
---
Status: Patch Available  (was: In Progress)

> Ozone: Support update container
> ---
>
> Key: HDFS-11567
> URL: https://issues.apache.org/jira/browse/HDFS-11567
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-11567-HDFS-7240.001.patch
>
>
> Add support to update a container.






[jira] [Comment Edited] (HDFS-11567) Ozone: Support update container

2017-03-27 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943577#comment-15943577
 ] 

Weiwei Yang edited comment on HDFS-11567 at 3/27/17 4:34 PM:
-

Hello [~anu], [~xyao]

I would appreciate it if you could help review this patch. I implemented the 
{{UpdateContainer}} call in the dispatcher; it behaves as follows:
# Throws CONTAINER_NOT_FOUND if the container doesn't exist
# Throws NO_SUCH_ALGORITHM if the digest algorithm is not found
# Throws UNSUPPORTED_REQUEST on an attempt to update a closed container
# Throws CONTAINER_INTERNAL_ERROR if some other error happens during the update
# Updates an existing container with the given container data; on success, both the container map and the data on disk are updated
# Backs up the container file first; if the update fails, restores the container file from the backup

For the tests:
# Added a unit test, {{TestContainerPersistence#testUpdateContainer}}
# Added code in {{TestOzoneContainer#testOzoneContainerViaDataNode}} to verify the client/server request and response.

Thanks a lot!


was (Author: cheersyang):
Hello [~anu], [~xyao]

I would appreciate it if you could help review this patch. I implemented the 
{{UpdateContainer}} call in the dispatcher; it behaves as follows:
# Throws CONTAINER_NOT_FOUND if the container doesn't exist
# Throws NO_SUCH_ALGORITHM if the digest algorithm is not found
# Throws UNSUPPORTED_REQUEST on an attempt to update a closed container
# Throws CONTAINER_INTERNAL_ERROR if some other error happens during the update
# Updates an existing container with the given container data; on success, both the container map and the data on disk are updated
# Backs up the container file first; if the update fails, restores the container file from the backup

For the tests:
# Added a unit test, {{TestContainerPersistence#testUpdateContainer}}
# Added code in {{TestOzoneContainer#testOzoneContainerViaDataNode}} to verify the client/server request and response.

> Ozone: Support update container
> ---
>
> Key: HDFS-11567
> URL: https://issues.apache.org/jira/browse/HDFS-11567
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-11567-HDFS-7240.001.patch
>
>
> Add support to update a container.






[jira] [Commented] (HDFS-11567) Ozone: Support update container

2017-03-27 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943577#comment-15943577
 ] 

Weiwei Yang commented on HDFS-11567:


Hello [~anu], [~xyao]

I would appreciate it if you could help review this patch. I implemented the 
{{UpdateContainer}} call in the dispatcher; it behaves as follows:
# Throws CONTAINER_NOT_FOUND if the container doesn't exist
# Throws NO_SUCH_ALGORITHM if the digest algorithm is not found
# Throws UNSUPPORTED_REQUEST on an attempt to update a closed container
# Throws CONTAINER_INTERNAL_ERROR if some other error happens during the update
# Updates an existing container with the given container data; on success, both the container map and the data on disk are updated
# Backs up the container file first; if the update fails, restores the container file from the backup

For the tests:
# Added a unit test, {{TestContainerPersistence#testUpdateContainer}}
# Added code in {{TestOzoneContainer#testOzoneContainerViaDataNode}} to verify the client/server request and response.
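For illustration, a self-contained sketch of the backup-then-update flow with
the error codes above; all names and helpers here are hypothetical, not the
patch's actual dispatcher code:

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the described semantics; not the actual patch code.
class ContainerStoreSketch {
  enum Result { CONTAINER_NOT_FOUND, UNSUPPORTED_REQUEST, CONTAINER_INTERNAL_ERROR }

  static class ContainerException extends Exception {
    final Result result;
    ContainerException(String msg, Result result) { super(msg); this.result = result; }
  }

  static class ContainerData {
    final boolean open;
    final byte[] bytes;
    ContainerData(boolean open, byte[] bytes) { this.open = open; this.bytes = bytes; }
  }

  private final Map<String, ContainerData> containerMap = new ConcurrentHashMap<>();
  private final Path dir;

  ContainerStoreSketch(Path dir) { this.dir = dir; }

  void updateContainer(String name, ContainerData newData) throws ContainerException {
    ContainerData existing = containerMap.get(name);
    if (existing == null) {
      throw new ContainerException(name, Result.CONTAINER_NOT_FOUND);
    }
    if (!existing.open) {   // a closed container cannot be updated
      throw new ContainerException(name, Result.UNSUPPORTED_REQUEST);
    }
    Path file = dir.resolve(name);
    Path backup = dir.resolve(name + ".backup");
    try {
      // Back up the container file first, then persist and update the map.
      Files.copy(file, backup, StandardCopyOption.REPLACE_EXISTING);
      Files.write(file, newData.bytes);
      containerMap.put(name, newData);
    } catch (IOException e) {
      try {   // restore the container file from the backup on failure
        Files.copy(backup, file, StandardCopyOption.REPLACE_EXISTING);
      } catch (IOException ignored) { }
      throw new ContainerException(name, Result.CONTAINER_INTERNAL_ERROR);
    }
  }
}
{code}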

> Ozone: Support update container
> ---
>
> Key: HDFS-11567
> URL: https://issues.apache.org/jira/browse/HDFS-11567
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-11567-HDFS-7240.001.patch
>
>
> Add support to update a container.






[jira] [Updated] (HDFS-11567) Ozone: Support update container

2017-03-27 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-11567:
---
Attachment: HDFS-11567-HDFS-7240.001.patch

> Ozone: Support update container
> ---
>
> Key: HDFS-11567
> URL: https://issues.apache.org/jira/browse/HDFS-11567
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-11567-HDFS-7240.001.patch
>
>
> Add support to update a container.






[jira] [Commented] (HDFS-11566) Ozone: Document missing metrics for container operations

2017-03-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943533#comment-15943533
 ] 

Anu Engineer commented on HDFS-11566:
-

Thanks for catching this. Just wondering, though: should we create an 
Ozonemetrics.md instead of putting this into the HDFS metrics file? At least 
until we merge the ozone branch, it would make us more resilient against 
trunk merges :)

> Ozone: Document missing metrics for container operations
> 
>
> Key: HDFS-11566
> URL: https://issues.apache.org/jira/browse/HDFS-11566
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation, ozone
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11566-HDFS-7240.001.patch
>
>
> HDFS-11463 added some metrics for container operations that can be exported 
> over JMX, but they haven't been documented in {{Metrics.md}}. Many metrics 
> were added for containers; documenting them will be helpful for users.






[jira] [Commented] (HDFS-11550) Ozone: Add a check to prevent removing a container that has keys in it

2017-03-27 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943524#comment-15943524
 ] 

Weiwei Yang commented on HDFS-11550:


Thanks [~anu] !

> Ozone: Add a check to prevent removing a container that has keys in it
> --
>
> Key: HDFS-11550
> URL: https://issues.apache.org/jira/browse/HDFS-11550
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: HDFS-7240
>
> Attachments: HDFS-11550-HDFS-7240.001.patch, 
> HDFS-11550-HDFS-7240.002.patch, HDFS-11550-HDFS-7240.003.patch, 
> HDFS-11550-HDFS-7240.004.patch, HDFS-11550-HDFS-7240.005.patch
>
>
> The Storage Container remove call must check whether there are keys in the 
> container before removing it. If the container is not empty, it should 
> return an error, ERROR_CONTAINER_NOT_EMPTY.
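For illustration, a minimal sketch of such a check; the types and the
key-count lookup are stand-ins, not the patch's actual code:

{code}
import java.util.Map;

// Illustrative sketch of the emptiness check; not the actual patch code.
class ContainerRemoverSketch {
  enum Error { ERROR_CONTAINER_NOT_EMPTY }

  // Stand-in for container metadata: container name -> number of keys.
  private final Map<String, Long> keyCounts;

  ContainerRemoverSketch(Map<String, Long> keyCounts) { this.keyCounts = keyCounts; }

  void removeContainer(String name) {
    long keys = keyCounts.getOrDefault(name, 0L);
    if (keys > 0) {
      // Refuse to remove a container that still holds keys.
      throw new IllegalStateException(Error.ERROR_CONTAINER_NOT_EMPTY + ": "
          + name + " still has " + keys + " keys");
    }
    keyCounts.remove(name);   // actual on-disk cleanup would happen here
  }
}
{code}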






[jira] [Created] (HDFS-11580) Ozone: Support asynchronous client API for SCM and containers

2017-03-27 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-11580:
---

 Summary: Ozone: Support asynchronous client API for SCM and containers
 Key: HDFS-11580
 URL: https://issues.apache.org/jira/browse/HDFS-11580
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer


This is an umbrella JIRA for supporting a set of APIs in asynchronous form.

For containers, the datanode API currently supports a {{sendCommand}} call; we
need to build a proper programming interface and support an async variant.

There is also a set of SCM APIs that clients can call; it would be nice to
support an async interface for those too.
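For illustration, one possible async shape using {{CompletableFuture}}; the
interface and message types below are hypothetical, not the actual SCM or
datanode client APIs:

{code}
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch of an async container client API.
interface AsyncContainerClient {
  // Today's synchronous style, shown for comparison.
  ContainerResponse sendCommand(ContainerRequest request);

  // Async variant: returns immediately and completes with the response later.
  CompletableFuture<ContainerResponse> sendCommandAsync(ContainerRequest request);
}

// Minimal stand-in message types for the sketch.
class ContainerRequest { }
class ContainerResponse { }
{code}

A caller could then chain continuations with {{thenApply}} or block with
{{get()}} only when necessary.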








[jira] [Commented] (HDFS-11006) Ozone: support setting chunk size in streaming API

2017-03-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943509#comment-15943509
 ] 

Anu Engineer commented on HDFS-11006:
-

[~linyiqun] For starters it looks perfect. However, in the long run we might 
have to throw an error if the chunk size is too big: there are both netty and 
protobuf encoding layers, which work well at smaller sizes. So we either have 
to enforce a max size or do the chunking inside this layer.
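For illustration, a sketch of a configurable chunk size with a hard cap; the
default, the cap, and all names are assumptions for the example, not real
Ozone settings:

{code}
// Hypothetical sketch; the constants and names are not real Ozone config keys.
class ChunkSizeValidator {
  static final int DEFAULT_CHUNK_SIZE = 1024 * 1024;    // assumed 1 MB default
  static final int MAX_CHUNK_SIZE = 16 * 1024 * 1024;   // cap for netty/protobuf

  static int validate(int requested) {
    if (requested <= 0) {
      return DEFAULT_CHUNK_SIZE;                        // fall back to default
    }
    if (requested > MAX_CHUNK_SIZE) {
      // The alternative discussed above: instead of throwing, split writes
      // into MAX_CHUNK_SIZE pieces inside this layer.
      throw new IllegalArgumentException("chunk size " + requested
          + " exceeds maximum " + MAX_CHUNK_SIZE);
    }
    return requested;
  }
}
{code}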


> Ozone: support setting chunk size in streaming API
> --
>
> Key: HDFS-11006
> URL: https://issues.apache.org/jira/browse/HDFS-11006
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Yiqun Lin
> Attachments: HDFS-11006-HDFS-7240.001.patch
>
>
> Right now we have a hard-coded chunk size; we should either have it read 
> from the config, or the user should be able to pass it to ChunkInputStream.





