[jira] [Commented] (HDFS-12021) Ozone: Documentation: Add Ozone-defaults documentation

2017-06-23 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061818#comment-16061818
 ] 

Elek, Marton commented on HDFS-12021:
-

For me (as a user) the Getting Started Guide -- which explains a selected set 
of the settings -- was very useful. So (for me) it's enough to read 
GettingStarted for the most important settings and use the 
ozone-default.xml as a reference.

Maybe another guide could be useful as well, one that explains the most 
important settings from the tuning/operations point of view.

> Ozone: Documentation: Add Ozone-defaults documentation
> ---
>
> Key: HDFS-12021
> URL: https://issues.apache.org/jira/browse/HDFS-12021
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
> Attachments: hadoop_doc_front.jpg
>
>
> We need to add documentation about the settings that are exposed via 
> ozone-defaults.xml.
> Since Ozone is new, we might have to put some extra effort into this to make 
> it easy to understand. In other words, we should write a proper doc 
> explaining what these settings mean and the rationale for the various values 
> we chose, instead of a table with lots of settings.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11993) Add log info when connect to datanode socket address failed

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061791#comment-16061791
 ] 

Hadoop QA commented on HDFS-11993:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
17s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11993 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12874350/HADOOP-11993.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1fdc4d406c18 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0111711 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20026/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20026/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add log info when connect to datanode socket address failed
> ---
>
> Key: HDFS-11993
> URL: https://issues.apache.org/jira/browse/HDFS-11993
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: chencan
>Assignee: chencan
> Attachments: HADOOP-11993.002.patch, HADOOP-11993.patch
>
>
> In the function BlockSeekTo, when connecting to the datanode socket address 
> fails, we log as follows:
> DFSClient.LOG.warn("Failed to connect to " + targetAddr + " for block"
>   + ", add to deadNodes and continue. " + ex, ex);
> Adding the block info may be more explicit.
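
For illustration, here is a minimal sketch of the kind of change being 
discussed (hypothetical, not the actual patch; targetBlock is assumed to be 
the LocatedBlock in scope in blockSeekTo):

{code}
// Hedged sketch: include the block in the warning so the failing replica can
// be identified directly from the log line.
DFSClient.LOG.warn("Failed to connect to " + targetAddr + " for block "
    + targetBlock.getBlock() + ", add to deadNodes and continue. " + ex, ex);
{code}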



[jira] [Commented] (HDFS-12028) Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.

2017-06-23 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061787#comment-16061787
 ] 

Xiaoyu Yao commented on HDFS-12028:
---

Having read through the discussion on Stack Overflow, I think [~vagarychen]'s 
patch takes the second approach. The other option is to ask the jscsi library 
to declare the logback slf4j binding as a runtime dependency.

> Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.
> ---
>
> Key: HDFS-12028
> URL: https://issues.apache.org/jira/browse/HDFS-12028
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
> Attachments: HDFS-12028-HDFS-7240.001.patch
>
>
> Currently, when you run the CLI "hdfs oz ...", there is always noisy slf4j 
> binding output. This ticket is opened to remove it.
> {code}
> xyao$ hdfs oz
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs/lib/logback-classic-1.0.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}






[jira] [Updated] (HDFS-12011) Add a new load balancing volume choosing policy

2017-06-23 Thread chencan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDFS-12011:
---
Attachment: (was: HADOOP-12011.002.pathch)

> Add a new load balancing volume choosing policy
> 
>
> Key: HDFS-12011
> URL: https://issues.apache.org/jira/browse/HDFS-12011
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: chencan
>Assignee: chencan
> Attachments: HADOOP-12011.002.path, HADOOP-12011.patch
>
>
> There are two volume choosing policies for choosing a volume within a 
> datanode to write a data block: RoundRobinVolumeChoosingPolicy and 
> AvailableSpaceVolumeChoosingPolicy. These two policies do not take the 
> fsvolume's load into account. We can add a new load-balancing volume 
> choosing policy using the existing reference count in FsVolumeImpl.
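
A minimal, self-contained sketch of the idea (not the actual patch): pick the 
least-loaded volume, by in-flight reference count, among those that can fit 
the block. The Volume interface and getReferenceCount() below are hypothetical 
stand-ins for FsVolumeImpl and its internal reference count.

{code}
import java.util.Comparator;
import java.util.List;

// Stand-in for FsVolumeImpl; getReferenceCount() is assumed, not the real API.
interface Volume {
  long getAvailable();      // free bytes on the volume
  int getReferenceCount();  // in-flight readers/writers, used as a load proxy
}

final class LoadBalancingVolumeChoosingPolicy {
  // Choose the least-loaded volume that still has room for the block.
  Volume chooseVolume(List<Volume> volumes, long blockSize) {
    return volumes.stream()
        .filter(v -> v.getAvailable() >= blockSize)
        .min(Comparator.comparingInt(Volume::getReferenceCount))
        .orElseThrow(() -> new IllegalStateException(
            "No volume has " + blockSize + " bytes available"));
  }
}
{code}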






[jira] [Commented] (HDFS-12011) Add a new load balancing volume choosing policy

2017-06-23 Thread chencan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061785#comment-16061785
 ] 

chencan commented on HDFS-12011:


Thank you for your reply. I found that the fsvolume's reference count may 
increase/decrease during every read and write, so I think it can represent the 
load on the disk. In hindsight it was not fully thought through, though: I 
tested it in my environment using HiBench dfsioe, and the performance of this 
policy is no better than the other ones. I think part of the reason is that 
dfsioe's test file sizes are all equal. This policy may still be a reasonable 
choice in a complex situation where the disk load is uneven.

> Add a new load balancing volume choosing policy
> 
>
> Key: HDFS-12011
> URL: https://issues.apache.org/jira/browse/HDFS-12011
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: chencan
>Assignee: chencan
> Attachments: HADOOP-12011.002.path, HADOOP-12011.patch
>
>
> There are two volume choosing policies for choosing a volume within a 
> datanode to write a data block: RoundRobinVolumeChoosingPolicy and 
> AvailableSpaceVolumeChoosingPolicy. These two policies do not take the 
> fsvolume's load into account. We can add a new load-balancing volume 
> choosing policy using the existing reference count in FsVolumeImpl.






[jira] [Updated] (HDFS-12011) Add a new load balancing volume choosing policy

2017-06-23 Thread chencan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDFS-12011:
---
Attachment: HADOOP-12011.002.path

> Add a new load balancing volume choosing policy
> 
>
> Key: HDFS-12011
> URL: https://issues.apache.org/jira/browse/HDFS-12011
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: chencan
>Assignee: chencan
> Attachments: HADOOP-12011.002.path, HADOOP-12011.002.pathch, 
> HADOOP-12011.patch
>
>
> There are two volume choosing policies for choosing a volume within a 
> datanode to write a data block: RoundRobinVolumeChoosingPolicy and 
> AvailableSpaceVolumeChoosingPolicy. These two policies do not take the 
> fsvolume's load into account. We can add a new load-balancing volume 
> choosing policy using the existing reference count in FsVolumeImpl.






[jira] [Updated] (HDFS-12011) Add a new load balancing volume choosing policy

2017-06-23 Thread chencan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDFS-12011:
---
Attachment: HADOOP-12011.002.pathch

> Add a new load balancing volume choosing policy
> 
>
> Key: HDFS-12011
> URL: https://issues.apache.org/jira/browse/HDFS-12011
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: chencan
>Assignee: chencan
> Attachments: HADOOP-12011.002.pathch, HADOOP-12011.patch
>
>
> There are two volume choosing policies for choosing a volume within a 
> datanode to write a data block: RoundRobinVolumeChoosingPolicy and 
> AvailableSpaceVolumeChoosingPolicy. These two policies do not take the 
> fsvolume's load into account. We can add a new load-balancing volume 
> choosing policy using the existing reference count in FsVolumeImpl.






[jira] [Updated] (HDFS-11993) Add log info when connect to datanode socket address failed

2017-06-23 Thread chencan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDFS-11993:
---
Attachment: HADOOP-11993.002.patch

> Add log info when connect to datanode socket address failed
> ---
>
> Key: HDFS-11993
> URL: https://issues.apache.org/jira/browse/HDFS-11993
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: chencan
>Assignee: chencan
> Attachments: HADOOP-11993.002.patch, HADOOP-11993.patch
>
>
> In the function BlockSeekTo, when connecting to the datanode socket address 
> fails, we log as follows:
> DFSClient.LOG.warn("Failed to connect to " + targetAddr + " for block"
>   + ", add to deadNodes and continue. " + ex, ex);
> Adding the block info may be more explicit.






[jira] [Updated] (HDFS-11993) Add log info when connect to datanode socket address failed

2017-06-23 Thread chencan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDFS-11993:
---
Attachment: (was: HADOOP-11993.002.patch)

> Add log info when connect to datanode socket address failed
> ---
>
> Key: HDFS-11993
> URL: https://issues.apache.org/jira/browse/HDFS-11993
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: chencan
>Assignee: chencan
> Attachments: HADOOP-11993.patch
>
>
> In the function BlockSeekTo, when connecting to the datanode socket address 
> fails, we log as follows:
> DFSClient.LOG.warn("Failed to connect to " + targetAddr + " for block"
>   + ", add to deadNodes and continue. " + ex, ex);
> Adding the block info may be more explicit.






[jira] [Updated] (HDFS-11993) Add log info when connect to datanode socket address failed

2017-06-23 Thread chencan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDFS-11993:
---
Attachment: (was: HADOOP-11993.003.patch)

> Add log info when connect to datanode socket address failed
> ---
>
> Key: HDFS-11993
> URL: https://issues.apache.org/jira/browse/HDFS-11993
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: chencan
>Assignee: chencan
> Attachments: HADOOP-11993.002.patch, HADOOP-11993.patch
>
>
> In the function BlockSeekTo, when connecting to the datanode socket address 
> fails, we log as follows:
> DFSClient.LOG.warn("Failed to connect to " + targetAddr + " for block"
>   + ", add to deadNodes and continue. " + ex, ex);
> Adding the block info may be more explicit.






[jira] [Commented] (HDFS-11993) Add log info when connect to datanode socket address failed

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061778#comment-16061778
 ] 

Hadoop QA commented on HDFS-11993:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
14s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11993 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12874349/HADOOP-11993.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1caf8e66fc82 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0111711 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20025/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20025/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20025/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add log info when connect to datanode socket address failed
> ---
>
> Key: HDFS-11993
> URL: https://issues.apache.org/jira/browse/HDFS-11993
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: chencan
>Assignee: chencan
> Attachments: HADOOP-11993.002.patch, HADOOP-11993.003.patch, 
> HADOOP-11993.patch
>
>
> In the function BlockSeekTo, when connecting to the datanode socket address 
> fails, we log as follows:
> DFSClient.LOG.warn("Failed to connect to " + targetAddr + " for block"
>   + ", add to deadNodes and continue. " + ex, ex);
> Adding the block info may be more explicit.

[jira] [Commented] (HDFS-12028) Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.

2017-06-23 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061774#comment-16061774
 ] 

Weiwei Yang commented on HDFS-12028:


Hi [~vagarychen]

I tried your patch but I still see binding warnings. Can you double-check 
whether this patch resolves the issue?

{code}
SLF4J: Found binding in 
[jar:file:/home/wwei/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs/lib/logback-classic-1.0.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
{code}

Thank you.

> Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.
> ---
>
> Key: HDFS-12028
> URL: https://issues.apache.org/jira/browse/HDFS-12028
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
> Attachments: HDFS-12028-HDFS-7240.001.patch
>
>
> Currently, when you run the CLI "hdfs oz ...", there is always noisy slf4j 
> binding output. This ticket is opened to remove it.
> {code}
> xyao$ hdfs oz
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs/lib/logback-classic-1.0.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}






[jira] [Commented] (HDFS-12028) Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061772#comment-16061772
 ] 

Hadoop QA commented on HDFS-12028:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
10s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}140m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.ozone.scm.node.TestNodeManager |
|   | hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
|   | hadoop.ozone.container.ozoneimpl.TestRatisManager |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
| Timed out junit tests | org.apache.hadoop.cblock.TestLocalBlockCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12028 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12874337/HDFS-12028-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 53aee214831c 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 29c942b |
| Default Java | 1.8.0_131 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20022/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20022/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20022/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.
> ---
>
> Key: HDFS-12028
> URL: https://issues.apache.org/jira/browse/HDFS-12028
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
> Attachments: HDFS-12028-HDFS-7240.001.patch
>
>
> Currently, when you run the CLI "hdfs oz ...", there is always noisy slf4j 
> binding output. This ticket is opened to remove it.

[jira] [Updated] (HDFS-11993) Add log info when connect to datanode socket address failed

2017-06-23 Thread chencan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDFS-11993:
---
Attachment: HADOOP-11993.003.patch

> Add log info when connect to datanode socket address failed
> ---
>
> Key: HDFS-11993
> URL: https://issues.apache.org/jira/browse/HDFS-11993
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: chencan
>Assignee: chencan
> Attachments: HADOOP-11993.002.patch, HADOOP-11993.003.patch, 
> HADOOP-11993.patch
>
>
> In the function BlockSeekTo, when connecting to the datanode socket address 
> fails, we log as follows:
> DFSClient.LOG.warn("Failed to connect to " + targetAddr + " for block"
>   + ", add to deadNodes and continue. " + ex, ex);
> Adding the block info may be more explicit.






[jira] [Updated] (HDFS-11881) NameNode consumes a lot of memory for snapshot diff report generation

2017-06-23 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11881:
--
Attachment: 2_ArrayList_SnapshotDiffReport.png
1_ChunkedArrayList_SnapshotDiffReport.png

[~jojochuang] / [~yzhangal],
  Wrote a test that puts 500K files in the snapshot diff report and ran the 
snapshot diff shell command 100+ times to see how the heap gets fragmented and 
how frequent the FullGCs are. Attached heap graphs for both the ArrayList and 
ChunkedArrayList based implementations of SnapshotDiffReport. The ArrayList 
version needs quite frequent long GCs to clear up the heap and make room for 
the new report, whereas the ChunkedArrayList based SnapshotDiffReport needed 
far fewer FullGCs for the same test. If we scale this test up to a 10G+ 
SnapshotDiffReport, the heap usage and FullGC requirements of the ArrayList 
based approach should be an order of magnitude higher compared to 
ChunkedArrayList.

  Tried a similar ChunkedArrayList approach for DirDiff, but soon realized 
that DirDiff uses far more of the diff list's functionality, such as add by 
index, remove by index, and set by index. These index-based operations are 
currently not supported in ChunkedArrayList, so I will take up that bigger 
task in a separate jira.

  Can you please review the patch v01 in the context of the FileDiff 
improvements alone, for the SnapshotDiffReport use case?



> NameNode consumes a lot of memory for snapshot diff report generation
> -
>
> Key: HDFS-11881
> URL: https://issues.apache.org/jira/browse/HDFS-11881
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, snapshots
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: 1_ChunkedArrayList_SnapshotDiffReport.png, 
> 2_ArrayList_SnapshotDiffReport.png, HDFS-11881.01.patch
>
>
> *Problem:*
> HDFS supports a snapshot diff tool which can generate a [detailed report | 
> https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html#Get_Snapshots_Difference_Report]
>  of modified, created, deleted and renamed files between any 2 snapshots.
> {noformat}
> hdfs snapshotDiff <path> <fromSnapshot> <toSnapshot>
> {noformat}
> However, if the diff list between 2 snapshots happens to be huge, in the 
> order of millions of entries, then the NameNode can consume a lot of memory 
> while generating the huge diff report. In a few cases, we are seeing the 
> NameNode getting into a long GC, lasting a few minutes, to make room for 
> this burst in memory requirement during snapshot diff report generation.
> *RootCause:*
> * NameNode tries to generate the diff report with all diff entries at once, 
> which puts undue pressure on memory
> * Each diff report entry has, at a minimum, the diff type (enum), a source 
> path byte array, and a destination path byte array. Let's take the file 
> deletion use case. For file deletions, there would be only source or 
> destination paths in the diff report entry. Let's assume these deleted files 
> on average take 128 bytes for the path. 4 million file deletions captured in 
> the diff report will thus need 512MB of memory
> * The snapshot diff report uses a simple Java ArrayList, which doubles its 
> backing contiguous memory chunk every time the usage factor crosses the 
> capacity threshold. So, a 512MB memory requirement might internally ask for 
> a much larger contiguous memory chunk
> *Proposal:*
> * Make the NameNode snapshot diff report service follow the batch model 
> (like the directory listing service). Clients (the hdfs snapshotDiff 
> command) will then receive the diff report in small batches and iterate 
> several times to get the full list.
> * Additionally, the snapshot diff report service in the NameNode can use the 
> ChunkedArrayList data structure instead of the current ArrayList, so as to 
> avoid the curse of fragmentation and the large contiguous memory requirement 
> (see the sketch below).
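
To make the contiguous-allocation point concrete, here is a toy sketch of the 
chunked idea (my own illustration with an assumed 8K chunk size; Hadoop's 
actual ChunkedArrayList is more elaborate): appends land in small fixed-size 
chunks, so growth never copies the whole backing array or requests one huge 
contiguous region the way ArrayList doubling does.

{code}
import java.util.ArrayList;
import java.util.List;

// Toy chunked list: the largest single allocation is one chunk, never a
// doubled copy of the entire backing array.
final class ChunkedListSketch<T> {
  private static final int CHUNK_SIZE = 8 * 1024; // entries per chunk (assumed)
  private final List<List<T>> chunks = new ArrayList<>();

  void add(T item) {
    if (chunks.isEmpty()
        || chunks.get(chunks.size() - 1).size() == CHUNK_SIZE) {
      chunks.add(new ArrayList<>(CHUNK_SIZE)); // allocate one small chunk
    }
    chunks.get(chunks.size() - 1).add(item);
  }

  long size() {
    if (chunks.isEmpty()) {
      return 0;
    }
    return (long) (chunks.size() - 1) * CHUNK_SIZE
        + chunks.get(chunks.size() - 1).size();
  }
}
{code}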






[jira] [Commented] (HDFS-11993) Add log info when connect to datanode socket address failed

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061748#comment-16061748
 ] 

Hadoop QA commented on HDFS-11993:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 1 new + 41 unchanged - 0 fixed = 42 total (was 41) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
9s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11993 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12874343/HADOOP-11993.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b82f53de27da 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0111711 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20023/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20023/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20023/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add log info when connect to datanode socket address failed
> ---
>
> Key: HDFS-11993
> URL: https://issues.apache.org/jira/browse/HDFS-11993
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: chencan
>Assignee: chencan
> Attachments: HADOOP-11993.002.patch, HADOOP-11993.patch
>
>
> In the function BlockSeekTo, when connecting to the datanode socket address 
> fails, we log as follows:
> DFSClient.LOG.warn("Failed to connect to " + targetAddr + " for block"
>   + ", add to deadNodes and continue. " + ex, ex);
> Adding the block info may be more explicit.

[jira] [Commented] (HDFS-12033) DatanodeManager picking EC recovery tasks should also consider the number of regular replication tasks.

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061747#comment-16061747
 ] 

Hadoop QA commented on HDFS-12033:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | hadoop.fs.viewfs.TestViewFsAtHdfsRoot |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12033 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12874331/HDFS-12033.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4e02e2a2dc55 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0111711 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Commented] (HDFS-11773) Ozone: KSM : add listVolumes

2017-06-23 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061735#comment-16061735
 ] 

Weiwei Yang commented on HDFS-11773:


Hi [~xyao] sure I am going to work on this shortly. Thanks for the reminder.

> Ozone: KSM : add listVolumes
> 
>
> Key: HDFS-11773
> URL: https://issues.apache.org/jira/browse/HDFS-11773
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Weiwei Yang
>
> The list volume call can be used in three different contexts. One is for 
> administrators to list all volumes in a cluster. The second is for an 
> administrator to list the volumes owned by a specific user. The third is a 
> user listing the volumes owned by himself/herself.
> Since these calls can return a large number of entries, the REST protocol 
> supports paging. Paging is supported by the use of prevKey, prefix and 
> maxKeys. The caller is aware that this call is neither atomic nor 
> consistent, so we can iterate over the list even while changes are happening 
> to it.
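
As a sketch of how a caller would consume the paging (hypothetical listPage 
signature, with the prefix parameter omitted for brevity; the real KSM/REST 
interface may differ): pass the last key returned as prevKey and stop when a 
short page comes back.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

final class ListVolumesPager {
  // listPage.apply(prevKey, maxKeys) stands in for one listVolumes REST call;
  // prevKey == null means "start from the beginning".
  static List<String> listAll(
      BiFunction<String, Integer, List<String>> listPage, int maxKeys) {
    List<String> all = new ArrayList<>();
    String prevKey = null;
    List<String> page;
    do {
      page = listPage.apply(prevKey, maxKeys);
      all.addAll(page);
      if (!page.isEmpty()) {
        prevKey = page.get(page.size() - 1); // resume after the last key seen
      }
    } while (page.size() == maxKeys);
    return all;
  }
}
{code}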






[jira] [Commented] (HDFS-11993) Add log info when connect to datanode socket address failed

2017-06-23 Thread chencan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061734#comment-16061734
 ] 

chencan commented on HDFS-11993:


Thanks for your suggestion; I have made the change and added a new patch.

> Add log info when connect to datanode socket address failed
> ---
>
> Key: HDFS-11993
> URL: https://issues.apache.org/jira/browse/HDFS-11993
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: chencan
>Assignee: chencan
> Attachments: HADOOP-11993.002.patch, HADOOP-11993.patch
>
>
> In the function BlockSeekTo, when connecting to the datanode socket address 
> fails, we log as follows:
> DFSClient.LOG.warn("Failed to connect to " + targetAddr + " for block"
>   + ", add to deadNodes and continue. " + ex, ex);
> Adding the block info may be more explicit.






[jira] [Updated] (HDFS-11993) Add log info when connect to datanode socket address failed

2017-06-23 Thread chencan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDFS-11993:
---
Attachment: HADOOP-11993.002.patch

> Add log info when connect to datanode socket address failed
> ---
>
> Key: HDFS-11993
> URL: https://issues.apache.org/jira/browse/HDFS-11993
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: chencan
>Assignee: chencan
> Attachments: HADOOP-11993.002.patch, HADOOP-11993.patch
>
>
> In the function BlockSeekTo, when connecting to the datanode socket address 
> fails, we log as follows:
> DFSClient.LOG.warn("Failed to connect to " + targetAddr + " for block"
>   + ", add to deadNodes and continue. " + ex, ex);
> Adding the block info may be more explicit.






[jira] [Commented] (HDFS-12032) Inaccurate comment on DatanodeDescriptor#getNumberOfBlocksToBeErasureCoded

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061721#comment-16061721
 ] 

Hadoop QA commented on HDFS-12032:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
43s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 29 unchanged - 2 fixed = 29 total (was 31) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 32s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12032 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12874328/HDFS-12032.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c9d8af2b1d19 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0111711 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20020/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20020/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20020/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20020/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Inaccurate comment on DatanodeDescriptor#getNumberOfBlocksToBeErasureCoded

[jira] [Updated] (HDFS-12028) Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.

2017-06-23 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12028:
--
Status: Patch Available  (was: Open)

> Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.
> ---
>
> Key: HDFS-12028
> URL: https://issues.apache.org/jira/browse/HDFS-12028
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
> Attachments: HDFS-12028-HDFS-7240.001.patch
>
>
> Currently, when you run the CLI "hdfs oz ...", there is always noisy slf4j 
> binding output. This ticket is opened to remove it.
> {code}
> xyao$ hdfs oz
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs/lib/logback-classic-1.0.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}






[jira] [Updated] (HDFS-12028) Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.

2017-06-23 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12028:
--
Attachment: HDFS-12028-HDFS-7240.001.patch

Posted the v001 patch.

> Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.
> ---
>
> Key: HDFS-12028
> URL: https://issues.apache.org/jira/browse/HDFS-12028
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
> Attachments: HDFS-12028-HDFS-7240.001.patch
>
>
> Currently, when you run the CLI "hdfs oz ...", there is always noisy slf4j 
> binding output. This ticket is opened to remove it.
> {code}
> xyao$ hdfs oz
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs/lib/logback-classic-1.0.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}






[jira] [Assigned] (HDFS-12028) Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.

2017-06-23 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang reassigned HDFS-12028:
-

Assignee: Chen Liang

> Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.
> ---
>
> Key: HDFS-12028
> URL: https://issues.apache.org/jira/browse/HDFS-12028
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
>
> Currently, when you run the CLI "hdfs oz ...", there is always noisy slf4j 
> binding output. This ticket is opened to remove it.
> {code}
> xyao$ hdfs oz
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs/lib/logback-classic-1.0.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12031) Ozone: Rename OzoneClient to OzoneRestClient

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061693#comment-16061693
 ] 

Hadoop QA commented on HDFS-12031:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
46s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 6 new + 16 unchanged - 1 fixed = 22 total (was 17) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 39s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.ozoneimpl.TestRatisManager |
|   | hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12031 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12874325/HDFS-12031-HDFS-7240.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 91127dcb07bf 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 29c942b |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20019/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20019/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20019/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20019/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Rename OzoneClient to OzoneRestClient
> 
>
> Key: HDFS-12031
> URL: https://issues.apache.org/jira/browse/HDFS-12031

[jira] [Commented] (HDFS-11896) Non-dfsUsed will be doubled on dead node re-registration in branch-2.7.

2017-06-23 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061676#comment-16061676
 ] 

Brahma Reddy Battula commented on HDFS-11896:
-

[~arpitagarwal] if you get a chance, can you take a look at this issue? I will 
dig into the test failure and fix it in the next patch; it passes on my local 
Linux machine as well.

> Non-dfsUsed will be doubled on dead node re-registration in branch-2.7.
> ---
>
> Key: HDFS-11896
> URL: https://issues.apache.org/jira/browse/HDFS-11896
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>  Labels: release-blocker
> Attachments: HDFS-11896-002.patch, HDFS-11896-branch-2.7-001.patch, 
> HDFS-11896-branch-2.7-002.patch, HDFS-11896.patch
>
>
>  *Scenario:* 
> i) Make sure you have non-DFS data.
> ii) Stop a datanode.
> iii) Wait until it becomes dead.
> iv) Now restart it and check the non-DFS usage; it is doubled.
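> A hedged repro sketch following these steps (commands assume a standard 
> branch-2.7 deployment; the grep pattern is an assumption about the dfsadmin 
> report format):
> {code}
> hdfs dfsadmin -report | grep "Non DFS Used"   # note the starting value
> hadoop-daemon.sh stop datanode
> # wait until the NameNode marks the node dead (heartbeat recheck interval)
> hadoop-daemon.sh start datanode
> hdfs dfsadmin -report | grep "Non DFS Used"   # value is doubled on branch-2.7
> {code}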



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12030) Ozone: CLI: support infoKey command

2017-06-23 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin reassigned HDFS-12030:


Assignee: Yiqun Lin

> Ozone: CLI: support infoKey command
> ---
>
> Key: HDFS-12030
> URL: https://issues.apache.org/jira/browse/HDFS-12030
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>Assignee: Yiqun Lin
>
> {code}
> HW11717:ozone xyao$ hdfs oz -infoKey 
> http://localhost:9864/vol-2/bucket-1/key-1 -user xyao 
> Command Failed : {"httpCode":0,"shortMessage":"Not supported 
> yet","resource":null,"message":"Not supported 
> yet","requestID":null,"hostName":null}
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12033) DatanodeManager picking EC recovery tasks should also consider the number of regular replication tasks.

2017-06-23 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061665#comment-16061665
 ] 

Andrew Wang commented on HDFS-12033:


LGTM, possible to add a unit test too?

> DatanodeManager picking EC recovery tasks should also consider the number of 
> regular replication tasks.
> ---
>
> Key: HDFS-12033
> URL: https://issues.apache.org/jira/browse/HDFS-12033
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-12033.00.patch
>
>
> In {{DatanodeManager#handleHeartbeat}}, it fills both the pending replication 
> list and the pending EC list with up to {{maxTransfers}} items each.
> It should only send {{maxTransfers}} tasks combined to the DN.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-12033) DatanodeManager picking EC recovery tasks should also consider the number of regular replication tasks.

2017-06-23 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-12033 started by Lei (Eddy) Xu.

> DatanodeManager picking EC recovery tasks should also consider the number of 
> regular replication tasks.
> ---
>
> Key: HDFS-12033
> URL: https://issues.apache.org/jira/browse/HDFS-12033
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-12033.00.patch
>
>
> In {{DatanodeManager#handleHeartbeat}}, it fills both the pending replication 
> list and the pending EC list with up to {{maxTransfers}} items each.
> It should only send {{maxTransfers}} tasks combined to the DN.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12033) DatanodeManager picking EC recovery tasks should also consider the number of regular replication tasks.

2017-06-23 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12033:
-
Status: Patch Available  (was: In Progress)

> DatanodeManager picking EC recovery tasks should also consider the number of 
> regular replication tasks.
> ---
>
> Key: HDFS-12033
> URL: https://issues.apache.org/jira/browse/HDFS-12033
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-12033.00.patch
>
>
> In {{DatanodeManager#handleHeartbeat}}, it fills both the pending replication 
> list and the pending EC list with up to {{maxTransfers}} items each.
> It should only send {{maxTransfers}} tasks combined to the DN.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12033) DatanodeManager picking EC recovery tasks should also consider the number of regular replication tasks.

2017-06-23 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12033:
-
Attachment: HDFS-12033.00.patch

Adjust {{maxTransfer}} for {{pending replication tasks}}

> DatanodeManager picking EC recovery tasks should also consider the number of 
> regular replication tasks.
> ---
>
> Key: HDFS-12033
> URL: https://issues.apache.org/jira/browse/HDFS-12033
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-12033.00.patch
>
>
> In {{DatanodeManager#handleHeartbeat}}, it fills both the pending replication 
> list and the pending EC list with up to {{maxTransfers}} items each.
> It should only send {{maxTransfers}} tasks combined to the DN.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12033) DatanodeManager picking EC recovery tasks should also consider the number of regular replication tasks.

2017-06-23 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-12033:


 Summary: DatanodeManager picking EC recovery tasks should also 
consider the number of regular replication tasks.
 Key: HDFS-12033
 URL: https://issues.apache.org/jira/browse/HDFS-12033
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Affects Versions: 3.0.0-alpha3
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu


In {{DatanodeManager#handleHeartbeat}}, it fills both the pending replication 
list and the pending EC list with up to {{maxTransfers}} items each.

It should only send {{maxTransfers}} tasks combined to the DN.
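
A minimal sketch of the combined-budget logic this calls for (the field and 
method names below are assumptions for illustration, not the actual patch):

{code}
// One per-heartbeat budget shared by both task types.
int maxTransfers = maxReplicationStreams - xmitsInProgress;
List<BlockTargetPair> pendingList = nodeinfo.getReplicationCommand(maxTransfers);
if (pendingList != null) {
  // Replication tasks consume part of the budget ...
  maxTransfers -= pendingList.size();
}
// ... and EC reconstruction may only use what remains, so the DN never
// receives more than maxTransfers tasks in a single heartbeat.
List<BlockECReconstructionInfo> pendingECList =
    nodeinfo.getErasureCodeCommand(maxTransfers);
{code}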



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11403) ZooKeeper ACLs on NN HA enabled clusters to be handled consistently

2017-06-23 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061654#comment-16061654
 ] 

Arpit Agarwal commented on HDFS-11403:
--

Thanks for the catch [~brahmareddy].

> ZooKeeper ACLs on NN HA enabled clusters to be handled consistently
> --
>
> Key: HDFS-11403
> URL: https://issues.apache.org/jira/browse/HDFS-11403
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Laszlo Puskas
>Assignee: Hanisha Koneru
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HDFS-11403.000.patch, HDFS-11403.001.patch
>
>
> On clusters where NN HA is enabled, ZooKeeper ACLs need to be handled 
> consistently when enabling security.
> The current behavior is as follows:
> * if HA is enabled before the cluster is made secure, proper ACLs are only 
> set on the leaf znodes, while there are no ACLs set on the path 
> (e.g. /hadoop-ha/mycluster/ActiveStandbyElectorLock)
> * if HA is enabled after the cluster is made secure, ACLs are set on the root 
> znode as well 
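> One way to observe the inconsistency described above from zkCli.sh (getAcl is 
> a standard ZooKeeper CLI command; the znode paths follow the example above):
> {code}
> getAcl /hadoop-ha/mycluster
> getAcl /hadoop-ha/mycluster/ActiveStandbyElectorLock
> {code}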



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12032) Inaccurate comment on DatanodeDescriptor#getNumberOfBlocksToBeErasureCoded

2017-06-23 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061651#comment-16061651
 ] 

Lei (Eddy) Xu commented on HDFS-12032:
--

+1.  Thanks [~andrew.wang]!

> Inaccurate comment on DatanodeDescriptor#getNumberOfBlocksToBeErasureCoded
> --
>
> Key: HDFS-12032
> URL: https://issues.apache.org/jira/browse/HDFS-12032
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Trivial
> Attachments: HDFS-12032.001.patch
>
>
I noticed that this comment is an inaccurate copy-paste:
> {noformat}
>   /**
>* The number of work items that are pending to be replicated
>*/
>   @VisibleForTesting
>   public int getNumberOfBlocksToBeErasureCoded() {
> return erasurecodeBlocks.size();
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12032) Inaccurate comment on DatanodeDescriptor#getNumberOfBlocksToBeErasureCoded

2017-06-23 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12032:
---
Status: Patch Available  (was: Open)

> Inaccurate comment on DatanodeDescriptor#getNumberOfBlocksToBeErasureCoded
> --
>
> Key: HDFS-12032
> URL: https://issues.apache.org/jira/browse/HDFS-12032
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Trivial
> Attachments: HDFS-12032.001.patch
>
>
> I noticed that this comment is an inaccurate copy-paste:
> {noformat}
>   /**
>* The number of work items that are pending to be replicated
>*/
>   @VisibleForTesting
>   public int getNumberOfBlocksToBeErasureCoded() {
> return erasurecodeBlocks.size();
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12032) Inaccurate comment on DatanodeDescriptor#getNumberOfBlocksToBeErasureCoded

2017-06-23 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12032:
---
Attachment: HDFS-12032.001.patch

Trivial one attached.

> Inaccurate comment on DatanodeDescriptor#getNumberOfBlocksToBeErasureCoded
> --
>
> Key: HDFS-12032
> URL: https://issues.apache.org/jira/browse/HDFS-12032
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Trivial
> Attachments: HDFS-12032.001.patch
>
>
> I noticed that this comment is an inaccurate copy-paste:
> {noformat}
>   /**
>* The number of work items that are pending to be replicated
>*/
>   @VisibleForTesting
>   public int getNumberOfBlocksToBeErasureCoded() {
> return erasurecodeBlocks.size();
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12032) Inaccurate comment on DatanodeDescriptor#getNumberOfBlocksToBeErasureCoded

2017-06-23 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-12032:
--

 Summary: Inaccurate comment on 
DatanodeDescriptor#getNumberOfBlocksToBeErasureCoded
 Key: HDFS-12032
 URL: https://issues.apache.org/jira/browse/HDFS-12032
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0-alpha3
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Trivial


I noticed that this comment is an inaccurate copy-paste:

{noformat}
  /**
   * The number of work items that are pending to be replicated
   */
  @VisibleForTesting
  public int getNumberOfBlocksToBeErasureCoded() {
return erasurecodeBlocks.size();
  }
{noformat}
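
For reference, a sketch of the obvious correction (the exact wording is an 
assumption, not taken from the attached patch):

{noformat}
  /**
   * The number of work items that are pending to be erasure coded.
   */
  @VisibleForTesting
  public int getNumberOfBlocksToBeErasureCoded() {
    return erasurecodeBlocks.size();
  }
{noformat}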



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12018) Ozone: Misc: Add CBlocks defaults to ozone-defaults.xml

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061621#comment-16061621
 ] 

Hadoop QA commented on HDFS-12018:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}117m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.server.namenode.TestFileContextXAttr |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.ozone.scm.node.TestNodeManager |
|   | hadoop.ozone.container.ozoneimpl.TestRatisManager |
| Timed out junit tests | org.apache.hadoop.cblock.TestLocalBlockCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12018 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12874311/HDFS-12018-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 2c07089335da 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / c395bc8 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20018/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20018/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20018/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |



[jira] [Commented] (HDFS-12031) Ozone: Rename OzoneClient to OzoneRestClient

2017-06-23 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061607#comment-16061607
 ] 

Anu Engineer commented on HDFS-12031:
-

+1, pending jenkins.

> Ozone: Rename OzoneClient to OzoneRestClient
> 
>
> Key: HDFS-12031
> URL: https://issues.apache.org/jira/browse/HDFS-12031
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12031-HDFS-7240.000.patch
>
>
> This JIRA is to rename the existing 
> {{org.apache.hadoop.ozone.web.client.OzoneClient}} to 
> {{org.apache.hadoop.ozone.web.client.OzoneRestClient}} so that we can build 
> an OzoneClient java API.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12031) Ozone: Rename OzoneClient to OzoneRestClient

2017-06-23 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12031:

Status: Patch Available  (was: Open)

> Ozone: Rename OzoneClient to OzoneRestClient
> 
>
> Key: HDFS-12031
> URL: https://issues.apache.org/jira/browse/HDFS-12031
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12031-HDFS-7240.000.patch
>
>
> This JIRA is to rename the existing 
> {{org.apache.hadoop.ozone.web.client.OzoneClient}} to 
> {{org.apache.hadoop.ozone.web.client.OzoneRestClient}} so that we can build 
> an OzoneClient java API.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12008) Improve the available-space block placement policy

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061606#comment-16061606
 ] 

Hadoop QA commented on HDFS-12008:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
51s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 28 unchanged - 2 fixed = 30 total (was 30) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}184m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
| JDK v1.8.0_131 Timed out junit tests | 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
| JDK v1.7.0_131 Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
|
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestBlockStoragePolicy |
| JDK v1.7.0_131 Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5e40efe |

[jira] [Updated] (HDFS-12031) Ozone: Rename OzoneClient to OzoneRestClient

2017-06-23 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-12031:
--
Attachment: HDFS-12031-HDFS-7240.000.patch

> Ozone: Rename OzoneClient to OzoneRestClient
> 
>
> Key: HDFS-12031
> URL: https://issues.apache.org/jira/browse/HDFS-12031
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12031-HDFS-7240.000.patch
>
>
> This JIRA is to rename the existing 
> {{org.apache.hadoop.ozone.web.client.OzoneClient}} to 
> {{org.apache.hadoop.ozone.web.client.OzoneRestClient}} so that we can build 
> an OzoneClient java API.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12031) Ozone: Rename OzoneClient to OzoneRestClient

2017-06-23 Thread Nandakumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061598#comment-16061598
 ] 

Nandakumar commented on HDFS-12031:
---

Patch uploaded, please review.
Thanks.

> Ozone: Rename OzoneClient to OzoneRestClient
> 
>
> Key: HDFS-12031
> URL: https://issues.apache.org/jira/browse/HDFS-12031
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
> Attachments: HDFS-12031-HDFS-7240.000.patch
>
>
> This JIRA is to rename the existing 
> {{org.apache.hadoop.ozone.web.client.OzoneClient}} to 
> {{org.apache.hadoop.ozone.web.client.OzoneRestClient}} so that we can build 
> an OzoneClient java API.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12028) Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.

2017-06-23 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061597#comment-16061597
 ] 

Chen Liang commented on HDFS-12028:
---

Thanks [~xyao] for filing this! I did some initial investigation by looking 
at the output of "mvn dependency:tree". I think this is caused by jscsi adding 
{{logback-classic-1.0.10.jar}} as its dependency. To resolve this I think 
we need to add {{...}} to certain pom files; I haven't 
looked into exactly which files and where to add it, though.
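
A sketch of the kind of exclusion this would mean (the jscsi coordinates below 
are assumptions for illustration, not verified against the actual pom):

{code}
<dependency>
  <groupId>org.jscsi</groupId>
  <artifactId>target</artifactId>
  <exclusions>
    <!-- Keep the jscsi code but drop its logback SLF4J binding, so only
         slf4j-log4j12 remains on the classpath. -->
    <exclusion>
      <groupId>ch.qos.logback</groupId>
      <artifactId>logback-classic</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}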

> Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.
> ---
>
> Key: HDFS-12028
> URL: https://issues.apache.org/jira/browse/HDFS-12028
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Xiaoyu Yao
>
> Currently when you run the CLI "hdfs oz ...", there is always noisy SLF4J 
> binding output in the log. This ticket is opened to remove it. 
> {code}
> xyao$ hdfs oz
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs/lib/logback-classic-1.0.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12031) Ozone: Rename OzoneClient to OzoneRestClient

2017-06-23 Thread Nandakumar (JIRA)
Nandakumar created HDFS-12031:
-

 Summary: Ozone: Rename OzoneClient to OzoneRestClient
 Key: HDFS-12031
 URL: https://issues.apache.org/jira/browse/HDFS-12031
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Nandakumar
Assignee: Nandakumar


This JIRA is to rename the existing 
{{org.apache.hadoop.ozone.web.client.OzoneClient}} to 
{{org.apache.hadoop.ozone.web.client.OzoneRestClient}} so that we can build 
an OzoneClient java API.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12007) Ozone: Enable HttpServer2 for SCM and KSM

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061575#comment-16061575
 ] 

Hadoop QA commented on HDFS-12007:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 3s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 1 
unchanged - 7 fixed = 1 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <patch_file>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
| Timed out junit tests | 
org.apache.hadoop.ozone.container.ozoneimpl.TestRatisManager |
|   | org.apache.hadoop.hdfs.TestFileAppend3 |
|   | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | org.apache.hadoop.hdfs.TestPersistBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12007 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12874310/HDFS-12007-HDFS-7240.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux adf9bcce8021 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / c395bc8 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| whitespace | 

[jira] [Updated] (HDFS-12029) Data node process crashes after kernel upgrade

2017-06-23 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDFS-12029:

Priority: Blocker  (was: Critical)

>  Data node process crashes after kernel upgrade
> ---
>
> Key: HDFS-12029
> URL: https://issues.apache.org/jira/browse/HDFS-12029
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Anu Engineer
>Assignee: Nandakumar
>Priority: Blocker
>
>  We have seen that when the Linux kernel is upgraded to address a specific CVE 
>  ( https://access.redhat.com/security/vulnerabilities/stackguard ), it might 
> cause a datanode crash.
> We have observed this issue while upgrading the kernel from version 
> 3.10.0-514.6.2 to 3.10.0-514.21.2.
> The original kernel fix is here -- 
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1be7107fbe18eed3e319a6c3e83c78254b693acb
> The datanode fails with the following stack trace:
> {noformat}
> # 
> # A fatal error has been detected by the Java Runtime Environment: 
> # 
> # SIGBUS (0x7) at pc=0x7f458d078b7c, pid=13214, tid=139936990349120 
> # 
> # JRE version: (8.0_40-b25) (build ) 
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.40-b25 mixed mode 
> linux-amd64 compressed oops) 
> # Problematic frame: 
> # j java.lang.Object.<init>()V+0 
> # 
> # Failed to write core dump. Core dumps have been disabled. To enable core 
> dumping, try "ulimit -c unlimited" before starting Java again 
> # 
> # An error report file with more information is saved as: 
> # /tmp/hs_err_pid13214.log 
> # 
> # If you would like to submit a bug report, please visit: 
> # http://bugreport.java.com/bugreport/crash.jsp 
> # 
> {noformat}
> The root cause is a failure in jsvc. Passing a stack size value greater than 
> 1 MB as an argument mitigates it. Something like:
> {code}
> exec "$JSVC" \
> -Xss2m \
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter "$@"
> {code}
> This JIRA tracks potential fixes for this problem. We don't have data on how 
> this impacts other applications that run on the datanode, since the larger 
> stack size might increase the datanode's memory usage.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12030) Ozone: CLI: support infoKey command

2017-06-23 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-12030:
-

 Summary: Ozone: CLI: support infoKey command
 Key: HDFS-12030
 URL: https://issues.apache.org/jira/browse/HDFS-12030
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Xiaoyu Yao


{code}
HW11717:ozone xyao$ hdfs oz -infoKey http://localhost:9864/vol-2/bucket-1/key-1 
-user xyao 
Command Failed : {"httpCode":0,"shortMessage":"Not supported 
yet","resource":null,"message":"Not supported 
yet","requestID":null,"hostName":null}
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12029) Data node process crashes after kernel upgrade

2017-06-23 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar reassigned HDFS-12029:
-

Assignee: Nandakumar

>  Data node process crashes after kernel upgrade
> ---
>
> Key: HDFS-12029
> URL: https://issues.apache.org/jira/browse/HDFS-12029
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Anu Engineer
>Assignee: Nandakumar
>Priority: Critical
>
>  We have seen that when the Linux kernel is upgraded to address a specific CVE 
>  ( https://access.redhat.com/security/vulnerabilities/stackguard ), it might 
> cause a datanode crash.
> We have observed this issue while upgrading the kernel from version 
> 3.10.0-514.6.2 to 3.10.0-514.21.2.
> The original kernel fix is here -- 
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1be7107fbe18eed3e319a6c3e83c78254b693acb
> The datanode fails with the following stack trace:
> {noformat}
> # 
> # A fatal error has been detected by the Java Runtime Environment: 
> # 
> # SIGBUS (0x7) at pc=0x7f458d078b7c, pid=13214, tid=139936990349120 
> # 
> # JRE version: (8.0_40-b25) (build ) 
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.40-b25 mixed mode 
> linux-amd64 compressed oops) 
> # Problematic frame: 
> # j java.lang.Object.<init>()V+0 
> # 
> # Failed to write core dump. Core dumps have been disabled. To enable core 
> dumping, try "ulimit -c unlimited" before starting Java again 
> # 
> # An error report file with more information is saved as: 
> # /tmp/hs_err_pid13214.log 
> # 
> # If you would like to submit a bug report, please visit: 
> # http://bugreport.java.com/bugreport/crash.jsp 
> # 
> {noformat}
> The root cause is a failure in jsvc. Passing a stack size value greater than 
> 1 MB as an argument mitigates it. Something like:
> {code}
> exec "$JSVC" \
> -Xss2m \
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter "$@"
> {code}
> This JIRA tracks potential fixes for this problem. We don't have data on how 
> this impacts other applications that run on the datanode, since the larger 
> stack size might increase the datanode's memory usage.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12029) Data node process crashes after kernel upgrade

2017-06-23 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-12029:
---

 Summary:  Data node process crashes after kernel upgrade
 Key: HDFS-12029
 URL: https://issues.apache.org/jira/browse/HDFS-12029
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Anu Engineer
Priority: Critical


 We have seen that when the Linux kernel is upgraded to address a specific CVE 
 ( https://access.redhat.com/security/vulnerabilities/stackguard ), it might 
cause a datanode crash.

We have observed this issue while upgrading the kernel from version 
3.10.0-514.6.2 to 3.10.0-514.21.2.

The original kernel fix is here -- 
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1be7107fbe18eed3e319a6c3e83c78254b693acb

The datanode fails with the following stack trace:

{noformat}

# 
# A fatal error has been detected by the Java Runtime Environment: 
# 
# SIGBUS (0x7) at pc=0x7f458d078b7c, pid=13214, tid=139936990349120 
# 
# JRE version: (8.0_40-b25) (build ) 
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.40-b25 mixed mode linux-amd64 
compressed oops) 
# Problematic frame: 
# j java.lang.Object.<init>()V+0 
# 
# Failed to write core dump. Core dumps have been disabled. To enable core 
dumping, try "ulimit -c unlimited" before starting Java again 
# 
# An error report file with more information is saved as: 
# /tmp/hs_err_pid13214.log 
# 
# If you would like to submit a bug report, please visit: 
# http://bugreport.java.com/bugreport/crash.jsp 
# 
{noformat}

The root cause is a failure in jsvc. Passing a stack size value greater than 
1 MB as an argument mitigates it. Something like:

{code}
exec "$JSVC" \
-Xss2m \
org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter "$@"
{code}

This JIRA tracks potential fixes for this problem. We don't have data on how 
this impacts other applications that run on the datanode, since the larger 
stack size might increase the datanode's memory usage.
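
One way to check the JVM's default thread stack size when tuning this (a 
standard HotSpot flag query, not part of the original report):

{code}
java -XX:+PrintFlagsFinal -version | grep ThreadStackSize
{code}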





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11773) Ozone: KSM : add listVolumes

2017-06-23 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061545#comment-16061545
 ] 

Xiaoyu Yao commented on HDFS-11773:
---

ping [~cheersyang], can you post a patch for this one?

> Ozone: KSM : add listVolumes
> 
>
> Key: HDFS-11773
> URL: https://issues.apache.org/jira/browse/HDFS-11773
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Weiwei Yang
>
> The list volume call can be used in three different contexts. One is for 
> administrators to list all volumes in a cluster. The second is for an 
> administrator to list the volumes owned by a specific user. The third is a 
> user listing the volumes owned by himself/herself.
> Since these calls can return a large number of entries, the REST protocol 
> supports paging, via the use of prevKey, prefix and 
> maxKeys. The caller is aware that this call is neither atomic nor consistent, 
> so we can iterate over the list even while changes are happening to the list.
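> A sketch of paged iteration over the results (the parameter names come from 
> this description; the endpoint shape is an assumption, not a confirmed Ozone 
> REST call):
> {code}
> # first page: volumes with prefix "vol", at most 100 entries
> curl "http://localhost:9864/?prefix=vol&maxKeys=100"
> # next page: resume after the last key returned by the previous call
> curl "http://localhost:9864/?prefix=vol&maxKeys=100&prevKey=vol-100"
> {code}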



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11955) Ozone: Set proper parameter default values for listBuckets http request

2017-06-23 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061542#comment-16061542
 ] 

Xiaoyu Yao commented on HDFS-11955:
---

[~cheersyang], I hit the same issue yesterday and was about to file a ticket. 
+1 for fixing this. 


> Ozone: Set proper parameter default values for listBuckets http request
> ---
>
> Key: HDFS-11955
> URL: https://issues.apache.org/jira/browse/HDFS-11955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>
> HDFS-11779 implements the listBuckets function on the ozone server side; the 
> API supports several parameters: startKey, count and prefix. All of them are 
> optional in the client side REST API. This jira is to make sure we set 
> proper default values in the http request if they are not explicitly set by 
> users.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12028) Ozone: CLI: remove noisy slf4j binding output from hdfs oz command.

2017-06-23 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-12028:
-

 Summary: Ozone: CLI: remove noisy slf4j binding output from hdfs 
oz command.
 Key: HDFS-12028
 URL: https://issues.apache.org/jira/browse/HDFS-12028
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Xiaoyu Yao


Currently when you run the CLI "hdfs oz ...", there is always noisy SLF4J 
binding output in the log. This ticket is opened to remove it. 

{code}
xyao$ hdfs oz
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/Users/xyao/deploy/ozone/hadoop-3.0.0-alpha4-SNAPSHOT/share/hadoop/hdfs/lib/logback-classic-1.0.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12016) Ozone: SCM: Container metadata are not loaded properly after datanode restart

2017-06-23 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12016:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: HDFS-7240
Target Version/s: HDFS-7240
  Status: Resolved  (was: Patch Available)

Thanks [~nandakumar131] for reporting the issue and all for the reviews. I've 
committed the patch to the feature branch. 

> Ozone: SCM: Container metadata are not loaded properly after datanode restart
> -
>
> Key: HDFS-12016
> URL: https://issues.apache.org/jira/browse/HDFS-12016
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Affects Versions: HDFS-7240
>Reporter: Nandakumar
>Assignee: Xiaoyu Yao
> Fix For: HDFS-7240
>
> Attachments: HDFS-12016-HDFS-7240.001.patch, 
> HDFS-12016-HDFS-7240.002.patch, HDFS-12016-HDFS-7240.003.patch, 
> HDFS-12016-HDFS-7240.004.patch, HDFS-12016-HDFS-7240.005.patch
>
>
> Repro steps (Credit to [~nandakumar131])
> 1. create volume/bucket/key 
> 2. putkey
> 3. restart DN
> 4. getkey will hit a "container not found" error like the one below.
> {code}
> 2017-06-22 15:28:29,950 [Thread-48] INFO  (OzoneExceptionMapper.java:39)  
> vol-2/bucket-1/key-1 xyao 8727acc4-c1e9-4ba3-a819-4c0e16957079 - Returning 
> exception. ex: 
> {"httpCode":500,"shortMessage":"internalServerError","resource":"xyao","message":"org.apache.hadoop.scm.container.common.helpers.StorageContainerException:
>  Unable to find the container. Name: 
> 48cb0c3d-0537-4cff-b716-a7f69ebf50bc","requestID":"8 
> {code}
> The root cause is that OzoneContainer#OzoneContainer does not load containers 
> from the repository properly when ozone.container.metadata.dirs is specified. 
> The fix is to append the CONTAINER_ROOT_PREFIX when looking for containers on 
> the datanode.
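> A minimal sketch of the path fix described (only CONTAINER_ROOT_PREFIX is 
> named above; the surrounding names are assumptions for illustration):
> {code}
> // Containers are persisted under <metadata-dir>/<CONTAINER_ROOT_PREFIX>/...,
> // so the startup scan must include the prefix to find them again:
> Path containerRoot = Paths.get(metadataDir, CONTAINER_ROOT_PREFIX);
> File[] containerFiles = containerRoot.toFile().listFiles();
> {code}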



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11493) Ozone: SCM: Add the ability to handle container reports

2017-06-23 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061514#comment-16061514
 ] 

Xiaoyu Yao commented on HDFS-11493:
---

Thanks [~anu] for updating the patch. The patch looks pretty good to me. Here 
are my comments:

*HadoopExecutors.java*
Line 104-133
Should we reuse the existing one in 
ShutdownThreadsHelper.shutdownExecutorService?

*ScmConfigKeys.java*
Line 177: NIT: Dont -> Don’t

Line 186: Can we add some documentation for the other two configuration keys 
introduced? 

*OzoneConfigKeys.java*
NIT: Line 102-105 empty change


*ContainerReplicationManager.java*

Line 109: maxContainerReportThreads can be a local variable.

Line 133: NIT: javadoc incomplete

Line 191-199: this can be pulled into a generic CollectionUtil if it does not 
exist in hadoop-common/hadoop-hdfs

Line 201: conf parameter can be removed as it is not used

Line 216: let’s change the log level to debug to avoid flooding the scm log.

Line 244-249: could this cause an infinite loop when hitting unexpected 
exceptions? Can we have a counter to limit the number of times we will retry 
starting poolProcessThread (see the sketch after these comments)? Also, adding 
a counter as you mentioned in the TODO is a good idea; can we open a follow-up 
JIRA on that?


Line 289: NIT: we can remove some of the javadocs on streams as they don’t 
apply here.



*InProgressPools.java*

Should we rename it to InProgressPool if this is for a single pool being 
processed?

Line 116: should we change “<“ to “>” to indicate that we are done waiting for 
the maxWaitTime?

Line 207-228: one UNKNOWN node in the pool can cost 100s. Should we reduce the 
maxTry from 1000 to 100 here?


*SCMNodeManager.java*

Line 405: missing /Unknown


*ReplicationDatanodeStateManager.java*
Line 78-81: are we missing the size/keycount for the ContainerInfo of the 
ContainerReport?



> Ozone: SCM:  Add the ability to handle container reports 
> -
>
> Key: HDFS-11493
> URL: https://issues.apache.org/jira/browse/HDFS-11493
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: container-replication-storage.pdf, 
> exploring-scalability-scm.pdf, HDFS-11493-HDFS-7240.001.patch, 
> HDFS-11493-HDFS-7240.002.patch, HDFS-11493-HDFS-7240.003.patch, 
> HDFS-11493-HDFS-7240.004.patch
>
>
> Once a datanode sends the container report it is SCM's responsibility to 
> determine if the replication levels are acceptable. If it is not, SCM should 
> initiate a replication request to another datanode. This JIRA tracks how SCM  
> handles a container report.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12016) Ozone: SCM: Container metadata are not loaded properly after datanode restart

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061489#comment-16061489
 ] 

Hadoop QA commented on HDFS-12016:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
46s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 8 unchanged - 0 fixed = 9 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis |
|   | hadoop.ozone.container.ozoneimpl.TestRatisManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12016 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12874301/HDFS-12016-HDFS-7240.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d9c71c3d410f 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / c395bc8 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20013/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20013/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20013/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20013/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: SCM: Container metadata are not loaded properly after datanode restart
> -
>
> Key: HDFS-12016
> URL: https://issues.apache.org/jira/browse/HDFS-12016
> Project: 

[jira] [Commented] (HDFS-12026) libhdfspp: Fix compilation errors and warnings when compiling with Clang

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061482#comment-16061482
 ] 

Hadoop QA commented on HDFS-12026:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
46s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
47s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
5s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
50s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  6m 44s{color} | 
{color:red} hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_131 with JDK 
v1.8.0_131 generated 594 new + 5 unchanged - 0 fixed = 599 total (was 5) 
{color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
49s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  6m 49s{color} | 
{color:red} hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_131 with JDK 
v1.7.0_131 generated 594 new + 5 unchanged - 0 fixed = 599 total (was 5) 
{color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m  
2s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5ae34ac |
| JIRA Issue | HDFS-12026 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12874304/HDFS-12026.HDFS-8707.000.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 1e397d9310d2 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 0d2d073 |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_131 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 |
| cc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20014/artifact/patchprocess/diff-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_131.txt
 |
| cc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20014/artifact/patchprocess/diff-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_131.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20014/artifact/patchprocess/whitespace-tabs.txt
 |
| JDK v1.7.0_131  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20014/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20014/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfspp: Fix 

[jira] [Commented] (HDFS-12016) Ozone: SCM: Container metadata are not loaded properly after datanode restart

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061463#comment-16061463
 ] 

Hadoop QA commented on HDFS-12016:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 8 unchanged - 0 fixed = 9 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.ozone.container.common.TestDatanodeStateMachine |
|   | hadoop.ozone.container.ozoneimpl.TestRatisManager |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12016 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12874298/HDFS-12016-HDFS-7240.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c7f5e339e192 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / c395bc8 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20012/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20012/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20012/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20012/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: SCM: Container metadata 

[jira] [Updated] (HDFS-12018) Ozone: Misc: Add CBlocks defaults to ozone-defaults.xml

2017-06-23 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12018:
--
Status: Patch Available  (was: Open)

> Ozone: Misc: Add CBlocks defaults to ozone-defaults.xml
> ---
>
> Key: HDFS-12018
> URL: https://issues.apache.org/jira/browse/HDFS-12018
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>Priority: Trivial
> Attachments: HDFS-12018-HDFS-7240.001.patch
>
>
> We have just updated ozone-defaults.xml in HDFS-11990. This JIRA tracks the 
> issue that we need to do the same for CBlocks, since CBlock uses Ozone's config 
> files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12018) Ozone: Misc: Add CBlocks defaults to ozone-defaults.xml

2017-06-23 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12018:
--
Attachment: HDFS-12018-HDFS-7240.001.patch

Posted the initial patch; also removed a few unused keys in CBlockConfigKeys.

> Ozone: Misc: Add CBlocks defaults to ozone-defaults.xml
> ---
>
> Key: HDFS-12018
> URL: https://issues.apache.org/jira/browse/HDFS-12018
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>Priority: Trivial
> Attachments: HDFS-12018-HDFS-7240.001.patch
>
>
> We have just updated ozone-defaults.xml in HDFS-11990. This JIRA tracks the 
> issue that we need to do the same for CBlocks, since CBlock uses Ozone's config 
> files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12007) Ozone: Enable HttpServer2 for SCM and KSM

2017-06-23 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12007:

Attachment: HDFS-12007-HDFS-7240.006.patch

Rebased.

> Ozone: Enable HttpServer2 for SCM and KSM
> -
>
> Key: HDFS-12007
> URL: https://issues.apache.org/jira/browse/HDFS-12007
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12007-HDFS-7240.001.patch, 
> HDFS-12007-HDFS-7240.002.patch, HDFS-12007-HDFS-7240.003.patch, 
> HDFS-12007-HDFS-7240.004.patch, HDFS-12007-HDFS-7240.005.patch, 
> HDFS-12007-HDFS-7240.006.patch, Screen Shot 2017-06-22 at 10.28.05 PM.png, 
> Screen Shot 2017-06-22 at 10.28.32 PM.png, Screen Shot 2017-06-22 at 10.28.48 
> PM.png
>
>
> As a first step toward a web UI for KSM/SCM we need to start and stop an 
> HttpServer2 with the KSM and SCM processes. Similar to the Namenode and Datanode 
> it could be done with a small wrapper class, but practically it could be done 
> with a common super class to avoid duplicated code between KSM/SCM.
> As a result of this issue, we will have a listening web server in both 
> KSM/SCM with a simple template page and with all the default servlets 
> (conf/jmx/logLevel/stack).
> The https and kerberos code could be reused from the nameserver wrapper.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12007) Ozone: Enable HttpServer2 for SCM and KSM

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061438#comment-16061438
 ] 

Hadoop QA commented on HDFS-12007:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-12007 does not apply to HDFS-7240. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12007 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12874307/HDFS-12007-HDFS-7240.005.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20016/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Enable HttpServer2 for SCM and KSM
> -
>
> Key: HDFS-12007
> URL: https://issues.apache.org/jira/browse/HDFS-12007
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12007-HDFS-7240.001.patch, 
> HDFS-12007-HDFS-7240.002.patch, HDFS-12007-HDFS-7240.003.patch, 
> HDFS-12007-HDFS-7240.004.patch, HDFS-12007-HDFS-7240.005.patch, Screen Shot 
> 2017-06-22 at 10.28.05 PM.png, Screen Shot 2017-06-22 at 10.28.32 PM.png, 
> Screen Shot 2017-06-22 at 10.28.48 PM.png
>
>
> As a first step toward a web UI for KSM/SCM we need to start and stop an 
> HttpServer2 with the KSM and SCM processes. Similar to the Namenode and Datanode 
> it could be done with a small wrapper class, but practically it could be done 
> with a common super class to avoid duplicated code between KSM/SCM.
> As a result of this issue, we will have a listening web server in both 
> KSM/SCM with a simple template page and with all the default servlets 
> (conf/jmx/logLevel/stack).
> The https and kerberos code could be reused from the nameserver wrapper.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12007) Ozone: Enable HttpServer2 for SCM and KSM

2017-06-23 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061427#comment-16061427
 ] 

Elek, Marton commented on HDFS-12007:
-

1. I added the new configuration to the defaults. In the meantime I modified 
the names of the kerberos keytab/principal settings to follow the convention in 
the Namenode.

2. I also found that the bind host was ignored, so I fixed it and added an 
additional unit test (host + port live in the 'address' configuration, but the 
host can be overridden with the bind-host; see the sketch after this list).

3. Added hadoop.css and removed the unnecessary constants.
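
A minimal sketch of the address/bind-host resolution described in point 2 (the 
configuration keys and the default value are illustrative, following the 
Namenode's *-address / *-bind-host convention):

{code}
import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.net.NetUtils;

public class BindHostSketch {
  static InetSocketAddress httpBindAddress(Configuration conf) {
    // 'address' carries the host + port that clients connect to
    InetSocketAddress addr = NetUtils.createSocketAddr(
        conf.getTrimmed("ozone.scm.http-address", "0.0.0.0:9876"));
    // 'bind-host', if set, overrides only the host we listen on
    String bindHost = conf.getTrimmed("ozone.scm.http-bind-host");
    if (bindHost != null && !bindHost.isEmpty()) {
      addr = new InetSocketAddress(bindHost, addr.getPort());
    }
    return addr;
  }
}
{code}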

> Ozone: Enable HttpServer2 for SCM and KSM
> -
>
> Key: HDFS-12007
> URL: https://issues.apache.org/jira/browse/HDFS-12007
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12007-HDFS-7240.001.patch, 
> HDFS-12007-HDFS-7240.002.patch, HDFS-12007-HDFS-7240.003.patch, 
> HDFS-12007-HDFS-7240.004.patch, HDFS-12007-HDFS-7240.005.patch, Screen Shot 
> 2017-06-22 at 10.28.05 PM.png, Screen Shot 2017-06-22 at 10.28.32 PM.png, 
> Screen Shot 2017-06-22 at 10.28.48 PM.png
>
>
> As a first step toward a web UI for KSM/SCM we need to start and stop an 
> HttpServer2 with the KSM and SCM processes. Similar to the Namenode and Datanode 
> it could be done with a small wrapper class, but practically it could be done 
> with a common super class to avoid duplicated code between KSM/SCM.
> As a result of this issue, we will have a listening web server in both 
> KSM/SCM with a simple template page and with all the default servlets 
> (conf/jmx/logLevel/stack).
> The https and kerberos code could be reused from the nameserver wrapper.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12007) Ozone: Enable HttpServer2 for SCM and KSM

2017-06-23 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12007:

Attachment: HDFS-12007-HDFS-7240.005.patch

> Ozone: Enable HttpServer2 for SCM and KSM
> -
>
> Key: HDFS-12007
> URL: https://issues.apache.org/jira/browse/HDFS-12007
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12007-HDFS-7240.001.patch, 
> HDFS-12007-HDFS-7240.002.patch, HDFS-12007-HDFS-7240.003.patch, 
> HDFS-12007-HDFS-7240.004.patch, HDFS-12007-HDFS-7240.005.patch, Screen Shot 
> 2017-06-22 at 10.28.05 PM.png, Screen Shot 2017-06-22 at 10.28.32 PM.png, 
> Screen Shot 2017-06-22 at 10.28.48 PM.png
>
>
> As a first step toward a web UI for KSM/SCM we need to start and stop an 
> HttpServer2 with the KSM and SCM processes. Similar to the Namenode and Datanode 
> it could be done with a small wrapper class, but practically it could be done 
> with a common super class to avoid duplicated code between KSM/SCM.
> As a result of this issue, we will have a listening web server in both 
> KSM/SCM with a simple template page and with all the default servlets 
> (conf/jmx/logLevel/stack).
> The https and kerberos code could be reused from the nameserver wrapper.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12023) Ozone: test if all the configuration keys documented in ozone-defaults.xml

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061414#comment-16061414
 ] 

Hadoop QA commented on HDFS-12023:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
15s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
11s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
18s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 39s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.ozone.scm.TestContainerSQLCli |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12023 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12874288/HDFS-12023-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1dfdfd8a0c7d 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / c395bc8 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20011/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20011/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | 

[jira] [Updated] (HDFS-12026) libhdfspp: Fix compilation errors and warnings when compiling with Clang

2017-06-23 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-12026:
-
Attachment: HDFS-12026.HDFS-8707.000.patch

Took a first stab at this. It now seems to compile fine with Clang.

> libhdfspp: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-12026.HDFS-8707.000.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flags:
> -std=c++11 -stdlib=libc++
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12026) libhdfspp: Fix compilation errors and warnings when compiling with Clang

2017-06-23 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-12026:
-
Status: Patch Available  (was: Open)

> libhdfspp: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flags:
> -std=c++11 -stdlib=libc++
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11844) Ozone: Recover SCM state when SCM is restarted

2017-06-23 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061388#comment-16061388
 ] 

Xiaoyu Yao commented on HDFS-11844:
---

Resolving this as a duplicate of HDFS-12016.

> Ozone: Recover SCM state when SCM is restarted
> --
>
> Key: HDFS-11844
> URL: https://issues.apache.org/jira/browse/HDFS-11844
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Anu Engineer
>
> SCM loses its state once it is restarted. This issue can be reproduced by a 
> simple test with the following steps
> # Start NN, DN, SCM
> # Create several containers via SCM CLI
> # Restart DN
> # Get existing container info via SCM CLI; this step will fail with a container 
> doesn't exist error.
> {{ContainerManagerImpl}} maintains a cache of the container mapping 
> {{containerMap}}; if the DN is restarted, this information is lost. We need a way 
> to restore the state from the DB in a background thread.
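
A sketch of the background restore idea (all names are illustrative; this is 
not the actual ContainerManagerImpl code):

{code}
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ContainerStateLoaderSketch {
  private final Map<String, String> containerMap = new ConcurrentHashMap<>();

  public void startBackgroundLoad() {
    Thread loader = new Thread(() -> {
      // iterate the persisted container DB and repopulate the cache
      for (Map.Entry<String, String> e : readContainerDb().entrySet()) {
        containerMap.putIfAbsent(e.getKey(), e.getValue());
      }
    }, "container-state-loader");
    loader.setDaemon(true); // don't block process shutdown
    loader.start();
  }

  private Map<String, String> readContainerDb() {
    // placeholder: the real code would scan the on-disk metadata store
    return Collections.emptyMap();
  }
}
{code}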



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-11844) Ozone: Recover SCM state when SCM is restarted

2017-06-23 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDFS-11844.
---
Resolution: Duplicate

> Ozone: Recover SCM state when SCM is restarted
> --
>
> Key: HDFS-11844
> URL: https://issues.apache.org/jira/browse/HDFS-11844
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Anu Engineer
>
> SCM loses its state once it is restarted. This issue can be reproduced by a 
> simple test with the following steps
> # Start NN, DN, SCM
> # Create several containers via SCM CLI
> # Restart DN
> # Get existing container info via SCM CLI; this step will fail with a container 
> doesn't exist error.
> {{ContainerManagerImpl}} maintains a cache of the container mapping 
> {{containerMap}}; if the DN is restarted, this information is lost. We need a way 
> to restore the state from the DB in a background thread.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12016) Ozone: SCM: Container metadata are not loaded properly after datanode restart

2017-06-23 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12016:
--
Attachment: HDFS-12016-HDFS-7240.005.patch

Removed some unrelated changes.

> Ozone: SCM: Container metadata are not loaded properly after datanode restart
> -
>
> Key: HDFS-12016
> URL: https://issues.apache.org/jira/browse/HDFS-12016
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Affects Versions: HDFS-7240
>Reporter: Nandakumar
>Assignee: Xiaoyu Yao
> Attachments: HDFS-12016-HDFS-7240.001.patch, 
> HDFS-12016-HDFS-7240.002.patch, HDFS-12016-HDFS-7240.003.patch, 
> HDFS-12016-HDFS-7240.004.patch, HDFS-12016-HDFS-7240.005.patch
>
>
> Repro steps (Credit to [~nandakumar131])
> 1. create volume/bucket/key
> 2. putkey
> 3. restart DN
> 4. getkey will hit a container-not-found error like the one below.
> {code}
> 2017-06-22 15:28:29,950 [Thread-48] INFO  (OzoneExceptionMapper.java:39)  
> vol-2/bucket-1/key-1 xyao 8727acc4-c1e9-4ba3-a819-4c0e16957079 - Returning 
> exception. ex: 
> {"httpCode":500,"shortMessage":"internalServerError","resource":"xyao","message":"org.apache.hadoop.scm.container.common.helpers.StorageContainerException:
>  Unable to find the container. Name: 
> 48cb0c3d-0537-4cff-b716-a7f69ebf50bc","requestID":"8 
> {code}
> The root cause is that OzoneContainer#OzoneContainer does not load containers 
> from the repository properly when ozone.container.metadata.dirs is specified. 
> The fix is to append the CONTAINER_ROOT_PREFIX when looking for containers on 
> the datanode.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12009) Accept human-friendly units in dfsadmin -setBalancerBandwidth and -setQuota

2017-06-23 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061371#comment-16061371
 ] 

Andrew Wang commented on HDFS-12009:


Thanks for the review and commit Xiao!

> Accept human-friendly units in dfsadmin -setBalancerBandwidth and -setQuota
> ---
>
> Key: HDFS-12009
> URL: https://issues.apache.org/jira/browse/HDFS-12009
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 3.0.0-alpha4
>
> Attachments: HDFS-12009.001.patch, HDFS-12009.002.patch
>
>
> We support human-readable units now. The default balancing bandwidth in the 
> conf is "10m". However, human-readable units are not supported by dfsadmin 
> -setBalancerBandwidth. This means you can't pass the output of "hdfs getconf 
> -confKey dfs.datanode.balance.bandwidthPerSec" to "hdfs dfsadmin 
> -setBalancerBandwidth".
> This is a regression from before human-readable units were introduced.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12018) Ozone: Misc: Add CBlocks defaults to ozone-defaults.xml

2017-06-23 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang reassigned HDFS-12018:
-

Assignee: Chen Liang

> Ozone: Misc: Add CBlocks defaults to ozone-defaults.xml
> ---
>
> Key: HDFS-12018
> URL: https://issues.apache.org/jira/browse/HDFS-12018
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>Priority: Trivial
>
> We have just updated ozone-defaults.xml in HDFS-11990. This JIRA tracks the 
> issue that we need to the same for CBlocks, since CBlock uses Ozone's Config 
> files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12016) Ozone: SCM: Container metadata are not loaded properly after datanode restart

2017-06-23 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12016:
--
Attachment: HDFS-12016-HDFS-7240.004.patch

Thanks [~cheersyang] and [~anu] for the review. Uploaded a new patch that fixes 
the unit test in TestKeys.

> Ozone: SCM: Container metadata are not loaded properly after datanode restart
> -
>
> Key: HDFS-12016
> URL: https://issues.apache.org/jira/browse/HDFS-12016
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Affects Versions: HDFS-7240
>Reporter: Nandakumar
>Assignee: Xiaoyu Yao
> Attachments: HDFS-12016-HDFS-7240.001.patch, 
> HDFS-12016-HDFS-7240.002.patch, HDFS-12016-HDFS-7240.003.patch, 
> HDFS-12016-HDFS-7240.004.patch
>
>
> Repro steps (Credit to [~nandakumar131])
> 1. create volume/bucket/key
> 2. putkey
> 3. restart DN
> 4. getkey will hit a container-not-found error like the one below.
> {code}
> 2017-06-22 15:28:29,950 [Thread-48] INFO  (OzoneExceptionMapper.java:39)  
> vol-2/bucket-1/key-1 xyao 8727acc4-c1e9-4ba3-a819-4c0e16957079 - Returning 
> exception. ex: 
> {"httpCode":500,"shortMessage":"internalServerError","resource":"xyao","message":"org.apache.hadoop.scm.container.common.helpers.StorageContainerException:
>  Unable to find the container. Name: 
> 48cb0c3d-0537-4cff-b716-a7f69ebf50bc","requestID":"8 
> {code}
> The root cause is that OzoneContainer#OzoneContainer does not load containers 
> from the repository properly when ozone.container.metadata.dirs is specified. 
> The fix is to append the CONTAINER_ROOT_PREFIX when looking for containers on 
> the datanode.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12027) Add KMS API to get service version and health check

2017-06-23 Thread patrick white (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061300#comment-16061300
 ] 

patrick white commented on HDFS-12027:
--

Hi Wei-Chiu, thanks for the feedback.

Right, I tried kinit as both the KMS and HDFS privileged users, with the same result.

> Add KMS API to get service version and health check
> ---
>
> Key: HDFS-12027
> URL: https://issues.apache.org/jira/browse/HDFS-12027
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: kms
>Reporter: patrick white
>Priority: Minor
>
> Enhancement request: add an API to the Key Management Server which can be 
> used for health monitoring as well as programmatic version checks, such as to 
> return the service version identifier; suggest this:
> GET http://HOST:PORT/kms/v1/key/kms_version
> This API would be useful for production monitoring tools to quickly do KMS 
> instance reporting (dashboards) and basic health checks, as part of overall 
> monitoring of a Hadoop stack installation.
> Such an API would also be useful for debugging initial bring-up of a service 
> instance, such as validation of the KMS webserver and its interaction with ZK 
> before the key manager(s) are necessarily working. Currently I believe a 
> valid key needs to be set up and available before calls can return success.
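
A sketch of how a monitoring probe could call the proposed endpoint (the 
kms_version path is the proposal above, not an existing API; host and port are 
placeholders):

{code}
import java.net.HttpURLConnection;
import java.net.URL;

public class KmsVersionProbe {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://kms-host:16000/kms/v1/key/kms_version");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    // a 200 with a version body would indicate a healthy instance
    System.out.println("HTTP " + conn.getResponseCode());
    conn.disconnect();
  }
}
{code}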



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12027) Add KMS API to get service version and health check

2017-06-23 Thread patrick white (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061295#comment-16061295
 ] 

patrick white commented on HDFS-12027:
--

Ah OK, thanks again Xiao; yes, I am indeed still using the Tomcat-based version.



> Add KMS API to get service version and health check
> ---
>
> Key: HDFS-12027
> URL: https://issues.apache.org/jira/browse/HDFS-12027
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: kms
>Reporter: patrick white
>Priority: Minor
>
> Enhancement request: add an API to the Key Management Server which can be 
> used for health monitoring as well as programmatic version checks, such as to 
> return the service version identifier; suggest this:
> GET http://HOST:PORT/kms/v1/key/kms_version
> This API would be useful for production monitoring tools to quickly do KMS 
> instance reporting (dashboards) and basic health checks, as part of overall 
> monitoring of a Hadoop stack installation.
> Such an API would also be useful for debugging initial bring-up of a service 
> instance, such as validation of the KMS webserver and its interaction with ZK 
> before the key manager(s) are necessarily working. Currently I believe a 
> valid key needs to be set up and available before calls can return success.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12007) Ozone: Enable HttpServer2 for SCM and KSM

2017-06-23 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061285#comment-16061285
 ] 

Elek, Marton commented on HDFS-12007:
-

Thanks for the feedback/hints. Yes, I should include hadoop.css as well, not 
just the bootstrap. But the web UI could be improved in the following JIRAs.

Please hold off on the merge; I would like to add the configuration to 
ozone-default.xml to be compatible with HDFS-11990 / HDFS-12023.

> Ozone: Enable HttpServer2 for SCM and KSM
> -
>
> Key: HDFS-12007
> URL: https://issues.apache.org/jira/browse/HDFS-12007
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12007-HDFS-7240.001.patch, 
> HDFS-12007-HDFS-7240.002.patch, HDFS-12007-HDFS-7240.003.patch, 
> HDFS-12007-HDFS-7240.004.patch, Screen Shot 2017-06-22 at 10.28.05 PM.png, 
> Screen Shot 2017-06-22 at 10.28.32 PM.png, Screen Shot 2017-06-22 at 10.28.48 
> PM.png
>
>
> As a first step toward a web UI for KSM/SCM we need to start and stop an 
> HttpServer2 with the KSM and SCM processes. Similar to the Namenode and Datanode 
> it could be done with a small wrapper class, but practically it could be done 
> with a common super class to avoid duplicated code between KSM/SCM.
> As a result of this issue, we will have a listening web server in both 
> KSM/SCM with a simple template page and with all the default servlets 
> (conf/jmx/logLevel/stack).
> The https and kerberos code could be reused from the nameserver wrapper.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11993) Add log info when connect to datanode socket address failed

2017-06-23 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061279#comment-16061279
 ] 

Chen Liang commented on HDFS-11993:
---

Thanks [~candychencan] for the patch. Since this seems to be an slf4j logger, how 
about using {} placeholders? E.g. change

{code}
DFSClient.LOG.warn("Failed to connect to " + targetAddr + " for block " + 
targetBlock.getBlock() + ", add to deadNodes and continue. " + ex, ex);
{code}

to something like

{code}
DFSClient.LOG.warn("Failed to connect to {} for block {}, add to deadNodes and 
continue. ", targetAddr, targetBlock.getBlock(), ex);
{code}
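
For what it's worth, the placeholder form has two nice properties: the message 
string is only assembled when WARN is actually enabled, and slf4j treats a 
trailing Throwable argument specially, so the stack trace of {{ex}} is still 
logged even though it has no matching placeholder.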

> Add log info when connect to datanode socket address failed
> ---
>
> Key: HDFS-11993
> URL: https://issues.apache.org/jira/browse/HDFS-11993
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: chencan
>Assignee: chencan
> Attachments: HADOOP-11993.patch
>
>
> In function BlockSeekTo, when connecting to the datanode socket address fails, 
> we log as follows:
> DFSClient.LOG.warn("Failed to connect to " + targetAddr + " for block"
>   + ", add to deadNodes and continue. " + ex, ex);
> Adding the block info may be more explicit.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12008) Improve the available-space block placement policy

2017-06-23 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061277#comment-16061277
 ] 

Kihwal Lee commented on HDFS-12008:
---

branch-2 precommit failure. HADOOP-14146 is still causing problems. Will bug 
Daryn again.

> Improve the available-space block placement policy
> --
>
> Key: HDFS-12008
> URL: https://issues.apache.org/jira/browse/HDFS-12008
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Affects Versions: 2.8.1
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-12008.patch, HDFS-12008.v2.branch-2.patch, 
> HDFS-12008.v2.trunk.patch, RandomAllocationPolicy.png
>
>
> AvailableSpaceBlockPlacementPolicy currently picks two nodes unconditionally, 
> then picks one of them. It could avoid picking the second node when that is not 
> necessary.
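
To make the idea concrete, a rough sketch of a conditional second pick (the 
heuristic is illustrative, not the actual patch):

{code}
import java.util.Random;

public class AvailableSpacePickSketch {
  private final Random rand = new Random();
  // usage ratios in [0,1]; lower means more free space
  private final double[] nodeUsage;

  AvailableSpacePickSketch(double[] nodeUsage) {
    this.nodeUsage = nodeUsage;
  }

  int chooseNode() {
    int a = rand.nextInt(nodeUsage.length);
    // old behavior: always draw a second node and keep the emptier one;
    // the improvement skips the second draw when it is not needed
    if (!needSecondCandidate(a)) {
      return a;
    }
    int b = rand.nextInt(nodeUsage.length);
    return nodeUsage[a] <= nodeUsage[b] ? a : b;
  }

  private boolean needSecondCandidate(int a) {
    // illustrative heuristic: only compare when the first pick is heavily used
    return nodeUsage[a] > 0.5;
  }
}
{code}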



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12023) Ozone: test if all the configuration keys documented in ozone-defaults.xml

2017-06-23 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061269#comment-16061269
 ] 

Elek, Marton commented on HDFS-12023:
-

Thanks for the hint, TestConfigurationFieldsBase is exactly what I needed. Updated 
the patch.

It also shows when the defaults are different:

I increased OZONE_SCM_HANDLER_COUNT_DEFAULT to 20 (as it was defined in 
ozone-default).

But I couldn't decide whether the handler type should be fixed or not. (As far as 
I know, the local handler is only for testing.)

{code}
ozone-default.xml has 1 properties that do not match the default Config value
  XML Property: ozone.handler.type
  XML Value:local
  Config Name:  OZONE_HANDLER_TYPE_DEFAULT
  Config Value: distributed
{code}
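
For context, a minimal sketch of such a subclass (field names as in 
hadoop-common's TestConfigurationFieldsBase; the Ozone class list and its 
imports are illustrative):

{code}
import org.apache.hadoop.conf.TestConfigurationFieldsBase;

public class TestOzoneConfigurationFieldsSketch
    extends TestConfigurationFieldsBase {
  @Override
  public void initializeMemberVariables() {
    // the defaults file to cross-check against
    xmlFilename = "ozone-default.xml";
    // classes whose *_KEY constants must appear in the XML
    // (imports for these key classes omitted; illustrative)
    configurationClasses = new Class[] {
        OzoneConfigKeys.class, ScmConfigKeys.class };
    errorIfMissingConfigProps = true; // fail when a key lacks an XML entry
    errorIfMissingXmlProps = true;    // fail when XML has an undeclared key
  }
}
{code}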



> Ozone: test if all the configuration keys documented in ozone-defaults.xml
> --
>
> Key: HDFS-12023
> URL: https://issues.apache.org/jira/browse/HDFS-12023
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>  Labels: test
> Attachments: HDFS-12023-HDFS-7240.001.patch, 
> HDFS-12023-HDFS-7240.002.patch
>
>
> HDFS-11990 added the missing configuration entries to the ozone-defaults.xml.
> This patch contains a unit test which checks that all the configuration keys are 
> still documented
> (constant fields of the specific configuration classes which end with _KEY 
> should be part of the defaults.xml).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12023) Ozone: test if all the configuration keys documented in ozone-defaults.xml

2017-06-23 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12023:

Attachment: HDFS-12023-HDFS-7240.002.patch

> Ozone: test if all the configuration keys documented in ozone-defaults.xml
> --
>
> Key: HDFS-12023
> URL: https://issues.apache.org/jira/browse/HDFS-12023
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>  Labels: test
> Attachments: HDFS-12023-HDFS-7240.001.patch, 
> HDFS-12023-HDFS-7240.002.patch
>
>
> HDFS-11990 added the missing configuration entries to the ozone-defaults.xml.
> This patch contains a unit test which checks that all the configuration keys are 
> still documented
> (constant fields of the specific configuration classes which end with _KEY 
> should be part of the defaults.xml).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12027) Add KMS API to get service version and health check

2017-06-23 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061246#comment-16061246
 ] 

Wei-Chiu Chuang commented on HDFS-12027:


/jmx is protected by Kerberos/SPNEGO in a secure cluster and you need 
authentication, IIRC.

> Add KMS API to get service version and health check
> ---
>
> Key: HDFS-12027
> URL: https://issues.apache.org/jira/browse/HDFS-12027
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: kms
>Reporter: patrick white
>Priority: Minor
>
> Enhancement request, add an API to the Key Management Server which can be 
> used for health monitoring as well as programatic version checks, such as to 
> return the service version identifier, suggest this;
> GET http://HOST:PORT/kms/v1/key/kms_version
> This API would be useful for production monitoring tools to quickly do KMS 
> instance reporting (dashboards) and basic health checks, as part of overall 
> monitoring of a Hadoop stack installation.
> Such an API would also be useful for debugging initial bring-up of a service 
> instance, such as validation of the KMS webserver and its interaction with ZK 
> before the key manager(s) are necessarily working. Currently i believe a 
> valid key needs to be setup and available before calls can return success.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12027) Add KMS API to get service version and health check

2017-06-23 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061240#comment-16061240
 ] 

Xiao Chen commented on HDFS-12027:
--

CNF sounds familiar; it seems there was HADOOP-13872, but that was supposedly 
fixed by the tomcat -> jetty change HADOOP-13597 (for 3.0+). The security of that 
depends on HttpServer2 then; I asked the same before and here's [John's 
answer|https://issues.apache.org/jira/browse/HDFS-10860?focusedCommentId=15847657=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15847657].
 :)

Not sure about branch-2 though, haven't had a chance to look there. 
HADOOP-13872 seems to indicate it only happens on 3.0.0-alpha2+.

> Add KMS API to get service version and health check
> ---
>
> Key: HDFS-12027
> URL: https://issues.apache.org/jira/browse/HDFS-12027
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: kms
>Reporter: patrick white
>Priority: Minor
>
> Enhancement request: add an API to the Key Management Server which can be
> used for health monitoring as well as programmatic version checks, such as
> returning the service version identifier. Suggested form:
> GET http://HOST:PORT/kms/v1/key/kms_version
> This API would be useful for production monitoring tools to quickly do KMS
> instance reporting (dashboards) and basic health checks, as part of overall
> monitoring of a Hadoop stack installation.
> Such an API would also be useful for debugging initial bring-up of a service
> instance, such as validation of the KMS webserver and its interaction with ZK
> before the key manager(s) are necessarily working. Currently I believe a
> valid key needs to be set up and available before calls can return success.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12008) Improve the available-space block placement policy

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061214#comment-16061214
 ] 

Hadoop QA commented on HDFS-12008:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  1m 
45s{color} | {color:red} root in branch-2 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m  
5s{color} | {color:red} hadoop-hdfs in branch-2 failed with JDK v1.8.0_131. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-hdfs in branch-2 failed with JDK v1.7.0_131. 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-hdfs in branch-2 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-hdfs in branch-2 failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_131. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_131. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_131. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 25s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_131. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 28 unchanged - 2 fixed = 30 total (was 30) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5e40efe |
| JIRA Issue | HDFS-12008 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12874280/HDFS-12008.v2.branch-2.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 175e3bbda187 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 4c6184b |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_131 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20010/artifact/patchprocess/branch-mvninstall-root.txt
 |
| 

[jira] [Commented] (HDFS-12027) Add KMS API to get service version and health check

2017-06-23 Thread patrick white (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061210#comment-16061210
 ] 

patrick white commented on HDFS-12027:
--

Thanks Xiao, I was not aware of the jmx call support yet. I tried it quickly
and got servlet CNF exceptions, so I need to look further.

Sounds promising though, especially if it is available to clients (which of
course brings up security questions).



> Add KMS API to get service version and health check
> ---
>
> Key: HDFS-12027
> URL: https://issues.apache.org/jira/browse/HDFS-12027
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: kms
>Reporter: patrick white
>Priority: Minor
>
> Enhancement request: add an API to the Key Management Server which can be
> used for health monitoring as well as programmatic version checks, such as
> returning the service version identifier. Suggested form:
> GET http://HOST:PORT/kms/v1/key/kms_version
> This API would be useful for production monitoring tools to quickly do KMS
> instance reporting (dashboards) and basic health checks, as part of overall
> monitoring of a Hadoop stack installation.
> Such an API would also be useful for debugging initial bring-up of a service
> instance, such as validation of the KMS webserver and its interaction with ZK
> before the key manager(s) are necessarily working. Currently I believe a
> valid key needs to be set up and available before calls can return success.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12027) Add KMS API to get service version and health check

2017-06-23 Thread patrick white (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061202#comment-16061202
 ] 

patrick white commented on HDFS-12027:
--

Thanks much for the feedback, John.

It would be good to have something a client can use; I think 'names' is a
privileged command. This would allow non-admins of the KMS instance to run
monitors or status checks on their own: cluster service engineers, users, and
so forth.

Right, and understood that the API version is embedded in the call. I was
thinking of a command whose return is specific to the installed KMS build's
instance, something like Oozie's 'version' API.

Just for info, 'keys/names' is currently giving me an
UnsupportedOperationException. This is running as the privileged user, with
other commands such as '_metadata' succeeding. I need to follow up though; not
sure why I'm getting this.



> Add KMS API to get service version and health check
> ---
>
> Key: HDFS-12027
> URL: https://issues.apache.org/jira/browse/HDFS-12027
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: kms
>Reporter: patrick white
>Priority: Minor
>
> Enhancement request: add an API to the Key Management Server which can be
> used for health monitoring as well as programmatic version checks, such as
> returning the service version identifier. Suggested form:
> GET http://HOST:PORT/kms/v1/key/kms_version
> This API would be useful for production monitoring tools to quickly do KMS
> instance reporting (dashboards) and basic health checks, as part of overall
> monitoring of a Hadoop stack installation.
> Such an API would also be useful for debugging initial bring-up of a service
> instance, such as validation of the KMS webserver and its interaction with ZK
> before the key manager(s) are necessarily working. Currently I believe a
> valid key needs to be set up and available before calls can return success.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12027) Add KMS API to get service version and health check

2017-06-23 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061188#comment-16061188
 ] 

Xiao Chen commented on HDFS-12027:
--

Would the jmx endpoint suffice for the needs here? {{GET http://HOST:PORT/kms/jmx}}
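
If it does, a liveness probe could be as simple as this sketch (host and port
are placeholders; on a secure cluster the endpoint sits behind
Kerberos/SPNEGO, so a plain GET like this would be rejected):

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class KmsJmxProbe {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://kms.example.com:9600/kms/jmx");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    if (conn.getResponseCode() != 200) {
      System.err.println("KMS unhealthy: HTTP " + conn.getResponseCode());
      System.exit(1);
    }
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line); // JSON dump of the JMX beans
      }
    }
  }
}
{code}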

> Add KMS API to get service version and health check
> ---
>
> Key: HDFS-12027
> URL: https://issues.apache.org/jira/browse/HDFS-12027
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: kms
>Reporter: patrick white
>Priority: Minor
>
> Enhancement request: add an API to the Key Management Server which can be
> used for health monitoring as well as programmatic version checks, such as
> returning the service version identifier. Suggested form:
> GET http://HOST:PORT/kms/v1/key/kms_version
> This API would be useful for production monitoring tools to quickly do KMS
> instance reporting (dashboards) and basic health checks, as part of overall
> monitoring of a Hadoop stack installation.
> Such an API would also be useful for debugging initial bring-up of a service
> instance, such as validation of the KMS webserver and its interaction with ZK
> before the key manager(s) are necessarily working. Currently I believe a
> valid key needs to be set up and available before calls can return success.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12008) Improve the available-space block placement policy

2017-06-23 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-12008:
--
Attachment: HDFS-12008.v2.branch-2.patch
HDFS-12008.v2.trunk.patch

Attaching new patches. The target percentage has been corrected to 65 +/- 2%.

> Improve the available-space block placement policy
> --
>
> Key: HDFS-12008
> URL: https://issues.apache.org/jira/browse/HDFS-12008
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Affects Versions: 2.8.1
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-12008.patch, HDFS-12008.v2.branch-2.patch, 
> HDFS-12008.v2.trunk.patch, RandomAllocationPolicy.png
>
>
> AvailableSpaceBlockPlacementPolicy currently picks two nodes unconditionally, 
> then picks one node. It could avoid picking the second node when not 
> necessary.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-11955) Ozone: Set proper parameter default values for listBuckets http request

2017-06-23 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-11955 started by Weiwei Yang.
--
> Ozone: Set proper parameter default values for listBuckets http request
> ---
>
> Key: HDFS-11955
> URL: https://issues.apache.org/jira/browse/HDFS-11955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>
> HDFS-11779 implements the listBuckets function on the ozone server side; the
> API supports several parameters: startKey, count and prefix. All of them are
> optional in the client-side REST API. This jira is to make sure we set
> proper default values in the http request if they are not explicitly set by
> users.
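
A hedged sketch of what applying those defaults could look like on the client
side (the query parameter names and the default count of 100 are assumptions
for illustration, not the actual Ozone client code):

{code:java}
// Fill in defaults for the optional listBuckets query parameters before
// building the request; parameter names and the default are hypothetical.
static String buildListBucketsQuery(Integer count, String startKey,
    String prefix) {
  StringBuilder q = new StringBuilder();
  q.append("max-keys=").append(count == null ? 100 : count);
  if (startKey != null && !startKey.isEmpty()) {
    q.append("&start-key=").append(startKey);
  }
  if (prefix != null && !prefix.isEmpty()) {
    q.append("&prefix=").append(prefix);
  }
  return q.toString();
}
{code}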



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12027) Add KMS API to get service version and health check

2017-06-23 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061160#comment-16061160
 ] 

John Zhuge commented on HDFS-12027:
---

For a health check, you may use "GET http://HOST:PORT/kms/v1/keys/names". The
response message may get too large if the number of keys is significant.

The KMS version is "v1" and is already embedded in the URL.
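
A monitor built on that suggestion could ignore the body entirely and key off
the status code, as in this sketch (host and port are placeholders, and on a
secure cluster the call also needs SPNEGO auth and sufficient KMS ACLs):

{code:java}
import java.net.HttpURLConnection;
import java.net.URL;

public class KmsHealthCheck {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://kms.example.com:9600/kms/v1/keys/names");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    // Any 200 counts as healthy; the key list itself is discarded, which
    // sidesteps the large-response concern when there are many keys.
    System.out.println(conn.getResponseCode() == 200 ? "healthy" : "unhealthy");
  }
}
{code}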

> Add KMS API to get service version and health check
> ---
>
> Key: HDFS-12027
> URL: https://issues.apache.org/jira/browse/HDFS-12027
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: kms
>Reporter: patrick white
>Priority: Minor
>
> Enhancement request: add an API to the Key Management Server which can be
> used for health monitoring as well as programmatic version checks, such as
> returning the service version identifier. Suggested form:
> GET http://HOST:PORT/kms/v1/key/kms_version
> This API would be useful for production monitoring tools to quickly do KMS
> instance reporting (dashboards) and basic health checks, as part of overall
> monitoring of a Hadoop stack installation.
> Such an API would also be useful for debugging initial bring-up of a service
> instance, such as validation of the KMS webserver and its interaction with ZK
> before the key manager(s) are necessarily working. Currently I believe a
> valid key needs to be set up and available before calls can return success.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12008) Improve the available-space block placement policy

2017-06-23 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061156#comment-16061156
 ] 

Kihwal Lee commented on HDFS-12008:
---

I changed the number of nodes from 20 (4 racks) to 100 (10 racks) and spread
the nodes evenly across racks. The result seems closer to the ideal value.

||   || 1.0f || 0.6f ||
| ideal | 75% | 65% |
| trunk as is | 67.9% | 53.9% |
| trunk w/change | 74.2% | 64.7%  |

I will update the patch so that the test checks for a more realistic target
percentage.
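
For reference, the ideal numbers fit a simple two-candidate model: if half
the nodes form the emptier group and the emptier of the two candidates wins
with probability p, a placement lands in that group with probability
0.25 + 0.5p, which gives 75% for p = 1.0 and 65% for p = 0.8. Below is a toy
simulation of that model; it is an illustration only, not the actual
AvailableSpaceBlockPlacementPolicy code, and the mapping from the configured
fraction (0.6f, 1.0f) to p is my reading of the table, not documented
semantics.

{code:java}
import java.util.Random;

public class AvailableSpaceToy {
  public static void main(String[] args) {
    Random rand = new Random();
    for (double p : new double[] {1.0, 0.8}) {
      int hits = 0;
      int trials = 1_000_000;
      for (int i = 0; i < trials; i++) {
        boolean a = rand.nextBoolean(); // candidate 1 in the emptier half?
        boolean b = rand.nextBoolean(); // candidate 2 in the emptier half?
        boolean emptierWins = rand.nextDouble() < p;
        // Lands in the emptier half if both candidates are there, or if
        // exactly one is and that (emptier) candidate wins the draw.
        if ((a && b) || ((a ^ b) && emptierWins)) {
          hits++;
        }
      }
      System.out.printf("p=%.1f -> %.2f%%%n", p, 100.0 * hits / trials);
    }
  }
}
{code}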

> Improve the available-space block placement policy
> --
>
> Key: HDFS-12008
> URL: https://issues.apache.org/jira/browse/HDFS-12008
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Affects Versions: 2.8.1
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-12008.patch, RandomAllocationPolicy.png
>
>
> AvailableSpaceBlockPlacementPolicy currently picks two nodes unconditionally, 
> then picks one node. It could avoid picking the second node when not 
> necessary.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12024) Fix typo's in FsDatasetImpl.java

2017-06-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061119#comment-16061119
 ] 

Hudson commented on HDFS-12024:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11916 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11916/])
HDFS-12024. Fix typo's in FsDatasetImpl.java. Contributed by Yasen liu. 
(brahma: rev abdea26280136587a47aea075ada6122d40d706e)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java


> Fix typo's in FsDatasetImpl.java
> 
>
> Key: HDFS-12024
> URL: https://issues.apache.org/jira/browse/HDFS-12024
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yasen Liu
>Assignee: Yasen Liu
> Fix For: 3.0.0-alpha4
>
> Attachments: HDFS-12024.001.patch
>
>
> 1. The word "repot" is misspelled:
> “LOG.warn("Failed to {color:red}repot {color}bad block " + corruptBlock, e)”
> The word “repot” in the log message should be "report".
> 2. Also found a javadoc parameter error:
>   /**
>* Removes a set of volumes from FsDataset.
>* @param {color:red}storageLocationsToRemove {color}a set of
>* {@link StorageLocation}s for each volume.
>* @param clearFailure set true to clear failure information.
>*/
>   @Override
>   public void removeVolumes(
>   final Collection<StorageLocation> {color:red}storageLocsToRemove{color},
>   boolean clearFailure) {
> "storageLocationsToRemove" in the @param javadoc should be "storageLocsToRemove"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12027) Add KMS API to get service version and health check

2017-06-23 Thread patrick white (JIRA)
patrick white created HDFS-12027:


 Summary: Add KMS API to get service version and health check
 Key: HDFS-12027
 URL: https://issues.apache.org/jira/browse/HDFS-12027
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: kms
Reporter: patrick white
Priority: Minor


Enhancement request: add an API to the Key Management Server which can be used
for health monitoring as well as programmatic version checks, such as
returning the service version identifier. Suggested form:

GET http://HOST:PORT/kms/v1/key/kms_version

This API would be useful for production monitoring tools to quickly do KMS
instance reporting (dashboards) and basic health checks, as part of overall
monitoring of a Hadoop stack installation.

Such an API would also be useful for debugging initial bring-up of a service
instance, such as validation of the KMS webserver and its interaction with ZK
before the key manager(s) are necessarily working. Currently I believe a valid
key needs to be set up and available before calls can return success.
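
A hedged sketch of what such an endpoint could look like, in the JAX-RS style
the KMS web resources use (the class name, path, and payload fields here are
hypothetical, not an agreed design):

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

import org.apache.hadoop.util.VersionInfo;

@Path("v1")
public class KMSVersionResource {
  @GET
  @Path("kms_version")
  @Produces(MediaType.APPLICATION_JSON)
  public Response kmsVersion() {
    Map<String, Object> json = new LinkedHashMap<>();
    json.put("name", "hadoop-kms");
    json.put("version", VersionInfo.getVersion());   // build version string
    json.put("revision", VersionInfo.getRevision()); // source revision
    return Response.ok(json).build();
  }
}
{code}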





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12024) Fix typo's in FsDatasetImpl.java

2017-06-23 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-12024:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha4
   Status: Resolved  (was: Patch Available)

Committed to trunk. {{test failures}} are unrelated. 

> Fix typo's in FsDatasetImpl.java
> 
>
> Key: HDFS-12024
> URL: https://issues.apache.org/jira/browse/HDFS-12024
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yasen Liu
>Assignee: Yasen Liu
> Fix For: 3.0.0-alpha4
>
> Attachments: HDFS-12024.001.patch
>
>
> 1. The word "repot" is misspelled:
> “LOG.warn("Failed to {color:red}repot {color}bad block " + corruptBlock, e)”
> The word “repot” in the log message should be "report".
> 2. Also found a javadoc parameter error:
>   /**
>* Removes a set of volumes from FsDataset.
>* @param {color:red}storageLocationsToRemove {color}a set of
>* {@link StorageLocation}s for each volume.
>* @param clearFailure set true to clear failure information.
>*/
>   @Override
>   public void removeVolumes(
>   final Collection<StorageLocation> {color:red}storageLocsToRemove{color},
>   boolean clearFailure) {
> "storageLocationsToRemove" in the @param javadoc should be "storageLocsToRemove"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12024) Fix typo's in FsDatasetImpl.java

2017-06-23 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061086#comment-16061086
 ] 

Brahma Reddy Battula edited comment on HDFS-12024 at 6/23/17 3:06 PM:
--

Committed to trunk. {{test failures}} are unrelated. Thanks [~Yasen Liu] for
your contribution.


was (Author: brahmareddy):
Committed to trunk. {{test failures}} are unrelated. 

> Fix typo's in FsDatasetImpl.java
> 
>
> Key: HDFS-12024
> URL: https://issues.apache.org/jira/browse/HDFS-12024
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yasen Liu
>Assignee: Yasen Liu
> Fix For: 3.0.0-alpha4
>
> Attachments: HDFS-12024.001.patch
>
>
> 1. The word "repot" is misspelled:
> “LOG.warn("Failed to {color:red}repot {color}bad block " + corruptBlock, e)”
> The word “repot” in the log message should be "report".
> 2. Also found a javadoc parameter error:
>   /**
>* Removes a set of volumes from FsDataset.
>* @param {color:red}storageLocationsToRemove {color}a set of
>* {@link StorageLocation}s for each volume.
>* @param clearFailure set true to clear failure information.
>*/
>   @Override
>   public void removeVolumes(
>   final Collection<StorageLocation> {color:red}storageLocsToRemove{color},
>   boolean clearFailure) {
> "storageLocationsToRemove" in the @param javadoc should be "storageLocsToRemove"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12024) Fix typo in FsDatasetImpl.java

2017-06-23 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-12024:

Summary: Fix typo in FsDatasetImpl.java  (was: spell error in 
FsDatasetImpl.java)

> Fix typo in FsDatasetImpl.java
> --
>
> Key: HDFS-12024
> URL: https://issues.apache.org/jira/browse/HDFS-12024
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yasen Liu
>Assignee: Yasen Liu
> Attachments: HDFS-12024.001.patch
>
>
> 1. The word "repot" is misspelled:
> “LOG.warn("Failed to {color:red}repot {color}bad block " + corruptBlock, e)”
> The word “repot” in the log message should be "report".
> 2. Also found a javadoc parameter error:
>   /**
>* Removes a set of volumes from FsDataset.
>* @param {color:red}storageLocationsToRemove {color}a set of
>* {@link StorageLocation}s for each volume.
>* @param clearFailure set true to clear failure information.
>*/
>   @Override
>   public void removeVolumes(
>   final Collection<StorageLocation> {color:red}storageLocsToRemove{color},
>   boolean clearFailure) {
> "storageLocationsToRemove" in the @param javadoc should be "storageLocsToRemove"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12024) Fix typo's in FsDatasetImpl.java

2017-06-23 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-12024:

Summary: Fix typo's in FsDatasetImpl.java  (was: Fix typo in 
FsDatasetImpl.java)

> Fix typo's in FsDatasetImpl.java
> 
>
> Key: HDFS-12024
> URL: https://issues.apache.org/jira/browse/HDFS-12024
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yasen Liu
>Assignee: Yasen Liu
> Attachments: HDFS-12024.001.patch
>
>
> 1. The word "repot" is misspelled:
> “LOG.warn("Failed to {color:red}repot {color}bad block " + corruptBlock, e)”
> The word “repot” in the log message should be "report".
> 2. Also found a javadoc parameter error:
>   /**
>* Removes a set of volumes from FsDataset.
>* @param {color:red}storageLocationsToRemove {color}a set of
>* {@link StorageLocation}s for each volume.
>* @param clearFailure set true to clear failure information.
>*/
>   @Override
>   public void removeVolumes(
>   final Collection<StorageLocation> {color:red}storageLocsToRemove{color},
>   boolean clearFailure) {
> "storageLocationsToRemove" in the @param javadoc should be "storageLocsToRemove"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11991) Ozone: Ozone shell: the root is assumed to hdfs

2017-06-23 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16060987#comment-16060987
 ] 

Weiwei Yang edited comment on HDFS-11991 at 6/23/17 2:32 PM:
-

Hi [~anu]

Sure, we can discuss each item in more detail. One comment on item 2 -
Discovery of Configs: this reminds me of some work I've done in HADOOP-13628,
which adds a common REST API to retrieve configuration from a server. It was
done in a common {{ConfServlet}} which is enabled by default in HttpServer2.
The work done by Elek in HDFS-12007 should be able to get this covered, so it
won't be a problem at all.

For 3 - Discovery of the root user: I am a bit confused. The auth check is
usually done on the server side, which has access to configuration files. I am
curious why it is necessary to discover root users.

Thank you.


was (Author: cheersyang):
Hi [~anu]

Sure, we can discuss each item in more detail. One comment on item 2 -
Discovery of Configs: this reminds me of some work I've done in HADOOP-13628,
which adds a common REST API to retrieve configuration from a server. It was
done in a common {{ConfServlet}} which is enabled by default in HttpServer2.
The work done by Elek in HDFS-12007 should be able to get this covered, so it
won't be a problem at all.

> Ozone: Ozone shell: the root is assumed to hdfs
> ---
>
> Key: HDFS-11991
> URL: https://issues.apache.org/jira/browse/HDFS-11991
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Weiwei Yang
> Fix For: HDFS-7240
>
>
> The *hdfs oz* command (the ozone shell) has a command-line option, _--root_,
> to run some commands as root easily. But after HDFS-11655 that assumption is
> no longer true. We need to detect the user that started the scm/ksm service,
> and _root_ should be mapped to that user.
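
A hedged sketch of the client-side shape such a fix could take (the discovery
mechanism is left as a stub, since how to find the service owner is exactly
the open question in this thread):

{code:java}
import org.apache.hadoop.security.UserGroupInformation;

public class RootUserResolver {
  // Map --root to the discovered service owner instead of a hardcoded
  // "hdfs"; lookupServiceOwner() is a hypothetical stand-in for whatever
  // discovery mechanism this issue settles on.
  static String resolveUser(boolean rootFlag) throws Exception {
    if (!rootFlag) {
      return UserGroupInformation.getCurrentUser().getShortUserName();
    }
    return lookupServiceOwner();
  }

  static String lookupServiceOwner() {
    throw new UnsupportedOperationException("discovery mechanism TBD");
  }
}
{code}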



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11991) Ozone: Ozone shell: the root is assumed to hdfs

2017-06-23 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16060987#comment-16060987
 ] 

Weiwei Yang commented on HDFS-11991:


Hi [~anu]

Sure, we can discuss each item in more detail. One comment on item 2 -
Discovery of Configs: this reminds me of some work I've done in HADOOP-13628,
which adds a common REST API to retrieve configuration from a server. It was
done in a common {{ConfServlet}} which is enabled by default in HttpServer2.
The work done by Elek in HDFS-12007 should be able to get this covered, so it
won't be a problem at all.
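
For example, pulling a daemon's live configuration through that servlet could
look like this sketch (the host and port are placeholders):

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class ConfProbe {
  public static void main(String[] args) throws Exception {
    // ConfServlet is registered at /conf on HttpServer2-based daemons
    // and supports format=json|xml.
    URL url = new URL("http://scm.example.com:9876/conf?format=json");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(url.openStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line); // the daemon's effective configuration
      }
    }
  }
}
{code}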

> Ozone: Ozone shell: the root is assumed to hdfs
> ---
>
> Key: HDFS-11991
> URL: https://issues.apache.org/jira/browse/HDFS-11991
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Weiwei Yang
> Fix For: HDFS-7240
>
>
> The *hdfs oz* command (the ozone shell) has a command-line option, _--root_,
> to run some commands as root easily. But after HDFS-11655 that assumption is
> no longer true. We need to detect the user that started the scm/ksm service,
> and _root_ should be mapped to that user.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12026) libhdfspp: Fix compilation errors and warnings when compiling with Clang

2017-06-23 Thread Anatoli Shein (JIRA)
Anatoli Shein created HDFS-12026:


 Summary: libhdfspp: Fix compilation errors and warnings when 
compiling with Clang 
 Key: HDFS-12026
 URL: https://issues.apache.org/jira/browse/HDFS-12026
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Anatoli Shein
Assignee: Anatoli Shein


Currently multiple errors and warnings prevent libhdfspp from being compiled
with Clang. It should compile cleanly using the flags:
-std=c++11 -stdlib=libc++

and also warning flags:
-Weverything -Wno-c++98-compat -Wno-missing-prototypes 
-Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
-Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12021) Ozone: Documentation: Add Ozone-defaults documentation

2017-06-23 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16060911#comment-16060911
 ] 

Weiwei Yang commented on HDFS-12021:


Hi [~linyiqun]

Yeah, my point was: if we add all the necessary explanation in
ozone-defaults.xml and make it available via the web site, why do we need
another place for a similar document? That doubles the maintenance effort. But
maybe I am missing the purpose of this jira; if so, forgive me :P.

> Ozone: Documentation: Add Ozone-defaults  documentation
> ---
>
> Key: HDFS-12021
> URL: https://issues.apache.org/jira/browse/HDFS-12021
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
> Attachments: hadoop_doc_front.jpg
>
>
> We need to add documentation about the settings that are exposed via
> ozone-defaults.xml.
> Since ozone is new, we might have to put some extra effort into this to make 
> it easy to understand. In other words, we should write a proper doc 
> explaining what these settings mean and the rationale of various values we 
> choose, instead of a table with lots of settings.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


