[jira] [Commented] (HDDS-355) Disable OpenKeyDeleteService and DeleteKeysService.

2018-08-16 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583400#comment-16583400
 ] 

Xiaoyu Yao commented on HDDS-355:
-

Thanks [~anu] for working on this. v1 patch LGTM, +1.

We will take care of the Delete/OpenKey related test failures when fixing these 
services. 

> Disable OpenKeyDeleteService and DeleteKeysService.
> ---
>
> Key: HDDS-355
> URL: https://issues.apache.org/jira/browse/HDDS-355
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-355.001.patch
>
>
> We have identified performance issues with these two background services and 
> will improve them with several follow-up JIRAs after this one. 






[jira] [Commented] (HDDS-119) Skip Apache license header check for some ozone doc scripts

2018-08-16 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583393#comment-16583393
 ] 

Hudson commented on HDDS-119:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14790 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14790/])
HDDS-119. Skip Apache license header check for some ozone doc scripts. (xyao: 
rev 2d13e410d8a84b27e65fccff24bd8d86c3ab6b1d)
* (edit) hadoop-ozone/docs/pom.xml
* (edit) hadoop-ozone/docs/static/OzoneOverview.svg


> Skip Apache license header check for some ozone doc scripts
> ---
>
> Key: HDDS-119
> URL: https://issues.apache.org/jira/browse/HDDS-119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-119.00.patch, HDDS-119.01.patch, HDDS-119.02.patch, 
> HDDS-119.03.patch
>
>
> {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}






[jira] [Commented] (HDDS-268) Add SCM close container watcher

2018-08-16 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583369#comment-16583369
 ] 

Xiaoyu Yao commented on HDDS-268:
-

As mentioned in HDDS-343, the CLOSE_ALL semantic should guarantee that we don't 
need to wait for acks from all DNs to flip the state. I think the issue above 
should be addressed after that.

> Add SCM close container watcher
> ---
>
> Key: HDDS-268
> URL: https://issues.apache.org/jira/browse/HDDS-268
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-268.00.patch, HDDS-268.01.patch, HDDS-268.02.patch, 
> HDDS-268.03.patch
>
>







[jira] [Comment Edited] (HDDS-268) Add SCM close container watcher

2018-08-16 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583369#comment-16583369
 ] 

Xiaoyu Yao edited comment on HDDS-268 at 8/17/18 5:22 AM:
--

As mentioned in HDDS-343, the REPLICATE_ALL semantic should guarantee that we 
don't need to wait for acks from all DNs to flip the state. I think the issue above 
should be addressed after that.


was (Author: xyao):
As mentioned in HDDS-343, the CLOSE_ALL semantic should guarantee that we don't 
need to wait for acks from all DNs to flip the state. I think the issue above 
should be addressed after that.

> Add SCM close container watcher
> ---
>
> Key: HDDS-268
> URL: https://issues.apache.org/jira/browse/HDDS-268
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-268.00.patch, HDDS-268.01.patch, HDDS-268.02.patch, 
> HDDS-268.03.patch
>
>







[jira] [Updated] (HDDS-119) Skip Apache license header check for some ozone doc scripts

2018-08-16 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-119:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~anu] and [~ajayydv] for the review. I just pushed the fix to trunk.

> Skip Apache license header check for some ozone doc scripts
> ---
>
> Key: HDDS-119
> URL: https://issues.apache.org/jira/browse/HDDS-119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-119.00.patch, HDDS-119.01.patch, HDDS-119.02.patch, 
> HDDS-119.03.patch
>
>
> {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}






[jira] [Commented] (HDDS-119) Skip Apache license header check for some ozone doc scripts

2018-08-16 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583358#comment-16583358
 ] 

Ajay Kumar commented on HDDS-119:
-

[~xyao] thanks for the updated patch. Tested locally. +1

> Skip Apache license header check for some ozone doc scripts
> ---
>
> Key: HDDS-119
> URL: https://issues.apache.org/jira/browse/HDDS-119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-119.00.patch, HDDS-119.01.patch, HDDS-119.02.patch, 
> HDDS-119.03.patch
>
>
> {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}






[jira] [Commented] (HDDS-119) Skip Apache license header check for some ozone doc scripts

2018-08-16 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583355#comment-16583355
 ] 

Anu Engineer commented on HDDS-119:
---

+1, LGTM

> Skip Apache license header check for some ozone doc scripts
> ---
>
> Key: HDDS-119
> URL: https://issues.apache.org/jira/browse/HDDS-119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-119.00.patch, HDDS-119.01.patch, HDDS-119.02.patch, 
> HDDS-119.03.patch
>
>
> {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}






[jira] [Commented] (HDDS-313) Add metrics to containerState Machine

2018-08-16 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583354#comment-16583354
 ] 

Xiaoyu Yao commented on HDDS-313:
-

Thanks [~candychencan] for the update. It looks good to me. Can you rebase the 
patch as it does not apply to trunk any more?

> Add metrics to containerState Machine
> -
>
> Key: HDDS-313
> URL: https://issues.apache.org/jira/browse/HDDS-313
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: chencan
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-313.001.patch, HDDS-313.002.patch, 
> HDDS-313.003.patch, HDDS-313.004.patch
>
>
> Metrics need to be added to ContainerStateMachine to keep track of various 
> Ratis ops like writeStateMachine/readStateMachine/applyTransactions.
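For context, a minimal sketch of how such per-op counters are commonly wired up 
with the Hadoop metrics2 library; the class name, metric names, and context 
below are illustrative assumptions, not the contents of the attached patches.

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Illustrative metrics holder for ContainerStateMachine-style Ratis ops.
@Metrics(about = "Container State Machine Metrics", context = "dfs")
public class CSMMetrics {
  @Metric private MutableCounterLong numWriteStateMachineOps;
  @Metric private MutableCounterLong numReadStateMachineOps;
  @Metric private MutableCounterLong numApplyTransactionOps;

  public static CSMMetrics create() {
    // Registers the annotated counters with the default metrics system.
    return DefaultMetricsSystem.instance().register(
        "CSMMetrics", "Container State Machine Metrics", new CSMMetrics());
  }

  public void incNumWriteStateMachineOps() { numWriteStateMachineOps.incr(); }
  public void incNumReadStateMachineOps()  { numReadStateMachineOps.incr(); }
  public void incNumApplyTransactionOps()  { numApplyTransactionOps.incr(); }
}
{code}

The state machine would then call the matching incNum* method at the start of each 
writeStateMachine/readStateMachine/applyTransaction invocation.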






[jira] [Commented] (HDFS-13772) Erasure coding: Unnecessary NameNode Logs displaying for Enabling/Disabling Erasure coding policies which are already enabled/disabled

2018-08-16 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583353#comment-16583353
 ] 

genericqa commented on HDFS-13772:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 32s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
48s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13772 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935965/HDFS-13772-04.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 873a275e690d 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1290e3c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24795/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24795/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24795/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24795/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| findbugs | 

[jira] [Commented] (HDDS-98) Adding Ozone Manager Audit Log

2018-08-16 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583344#comment-16583344
 ] 

Dinesh Chitlangia commented on HDDS-98:
---

[~xyao] - Thank you for the input. I have started working on this again and 
will be testing locally before I provide the next update.

> Adding Ozone Manager Audit Log
> --
>
> Key: HDDS-98
> URL: https://issues.apache.org/jira/browse/HDDS-98
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> This ticket is opened to add the Ozone Manager's audit log. 






[jira] [Work started] (HDDS-98) Adding Ozone Manager Audit Log

2018-08-16 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-98 started by Dinesh Chitlangia.
-
> Adding Ozone Manager Audit Log
> --
>
> Key: HDDS-98
> URL: https://issues.apache.org/jira/browse/HDDS-98
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> This ticket is opened to add the Ozone Manager's audit log. 






[jira] [Commented] (HDFS-13806) EC: No error message for unsetting EC policy of the directory inherits the erasure coding policy from an ancestor directory

2018-08-16 Thread Vinayakumar B (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583334#comment-16583334
 ] 

Vinayakumar B commented on HDFS-13806:
--

Yes, this is not correct.
An attempt to unset the EC policy on a subdirectory that does not actually have an 
explicit policy set (it only inherits one at file creation) should throw an exception. 
Maybe a suggestion to use "REPLICATION" for 3x replication could also be added to 
the exception message.
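For illustration, a minimal sketch of the repro from the description below, using the 
public DistributedFileSystem erasure-coding API; the paths and policy name are example 
values, and the final call is what this JIRA argues should fail rather than succeed 
silently.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class UnsetEcOnInheritedDir {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs =
        (DistributedFileSystem) new Path("/").getFileSystem(conf);

    Path parent = new Path("/ecdir");          // example paths
    Path child  = new Path("/ecdir/subdir");
    dfs.mkdirs(parent);
    dfs.setErasureCodingPolicy(parent, "RS-6-3-1024k");
    dfs.mkdirs(child);                         // child only inherits the policy

    // Today this returns successfully; per this JIRA it should throw, because
    // the sub-directory never had an explicit policy of its own.
    dfs.unsetErasureCodingPolicy(child);
  }
}
{code}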

> EC: No error message for unsetting EC policy of the directory inherits the 
> erasure coding policy from an ancestor directory
> ---
>
> Key: HDFS-13806
> URL: https://issues.apache.org/jira/browse/HDFS-13806
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
> Environment: 3 Node SUSE Linux cluster
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Minor
> Attachments: No_error_unset_ec_policy.png
>
>
> No error message is thrown when unsetting the EC policy of a directory that 
> inherits the erasure coding policy from an ancestor directory
> Steps:
> --
>  - Create a directory
>  - Set an EC policy for the directory
>  - Create a file inside that directory
>  - Create a sub-directory inside the parent directory
>  - Check that both the file and the sub-directory inherit the EC policy from the 
> parent directory
>  - Try to unset the EC policy for the file and check that it throws an error such 
> as [ Cannot unset an erasure coding policy on a file]
>  - Try to unset the EC policy for the sub-directory and check that it prints a 
> success message such as [Unset erasure coding policy from ] 
>  instead of throwing the error message, which is wrong behavior
> Actual output:
> No proper error message is thrown when unsetting the EC policy of a directory 
> that inherits the erasure coding policy from an ancestor directory
>  A success message is displayed instead of an error message
>  Expected output:
>  
>  A proper error message should be thrown when trying to unset the EC policy of a 
> directory that inherits the erasure coding policy from an ancestor directory,
>  like the error message thrown when unsetting the EC policy of a file that 
> inherits the erasure coding policy from an ancestor directory






[jira] [Commented] (HDDS-313) Add metrics to containerState Machine

2018-08-16 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1658#comment-1658
 ] 

genericqa commented on HDDS-313:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDDS-313 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-313 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935959/HDDS-313.004.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/779/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add metrics to containerState Machine
> -
>
> Key: HDDS-313
> URL: https://issues.apache.org/jira/browse/HDDS-313
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: chencan
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-313.001.patch, HDDS-313.002.patch, 
> HDDS-313.003.patch, HDDS-313.004.patch
>
>
> Metrics need to be added to ContainerStateMachine to keep track of various 
> Ratis ops like writeStateMachine/readStateMachine/applyTransactions.






[jira] [Updated] (HDFS-13786) EC: Display erasure coding policy for sub-directories is not working

2018-08-16 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13786:

Fix Version/s: 3.1.2
   3.0.4

> EC: Display erasure coding policy for sub-directories is not working
> 
>
> Key: HDFS-13786
> URL: https://issues.apache.org/jira/browse/HDFS-13786
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
> Environment: 3 Node SUSE Linux Cluster
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: Display_EC_Policy_Missing_Sub_Dir.png, 
> HDFS-13786-01.patch
>
>
> EC: Display erasure coding policy for sub-directories is not working
> - Create a directory 
>  - Set an EC policy for the directory
>  - Create a file inside that directory 
>  - Create a sub-directory inside the parent directory
>  - Check the EC policy set for the files and sub-folders of the parent 
> directory with the command 
>  "hadoop fs -ls -e /ecdir" 
>  The EC policy will be displayed only for files and is missing for 
> sub-directories, which is wrong behavior
>  - But if you check the EC policy of the sub-directory with "hdfs ec 
> -getPolicy ", it will show
>  the EC policy
>  
>  Actual output:
>  
>  Displaying the erasure coding policy for sub-directories is not working with 
> the command "hadoop fs -ls -e "
> Expected output:
> It should also display the erasure coding policy for sub-directories with the 
> command "hadoop fs -ls -e "






[jira] [Updated] (HDFS-13772) Erasure coding: Unnecessary NameNode Logs displaying for Enabling/Disabling Erasure coding policies which are already enabled/disabled

2018-08-16 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13772:

Attachment: HDFS-13772-04.patch

> Erasure coding: Unnecessary NameNode Logs displaying for Enabling/Disabling 
> Erasure coding policies which are already enabled/disabled
> --
>
> Key: HDFS-13772
> URL: https://issues.apache.org/jira/browse/HDFS-13772
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
> Environment: 3 Node SuSE Linux cluster 
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Trivial
> Attachments: EC_capture1.PNG, HDFS-13772-01.patch, 
> HDFS-13772-02.patch, HDFS-13772-03 .patch, HDFS-13772-04.patch
>
>
> Unnecessary NameNode logs are displayed when enabling/disabling erasure coding 
> policies which are already enabled/disabled
> - Enable any erasure coding policy, e.g. "RS-LEGACY-6-3-1024k"
> - Check that the console log displays "Erasure coding policy RS-LEGACY-6-3-1024k 
> is enabled"
> - Try to enable the same policy again multiple times with "hdfs ec -enablePolicy 
> -policy RS-LEGACY-6-3-1024k";
>  instead of throwing an error message such as "policy already enabled", it 
> displays the same message "Erasure coding policy RS-LEGACY-6-3-1024k is 
> enabled"
> - Also, in the NameNode log, the "policy enabled" messages are logged multiple 
> times unnecessarily even though the policy is already enabled,
>  like this: 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> - While executing the erasure coding policy disable command, the same type of 
> logs also come multiple times even though the policy is already 
>  disabled. It should throw an error message such as "policy is already disabled" 
> for an already disabled policy.
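A minimal sketch, not the actual HDFS-13772 patch, of the behavior the steps above ask 
for: only change state and log when the policy was not already in the requested state, 
and report back whether anything changed so the CLI can print "already enabled/disabled". 
All names here are illustrative.

{code}
import java.util.HashMap;
import java.util.Map;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative stand-in for the ErasureCodingPolicyManager enable/disable logic.
class EcPolicyStates {
  private static final Logger LOG = LoggerFactory.getLogger(EcPolicyStates.class);
  private final Map<String, Boolean> enabledByName = new HashMap<>();

  /** @return true only if the call actually changed the policy's state. */
  synchronized boolean enablePolicy(String name) {
    if (Boolean.TRUE.equals(enabledByName.get(name))) {
      return false;                 // already enabled: no repeated INFO logging
    }
    enabledByName.put(name, true);
    LOG.info("Enable the erasure coding policy {}", name);
    return true;
  }

  synchronized boolean disablePolicy(String name) {
    if (!Boolean.TRUE.equals(enabledByName.get(name))) {
      return false;                 // already disabled: caller can report it
    }
    enabledByName.put(name, false);
    LOG.info("Disable the erasure coding policy {}", name);
    return true;
  }
}
{code}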






[jira] [Commented] (HDFS-13831) Make block increment deletion number configurable

2018-08-16 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583315#comment-16583315
 ] 

Yiqun Lin commented on HDFS-13831:
--

Can anyone help add [~jianliang.wu] as an HDFS contributor? He is willing to 
work on this. :)

> Make block increment deletion number configurable
> -
>
> Key: HDFS-13831
> URL: https://issues.apache.org/jira/browse/HDFS-13831
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Priority: Major
>
> When the NN deletes a large directory, it will hold the write lock for a long 
> time. To improve this, we remove the blocks in batches so that other waiters 
> have a chance to acquire the lock. But right now, the batch size is a 
> hard-coded value.
> {code}
>   static int BLOCK_DELETION_INCREMENT = 1000;
> {code}
> We can make this value configurable, so that we can control how often other 
> waiters get a chance to acquire the lock. 






[jira] [Commented] (HDFS-13831) Make block increment deletion number configurable

2018-08-16 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583307#comment-16583307
 ] 

Wei-Chiu Chuang commented on HDFS-13831:


+1 to making it configurable. Better yet, make it time-aware, say, release the lock 
every 1 second.
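A rough sketch of the time-aware variant suggested here, under the assumption of a 
simple elapsed-time check; the one-second threshold and all names are illustrative.

{code}
import java.util.Iterator;
import java.util.List;

// Illustrative only: release the write lock based on how long it has been held,
// rather than after a fixed number of deleted blocks.
class TimeAwareBlockDeleter {
  private static final long MAX_LOCK_HOLD_MS = 1000;   // "every 1 second"
  private final Object writeLock = new Object();       // stand-in for the NN lock

  void deleteBlocks(List<Long> blockIds) {
    Iterator<Long> it = blockIds.iterator();
    while (it.hasNext()) {
      synchronized (writeLock) {
        long start = System.currentTimeMillis();
        // Keep deleting until the lock has been held for ~1s, then back off.
        while (it.hasNext()
            && System.currentTimeMillis() - start < MAX_LOCK_HOLD_MS) {
          removeBlock(it.next());
        }
      }
      // Lock released here; other waiters get a chance before the next round.
    }
  }

  private void removeBlock(long blockId) { /* remove from blocksMap, etc. */ }
}
{code}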

> Make block increment deletion number configurable
> -
>
> Key: HDFS-13831
> URL: https://issues.apache.org/jira/browse/HDFS-13831
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Priority: Major
>
> When the NN deletes a large directory, it will hold the write lock for a long 
> time. To improve this, we remove the blocks in batches so that other waiters 
> have a chance to acquire the lock. But right now, the batch size is a 
> hard-coded value.
> {code}
>   static int BLOCK_DELETION_INCREMENT = 1000;
> {code}
> We can make this value configurable, so that we can control how often other 
> waiters get a chance to acquire the lock. 






[jira] [Commented] (HDDS-313) Add metrics to containerState Machine

2018-08-16 Thread chencan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583306#comment-16583306
 ] 

chencan commented on HDDS-313:
--

Hi [~xyao], thanks for your reply. I have added the Apache License header in 
the v4 patch.

> Add metrics to containerState Machine
> -
>
> Key: HDDS-313
> URL: https://issues.apache.org/jira/browse/HDDS-313
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: chencan
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-313.001.patch, HDDS-313.002.patch, 
> HDDS-313.003.patch, HDDS-313.004.patch
>
>
> Metrics need to be added to ContainerStateMachine to keep track of various 
> Ratis ops like writeStateMachine/readStateMachine/applyTransactions.






[jira] [Created] (HDFS-13831) Make block increment deletion number configurable

2018-08-16 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-13831:


 Summary: Make block increment deletion number configurable
 Key: HDFS-13831
 URL: https://issues.apache.org/jira/browse/HDFS-13831
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.1.0
Reporter: Yiqun Lin


When the NN deletes a large directory, it will hold the write lock for a long 
time. To improve this, we remove the blocks in batches so that other waiters have 
a chance to acquire the lock. But right now, the batch size is a hard-coded value.
{code}
  static int BLOCK_DELETION_INCREMENT = 1000;
{code}
We can make this value configurable, so that we can control how often other 
waiters get a chance to acquire the lock. 
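A minimal sketch of the configurable batch size being proposed; the config key name and 
default below are assumptions for illustration only, not a committed patch.

{code}
import java.util.Iterator;
import java.util.List;
import org.apache.hadoop.conf.Configuration;

// Illustrative only: a configurable deletion increment in place of the
// hard-coded BLOCK_DELETION_INCREMENT constant.
class IncrementalBlockDeleter {
  // Hypothetical key/default, named here just for the example.
  static final String BLOCK_DELETION_INCREMENT_KEY =
      "dfs.namenode.block.deletion.increment";
  static final int BLOCK_DELETION_INCREMENT_DEFAULT = 1000;

  private final int increment;
  private final Object writeLock = new Object();   // stand-in for the NN write lock

  IncrementalBlockDeleter(Configuration conf) {
    increment = conf.getInt(BLOCK_DELETION_INCREMENT_KEY,
        BLOCK_DELETION_INCREMENT_DEFAULT);
  }

  void deleteBlocks(List<Long> blockIds) {
    Iterator<Long> it = blockIds.iterator();
    while (it.hasNext()) {
      synchronized (writeLock) {                   // re-acquire the lock per batch
        for (int i = 0; i < increment && it.hasNext(); i++) {
          removeBlock(it.next());
        }
      }                                            // released between batches so
    }                                              // other waiters can get in
  }

  private void removeBlock(long blockId) { /* remove from blocksMap, etc. */ }
}
{code}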






[jira] [Updated] (HDDS-313) Add metrics to containerState Machine

2018-08-16 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-313:
-
Status: Patch Available  (was: Open)

> Add metrics to containerState Machine
> -
>
> Key: HDDS-313
> URL: https://issues.apache.org/jira/browse/HDDS-313
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: chencan
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-313.001.patch, HDDS-313.002.patch, 
> HDDS-313.003.patch, HDDS-313.004.patch
>
>
> Metrics need to be added to ContainerStateMachine to keep track of various 
> Ratis ops like writeStateMachine/readStateMachine/applyTransactions.






[jira] [Updated] (HDDS-313) Add metrics to containerState Machine

2018-08-16 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-313:
-
Attachment: HDDS-313.004.patch

> Add metrics to containerState Machine
> -
>
> Key: HDDS-313
> URL: https://issues.apache.org/jira/browse/HDDS-313
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: chencan
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-313.001.patch, HDDS-313.002.patch, 
> HDDS-313.003.patch, HDDS-313.004.patch
>
>
> Metrics need to be added to ContainerStateMachine to keep track of various 
> Ratis ops like writeStateMachine/readStateMachine/applyTransactions.






[jira] [Updated] (HDDS-313) Add metrics to containerState Machine

2018-08-16 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-313:
-
Status: Open  (was: Patch Available)

> Add metrics to containerState Machine
> -
>
> Key: HDDS-313
> URL: https://issues.apache.org/jira/browse/HDDS-313
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: chencan
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-313.001.patch, HDDS-313.002.patch, 
> HDDS-313.003.patch, HDDS-313.004.patch
>
>
> Metrics need to be added to ContainerStateMachine to keep track of various 
> Ratis ops like writeStateMachine/readStateMachine/applyTransactions.






[jira] [Commented] (HDFS-13821) RBF: Add dfs.federation.router.mount-table.cache.enable so that users can disable cache

2018-08-16 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583288#comment-16583288
 ] 

Yiqun Lin commented on HDFS-13821:
--

I prefer to disable the cache as a temporary approach if we don't have a good 
way to improve this right now.
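A minimal sketch of the switch being proposed: a boolean config key (the name below 
mirrors the JIRA title, but the key and all class/method names are illustrative 
assumptions) that, when false, bypasses the Guava cache and resolves the path directly, 
avoiding the LocalCache lock contention shown in the stack trace below.

{code}
import java.util.concurrent.Callable;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import org.apache.hadoop.conf.Configuration;

// Illustrative only: gate the location cache behind a config flag so users can
// disable it when the Guava LocalCache segment lock becomes a bottleneck.
class CachedPathResolver {
  static final String MOUNT_TABLE_CACHE_ENABLE =
      "dfs.federation.router.mount-table.cache.enable";

  private final boolean cacheEnabled;
  private final Cache<String, String> locationCache;

  CachedPathResolver(Configuration conf) {
    cacheEnabled = conf.getBoolean(MOUNT_TABLE_CACHE_ENABLE, true);
    locationCache = cacheEnabled
        ? CacheBuilder.newBuilder().maximumSize(10_000).build()
        : null;
  }

  String getDestinationForPath(final String path) throws Exception {
    if (!cacheEnabled) {
      return lookupDestination(path);     // no shared cache lock to contend on
    }
    return locationCache.get(path, new Callable<String>() {
      @Override
      public String call() {
        return lookupDestination(path);
      }
    });
  }

  private String lookupDestination(String path) {
    return "/ns0" + path;                 // placeholder for the mount-table lookup
  }
}
{code}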

> RBF: Add dfs.federation.router.mount-table.cache.enable so that users can 
> disable cache
> ---
>
> Key: HDFS-13821
> URL: https://issues.apache.org/jira/browse/HDFS-13821
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.0, 2.9.1, 3.0.3
>Reporter: Fei Hui
>Priority: Major
> Attachments: HDFS-13821.001.patch, LocalCacheTest.java, 
> image-2018-08-13-11-27-49-023.png
>
>
> When I tested RBF, I found a performance problem.
> ProxyAvgTime from Ganglia was very high, so I ran jstack on the Router and 
> got the following stack frames
> {quote}
>    java.lang.Thread.State: WAITING (parking)
>     at sun.misc.Unsafe.park(Native Method)
>     - parking to wait for  <0x0005c264acd8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
>     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
>     at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
>     at 
> java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
>     at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2249)
>     at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
>     at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
>     at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4764)
>     at 
> org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver.getDestinationForPath(MountTableResolver.java:380)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2104)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2087)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getListing(RouterRpcServer.java:1050)
>     at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:640)
>     at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
> {quote}
> Many threads are blocked on *LocalCache*.
> After disabling the cache, ProxyAvgTime goes down as shown below:
>  !image-2018-08-13-11-27-49-023.png! 






[jira] [Commented] (HDFS-10240) Race between close/recoverLease leads to missing block

2018-08-16 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-10240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583264#comment-16583264
 ] 

genericqa commented on HDFS-10240:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  1m  
0s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}159m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-10240 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935941/HDFS-10240.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 88548849d7c3 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d428061 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24794/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24794/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24794/testReport/ |
| Max. process+thread count | 3312 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDDS-355) Disable OpenKeyDeleteService and DeleteKeysService.

2018-08-16 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583212#comment-16583212
 ] 

genericqa commented on HDDS-355:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-ozone: The patch generated 0 new + 1 
unchanged - 4 fixed = 1 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
11s{color} | {color:red} hadoop-ozone/ozone-manager generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m  9s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
27s{color} | {color:red} The patch generated 10 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-ozone/ozone-manager |
|  |  Dead store to blockDeleteInterval in new 
org.apache.hadoop.ozone.om.KeyManagerImpl(ScmBlockLocationProtocol, 
OMMetadataManager, OzoneConfiguration, String)  At KeyManagerImpl.java:new 
org.apache.hadoop.ozone.om.KeyManagerImpl(ScmBlockLocationProtocol, 
OMMetadataManager, OzoneConfiguration, String)  At KeyManagerImpl.java:[line 
107] |
| 

[jira] [Commented] (HDFS-10240) Race between close/recoverLease leads to missing block

2018-08-16 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-10240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583200#comment-16583200
 ] 

Hudson commented on HDFS-10240:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14789 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14789/])
HDFS-10240. Race between close/recoverLease leads to missing block. (weichiu: 
rev 1290e3c647092f0bfbb250731a6805aba1be8e4b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery2.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java


> Race between close/recoverLease leads to missing block
> --
>
> Key: HDFS-10240
> URL: https://issues.apache.org/jira/browse/HDFS-10240
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: zhouyingchao
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-10240 scenarios.jpg, HDFS-10240-001.patch, 
> HDFS-10240-002.patch, HDFS-10240-003.patch, HDFS-10240-004.patch, 
> HDFS-10240.005.patch, HDFS-10240.006.patch, HDFS-10240.007.patch, 
> HDFS-10240.test.patch
>
>
> We got a missing block in our cluster, and logs related to the missing block 
> are as follows:
> 2016-03-28,10:00:06,188 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> allocateBlock: XX. BP-219149063-10.108.84.25-1446859315800 
> blk_1226490256_153006345{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]]}
> 2016-03-28,10:00:06,205 INFO BlockStateChange: BLOCK* 
> blk_1226490256_153006345{blockUCState=UNDER_RECOVERY, primaryNodeIndex=2, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]]}
>  recovery started, 
> primary=ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]
> 2016-03-28,10:00:06,205 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.internalReleaseLease: File XX has not been closed. Lease 
> recovery is in progress. RecoveryId = 153006357 for block 
> blk_1226490256_153006345{blockUCState=UNDER_RECOVERY, primaryNodeIndex=2, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]]}
> 2016-03-28,10:00:06,248 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> checkFileProgress: blk_1226490256_153006345{blockUCState=COMMITTED, 
> primaryNodeIndex=2, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-85819f0d-bdbb-4a9b-b90c-eba078547c23:NORMAL|RBW]]}
>  has not reached minimal replication 1
> 2016-03-28,10:00:06,358 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.114.5.53:11402 is added to 
> blk_1226490256_153006345{blockUCState=COMMITTED, primaryNodeIndex=2, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-85819f0d-bdbb-4a9b-b90c-eba078547c23:NORMAL|RBW]]}
>  size 139
> 2016-03-28,10:00:06,441 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.114.5.44:11402 is added to blk_1226490256_153006345 size 
> 139
> 2016-03-28,10:00:06,660 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.114.6.14:11402 is added to blk_1226490256_153006345 size 
> 139
> 2016-03-28,10:00:08,808 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
> 

[jira] [Updated] (HDFS-10240) Race between close/recoverLease leads to missing block

2018-08-16 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-10240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-10240:
---
   Resolution: Fixed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Committed rev 07 to trunk. Thanks [~LiJinglun] [~sinago] for the contribution!
I'll bring this to lower release lines later.

> Race between close/recoverLease leads to missing block
> --
>
> Key: HDFS-10240
> URL: https://issues.apache.org/jira/browse/HDFS-10240
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: zhouyingchao
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-10240 scenarios.jpg, HDFS-10240-001.patch, 
> HDFS-10240-002.patch, HDFS-10240-003.patch, HDFS-10240-004.patch, 
> HDFS-10240.005.patch, HDFS-10240.006.patch, HDFS-10240.007.patch, 
> HDFS-10240.test.patch
>
>
> We got a missing block in our cluster, and logs related to the missing block 
> are as follows:
> 2016-03-28,10:00:06,188 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> allocateBlock: XX. BP-219149063-10.108.84.25-1446859315800 
> blk_1226490256_153006345{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]]}
> 2016-03-28,10:00:06,205 INFO BlockStateChange: BLOCK* 
> blk_1226490256_153006345{blockUCState=UNDER_RECOVERY, primaryNodeIndex=2, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]]}
>  recovery started, 
> primary=ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]
> 2016-03-28,10:00:06,205 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.internalReleaseLease: File XX has not been closed. Lease 
> recovery is in progress. RecoveryId = 153006357 for block 
> blk_1226490256_153006345{blockUCState=UNDER_RECOVERY, primaryNodeIndex=2, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]]}
> 2016-03-28,10:00:06,248 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> checkFileProgress: blk_1226490256_153006345{blockUCState=COMMITTED, 
> primaryNodeIndex=2, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-85819f0d-bdbb-4a9b-b90c-eba078547c23:NORMAL|RBW]]}
>  has not reached minimal replication 1
> 2016-03-28,10:00:06,358 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.114.5.53:11402 is added to 
> blk_1226490256_153006345{blockUCState=COMMITTED, primaryNodeIndex=2, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-85819f0d-bdbb-4a9b-b90c-eba078547c23:NORMAL|RBW]]}
>  size 139
> 2016-03-28,10:00:06,441 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.114.5.44:11402 is added to blk_1226490256_153006345 size 
> 139
> 2016-03-28,10:00:06,660 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.114.6.14:11402 is added to blk_1226490256_153006345 size 
> 139
> 2016-03-28,10:00:08,808 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
> commitBlockSynchronization(lastblock=BP-219149063-10.108.84.25-1446859315800:blk_1226490256_153006345,
>  newgenerationstamp=153006357, newlength=139, newtargets=[10.114.6.14:11402, 
> 10.114.5.53:11402, 10.114.5.44:11402], closeFile=true, deleteBlock=false)
> 2016-03-28,10:00:08,836 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1226490256 added as corrupt on 
> 10.114.6.14:11402 by /10.114.6.14 because block is COMPLETE and reported 
> genstamp 153006357 does not match genstamp in block map 153006345
> 2016-03-28,10:00:08,836 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1226490256 added as corrupt on 
> 10.114.5.53:11402 by /10.114.5.53 because block is COMPLETE and reported 
> genstamp 153006357 does not match genstamp in block map 153006345
> 

[jira] [Commented] (HDFS-10240) Race between close/recoverLease leads to missing block

2018-08-16 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-10240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583172#comment-16583172
 ] 

Wei-Chiu Chuang commented on HDFS-10240:


Submitted rev 007 for posterity: [^HDFS-10240.007.patch]. Will push rev 007 to 
trunk shortly.

> Race between close/recoverLease leads to missing block
> --
>
> Key: HDFS-10240
> URL: https://issues.apache.org/jira/browse/HDFS-10240
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: zhouyingchao
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-10240 scenarios.jpg, HDFS-10240-001.patch, 
> HDFS-10240-002.patch, HDFS-10240-003.patch, HDFS-10240-004.patch, 
> HDFS-10240.005.patch, HDFS-10240.006.patch, HDFS-10240.007.patch, 
> HDFS-10240.test.patch
>
>
> We got a missing block in our cluster, and logs related to the missing block 
> are as follows:
> 2016-03-28,10:00:06,188 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> allocateBlock: XX. BP-219149063-10.108.84.25-1446859315800 
> blk_1226490256_153006345{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]]}
> 2016-03-28,10:00:06,205 INFO BlockStateChange: BLOCK* 
> blk_1226490256_153006345{blockUCState=UNDER_RECOVERY, primaryNodeIndex=2, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]]}
>  recovery started, 
> primary=ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]
> 2016-03-28,10:00:06,205 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.internalReleaseLease: File XX has not been closed. Lease 
> recovery is in progress. RecoveryId = 153006357 for block 
> blk_1226490256_153006345{blockUCState=UNDER_RECOVERY, primaryNodeIndex=2, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]]}
> 2016-03-28,10:00:06,248 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> checkFileProgress: blk_1226490256_153006345{blockUCState=COMMITTED, 
> primaryNodeIndex=2, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-85819f0d-bdbb-4a9b-b90c-eba078547c23:NORMAL|RBW]]}
>  has not reached minimal replication 1
> 2016-03-28,10:00:06,358 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.114.5.53:11402 is added to 
> blk_1226490256_153006345{blockUCState=COMMITTED, primaryNodeIndex=2, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-85819f0d-bdbb-4a9b-b90c-eba078547c23:NORMAL|RBW]]}
>  size 139
> 2016-03-28,10:00:06,441 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.114.5.44:11402 is added to blk_1226490256_153006345 size 
> 139
> 2016-03-28,10:00:06,660 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.114.6.14:11402 is added to blk_1226490256_153006345 size 
> 139
> 2016-03-28,10:00:08,808 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
> commitBlockSynchronization(lastblock=BP-219149063-10.108.84.25-1446859315800:blk_1226490256_153006345,
>  newgenerationstamp=153006357, newlength=139, newtargets=[10.114.6.14:11402, 
> 10.114.5.53:11402, 10.114.5.44:11402], closeFile=true, deleteBlock=false)
> 2016-03-28,10:00:08,836 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1226490256 added as corrupt on 
> 10.114.6.14:11402 by /10.114.6.14 because block is COMPLETE and reported 
> genstamp 153006357 does not match genstamp in block map 153006345
> 2016-03-28,10:00:08,836 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1226490256 added as corrupt on 
> 10.114.5.53:11402 by /10.114.5.53 because block is COMPLETE and reported 
> genstamp 153006357 does not match genstamp in block map 153006345
> 2016-03-28,10:00:08,837 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1226490256 added as corrupt on 
> 

[jira] [Updated] (HDFS-10240) Race between close/recoverLease leads to missing block

2018-08-16 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-10240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-10240:
---
Attachment: HDFS-10240.007.patch

> Race between close/recoverLease leads to missing block
> --
>
> Key: HDFS-10240
> URL: https://issues.apache.org/jira/browse/HDFS-10240
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: zhouyingchao
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-10240 scenarios.jpg, HDFS-10240-001.patch, 
> HDFS-10240-002.patch, HDFS-10240-003.patch, HDFS-10240-004.patch, 
> HDFS-10240.005.patch, HDFS-10240.006.patch, HDFS-10240.007.patch, 
> HDFS-10240.test.patch
>
>
> We got a missing block in our cluster, and logs related to the missing block 
> are as follows:
> 2016-03-28,10:00:06,188 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> allocateBlock: XX. BP-219149063-10.108.84.25-1446859315800 
> blk_1226490256_153006345{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]]}
> 2016-03-28,10:00:06,205 INFO BlockStateChange: BLOCK* 
> blk_1226490256_153006345{blockUCState=UNDER_RECOVERY, primaryNodeIndex=2, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]]}
>  recovery started, 
> primary=ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]
> 2016-03-28,10:00:06,205 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.internalReleaseLease: File XX has not been closed. Lease 
> recovery is in progress. RecoveryId = 153006357 for block 
> blk_1226490256_153006345{blockUCState=UNDER_RECOVERY, primaryNodeIndex=2, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]]}
> 2016-03-28,10:00:06,248 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> checkFileProgress: blk_1226490256_153006345{blockUCState=COMMITTED, 
> primaryNodeIndex=2, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-85819f0d-bdbb-4a9b-b90c-eba078547c23:NORMAL|RBW]]}
>  has not reached minimal replication 1
> 2016-03-28,10:00:06,358 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.114.5.53:11402 is added to 
> blk_1226490256_153006345{blockUCState=COMMITTED, primaryNodeIndex=2, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-85819f0d-bdbb-4a9b-b90c-eba078547c23:NORMAL|RBW]]}
>  size 139
> 2016-03-28,10:00:06,441 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.114.5.44:11402 is added to blk_1226490256_153006345 size 
> 139
> 2016-03-28,10:00:06,660 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.114.6.14:11402 is added to blk_1226490256_153006345 size 
> 139
> 2016-03-28,10:00:08,808 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
> commitBlockSynchronization(lastblock=BP-219149063-10.108.84.25-1446859315800:blk_1226490256_153006345,
>  newgenerationstamp=153006357, newlength=139, newtargets=[10.114.6.14:11402, 
> 10.114.5.53:11402, 10.114.5.44:11402], closeFile=true, deleteBlock=false)
> 2016-03-28,10:00:08,836 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1226490256 added as corrupt on 
> 10.114.6.14:11402 by /10.114.6.14 because block is COMPLETE and reported 
> genstamp 153006357 does not match genstamp in block map 153006345
> 2016-03-28,10:00:08,836 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1226490256 added as corrupt on 
> 10.114.5.53:11402 by /10.114.5.53 because block is COMPLETE and reported 
> genstamp 153006357 does not match genstamp in block map 153006345
> 2016-03-28,10:00:08,837 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1226490256 added as corrupt on 
> 10.114.5.44:11402 by /10.114.5.44 because block is COMPLETE and reported 
> genstamp 153006357 does 

[jira] [Commented] (HDDS-355) Disable OpenKeyDeleteService and DeleteKeysService.

2018-08-16 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583165#comment-16583165
 ] 

Anu Engineer commented on HDDS-355:
---

There are a bunch of test failures on my local machine with the patch 
[^HDDS-355.001.patch]. I have verified that these failures happen independently, 
that is, without this patch too.
Here is a list of tests that fail on my local machine without this patch.
{noformat}
TestBlockDeletion.testBlockDeletion
TestStorageContainerManager.testBlockDeletingThrottling
TestStorageContainerManager.testBlockDeletionTransactions
TestKeys.testDeleteKey
TestKeys.testDeleteKey
TestKeys.testPutKey
TestKeys.init
TestKeys.shutdown
{noformat}

> Disable OpenKeyDeleteService and DeleteKeysService.
> ---
>
> Key: HDDS-355
> URL: https://issues.apache.org/jira/browse/HDDS-355
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-355.001.patch
>
>
> We have identified performance issues with these two background services and 
> will improve them with several follow-up JIRAs after this one. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-355) Disable OpenKeyDeleteService and DeleteKeysService.

2018-08-16 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-355:
--
Status: Patch Available  (was: Open)

> Disable OpenKeyDeleteService and DeleteKeysService.
> ---
>
> Key: HDDS-355
> URL: https://issues.apache.org/jira/browse/HDDS-355
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-355.001.patch
>
>
> We have identified performance issues with these two background services and 
> will improve them with several follow-up JIRAs after this one. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-355) Disable OpenKeyDeleteService and DeleteKeysService.

2018-08-16 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-355:
--
Attachment: HDDS-355.001.patch

> Disable OpenKeyDeleteService and DeleteKeysService.
> ---
>
> Key: HDDS-355
> URL: https://issues.apache.org/jira/browse/HDDS-355
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-355.001.patch
>
>
> We have identified performance issues with these two background services and 
> will improve them with several follow-up JIRAs after this one. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13779) Implement performFailover logic for ObserverReadProxyProvider.

2018-08-16 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583130#comment-16583130
 ] 

Chen Liang commented on HDFS-13779:
---

Thanks for the correction [~xkrogen]! Makes sense to me. One example case I was 
thinking of: say there are proxies 0, 1, 2, and currentIndex = 2. If proxy 1 gets 
removed and the list shrinks to 0, 2, then due to the modulo on size, 
currentIndex would effectively point to 2 % size = 0, even though it was meant to 
point to proxy 2. The point I was concerned about is that since 
{{currentObservers}} is based on a modulo of the size, but the size can change at 
any point in time, {{currentIndex}} can become not so current over time. But it 
seems that even if {{currentIndex}} is wrong, it would eventually be fixed by the 
for loop, and thanks for pointing out that this should only happen in failover 
actions. I agree this should not be an issue.
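
To make the index arithmetic concrete, here is a minimal, self-contained sketch 
(hypothetical names, not the actual ObserverReadProxyProvider code) of how an 
index taken modulo a shrinking list can silently resolve to a different proxy 
than the one intended:

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ModuloIndexExample {
  public static void main(String[] args) {
    // Hypothetical proxy list; strings stand in for NameNode proxies.
    List<String> proxies =
        new ArrayList<>(Arrays.asList("proxy0", "proxy1", "proxy2"));
    int currentIndex = 2;  // the caller intends to talk to "proxy2"

    System.out.println(proxies.get(currentIndex % proxies.size()));  // proxy2

    // "proxy1" is removed, e.g. because it stopped being an observer.
    proxies.remove("proxy1");  // the list is now [proxy0, proxy2]

    // The same index, taken modulo the new size, now resolves to proxy0 even
    // though the caller meant proxy2; the index has become stale.
    System.out.println(proxies.get(currentIndex % proxies.size()));  // proxy0
  }
}
{code}

As discussed above, a stale index of this kind only makes the client pick a 
different (still valid) proxy, and the failover loop corrects it, so it is an 
inefficiency rather than a correctness problem.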

> Implement performFailover logic for ObserverReadProxyProvider.
> --
>
> Key: HDFS-13779
> URL: https://issues.apache.org/jira/browse/HDFS-13779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13779-HDFS-12943.WIP00.patch
>
>
> Currently {{ObserverReadProxyProvider}} inherits {{performFailover()}} method 
> from {{ConfiguredFailoverProxyProvider}}, which simply increments the index 
> and switches over to another NameNode. The logic for ORPP should be smart 
> enough to choose another observer, otherwise it can switch to a SBN, where 
> reads are disallowed, or to an ANN, which defeats the purpose of reads from 
> standby.
> This was discussed in HDFS-12976.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13746) Still occasional "Should be different group" failure in TestRefreshUserMappings#testGroupMappingRefresh

2018-08-16 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583128#comment-16583128
 ] 

Hudson commented on HDFS-13746:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14787 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14787/])
HDFS-13746. Still occasional "Should be different group" failure in (templedf: 
rev 8512e1a91be3e340d919c7cdc9c09dfb762a6a4e)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java


> Still occasional "Should be different group" failure in 
> TestRefreshUserMappings#testGroupMappingRefresh
> ---
>
> Key: HDFS-13746
> URL: https://issues.apache.org/jira/browse/HDFS-13746
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HDFS-13746.001.patch, HDFS-13746.002.patch, 
> HDFS-13746.003.patch, HDFS-13746.004.patch, HDFS-13746.005.patch, 
> HDFS-13746.006.patch, HDFS-13746.007.patch
>
>
> In https://issues.apache.org/jira/browse/HDFS-13723, increasing the amount of 
> time in sleep() helps but the problem still appears, which is annoying.
>  
> Solution:
> Use a loop to allow the test case to fail maxTrials times before declaring 
> failure. Wait 50 ms between each retry.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13746) Still occasional "Should be different group" failure in TestRefreshUserMappings#testGroupMappingRefresh

2018-08-16 Thread Daniel Templeton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-13746:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.4
   3.1.1
   3.2.0
 Release Note:   (was: Removed unused imports in rev 007.)
   Status: Resolved  (was: Patch Available)

Thanks for the patches, [~smeng].  Committed to branch-3.0, branch-3.1, and 
trunk.

> Still occasional "Should be different group" failure in 
> TestRefreshUserMappings#testGroupMappingRefresh
> ---
>
> Key: HDFS-13746
> URL: https://issues.apache.org/jira/browse/HDFS-13746
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HDFS-13746.001.patch, HDFS-13746.002.patch, 
> HDFS-13746.003.patch, HDFS-13746.004.patch, HDFS-13746.005.patch, 
> HDFS-13746.006.patch, HDFS-13746.007.patch
>
>
> In https://issues.apache.org/jira/browse/HDFS-13723, increasing the amount of 
> time in sleep() helps but the problem still appears, which is annoying.
>  
> Solution:
> Use a loop to allow the test case to fail maxTrials times before declaring 
> failure. Wait 50 ms between each retry.
>  
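
The fix described in the quoted solution, retrying the assertion a bounded number 
of times with a short wait instead of relying on a single fixed sleep, can be 
sketched as follows. This is only an illustration: maxTrials and the 50 ms wait 
come from the description above, while the helper and its usage are hypothetical 
rather than the actual TestRefreshUserMappings code.

{code:java}
/** Minimal sketch of the retry-with-wait pattern used to deflake the test. */
public class RetrySketch {
  static void assertEventually(int maxTrials, long waitMs, Runnable assertion)
      throws InterruptedException {
    AssertionError last = null;
    for (int i = 0; i < maxTrials; i++) {
      try {
        assertion.run();
        return;                 // the assertion passed within the allowed trials
      } catch (AssertionError e) {
        last = e;               // remember the failure and retry after a wait
        Thread.sleep(waitMs);   // e.g. 50 ms between retries
      }
    }
    throw last != null ? last : new AssertionError("maxTrials must be > 0");
  }

  public static void main(String[] args) throws InterruptedException {
    long start = System.currentTimeMillis();
    // Hypothetical condition that only becomes true after some delay, standing
    // in for "the refreshed group mapping is visible".
    assertEventually(10, 50, () -> {
      if (System.currentTimeMillis() - start < 200) {
        throw new AssertionError("not yet");
      }
    });
    System.out.println("condition eventually held");
  }
}
{code}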



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13830) Backport HDFS-13141 to branch-3.0.3: WebHDFS: Add support for getting snapshottable directory list

2018-08-16 Thread Siyao Meng (JIRA)
Siyao Meng created HDFS-13830:
-

 Summary: Backport HDFS-13141 to branch-3.0.3: WebHDFS: Add support 
for getting snapshottable directory list
 Key: HDFS-13830
 URL: https://issues.apache.org/jira/browse/HDFS-13830
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: webhdfs
Affects Versions: 3.0.3
Reporter: Siyao Meng
Assignee: Siyao Meng


HDFS-13141 conflicts with 3.0.3 because of an interface change in HdfsFileStatus.

This Jira aims to backport getSnapshottableDirListing() to branch-3.0.3.
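
For context, the operation being backported is the snapshottable-directory 
listing. The sketch below shows the equivalent existing client-side call through 
DistributedFileSystem; it is illustrative only, and the exact WebHDFS-side method 
added by HDFS-13141 may differ in name and signature.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;

public class ListSnapshottableDirs {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();   // assumes fs.defaultFS is HDFS
    FileSystem fs = FileSystem.get(conf);
    if (fs instanceof DistributedFileSystem) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      SnapshottableDirectoryStatus[] dirs = dfs.getSnapshottableDirListing();
      if (dirs != null) {                       // null when nothing is snapshottable
        for (SnapshottableDirectoryStatus s : dirs) {
          System.out.println(s.getFullPath());
        }
      }
    }
  }
}
{code}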



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13579) Out of memory when running TestDFSStripedOutputStreamWithFailure testCloseWithExceptionsInStreamer

2018-08-16 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583037#comment-16583037
 ] 

Jonathan Hung commented on HDFS-13579:
--

FYI I see this on branch-2 as well (which has neither HDFS-13251 nor HDFS-11600).

I don't think it's just this particular test either; running hadoop-hdfs tests 
locally fails on different tests (depending on how far the test goal happens to 
get), e.g.

{noformat}[INFO] Running org.apache.hadoop.TestRefreshCallQueue
[ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.574 s 
<<< FAILURE! - in org.apache.hadoop.TestRefreshCallQueue
[ERROR] testRefresh(org.apache.hadoop.TestRefreshCallQueue)  Time elapsed: 
0.807 s  <<< ERROR!
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at 
io.netty.util.concurrent.SingleThreadEventExecutor.shutdownGracefully(SingleThreadEventExecutor.java:557)
at 
io.netty.util.concurrent.MultithreadEventExecutorGroup.shutdownGracefully(MultithreadEventExecutorGroup.java:146)
at 
io.netty.util.concurrent.AbstractEventExecutorGroup.shutdownGracefully(AbstractEventExecutorGroup.java:69)
at 
org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.close(DatanodeHttpServer.java:285)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:1986)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNode(MiniDFSCluster.java:1892)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:1882)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1861)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1835)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1828)
at 
org.apache.hadoop.TestRefreshCallQueue.tearDown(TestRefreshCallQueue.java:83){noformat}

> Out of memory when running TestDFSStripedOutputStreamWithFailure 
> testCloseWithExceptionsInStreamer
> --
>
> Key: HDFS-13579
> URL: https://issues.apache.org/jira/browse/HDFS-13579
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ewan Higgs
>Priority: Major
>
> When running  TestDFSStripedOutputStreamWithFailure 
> testCloseWithExceptionsInStreamer we often get OOM errors. It's not every 
> time, but it occurs frequently. We have reproduced this on a few different 
> machines. This seems to have been introduced in 
> f83716b7f2e5b63e4c2302c374982755233d4dd6 by HDFS-13251.
> Output from the test:
> {code:java}
> java.lang.OutOfMemoryError: unable to create new native thread
>     at java.lang.Thread.start0(Native Method)
>     at java.lang.Thread.start(Thread.java:714)
>     at 
> io.netty.util.concurrent.SingleThreadEventExecutor.shutdownGracefully(SingleThreadEventExecutor.java:578)
>     at 
> io.netty.util.concurrent.MultithreadEventExecutorGroup.shutdownGracefully(MultithreadEventExecutorGroup.java:146)
>     at 
> io.netty.util.concurrent.AbstractEventExecutorGroup.shutdownGracefully(AbstractEventExecutorGroup.java:69)
>     at 
> org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.close(DatanodeHttpServer.java:270)
>     at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:2023)
>     at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNode(MiniDFSCluster.java:2023)
>     at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:2013)
>     at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1992)
>     at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1966)
>     at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1959)
>     at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailureBase.tearDown(TestDFSStripedOutputStreamWithFailureBase.java:222)
>     at 
> org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testCloseWithExceptionsInStreamer(TestDFSStripedOutputStreamWithFailure.java:266)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>     at 
> 

[jira] [Commented] (HDFS-13774) EC: "hdfs ec -getPolicy" is not retrieving policy details when the special REPLICATION policy set on the directory

2018-08-16 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583028#comment-16583028
 ] 

Xiao Chen commented on HDFS-13774:
--

Thanks for reporting the issue and the discussion [~SouryakantaDwivedy] and 
[~ayushtkn].

I agree this would be confusing to users - we can update the description and 
documentation around this.

 

Unfortunately, for compatibility reasons, we cannot change the getPolicy output 
(e.g. the "is unspecified" message). The message is deliberately printed from 
[here|https://github.com/apache/hadoop/blob/branch-3.0.0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/ECAdmin.java#L238-L243]
 

> EC: "hdfs ec -getPolicy" is not retrieving policy details when the special 
> REPLICATION policy set on the directory
> --
>
> Key: HDFS-13774
> URL: https://issues.apache.org/jira/browse/HDFS-13774
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
> Environment: 3 Node Linux Cluster
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Minor
> Attachments: GetPolicy_EC.png
>
>
>  Erasure coding: "hdfs ec -getPolicy" is not retrieving policy details when 
> the special REPLICATION policy is set on the directory
> Steps :-
>  - Create a directory "testEC"
> - Get the EC policy for the directory [Received message as : "The erasure 
> coding policy of /testEC is unspecified" ]
> - Enable any Erasure coding policy like "XOR-2-1-1024k"
> - Set the EC Policy on the Directory
> - Get the EC policy for the directory [Received message as : "XOR-2-1-1024k" ]
> - Now again set the EC Policy on the directory as "replicate" special 
> REPLICATION policy
> - Get the EC policy for the directory [Received message as : "The erasure 
> coding policy of /testEC is unspecified" ]
>  The policy is being set for the directory, but while retrieving the policy 
> details it reports that the policy for the directory is unspecified, which is 
> wrong behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13822) speedup libhdfs++ build (enable parallel build)

2018-08-16 Thread James Clampffer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583025#comment-16583025
 ] 

James Clampffer commented on HDFS-13822:


[~jlowe] I agree that this isn't breaking the tests any more than they already 
were. One of the native client issues is libhdfs++-related (HDFS-9610); however, 
it looks like that provided cover for another bug to be introduced in the JNI 
libhdfs hdfsGetLastExceptionRootCause a month or two ago.

The only test coverage added for hdfsGetLastExceptionRootCause was one call in 
test_libhdfs_threaded (both libhdfs and libhdfs++ run that test). Now it 
deterministically returns a null String ref rather than the expected output.  I 
noticed that when I was trying to sort out HDFS-9610 and add test coverage for 
the libhdfs++ C++ API but haven't had a chance to figure out which commit broke 
it.

> speedup libhdfs++ build (enable parallel build)
> ---
>
> Key: HDFS-13822
> URL: https://issues.apache.org/jira/browse/HDFS-13822
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Pradeep Ambati
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HDFS-13382.000.patch, HDFS-13822.01.patch, 
> HDFS-13822.02.patch
>
>
> libhdfs++ has significantly increased clean build times for the native client 
> on trunk. The problem is that libhdfs++ isn't built in parallel. When I tried to 
> force a parallel build by specifying -Dnative_make_args=-j4, the build fails 
> due to dependencies.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13779) Implement performFailover logic for ObserverReadProxyProvider.

2018-08-16 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583012#comment-16583012
 ] 

Erik Krogen commented on HDFS-13779:


{quote}
And {{getProxy}} will trigger {{getAllProxies}} which further triggers 
{{refreshProxyState}}.
{quote}
Actually this is not quite true; {{getAllProxies}} only calls 
{{refreshProxyState}} when {{proxies}} is empty (i.e., initialization). After 
that it just returns the current value of {{proxies}}. However, 
{{refreshProxyState}} may be called at any time by, for example, 
{{performFailover}}, so your point below about {{isObserver}} is still valid:
{quote}
In short, the value of {{isObserver}} could change at any point of time  So 
when invoke gets called each time, there seems no guarantee what will (not) be 
in the list returned from {{getFilteredProxies}} with regard to the previous 
state.
{quote}
So yes, I agree that there isn't a guarantee that {{getFilteredProxies}} will 
return the same thing when invoked multiple times. That's part of why, at the 
top of {{invoke}}, I call {{getFilteredProxies}} a single time and then use 
that value throughout the function. But the list should only change upon 
failover-style actions (e.g. Observer changes to Standby) and the only thing 
that happens if the list changes is to start talking to a different observer, 
so I don't think it's an issue, but let me know if there's something I'm 
missing.
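
The pattern described in this comment, taking one snapshot of the (possibly 
changing) filtered proxy list at the top of {{invoke}} and using only that 
snapshot for the whole invocation, can be sketched roughly as follows. The names 
below are hypothetical; this is not the actual ObserverReadProxyProvider 
implementation.

{code:java}
import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch of iterating over a point-in-time snapshot of proxies. */
public class SnapshotInvokeSketch<T> {

  private volatile List<T> proxies = new ArrayList<>();  // refreshed elsewhere

  /** Hypothetical stand-in for getFilteredProxies(): copy of the current view. */
  private List<T> getFilteredProxies() {
    return new ArrayList<>(proxies);
  }

  public void invoke(ProxyCall<T> call) throws Exception {
    // Take the snapshot exactly once so the candidate set cannot change
    // underneath the retry loop below.
    List<T> candidates = getFilteredProxies();
    Exception last = null;
    for (T proxy : candidates) {
      try {
        call.run(proxy);
        return;
      } catch (Exception e) {
        last = e;  // try the next observer in this snapshot
      }
    }
    if (last != null) {
      throw last;
    }
  }

  public interface ProxyCall<T> {
    void run(T proxy) throws Exception;
  }
}
{code}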

> Implement performFailover logic for ObserverReadProxyProvider.
> --
>
> Key: HDFS-13779
> URL: https://issues.apache.org/jira/browse/HDFS-13779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13779-HDFS-12943.WIP00.patch
>
>
> Currently {{ObserverReadProxyProvider}} inherits {{performFailover()}} method 
> from {{ConfiguredFailoverProxyProvider}}, which simply increments the index 
> and switches over to another NameNode. The logic for ORPP should be smart 
> enough to choose another observer, otherwise it can switch to a SBN, where 
> reads are disallowed, or to an ANN, which defeats the purpose of reads from 
> standby.
> This was discussed in HDFS-12976.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10240) Race between close/recoverLease leads to missing block

2018-08-16 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-10240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582999#comment-16582999
 ] 

Wei-Chiu Chuang commented on HDFS-10240:


The failed tests don't reproduce in my local tree. I'll take care of the 
checkstyle and post an updated patch for posterity. Other than that I am +1.

> Race between close/recoverLease leads to missing block
> --
>
> Key: HDFS-10240
> URL: https://issues.apache.org/jira/browse/HDFS-10240
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: zhouyingchao
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-10240 scenarios.jpg, HDFS-10240-001.patch, 
> HDFS-10240-002.patch, HDFS-10240-003.patch, HDFS-10240-004.patch, 
> HDFS-10240.005.patch, HDFS-10240.006.patch, HDFS-10240.test.patch
>
>
> We got a missing block in our cluster, and logs related to the missing block 
> are as follows:
> 2016-03-28,10:00:06,188 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> allocateBlock: XX. BP-219149063-10.108.84.25-1446859315800 
> blk_1226490256_153006345{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]]}
> 2016-03-28,10:00:06,205 INFO BlockStateChange: BLOCK* 
> blk_1226490256_153006345{blockUCState=UNDER_RECOVERY, primaryNodeIndex=2, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]]}
>  recovery started, 
> primary=ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]
> 2016-03-28,10:00:06,205 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.internalReleaseLease: File XX has not been closed. Lease 
> recovery is in progress. RecoveryId = 153006357 for block 
> blk_1226490256_153006345{blockUCState=UNDER_RECOVERY, primaryNodeIndex=2, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]]}
> 2016-03-28,10:00:06,248 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> checkFileProgress: blk_1226490256_153006345{blockUCState=COMMITTED, 
> primaryNodeIndex=2, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-85819f0d-bdbb-4a9b-b90c-eba078547c23:NORMAL|RBW]]}
>  has not reached minimal replication 1
> 2016-03-28,10:00:06,358 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.114.5.53:11402 is added to 
> blk_1226490256_153006345{blockUCState=COMMITTED, primaryNodeIndex=2, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-85819f0d-bdbb-4a9b-b90c-eba078547c23:NORMAL|RBW]]}
>  size 139
> 2016-03-28,10:00:06,441 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.114.5.44:11402 is added to blk_1226490256_153006345 size 
> 139
> 2016-03-28,10:00:06,660 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.114.6.14:11402 is added to blk_1226490256_153006345 size 
> 139
> 2016-03-28,10:00:08,808 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
> commitBlockSynchronization(lastblock=BP-219149063-10.108.84.25-1446859315800:blk_1226490256_153006345,
>  newgenerationstamp=153006357, newlength=139, newtargets=[10.114.6.14:11402, 
> 10.114.5.53:11402, 10.114.5.44:11402], closeFile=true, deleteBlock=false)
> 2016-03-28,10:00:08,836 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1226490256 added as corrupt on 
> 10.114.6.14:11402 by /10.114.6.14 because block is COMPLETE and reported 
> genstamp 153006357 does not match genstamp in block map 153006345
> 2016-03-28,10:00:08,836 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1226490256 added as corrupt on 
> 10.114.5.53:11402 by /10.114.5.53 because block is COMPLETE and reported 
> genstamp 153006357 does not match genstamp in block map 153006345
> 2016-03-28,10:00:08,837 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: 

[jira] [Assigned] (HDFS-13822) speedup libhdfs++ build (enable parallel build)

2018-08-16 Thread Jason Lowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reassigned HDFS-13822:
-

Assignee: Allen Wittenauer

Thanks for the patches, [~pradeepambati] and [~aw]!  This looks like a massive 
improvement.  I personally would prefer not to have the portability fixes 
mashed in with the build performance changes since it adds a chunk to the patch 
unrelated to the JIRA, but overall it looks like a great change.

I verified that the same libhdfs++ tests are currently broken even without this 
patch, so the tests do not appear to be any worse off after the patch.

+1 lgtm.  I'll commit this tomorrow if nobody objects.  I'll credit both 
contributors in the commit message since Allen's patch includes Pradeep's 
original patch.


> speedup libhdfs++ build (enable parallel build)
> ---
>
> Key: HDFS-13822
> URL: https://issues.apache.org/jira/browse/HDFS-13822
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Pradeep Ambati
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HDFS-13382.000.patch, HDFS-13822.01.patch, 
> HDFS-13822.02.patch
>
>
> libhdfs++ has significantly increased clean build times for the native client 
> on trunk. The problem is that libhdfs++ isn't built in parallel. When I tried to 
> force a parallel build by specifying -Dnative_make_args=-j4, the build fails 
> due to dependencies.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13744) OIV tool should better handle control characters present in file or directory names

2018-08-16 Thread Zsolt Venczel (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582928#comment-16582928
 ] 

Zsolt Venczel commented on HDFS-13744:
--

I could not reproduce the above test failure with or without the patch, 
therefore it should be unrelated:
{code}
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 74.774 s 
- in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
{code}

> OIV tool should better handle control characters present in file or directory 
> names
> ---
>
> Key: HDFS-13744
> URL: https://issues.apache.org/jira/browse/HDFS-13744
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, tools
>Affects Versions: 2.6.5, 2.9.1, 2.8.4, 2.7.6, 3.0.3
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Critical
> Attachments: HDFS-13744.01.patch
>
>
> In certain cases, when control characters or white space are present in file or 
> directory names, OIV tool processors can export data in a misleading format.
> In the examples below, we have EXAMPLE_NAME as both a file and a directory name, 
> where the directory name has a line feed character at the end (the actual 
> production case has multiple line feeds and multiple spaces).
>  * Delimited processor case:
>  ** misleading example:
> {code:java}
> /user/data/EXAMPLE_NAME
> ,0,2017-04-24 04:34,1969-12-31 16:00,0,0,0,-1,-1,drwxrwxr-x+,user,group
> /user/data/EXAMPLE_NAME,2016-08-26 03:00,2017-05-16 
> 10:05,134217728,1,520,0,0,-rw-rwxr--+,user,group
> {code}
>  * 
>  ** expected example as suggested by 
> [https://tools.ietf.org/html/rfc4180#section-2]:
> {code:java}
> "/user/data/EXAMPLE_NAME%x0A",0,2017-04-24 04:34,1969-12-31 
> 16:00,0,0,0,-1,-1,drwxrwxr-x+,user,group
> "/user/data/EXAMPLE_NAME",2016-08-26 03:00,2017-05-16 
> 10:05,134217728,1,520,0,0,-rw-rwxr--+,user,group
> {code}
>  * XML processor case:
>  ** misleading example:
> {code:java}
> 479867791DIRECTORYEXAMPLE_NAME
> 1493033668294user:group:0775
> 113632535FILEEXAMPLE_NAME314722056575041494954320141134217728user:group:0674
> {code}
>  * 
>  ** expected example as specified in 
> [https://www.w3.org/TR/REC-xml/#sec-line-ends]:
> {code:java}
> 479867791DIRECTORYEXAMPLE_NAME#xA1493033668294user:group:0775
> 113632535FILEEXAMPLE_NAME314722056575041494954320141134217728user:group:0674
> {code}
>  * JSON:
>  The OIV Web Processor behaves correctly and produces the following:
> {code:java}
> {
>   "FileStatuses": {
> "FileStatus": [
>   {
> "fileId": 113632535,
> "accessTime": 1494954320141,
> "replication": 3,
> "owner": "user",
> "length": 520,
> "permission": "674",
> "blockSize": 134217728,
> "modificationTime": 1472205657504,
> "type": "FILE",
> "group": "group",
> "childrenNum": 0,
> "pathSuffix": "EXAMPLE_NAME"
>   },
>   {
> "fileId": 479867791,
> "accessTime": 0,
> "replication": 0,
> "owner": "user",
> "length": 0,
> "permission": "775",
> "blockSize": 0,
> "modificationTime": 1493033668294,
> "type": "DIRECTORY",
> "group": "group",
> "childrenNum": 0,
> "pathSuffix": "EXAMPLE_NAME\n"
>   }
> ]
>   }
> }
> {code}
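
The delimited-processor behavior suggested above is essentially RFC 4180 field 
quoting. A minimal sketch of that rule follows; it is illustrative only, and the 
actual OIV patch may additionally encode control characters (e.g. as %x0A) rather 
than emitting them verbatim inside the quotes.

{code:java}
/** Minimal RFC 4180-style CSV field quoting; not the actual OIV processor code. */
public class CsvQuoteSketch {
  static String quoteIfNeeded(String field) {
    boolean needsQuoting = field.contains(",") || field.contains("\"")
        || field.contains("\n") || field.contains("\r");
    if (!needsQuoting) {
      return field;
    }
    // Double any embedded quotes and wrap the whole field in quotes.
    return "\"" + field.replace("\"", "\"\"") + "\"";
  }

  public static void main(String[] args) {
    System.out.println(quoteIfNeeded("/user/data/EXAMPLE_NAME"));    // unchanged
    System.out.println(quoteIfNeeded("/user/data/EXAMPLE_NAME\n"));  // quoted
  }
}
{code}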



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13744) OIV tool should better handle control characters present in file or directory names

2018-08-16 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582916#comment-16582916
 ] 

genericqa commented on HDFS-13744:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}161m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13744 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935885/HDFS-13744.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d676efd6de41 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cb21eaa |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24793/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24793/testReport/ |
| Max. process+thread count | 3100 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24793/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   

[jira] [Created] (HDDS-362) Modify functions impacted by SCM chill mode in ScmBlockLocationProtocol

2018-08-16 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-362:
---

 Summary: Modify functions impacted by SCM chill mode in 
ScmBlockLocationProtocol
 Key: HDDS-362
 URL: https://issues.apache.org/jira/browse/HDDS-362
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Ajay Kumar


Modify functions impacted by SCM chill mode in ScmBlockLocationProtocol



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-179) CloseContainer/PutKey command should be syncronized with write operations.

2018-08-16 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582909#comment-16582909
 ] 

Hudson commented on HDDS-179:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14786 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14786/])
HDDS-179. CloseContainer/PutKey command should be syncronized with write 
(msingh: rev 5ef29087ad27f4f6b815dbc08ea7427d14df58e1)
* (add) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/server/TestContainerStateMachine.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java


> CloseContainer/PutKey command should be syncronized with write operations.
> --
>
> Key: HDDS-179
> URL: https://issues.apache.org/jira/browse/HDDS-179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-179.01.patch, HDDS-179.02.patch, HDDS-179.03.patch, 
> HDDS-179.04.patch, HDDS-179.05.patch, HDDS-179.06.patch, HDDS-179.07.patch, 
> HDDS-179.08.patch, HDDS-179.09,patch, HDDS-179.10.patch, HDDS-179.11.patch, 
> HDDS-179.12.patch
>
>
> When a CloseContainer command request comes to a Datanode (via the SCM heartbeat 
> response) through the Ratis protocol, all the previously enqueued "Write" type 
> requests, like WriteChunk etc., should be executed first, before the 
> CloseContainer request gets executed. This synchronization needs to be handled 
> in the ContainerStateMachine. This Jira aims to address this.
>  
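
The ordering requirement in the quoted description, executing every previously 
enqueued write request before the CloseContainer request is applied, can be 
illustrated with a generic future-based sketch. The names and bookkeeping below 
are hypothetical; the actual ContainerStateMachine/Ratis implementation differs.

{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

/** Generic sketch: delay a "close" until all pending writes for a container finish. */
public class OrderedCloseSketch {
  // Pending write futures per container (hypothetical bookkeeping).
  private final Map<Long, Map<Long, CompletableFuture<Void>>> pendingWrites =
      new ConcurrentHashMap<>();

  CompletableFuture<Void> submitWrite(long containerId, long logIndex) {
    CompletableFuture<Void> f = new CompletableFuture<>();
    pendingWrites.computeIfAbsent(containerId, k -> new ConcurrentHashMap<>())
        .put(logIndex, f);
    // ... perform the chunk write asynchronously, then complete and remove:
    //     f.complete(null); pendingWrites.get(containerId).remove(logIndex);
    return f;
  }

  CompletableFuture<Void> submitClose(long containerId) {
    Map<Long, CompletableFuture<Void>> writes =
        pendingWrites.getOrDefault(containerId, Collections.emptyMap());
    // Chain the close after every write that was enqueued before it.
    return CompletableFuture
        .allOf(writes.values().toArray(new CompletableFuture[0]))
        .thenRun(() -> { /* apply the CloseContainer command here */ });
  }
}
{code}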



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13633) RBF: Synchronous way to create RPC client connections to NN

2018-08-16 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota reassigned HDFS-13633:
--

Assignee: CR Hota  (was: CR Hota(invalid))

> RBF: Synchronous way to create RPC client connections to NN
> ---
>
> Key: HDFS-13633
> URL: https://issues.apache.org/jira/browse/HDFS-13633
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
>
> Currently the router code does the following.
>  # IPC handler thread gets a connection from the pool, even if the connection 
> is NOT usable.
>  # At the same time the IPC thread also submits a request to connection 
> creator thread for adding a new connection to the pool asynchronously.
>  # The new connection is NOT utilized by the IPC threads that get back an 
> unusable connection.
> With this approach, burst behaviors of clients fill up the pool without 
> necessarily using the connections. Also, the approach is nondeterministic.
> We propose a flag that allows router admins to control how the IPC handler 
> threads get connections. The flag would allow toggling ON/OFF the asynchronous 
> vs. synchronous way of connection creation.
> In the new model, if a connection is unusable, the IPC handler thread would go 
> ahead and create a connection, add it to the pool, and utilize it subsequently. 
> It would still utilize the unusable connection if the pool is full.
>  
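
A rough sketch of the synchronous path proposed in the description above; the 
flag, interfaces, and method names here are hypothetical and are not the actual 
Router ConnectionManager code.

{code:java}
/** Illustrative sketch of sync vs. async connection creation for RBF handlers. */
public class ConnectionFetchSketch {

  interface Conn { boolean isUsable(); }

  interface Pool {
    Conn getConnection();           // may hand back an unusable connection
    boolean isFull();
    Conn createAndAddConnection();  // blocking creation that also adds to the pool
    void requestNewConnectionAsync();
  }

  private final boolean createConnectionsSynchronously;  // the proposed toggle
  private final Pool pool;

  ConnectionFetchSketch(boolean syncFlag, Pool pool) {
    this.createConnectionsSynchronously = syncFlag;
    this.pool = pool;
  }

  Conn getUsableConnection() {
    Conn conn = pool.getConnection();
    if (conn != null && conn.isUsable()) {
      return conn;
    }
    if (createConnectionsSynchronously && !pool.isFull()) {
      // New model: the handler thread itself creates the connection and uses it.
      return pool.createAndAddConnection();
    }
    // Old model: ask for a new connection in the background and fall back to
    // whatever we got (possibly the unusable connection when the pool is full).
    pool.requestNewConnectionAsync();
    return conn;
  }
}
{code}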



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13634) RBF: Configurable value in xml for async connection request queue size.

2018-08-16 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota reassigned HDFS-13634:
--

Assignee: CR Hota  (was: CR Hota(invalid))

> RBF: Configurable value in xml for async connection request queue size.
> ---
>
> Key: HDFS-13634
> URL: https://issues.apache.org/jira/browse/HDFS-13634
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
>
> The constant below in ConnectionManager.java should be configurable via 
> hdfs-site.xml. This is a very critical parameter for routers; admins would like 
> to change it without doing a new build.
> {code:java}
>   /** Number of parallel new connections to create. */
>   protected static final int MAX_NEW_CONNECTIONS = 100;
> {code}
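
What the description asks for amounts to replacing the hard-coded constant with a 
value read from configuration. A hedged sketch follows; the property name and 
default shown here are hypothetical, not an actual hdfs-site.xml key.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ConnectionQueueSizeSketch {
  // Hypothetical key and default; the real patch would define these for the router.
  static final String KEY = "dfs.federation.router.connection.creator.queue-size";
  static final int DEFAULT = 100;

  public static void main(String[] args) {
    Configuration conf = new Configuration();  // picks up hdfs-site.xml if present
    int maxNewConnections = conf.getInt(KEY, DEFAULT);
    System.out.println("max parallel new connections: " + maxNewConnections);
  }
}
{code}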



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13746) Still occasional "Should be different group" failure in TestRefreshUserMappings#testGroupMappingRefresh

2018-08-16 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582908#comment-16582908
 ] 

Siyao Meng commented on HDFS-13746:
---

+1 jenkins. unrelated flaky tests.

> Still occasional "Should be different group" failure in 
> TestRefreshUserMappings#testGroupMappingRefresh
> ---
>
> Key: HDFS-13746
> URL: https://issues.apache.org/jira/browse/HDFS-13746
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13746.001.patch, HDFS-13746.002.patch, 
> HDFS-13746.003.patch, HDFS-13746.004.patch, HDFS-13746.005.patch, 
> HDFS-13746.006.patch, HDFS-13746.007.patch
>
>
> In https://issues.apache.org/jira/browse/HDFS-13723, increasing the amount of 
> time in sleep() helps but the problem still appears, which is annoying.
>  
> Solution:
> Use a loop to allow the test case to fail maxTrials times before declaring 
> failure. Wait 50 ms between each retry.
>  
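
A sketch of the proposed retry loop (variable names such as maxTrials, 
groupsBefore, and user are illustrative, not the exact test code):

{code:java}
// Retry the check up to maxTrials times, sleeping 50 ms between attempts,
// instead of failing on the first mismatch. groupsBefore and user come from
// the earlier part of the test.
final int maxTrials = 5;
boolean changed = false;
for (int i = 0; i < maxTrials && !changed; i++) {
  List<String> groupsAfter =
      Groups.getUserToGroupsMappingService(config).getGroups(user);
  changed = !groupsAfter.equals(groupsBefore);
  if (!changed) {
    Thread.sleep(50);
  }
}
assertTrue("Should be different group", changed);
{code}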



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-351) Add chill mode state to SCM

2018-08-16 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-351:

Attachment: HDDS-351.00.patch

> Add chill mode state to SCM
> ---
>
> Key: HDDS-351
> URL: https://issues.apache.org/jira/browse/HDDS-351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-351.00.patch
>
>
> Add chill mode state to SCM



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-179) CloseContainer/PutKey command should be syncronized with write operations.

2018-08-16 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-179:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the contribution [~shashikant]. I have committed this to trunk.

> CloseContainer/PutKey command should be syncronized with write operations.
> --
>
> Key: HDDS-179
> URL: https://issues.apache.org/jira/browse/HDDS-179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-179.01.patch, HDDS-179.02.patch, HDDS-179.03.patch, 
> HDDS-179.04.patch, HDDS-179.05.patch, HDDS-179.06.patch, HDDS-179.07.patch, 
> HDDS-179.08.patch, HDDS-179.09,patch, HDDS-179.10.patch, HDDS-179.11.patch, 
> HDDS-179.12.patch
>
>
> When a CloseContainer command request reaches a Datanode (via the SCM 
> heartbeat response) through the Ratis protocol, all previously enqueued 
> "Write" type requests, such as WriteChunk, should be executed before the 
> CloseContainer request is executed. This synchronization needs to be handled 
> in the ContainerStateMachine. This Jira aims to address this.
>  
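
A rough sketch of the synchronization idea (field and method names are 
illustrative, not taken from the committed patch):

{code:java}
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Fragment of a state-machine class: pending write futures per container.
private final Map<Long, List<CompletableFuture<?>>> pendingWrites =
    new ConcurrentHashMap<>();

// Apply CloseContainer only after every previously enqueued write has completed.
private <T> CompletableFuture<T> applyCloseContainer(
    long containerId, Supplier<CompletableFuture<T>> closeOp) {
  List<CompletableFuture<?>> writes =
      pendingWrites.getOrDefault(containerId, Collections.emptyList());
  return CompletableFuture
      .allOf(writes.toArray(new CompletableFuture[0]))
      .thenCompose(ignored -> closeOp.get());
}
{code}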



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13818) Extend OIV to detect FSImage corruption

2018-08-16 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582888#comment-16582888
 ] 

Arpit Agarwal commented on HDFS-13818:
--

bq. Although the safest option is the NN-startup, I still believe the OIV worth 
a shot. What is your opinion about this?
Yes I think that's fine. Some offline validation is better than nothing. We 
could make it clear via help/output messages that the OIV check is not 
exhaustive and only catches some kinds of corruption issues.

> Extend OIV to detect FSImage corruption
> ---
>
> Key: HDFS-13818
> URL: https://issues.apache.org/jira/browse/HDFS-13818
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
>
> A follow-up Jira for HDFS-13031: an improvement of the OIV is suggested for 
> detecting corruptions like HDFS-13101 in an offline way.
> The reasoning is the following. Apart from a NN startup throwing the error, 
> there is nothing available to the user that can confirm whether an FSImage is 
> good or corrupted.
> Although a truly complete check of the FSImage is only possible by the NN, 
> standing up a tertiary NN just to reproduce the observed corruption stack 
> traces is overkill. The OIV is a handy choice: it already has functionality 
> for loading the fsimage and constructing the folder structure, so we only 
> have to add the option of detecting the null INodes. For example, the 
> Delimited OIV processor can already use an on-disk MetadataMap, which reduces 
> memory consumption. There may also be room for parallelizing: iterating 
> through the INodes could be distributed, increasing efficiency, so we would 
> not need a high-memory, high-CPU setup just to check the FSImage.
> The suggestion is to add a --detectCorruption option to the OIV which would 
> check the FSImage for consistency.
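
A rough illustration of the shape of such an offline check (the map and helper 
below are hypothetical; the real OIV processors have their own fsimage loading 
code):

{code:java}
import java.io.PrintStream;
import java.util.Map;

// Illustrative only. Assume "inodes" maps inode ids to the parsed INode
// sections of the fsimage, and a null value marks an entry that could not be
// resolved.
static long countCorruptINodes(Map<Long, Object> inodes, PrintStream out) {
  long corrupt = 0;
  for (Map.Entry<Long, Object> e : inodes.entrySet()) {
    if (e.getValue() == null) {
      out.println("Corrupt or missing INode for id " + e.getKey());
      corrupt++;
    }
  }
  if (corrupt > 0) {
    out.println(corrupt + " suspicious entries found; this offline check is"
        + " not exhaustive and only catches some kinds of corruption.");
  }
  return corrupt;
}
{code}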



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-361) Use DBStore and TableStore for DN metadata

2018-08-16 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-361:
---

 Summary: Use DBStore and TableStore for DN metadata
 Key: HDDS-361
 URL: https://issues.apache.org/jira/browse/HDDS-361
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Lokesh Jain
 Fix For: 0.2.1






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-360) Use RocksDBStore and TableStore for SCM Metadata

2018-08-16 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-360:
---

 Summary: Use RocksDBStore and TableStore for SCM Metadata
 Key: HDDS-360
 URL: https://issues.apache.org/jira/browse/HDDS-360
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Anu Engineer
 Fix For: 0.2.1






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-359) RocksDB Profiles support

2018-08-16 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-359:
---

 Summary: RocksDB Profiles support
 Key: HDDS-359
 URL: https://issues.apache.org/jira/browse/HDDS-359
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Anu Engineer
 Fix For: 0.2.1


This allows us to tune the OM/SCM DB for different machine configurations.
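
A minimal illustration of what a profile could configure, using the RocksDB 
Java API (the option values are placeholders, not proposed defaults):

{code:java}
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.ColumnFamilyOptions;

// Example "SSD, large heap" profile; the concrete numbers would come from the
// profile definitions introduced by this Jira, not from this sketch.
ColumnFamilyOptions newSsdProfileOptions() {
  BlockBasedTableConfig tableConfig = new BlockBasedTableConfig()
      .setBlockSize(16 * 1024)
      .setBlockCacheSize(256L * 1024 * 1024);
  return new ColumnFamilyOptions()
      .setWriteBufferSize(128L * 1024 * 1024)
      .setMaxWriteBufferNumber(4)
      .setTableFormatConfig(tableConfig);
}
{code}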



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-358) Use DBStore and TableStore for OzoneManager background services

2018-08-16 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-358:

Description: DeleteKeysService and OpenKeyDeleteService.

> Use DBStore and TableStore for OzoneManager background services
> ---
>
> Key: HDDS-358
> URL: https://issues.apache.org/jira/browse/HDDS-358
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
>
> DeleteKeysService and OpenKeyDeleteService.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-358) Use DBStore and TableStore for OzoneManager background services

2018-08-16 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-358:
---

 Summary: Use DBStore and TableStore for OzoneManager background 
services
 Key: HDDS-358
 URL: https://issues.apache.org/jira/browse/HDDS-358
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Anu Engineer
 Fix For: 0.2.1






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-357) Use DBStore and TableStore for OzoneManager non-background service

2018-08-16 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-357:
---

 Summary: Use DBStore and TableStore for OzoneManager 
non-background service
 Key: HDDS-357
 URL: https://issues.apache.org/jira/browse/HDDS-357
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Anu Engineer
 Fix For: 0.2.1






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-356) Support ColumnFamily based RockDBStore and TableStore

2018-08-16 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-356:
---

 Summary: Support ColumnFamily based RockDBStore and TableStore
 Key: HDDS-356
 URL: https://issues.apache.org/jira/browse/HDDS-356
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Anu Engineer


This is to minimize the performance impact of the expensive RocksDB table scans 
performed by the background services disabled in HDDS-355.
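
For reference, a minimal sketch of opening a RocksDB instance with column 
families through the RocksDB Java API (the table names are illustrative):

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.rocksdb.ColumnFamilyDescriptor;
import org.rocksdb.ColumnFamilyHandle;
import org.rocksdb.DBOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class ColumnFamilyExample {
  public static void main(String[] args) throws RocksDBException {
    RocksDB.loadLibrary();
    // One column family per table (e.g. keyTable, openKeyTable) so a
    // background service can scan its own table without touching the others.
    List<ColumnFamilyDescriptor> descriptors = Arrays.asList(
        new ColumnFamilyDescriptor(RocksDB.DEFAULT_COLUMN_FAMILY),
        new ColumnFamilyDescriptor("keyTable".getBytes()),
        new ColumnFamilyDescriptor("openKeyTable".getBytes()));
    List<ColumnFamilyHandle> handles = new ArrayList<>();
    try (DBOptions options = new DBOptions()
            .setCreateIfMissing(true)
            .setCreateMissingColumnFamilies(true);
         RocksDB db = RocksDB.open(options, "/tmp/om.db", descriptors, handles)) {
      // Write into the keyTable column family.
      db.put(handles.get(1), "key1".getBytes(), "value1".getBytes());
    }
  }
}
{code}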



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-355) Disable OpenKeyDeleteService and DeleteKeysService.

2018-08-16 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-355:
---

 Summary: Disable OpenKeyDeleteService and DeleteKeysService.
 Key: HDDS-355
 URL: https://issues.apache.org/jira/browse/HDDS-355
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: OM
Reporter: Xiaoyu Yao
Assignee: Anu Engineer
 Fix For: 0.2.1


We have identified performance issues with these two background services and 
will improve them with several follow-up JIRAs after this one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-179) CloseContainer/PutKey command should be syncronized with write operations.

2018-08-16 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-179:
---
Summary: CloseContainer/PutKey command should be syncronized with write 
operations.  (was: CloseContainer command should be executed only if all the  
prior "Write" type container requests get executed)

> CloseContainer/PutKey command should be syncronized with write operations.
> --
>
> Key: HDDS-179
> URL: https://issues.apache.org/jira/browse/HDDS-179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-179.01.patch, HDDS-179.02.patch, HDDS-179.03.patch, 
> HDDS-179.04.patch, HDDS-179.05.patch, HDDS-179.06.patch, HDDS-179.07.patch, 
> HDDS-179.08.patch, HDDS-179.09,patch, HDDS-179.10.patch, HDDS-179.11.patch, 
> HDDS-179.12.patch
>
>
> When a CloseContainer command request reaches a Datanode (via the SCM 
> heartbeat response) through the Ratis protocol, all previously enqueued 
> "Write" type requests, such as WriteChunk, should be executed before the 
> CloseContainer request is executed. This synchronization needs to be handled 
> in the ContainerStateMachine. This Jira aims to address this.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-353) Multiple delete Blocks tests are failing consistetly

2018-08-16 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain reassigned HDDS-353:


Assignee: Lokesh Jain

> Multiple delete Blocks tests are failing consistetly
> 
>
> Key: HDDS-353
> URL: https://issues.apache.org/jira/browse/HDDS-353
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, SCM
>Reporter: Shashikant Banerjee
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
>
> As per the test reports here:
> [https://builds.apache.org/job/PreCommit-HDDS-Build/771/testReport/], 
> following tests are failing:
> 1 . TestStorageContainerManager#testBlockDeletionTransactions
> 2. TestStorageContainerManager#testBlockDeletingThrottling
> 3.TestBlockDeletion#testBlockDeletion



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-297) Add pipeline actions in Ozone

2018-08-16 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-297:
---
Fix Version/s: 0.2.1

> Add pipeline actions in Ozone
> -
>
> Key: HDDS-297
> URL: https://issues.apache.org/jira/browse/HDDS-297
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-297.001.patch, HDDS-297.002.patch
>
>
> Pipelines in Ozone are created from a group of nodes depending on the 
> replication factor and type. These pipelines provide a transport protocol for 
> data transfer.
> In order to detect any pipeline failure, SCM should receive pipeline reports 
> from Datanodes and process them to identify the various Raft rings.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-179) CloseContainer command should be executed only if all the prior "Write" type container requests get executed

2018-08-16 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582853#comment-16582853
 ] 

Mukul Kumar Singh commented on HDDS-179:


Thanks for the updated patch [~shashikant].
+1 for the v12 patch. I will commit it shortly.

> CloseContainer command should be executed only if all the  prior "Write" type 
> container requests get executed
> -
>
> Key: HDDS-179
> URL: https://issues.apache.org/jira/browse/HDDS-179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-179.01.patch, HDDS-179.02.patch, HDDS-179.03.patch, 
> HDDS-179.04.patch, HDDS-179.05.patch, HDDS-179.06.patch, HDDS-179.07.patch, 
> HDDS-179.08.patch, HDDS-179.09,patch, HDDS-179.10.patch, HDDS-179.11.patch, 
> HDDS-179.12.patch
>
>
> When a CloseContainer command request reaches a Datanode (via the SCM 
> heartbeat response) through the Ratis protocol, all previously enqueued 
> "Write" type requests, such as WriteChunk, should be executed before the 
> CloseContainer request is executed. This synchronization needs to be handled 
> in the ContainerStateMachine. This Jira aims to address this.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13821) RBF: Add dfs.federation.router.mount-table.cache.enable so that users can disable cache

2018-08-16 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582848#comment-16582848
 ] 

Íñigo Goiri commented on HDFS-13821:


Let's replace the local cache then. Any proposals? 

> RBF: Add dfs.federation.router.mount-table.cache.enable so that users can 
> disable cache
> ---
>
> Key: HDFS-13821
> URL: https://issues.apache.org/jira/browse/HDFS-13821
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.0, 2.9.1, 3.0.3
>Reporter: Fei Hui
>Priority: Major
> Attachments: HDFS-13821.001.patch, LocalCacheTest.java, 
> image-2018-08-13-11-27-49-023.png
>
>
> While testing RBF, I found a performance problem.
> ProxyAvgTime from Ganglia was very high, so I ran jstack on the Router and got 
> the following stack frames
> {quote}
>    java.lang.Thread.State: WAITING (parking)
>     at sun.misc.Unsafe.park(Native Method)
>     - parking to wait for  <0x0005c264acd8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
>     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
>     at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
>     at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
>     at 
> java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
>     at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2249)
>     at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
>     at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
>     at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4764)
>     at 
> org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver.getDestinationForPath(MountTableResolver.java:380)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2104)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2087)
>     at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getListing(RouterRpcServer.java:1050)
>     at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:640)
>     at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
> {quote}
> Many threads are blocked on *LocalCache*.
> After disabling the cache, ProxyAvgTime drops as shown below:
>  !image-2018-08-13-11-27-49-023.png! 
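
A sketch of the proposed switch in MountTableResolver (the config key mirrors 
the summary; field and method names are illustrative):

{code:java}
// Hypothetical key from the summary; read once in the resolver constructor.
private final boolean mountTableCacheEnabled = conf.getBoolean(
    "dfs.federation.router.mount-table.cache.enable", true);

public PathLocation getDestinationForPath(String path) throws IOException {
  if (!mountTableCacheEnabled) {
    // Bypass the guava LocalCache and its per-segment lock entirely.
    return lookupLocation(path);
  }
  try {
    return locationCache.get(path, () -> lookupLocation(path));
  } catch (ExecutionException e) {
    throw new IOException(e.getCause());
  }
}
{code}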



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-119) Skip Apache license header check for some ozone doc scripts

2018-08-16 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582826#comment-16582826
 ] 

genericqa commented on HDDS-119:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
43m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} docs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-119 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935889/HDDS-119.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux 95525156bf62 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cb21eaa |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/777/testReport/ |
| Max. process+thread count | 301 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/docs U: hadoop-ozone/docs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/777/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Skip Apache license header check for some ozone doc scripts
> ---
>
> Key: HDDS-119
> URL: https://issues.apache.org/jira/browse/HDDS-119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-119.00.patch, HDDS-119.01.patch, 

[jira] [Updated] (HDDS-354) VolumeInfo.getScmUsed throws NPE

2018-08-16 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-354:

Description: 
{code}java.lang.NullPointerException
at 
org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
at 
org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:366)
at 
org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:264)
at 
org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
at 
org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
at 
org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
at 
org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
at java.util.concurrent.FutureTask.run(FutureTask.java)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745){code}


  was:
java.lang.NullPointerException
at 
org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
at 
org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:366)
at 
org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:264)
at 
org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
at 
org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
at 
org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
at 
org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
at java.util.concurrent.FutureTask.run(FutureTask.java)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)



> VolumeInfo.getScmUsed throws NPE
> 
>
> Key: HDDS-354
> URL: https://issues.apache.org/jira/browse/HDDS-354
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Priority: Major
>
> {code}java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:366)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:264)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
>   at java.util.concurrent.FutureTask.run(FutureTask.java)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   

[jira] [Created] (HDDS-354) VolumeInfo.getScmUsed throws NPE

2018-08-16 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-354:
---

 Summary: VolumeInfo.getScmUsed throws NPE
 Key: HDDS-354
 URL: https://issues.apache.org/jira/browse/HDDS-354
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Ajay Kumar


java.lang.NullPointerException
at 
org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
at 
org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:366)
at 
org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:264)
at 
org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
at 
org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
at 
org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
at 
org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
at java.util.concurrent.FutureTask.run(FutureTask.java)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-343) Containers are stuck in closing state in scm

2018-08-16 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582785#comment-16582785
 ] 

genericqa commented on HDDS-343:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
25s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
28s{color} | {color:red} The patch generated 10 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-343 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935880/HDDS-343.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 96ea5091ab2e 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 
08:53:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cb21eaa |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/776/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDDS-Build/776/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 408 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/776/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDDS-313) Add metrics to containerState Machine

2018-08-16 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582774#comment-16582774
 ] 

Xiaoyu Yao commented on HDDS-313:
-

[~candychencan], thanks for the update. The patch v3 looks good to me.

The CSMMetrics.java class is missing the Apache License header; +1 after fixing that.

 

 

> Add metrics to containerState Machine
> -
>
> Key: HDDS-313
> URL: https://issues.apache.org/jira/browse/HDDS-313
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: chencan
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-313.001.patch, HDDS-313.002.patch, 
> HDDS-313.003.patch
>
>
> Metrics need to be added to ContainerStateMachine to keep track of various 
> Ratis ops such as writeStateMachine/readStateMachine/applyTransactions.
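
A minimal sketch of such a metrics class using the hadoop-metrics2 annotations 
(counter and method names are illustrative, not the attached patch):

{code:java}
// (An Apache License header would be required on the real file.)
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

@Metrics(about = "ContainerStateMachine metrics", context = "dfs")
public class CSMMetrics {
  public static final String SOURCE_NAME = CSMMetrics.class.getSimpleName();

  @Metric private MutableCounterLong numWriteStateMachineOps;
  @Metric private MutableCounterLong numReadStateMachineOps;
  @Metric private MutableCounterLong numApplyTransactionOps;

  public static CSMMetrics create() {
    return DefaultMetricsSystem.instance()
        .register(SOURCE_NAME, "Container State Machine metrics", new CSMMetrics());
  }

  public void incNumWriteStateMachineOps() {
    numWriteStateMachineOps.incr();
  }

  public void incNumReadStateMachineOps() {
    numReadStateMachineOps.incr();
  }

  public void incNumApplyTransactionOps() {
    numApplyTransactionOps.incr();
  }
}
{code}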



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13747) Statistic for list_located_status is incremented incorrectly by listStatusIterator

2018-08-16 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582768#comment-16582768
 ] 

genericqa commented on HDFS-13747:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
36s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m  4s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}202m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR |
|   | hadoop.hdfs.server.namenode.ha.TestHAFsck |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13747 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935858/HDFS-13747.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0d6abae4f14b 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6df606f |
| maven | version: Apache Maven 3.3.9 |
| 

[jira] [Commented] (HDDS-119) Skip Apache license header check for some ozone doc scripts

2018-08-16 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582750#comment-16582750
 ] 

Xiaoyu Yao commented on HDDS-119:
-

Also found [~elek]'s earlier comments 
{quote}Do you have any reason to add the exclusions to the  
hadoop-ozone/pom.xml instead of hadoop-ozone/docs/pom.xml?
{quote}
The right place to put the exclusions is in hadoop-ozone/docs/pom.xml as the 
former does not work as expected.

> Skip Apache license header check for some ozone doc scripts
> ---
>
> Key: HDDS-119
> URL: https://issues.apache.org/jira/browse/HDDS-119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-119.00.patch, HDDS-119.01.patch, HDDS-119.02.patch, 
> HDDS-119.03.patch
>
>
> {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-119) Skip Apache license header check for some ozone doc scripts

2018-08-16 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582723#comment-16582723
 ] 

Xiaoyu Yao commented on HDDS-119:
-

Submitted a new patch that passes the apache-rat plugin check locally.

{code}

[*INFO*] *---* apache-rat-plugin:0.12:check *(default-cli)* @ hadoop-ozone-docs 
*---*

[*INFO*] Enabled default license matchers.

[*INFO*] Will parse SCM ignores for exclusions...

[*INFO*] Finished adding exclusions from SCM ignore files.

[*INFO*] 61 implicit excludes (use -debug for more details).

[*INFO*] Exclude: themes/ozonedoc/static/js/bootstrap.min.js

[*INFO*] Exclude: themes/ozonedoc/static/js/jquery.min.js

[*INFO*] Exclude: themes/ozonedoc/static/css/bootstrap-theme.min.css

[*INFO*] Exclude: themes/ozonedoc/static/css/bootstrap.min.css.map

[*INFO*] Exclude: themes/ozonedoc/static/css/bootstrap.min.css

[*INFO*] Exclude: themes/ozonedoc/static/css/bootstrap-theme.min.css.map

[*INFO*] Exclude: themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg

[*INFO*] Exclude: themes/ozonedoc/layouts/index.html

[*INFO*] Exclude: themes/ozonedoc/theme.toml

[*INFO*] 24 resources included (use -debug for more details)

[*INFO*] Rat check: Summary over all files. Unapproved: 0, unknown: 0, 
generated: 0, approved: 18 licenses.

[*INFO*] 
**

[*INFO*] *Reactor Summary:*

[*INFO*] 

[*INFO*] Apache Hadoop Ozone 0.2.1-SNAPSHOT . *SUCCESS* [  
0.626 s]

[*INFO*] Apache Hadoop Ozone Common . *SUCCESS* [  
0.101 s]

[*INFO*] Apache Hadoop Ozone Client . *SUCCESS* [  
0.042 s]

[*INFO*] Apache Hadoop Ozone Manager Server . *SUCCESS* [  
0.051 s]

[*INFO*] Apache Hadoop Ozone Tools .. *SUCCESS* [  
0.035 s]

[*INFO*] Apache Hadoop Ozone Object Store REST Service .. *SUCCESS* [  
0.042 s]

[*INFO*] Apache Hadoop Ozone Integration Tests .. *SUCCESS* [  
0.043 s]

[*INFO*] Apache Hadoop Ozone FileSystem . *SUCCESS* [  
0.030 s]

[*INFO*] Apache Hadoop Ozone Documentation 0.2.1-SNAPSHOT ... *SUCCESS* [  
0.029 s]

[*INFO*] 
**

[*INFO*] *BUILD SUCCESS*

[*INFO*] 
**

[*INFO*] Total time: 1.882 s

[*INFO*] Finished at: 2018-08-16T08:47:04-07:00

[*INFO*] 
**

{code}

> Skip Apache license header check for some ozone doc scripts
> ---
>
> Key: HDDS-119
> URL: https://issues.apache.org/jira/browse/HDDS-119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-119.00.patch, HDDS-119.01.patch, HDDS-119.02.patch, 
> HDDS-119.03.patch
>
>
> {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-119) Skip Apache license header check for some ozone doc scripts

2018-08-16 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-119:

Status: Patch Available  (was: Reopened)

> Skip Apache license header check for some ozone doc scripts
> ---
>
> Key: HDDS-119
> URL: https://issues.apache.org/jira/browse/HDDS-119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-119.00.patch, HDDS-119.01.patch, HDDS-119.02.patch, 
> HDDS-119.03.patch
>
>
> {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-119) Skip Apache license header check for some ozone doc scripts

2018-08-16 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-119:

Attachment: HDDS-119.03.patch

> Skip Apache license header check for some ozone doc scripts
> ---
>
> Key: HDDS-119
> URL: https://issues.apache.org/jira/browse/HDDS-119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-119.00.patch, HDDS-119.01.patch, HDDS-119.02.patch, 
> HDDS-119.03.patch
>
>
> {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13744) OIV tool should better handle control characters present in file or directory names

2018-08-16 Thread Zsolt Venczel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HDFS-13744:
-
Status: Patch Available  (was: In Progress)

> OIV tool should better handle control characters present in file or directory 
> names
> ---
>
> Key: HDFS-13744
> URL: https://issues.apache.org/jira/browse/HDFS-13744
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, tools
>Affects Versions: 3.0.3, 2.7.6, 2.8.4, 2.9.1, 2.6.5
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Critical
> Attachments: HDFS-13744.01.patch
>
>
> In certain cases, when control characters or whitespace are present in file or 
> directory names, the OIV tool processors can export data in a misleading format.
> In the below examples we have EXAMPLE_NAME as a file and a directory name 
> where the directory has a line feed character at the end (the actual 
> production case has multiple line feeds and multiple spaces)
>  * Delimited processor case:
>  ** misleading example:
> {code:java}
> /user/data/EXAMPLE_NAME
> ,0,2017-04-24 04:34,1969-12-31 16:00,0,0,0,-1,-1,drwxrwxr-x+,user,group
> /user/data/EXAMPLE_NAME,2016-08-26 03:00,2017-05-16 
> 10:05,134217728,1,520,0,0,-rw-rwxr--+,user,group
> {code}
>  * 
>  ** expected example as suggested by 
> [https://tools.ietf.org/html/rfc4180#section-2]:
> {code:java}
> "/user/data/EXAMPLE_NAME%x0A",0,2017-04-24 04:34,1969-12-31 
> 16:00,0,0,0,-1,-1,drwxrwxr-x+,user,group
> "/user/data/EXAMPLE_NAME",2016-08-26 03:00,2017-05-16 
> 10:05,134217728,1,520,0,0,-rw-rwxr--+,user,group
> {code}
>  * XML processor case:
>  ** misleading example:
> {code:java}
> 479867791DIRECTORYEXAMPLE_NAME
> 1493033668294user:group:0775
> 113632535FILEEXAMPLE_NAME314722056575041494954320141134217728user:group:0674
> {code}
>  * 
>  ** expected example as specified in 
> [https://www.w3.org/TR/REC-xml/#sec-line-ends]:
> {code:java}
> 479867791DIRECTORYEXAMPLE_NAME#xA1493033668294user:group:0775
> 113632535FILEEXAMPLE_NAME314722056575041494954320141134217728user:group:0674
> {code}
>  * JSON:
>  The OIV Web Processor behaves correctly and produces the following:
> {code:java}
> {
>   "FileStatuses": {
> "FileStatus": [
>   {
> "fileId": 113632535,
> "accessTime": 1494954320141,
> "replication": 3,
> "owner": "user",
> "length": 520,
> "permission": "674",
> "blockSize": 134217728,
> "modificationTime": 1472205657504,
> "type": "FILE",
> "group": "group",
> "childrenNum": 0,
> "pathSuffix": "EXAMPLE_NAME"
>   },
>   {
> "fileId": 479867791,
> "accessTime": 0,
> "replication": 0,
> "owner": "user",
> "length": 0,
> "permission": "775",
> "blockSize": 0,
> "modificationTime": 1493033668294,
> "type": "DIRECTORY",
> "group": "group",
> "childrenNum": 0,
> "pathSuffix": "EXAMPLE_NAME\n"
>   }
> ]
>   }
> }
> {code}
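
A minimal sketch of the kind of escaping involved for the Delimited processor 
(an illustrative helper, not the actual patch):

{code:java}
// Quote a path for the Delimited processor output: wrap it in double quotes
// and replace control characters with an escaped representation so a trailing
// line feed cannot split a record across two output lines.
static String quotePath(String path) {
  StringBuilder sb = new StringBuilder("\"");
  for (char c : path.toCharArray()) {
    if (c == '"') {
      sb.append("\"\"");                              // RFC 4180: double the quote
    } else if (Character.isISOControl(c)) {
      sb.append(String.format("%%x%02X", (int) c));   // e.g. '\n' -> %x0A
    } else {
      sb.append(c);
    }
  }
  return sb.append('"').toString();
}
{code}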



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13744) OIV tool should better handle control characters present in file or directory names

2018-08-16 Thread Zsolt Venczel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HDFS-13744:
-
Description: 
In certain cases, when control characters or whitespace are present in file or 
directory names, the OIV tool processors can export data in a misleading format.

In the below examples we have EXAMPLE_NAME as a file and a directory name where 
the directory has a line feed character at the end (the actual production case 
has multiple line feeds and multiple spaces)
 * Delimited processor case:
 ** misleading example:
{code:java}
/user/data/EXAMPLE_NAME
,0,2017-04-24 04:34,1969-12-31 16:00,0,0,0,-1,-1,drwxrwxr-x+,user,group
/user/data/EXAMPLE_NAME,2016-08-26 03:00,2017-05-16 
10:05,134217728,1,520,0,0,-rw-rwxr--+,user,group
{code}

 * 
 ** expected example as suggested by 
[https://tools.ietf.org/html/rfc4180#section-2]:
{code:java}
"/user/data/EXAMPLE_NAME%x0A",0,2017-04-24 04:34,1969-12-31 
16:00,0,0,0,-1,-1,drwxrwxr-x+,user,group
"/user/data/EXAMPLE_NAME",2016-08-26 03:00,2017-05-16 
10:05,134217728,1,520,0,0,-rw-rwxr--+,user,group
{code}

 * XML processor case:
 ** misleading example:
{code:java}
479867791DIRECTORYEXAMPLE_NAME
1493033668294user:group:0775

113632535FILEEXAMPLE_NAME314722056575041494954320141134217728user:group:0674
{code}

 * 
 ** expected example as specified in 
[https://www.w3.org/TR/REC-xml/#sec-line-ends]:
{code:java}
479867791DIRECTORYEXAMPLE_NAME#xA1493033668294user:group:0775

113632535FILEEXAMPLE_NAME314722056575041494954320141134217728user:group:0674
{code}

 * JSON:
 The OIV Web Processor behaves correctly and produces the following:
{code:java}
{
  "FileStatuses": {
"FileStatus": [
  {
"fileId": 113632535,
"accessTime": 1494954320141,
"replication": 3,
"owner": "user",
"length": 520,
"permission": "674",
"blockSize": 134217728,
"modificationTime": 1472205657504,
"type": "FILE",
"group": "group",
"childrenNum": 0,
"pathSuffix": "EXAMPLE_NAME"
  },
  {
"fileId": 479867791,
"accessTime": 0,
"replication": 0,
"owner": "user",
"length": 0,
"permission": "775",
"blockSize": 0,
"modificationTime": 1493033668294,
"type": "DIRECTORY",
"group": "group",
"childrenNum": 0,
"pathSuffix": "EXAMPLE_NAME\n"
  }
]
  }
}
{code}

  was:
In certain cases when control characters or white space is present in file or 
directory names OIV tool processors can export data in a misleading format.

In the below examples we have EXAMPLE_NAME as a file and a directory name where 
the directory has a line feed character at the end (the actual production case 
has multiple line feeds and multiple spaces)
 * CSV processor case:
 ** misleading example:
{code:java}
/user/data/EXAMPLE_NAME
,0,2017-04-24 04:34,1969-12-31 16:00,0,0,0,-1,-1,drwxrwxr-x+,user,group
/user/data/EXAMPLE_NAME,2016-08-26 03:00,2017-05-16 
10:05,134217728,1,520,0,0,-rw-rwxr--+,user,group
{code}

 ** expected example as suggested by 
[https://tools.ietf.org/html/rfc4180#section-2]:
{code:java}
"/user/data/EXAMPLE_NAME%x0A",0,2017-04-24 04:34,1969-12-31 
16:00,0,0,0,-1,-1,drwxrwxr-x+,user,group
"/user/data/EXAMPLE_NAME",2016-08-26 03:00,2017-05-16 
10:05,134217728,1,520,0,0,-rw-rwxr--+,user,group
{code}

 * XML processor case:
 ** misleading example:
{code:java}
479867791DIRECTORYEXAMPLE_NAME
1493033668294user:group:0775

113632535FILEEXAMPLE_NAME314722056575041494954320141134217728user:group:0674
{code}

 ** expected example as specified in 
[https://www.w3.org/TR/REC-xml/#sec-line-ends]:
{code:java}
479867791DIRECTORYEXAMPLE_NAME#xA1493033668294user:group:0775

113632535FILEEXAMPLE_NAME314722056575041494954320141134217728user:group:0674
{code}

 * JSON:
 The OIV Web Processor behaves correctly and produces the following:
{code:java}
{
  "FileStatuses": {
"FileStatus": [
  {
"fileId": 113632535,
"accessTime": 1494954320141,
"replication": 3,
"owner": "user",
"length": 520,
"permission": "674",
"blockSize": 134217728,
"modificationTime": 1472205657504,
"type": "FILE",
"group": "group",
"childrenNum": 0,
"pathSuffix": "EXAMPLE_NAME"
  },
  {
"fileId": 479867791,
"accessTime": 0,
"replication": 0,
"owner": "user",
"length": 0,
"permission": "775",
"blockSize": 0,
"modificationTime": 1493033668294,
"type": "DIRECTORY",
"group": "group",
"childrenNum": 0,
"pathSuffix": "EXAMPLE_NAME\n"
  }
]
  }
}
{code}


> OIV tool should better handle control characters present in file or directory 
> names
> ---
>
> Key: HDFS-13744

[jira] [Commented] (HDFS-13744) OIV tool should better handle control characters present in file or directory names

2018-08-16 Thread Zsolt Venczel (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582703#comment-16582703
 ] 

Zsolt Venczel commented on HDFS-13744:
--

After doing some more analysis it turns out that very few CSV and XML clients 
follow the LF character encoding specifications.

This can have the following impact:

* For the XML processor:
Escaping the LF character according to the specification can prevent an XML parser 
from reproducing a file name correctly. It can also alter filenames when using the 
ReverseXML processor. *I would not recommend escaping here.*

* For the Delimited processor:
The output of the Delimited processor is handy for report creation and grepping, 
where an incorrectly displayed filename or directory name containing an LF can cause 
more problems than the appearance of an escaped LF character; therefore *I would 
recommend escaping in this scenario*.

In my uploaded patch I added escaping for the Delimited processor only.
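
A purely illustrative sketch of the kind of RFC-4180-style escaping meant here (this 
is not the attached patch; the class and method names below are made up):

{code:java}
// Hypothetical sketch only; not part of HDFS-13744.01.patch.
public final class DelimitedFieldEscaper {

  private DelimitedFieldEscaper() {
  }

  // Quote the field and make the line feed visible instead of letting it break
  // the row, loosely following RFC 4180 and the %x0A notation shown above.
  public static String escape(String value) {
    String visible = value
        .replace("\"", "\"\"")   // RFC 4180: double any embedded quotes
        .replace("\n", "%x0A");  // render LF as a visible marker
    return "\"" + visible + "\"";
  }

  public static void main(String[] args) {
    System.out.println(escape("/user/data/EXAMPLE_NAME\n"));
    // prints: "/user/data/EXAMPLE_NAME%x0A"
  }
}
{code}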

> OIV tool should better handle control characters present in file or directory 
> names
> ---
>
> Key: HDFS-13744
> URL: https://issues.apache.org/jira/browse/HDFS-13744
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, tools
>Affects Versions: 2.6.5, 2.9.1, 2.8.4, 2.7.6, 3.0.3
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Critical
> Attachments: HDFS-13744.01.patch
>
>
> In certain cases when control characters or white space is present in file or 
> directory names OIV tool processors can export data in a misleading format.
> In the below examples we have EXAMPLE_NAME as a file and a directory name 
> where the directory has a line feed character at the end (the actual 
> production case has multiple line feeds and multiple spaces)
>  * CSV processor case:
>  ** misleading example:
> {code:java}
> /user/data/EXAMPLE_NAME
> ,0,2017-04-24 04:34,1969-12-31 16:00,0,0,0,-1,-1,drwxrwxr-x+,user,group
> /user/data/EXAMPLE_NAME,2016-08-26 03:00,2017-05-16 
> 10:05,134217728,1,520,0,0,-rw-rwxr--+,user,group
> {code}
>  ** expected example as suggested by 
> [https://tools.ietf.org/html/rfc4180#section-2]:
> {code:java}
> "/user/data/EXAMPLE_NAME%x0A",0,2017-04-24 04:34,1969-12-31 
> 16:00,0,0,0,-1,-1,drwxrwxr-x+,user,group
> "/user/data/EXAMPLE_NAME",2016-08-26 03:00,2017-05-16 
> 10:05,134217728,1,520,0,0,-rw-rwxr--+,user,group
> {code}
>  * XML processor case:
>  ** misleading example:
> {code:java}
> 479867791DIRECTORYEXAMPLE_NAME
> 1493033668294user:group:0775
> 113632535FILEEXAMPLE_NAME314722056575041494954320141134217728user:group:0674
> {code}
>  ** expected example as specified in 
> [https://www.w3.org/TR/REC-xml/#sec-line-ends]:
> {code:java}
> 479867791DIRECTORYEXAMPLE_NAME#xA1493033668294user:group:0775
> 113632535FILEEXAMPLE_NAME314722056575041494954320141134217728user:group:0674
> {code}
>  * JSON:
>  The OIV Web Processor behaves correctly and produces the following:
> {code:java}
> {
>   "FileStatuses": {
> "FileStatus": [
>   {
> "fileId": 113632535,
> "accessTime": 1494954320141,
> "replication": 3,
> "owner": "user",
> "length": 520,
> "permission": "674",
> "blockSize": 134217728,
> "modificationTime": 1472205657504,
> "type": "FILE",
> "group": "group",
> "childrenNum": 0,
> "pathSuffix": "EXAMPLE_NAME"
>   },
>   {
> "fileId": 479867791,
> "accessTime": 0,
> "replication": 0,
> "owner": "user",
> "length": 0,
> "permission": "775",
> "blockSize": 0,
> "modificationTime": 1493033668294,
> "type": "DIRECTORY",
> "group": "group",
> "childrenNum": 0,
> "pathSuffix": "EXAMPLE_NAME\n"
>   }
> ]
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-313) Add metrics to containerState Machine

2018-08-16 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582688#comment-16582688
 ] 

genericqa commented on HDDS-313:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 32m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 31m  
8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 36s{color} | {color:orange} root: The patch generated 15 new + 0 unchanged - 
0 fixed = 15 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
2s{color} | {color:red} hadoop-hdds/container-service generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
56s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 18s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
47s{color} | {color:red} The patch generated 11 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}146m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/container-service |
|  |  Write to static field 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.metrics
 from instance method new 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine(ContainerDispatcher,
 ThreadPoolExecutor)  At ContainerStateMachine.java:from instance method new 

[jira] [Updated] (HDFS-13744) OIV tool should better handle control characters present in file or directory names

2018-08-16 Thread Zsolt Venczel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HDFS-13744:
-
Attachment: HDFS-13744.01.patch

> OIV tool should better handle control characters present in file or directory 
> names
> ---
>
> Key: HDFS-13744
> URL: https://issues.apache.org/jira/browse/HDFS-13744
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, tools
>Affects Versions: 2.6.5, 2.9.1, 2.8.4, 2.7.6, 3.0.3
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Critical
> Attachments: HDFS-13744.01.patch
>
>
> In certain cases when control characters or white space is present in file or 
> directory names OIV tool processors can export data in a misleading format.
> In the below examples we have EXAMPLE_NAME as a file and a directory name 
> where the directory has a line feed character at the end (the actual 
> production case has multiple line feeds and multiple spaces)
>  * CSV processor case:
>  ** misleading example:
> {code:java}
> /user/data/EXAMPLE_NAME
> ,0,2017-04-24 04:34,1969-12-31 16:00,0,0,0,-1,-1,drwxrwxr-x+,user,group
> /user/data/EXAMPLE_NAME,2016-08-26 03:00,2017-05-16 
> 10:05,134217728,1,520,0,0,-rw-rwxr--+,user,group
> {code}
>  ** expected example as suggested by 
> [https://tools.ietf.org/html/rfc4180#section-2]:
> {code:java}
> "/user/data/EXAMPLE_NAME%x0A",0,2017-04-24 04:34,1969-12-31 
> 16:00,0,0,0,-1,-1,drwxrwxr-x+,user,group
> "/user/data/EXAMPLE_NAME",2016-08-26 03:00,2017-05-16 
> 10:05,134217728,1,520,0,0,-rw-rwxr--+,user,group
> {code}
>  * XML processor case:
>  ** misleading example:
> {code:java}
> 479867791DIRECTORYEXAMPLE_NAME
> 1493033668294user:group:0775
> 113632535FILEEXAMPLE_NAME314722056575041494954320141134217728user:group:0674
> {code}
>  ** expected example as specified in 
> [https://www.w3.org/TR/REC-xml/#sec-line-ends]:
> {code:java}
> 479867791DIRECTORYEXAMPLE_NAME#xA1493033668294user:group:0775
> 113632535FILEEXAMPLE_NAME314722056575041494954320141134217728user:group:0674
> {code}
>  * JSON:
>  The OIV Web Processor behaves correctly and produces the following:
> {code:java}
> {
>   "FileStatuses": {
> "FileStatus": [
>   {
> "fileId": 113632535,
> "accessTime": 1494954320141,
> "replication": 3,
> "owner": "user",
> "length": 520,
> "permission": "674",
> "blockSize": 134217728,
> "modificationTime": 1472205657504,
> "type": "FILE",
> "group": "group",
> "childrenNum": 0,
> "pathSuffix": "EXAMPLE_NAME"
>   },
>   {
> "fileId": 479867791,
> "accessTime": 0,
> "replication": 0,
> "owner": "user",
> "length": 0,
> "permission": "775",
> "blockSize": 0,
> "modificationTime": 1493033668294,
> "type": "DIRECTORY",
> "group": "group",
> "childrenNum": 0,
> "pathSuffix": "EXAMPLE_NAME\n"
>   }
> ]
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-179) CloseContainer command should be executed only if all the prior "Write" type container requests get executed

2018-08-16 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582675#comment-16582675
 ] 

genericqa commented on HDDS-179:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
59s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 34s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
40s{color} | {color:red} The patch generated 10 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
|   | hadoop.ozone.TestStorageContainerManager |
|   | hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-179 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935853/HDDS-179.12.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  

[jira] [Commented] (HDDS-343) Containers are stuck in closing state in scm

2018-08-16 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582643#comment-16582643
 ] 

Elek, Marton commented on HDDS-343:
---

I moved back to the safe side. It's not a generic solution; I just added this 
specific transition (CLOSING->CLOSED). In this special case we can use the 
closed state whenever any report contains it, as it is saved with the ALL 
replication type (thanks to [~nandakumar131] who pointed this out to me...)
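
For illustration only, a minimal sketch of this single transition (CLOSING -> CLOSED, 
driven by the state reported in a container report); the names are made up and this is 
not the ContainerMapping code:

{code:java}
// Hypothetical sketch, not the HDDS-343 patch.
public class ContainerCloseSketch {

  enum LifeCycleState { OPEN, CLOSING, CLOSED }

  private LifeCycleState state = LifeCycleState.CLOSING;

  // Accept only the specific CLOSING -> CLOSED transition when the datanode
  // reports the container as closed, so the close command is not resent forever.
  void onContainerReport(LifeCycleState reported) {
    if (state == LifeCycleState.CLOSING && reported == LifeCycleState.CLOSED) {
      state = LifeCycleState.CLOSED;
    }
  }

  public static void main(String[] args) {
    ContainerCloseSketch container = new ContainerCloseSketch();
    container.onContainerReport(LifeCycleState.CLOSED);
    System.out.println(container.state); // CLOSED
  }
}
{code}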

> Containers are stuck in closing state in scm
> 
>
> Key: HDDS-343
> URL: https://issues.apache.org/jira/browse/HDDS-343
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-343.001.patch, HDDS-343.002.patch, 
> HDDS-343.003.patch, HDDS-343.004.patch
>
>
> Containers cannot be closed currently.
> The datanode is closing the containers and sending the CLOSED state in the 
> container report, but SCM doesn't register that the state is closed and 
> keeps sending the close command again and again.
> I think the ContainerMapping.processContainerReport should be improved.
> {code}
> scm_1   | --> RPC message request: SCMHeartbeatRequestProto from 
> 172.25.0.2:33912
> scm_1   | datanodeDetails {
> scm_1   |   uuid: "9c8f80bd-9424-4d74-99ef-a2bd58e66d7f"
> scm_1   |   ipAddress: "172.25.0.2"
> scm_1   |   hostName: "365fd1f44f0b"
> scm_1   |   ports {
> scm_1   | name: "STANDALONE"
> scm_1   | value: 9859
> scm_1   |   }
> scm_1   |   ports {
> scm_1   | name: "RATIS"
> scm_1   | value: 9858
> scm_1   |   }
> scm_1   |   ports {
> scm_1   | name: "REST"
> scm_1   | value: 9880
> scm_1   |   }
> scm_1   | }
> scm_1   | nodeReport {
> scm_1   |   storageReport {
> scm_1   | storageUuid: "DS-61e76107-85c5-437a-95a7-aeb8b3e7827f"
> scm_1   | storageLocation: "/tmp/hadoop-hadoop/dfs/data"
> scm_1   | capacity: 491630870528
> scm_1   | scmUsed: 2708828160
> scm_1   | remaining: 24263614464
> scm_1   | storageType: DISK
> scm_1   | failed: false
> scm_1   |   }
> scm_1   | }
> scm_1   | containerReport {
> scm_1   |   reports {
> scm_1   | containerID: 1
> scm_1   | used: 1061158912
> scm_1   | readCount: 0
> scm_1   | writeCount: 64
> scm_1   | readBytes: 0
> scm_1   | writeBytes: 1061158912
> scm_1   | state: CLOSED
> scm_1   |   }
> scm_1   |   reports {
> scm_1   | containerID: 2
> scm_1   | used: 1048576000
> scm_1   | readCount: 0
> scm_1   | writeCount: 64
> scm_1   | readBytes: 0
> scm_1   | writeBytes: 1048576000
> scm_1   | state: CLOSED
> scm_1   |   }
> scm_1   |   reports {
> scm_1   | containerID: 3
> scm_1   | used: 511705088
> scm_1   | readCount: 0
> scm_1   | writeCount: 32
> scm_1   | readBytes: 0
> scm_1   | writeBytes: 511705088
> scm_1   | state: OPEN
> scm_1   |   }
> scm_1   | }
> scm_1   | commandStatusReport {
> scm_1   | }
> scm_1   | containerActions {
> scm_1   |   containerActions {
> scm_1   | containerID: 1
> scm_1   | action: CLOSE
> scm_1   | reason: CONTAINER_FULL
> scm_1   |   }
> scm_1   |   containerActions {
> scm_1   | containerID: 2
> scm_1   | action: CLOSE
> scm_1   | reason: CONTAINER_FULL
> scm_1   |   }
> scm_1   | }
> scm_1   | 
> scm_1   | --> RPC message response: SCMHeartbeatRequestProto to 
> 172.25.0.2:33912
> scm_1   | datanodeUUID: "9c8f80bd-9424-4d74-99ef-a2bd58e66d7f"
> scm_1   | 
> scm_1   | 2018-08-08 16:22:51 INFO  CloseContainerEventHandler:56 - 
> Close container Event triggered for container : 1
> scm_1   | 2018-08-08 16:22:51 INFO  CloseContainerEventHandler:105 - 
> container with id : 1 is in CLOSING state and need not be closed.
> scm_1   | 2018-08-08 16:22:51 INFO  CloseContainerEventHandler:56 - 
> Close container Event triggered for container : 2
> scm_1   | 2018-08-08 16:22:51 INFO  CloseContainerEventHandler:105 - 
> container with id : 2 is in CLOSING state and need not be closed.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HDDS-343) Containers are stuck in closing state in scm

2018-08-16 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-343:
--
Attachment: HDDS-343.004.patch

> Containers are stuck in closing state in scm
> 
>
> Key: HDDS-343
> URL: https://issues.apache.org/jira/browse/HDDS-343
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-343.001.patch, HDDS-343.002.patch, 
> HDDS-343.003.patch, HDDS-343.004.patch
>
>
> Containers cannot be closed currently.
> The datanode is closing the containers and sending the CLOSED state in the 
> container report, but SCM doesn't register that the state is closed and 
> keeps sending the close command again and again.
> I think the ContainerMapping.processContainerReport should be improved.
> {code}
> scm_1   | --> RPC message request: SCMHeartbeatRequestProto from 
> 172.25.0.2:33912
> scm_1   | datanodeDetails {
> scm_1   |   uuid: "9c8f80bd-9424-4d74-99ef-a2bd58e66d7f"
> scm_1   |   ipAddress: "172.25.0.2"
> scm_1   |   hostName: "365fd1f44f0b"
> scm_1   |   ports {
> scm_1   | name: "STANDALONE"
> scm_1   | value: 9859
> scm_1   |   }
> scm_1   |   ports {
> scm_1   | name: "RATIS"
> scm_1   | value: 9858
> scm_1   |   }
> scm_1   |   ports {
> scm_1   | name: "REST"
> scm_1   | value: 9880
> scm_1   |   }
> scm_1   | }
> scm_1   | nodeReport {
> scm_1   |   storageReport {
> scm_1   | storageUuid: "DS-61e76107-85c5-437a-95a7-aeb8b3e7827f"
> scm_1   | storageLocation: "/tmp/hadoop-hadoop/dfs/data"
> scm_1   | capacity: 491630870528
> scm_1   | scmUsed: 2708828160
> scm_1   | remaining: 24263614464
> scm_1   | storageType: DISK
> scm_1   | failed: false
> scm_1   |   }
> scm_1   | }
> scm_1   | containerReport {
> scm_1   |   reports {
> scm_1   | containerID: 1
> scm_1   | used: 1061158912
> scm_1   | readCount: 0
> scm_1   | writeCount: 64
> scm_1   | readBytes: 0
> scm_1   | writeBytes: 1061158912
> scm_1   | state: CLOSED
> scm_1   |   }
> scm_1   |   reports {
> scm_1   | containerID: 2
> scm_1   | used: 1048576000
> scm_1   | readCount: 0
> scm_1   | writeCount: 64
> scm_1   | readBytes: 0
> scm_1   | writeBytes: 1048576000
> scm_1   | state: CLOSED
> scm_1   |   }
> scm_1   |   reports {
> scm_1   | containerID: 3
> scm_1   | used: 511705088
> scm_1   | readCount: 0
> scm_1   | writeCount: 32
> scm_1   | readBytes: 0
> scm_1   | writeBytes: 511705088
> scm_1   | state: OPEN
> scm_1   |   }
> scm_1   | }
> scm_1   | commandStatusReport {
> scm_1   | }
> scm_1   | containerActions {
> scm_1   |   containerActions {
> scm_1   | containerID: 1
> scm_1   | action: CLOSE
> scm_1   | reason: CONTAINER_FULL
> scm_1   |   }
> scm_1   |   containerActions {
> scm_1   | containerID: 2
> scm_1   | action: CLOSE
> scm_1   | reason: CONTAINER_FULL
> scm_1   |   }
> scm_1   | }
> scm_1   | 
> scm_1   | --> RPC message response: SCMHeartbeatRequestProto to 
> 172.25.0.2:33912
> scm_1   | datanodeUUID: "9c8f80bd-9424-4d74-99ef-a2bd58e66d7f"
> scm_1   | 
> scm_1   | 2018-08-08 16:22:51 INFO  CloseContainerEventHandler:56 - 
> Close container Event triggered for container : 1
> scm_1   | 2018-08-08 16:22:51 INFO  CloseContainerEventHandler:105 - 
> container with id : 1 is in CLOSING state and need not be closed.
> scm_1   | 2018-08-08 16:22:51 INFO  CloseContainerEventHandler:56 - 
> Close container Event triggered for container : 2
> scm_1   | 2018-08-08 16:22:51 INFO  CloseContainerEventHandler:105 - 
> container with id : 2 is in CLOSING state and need not be closed.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-297) Add pipeline actions in Ozone

2018-08-16 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582587#comment-16582587
 ] 

genericqa commented on HDDS-297:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 31m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 31m  
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 33s{color} | {color:orange} root: The patch generated 3 new + 22 unchanged - 
0 fixed = 25 total (was 22) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
6s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
58s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
38s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 23s{color} 
| {color:red} integration-test in the patch failed. {color} |
| 

[jira] [Commented] (HDFS-13747) Statistic for list_located_status is incremented incorrectly by listStatusIterator

2018-08-16 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582583#comment-16582583
 ] 

Gabor Bota commented on HDFS-13747:
---

Thanks [~amihalyi], +1 on the v2 patch.

> Statistic for list_located_status is incremented incorrectly by 
> listStatusIterator
> --
>
> Key: HDFS-13747
> URL: https://issues.apache.org/jira/browse/HDFS-13747
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.3
>Reporter: Todd Lipcon
>Assignee: Antal Mihalyi
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-13747.001.patch, HDFS-13747.002.patch
>
>
> The DirListingIterator constructor calls 
> storageStatistics.incrementOpCounter(OpType.LIST_LOCATED_STATUS) 
> unconditionally even if 'needLocation' is false. It seems that if 
> needLocation is false, it should increment the LIST_STATUS counter instead.
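
For reference, a tiny self-contained sketch of the conditional counting the description 
above calls for; the class below is illustrative only and not the hdfs-client code:

{code:java}
// Hypothetical sketch, not HDFS-13747.002.patch.
import java.util.EnumMap;
import java.util.Map;

public class ListingCounterSketch {

  enum OpType { LIST_STATUS, LIST_LOCATED_STATUS }

  private final Map<OpType, Long> counters = new EnumMap<>(OpType.class);

  void incrementOpCounter(OpType op) {
    counters.merge(op, 1L, Long::sum);
  }

  // Count the listing according to whether located statuses were requested.
  void recordListing(boolean needLocation) {
    incrementOpCounter(needLocation ? OpType.LIST_LOCATED_STATUS : OpType.LIST_STATUS);
  }

  public static void main(String[] args) {
    ListingCounterSketch sketch = new ListingCounterSketch();
    sketch.recordListing(false); // bumps LIST_STATUS, not LIST_LOCATED_STATUS
    System.out.println(sketch.counters);
  }
}
{code}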



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13772) Erasure coding: Unnecessary NameNode Logs displaying for Enabling/Disabling Erasure coding policies which are already enabled/disabled

2018-08-16 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582554#comment-16582554
 ] 

genericqa commented on HDFS-13772:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
40s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 35s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13772 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935856/HDFS-13772-03%20.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cec305e5b46e 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6df606f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24791/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24791/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24791/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| mvnsite | 

[jira] [Commented] (HDFS-13818) Extend OIV to detect FSImage corruption

2018-08-16 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582536#comment-16582536
 ] 

Adam Antal commented on HDFS-13818:
---

Thanks for looking into this, [~arpitagarwal].

Firstly, on the justification of the OIV method:

I agree that the easiest way to check whether the NN would fail to load the 
fsimage is to actually load it, but the OIV is an alternative that protects 
against corruption in an _offline_ way - in particular, the check can run on a 
lightweight node independent of the cluster.

As you wrote, full checking (in an offline way) would have to replicate the 
same code paths the NN follows during startup. Starting a modified NN process 
or calling modified functions from that path could require a lot of work and 
cause further problems, so I don't see that track as justified - in that case 
the best option is simply to put up a new NN. As I see it, the OIV 
detectCorruption utility should not attempt full checking, but rather provide 
a way to look for the known corruption cases. I came to this conclusion in 
HDFS-13031, and I also added some other points about its practicality there.

Secondly, by corruption I mainly mean stack traces like HDFS-9406: failures 
that surface while the fsimage _is being loaded_, not after it has been loaded 
successfully. Given a bad fsimage, currently the only way to detect the 
corruption is to start a NN.

And thirdly, in my opinion you can target any of the following to handle the 
case:
 # The FSImage writer, so it does not produce a corrupted image
 # The FSImage reader, so it detects the corruption during read
 # an independent checker that can check a written fsimage at any time

The optimal approach would be to prevent writing a corrupted image in the 
first place, and HDFS-13314 also went for the first option, but my solution is 
just another safety layer following the third option.

Although the safest option is the NN startup, I still believe the OIV is worth 
a shot. What is your opinion about this?

> Extend OIV to detect FSImage corruption
> ---
>
> Key: HDFS-13818
> URL: https://issues.apache.org/jira/browse/HDFS-13818
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
>
> A follow-up Jira for HDFS-13031: an improvement of the OIV is suggested for 
> detecting corruptions like HDFS-13101 in an offline way.
> The reasoning is the following. Apart from a NN startup throwing the error, 
> there is nothing in the customer's hand that could reassure him/her that the 
> FSImages is good or corrupted.
> Although real full checking of the FSImage is only possible by the NN, for 
> stack traces associated with the observed corruption cases the solution of 
> putting up a tertiary NN is a little bit of overkill. The OIV would be a 
> handy choice, already having functionality like loading the fsimage and 
> constructing the folder structure, we just have to add the option of 
> detecting the null INodes. For e.g. the Delimited OIV processor can already 
> use in disk MetadataMap, which reduces memory consumption. Also there may be 
> a window for parallelizing: iterating through INodes for e.g. could be done 
> distributed, increasing efficiency, and we wouldn't need a high mem-high CPU 
> setup for just checking the FSImage.
> The suggestion is to add a --detectCorruption option to the OIV which would 
> check the FSImage for consistency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10240) Race between close/recoverLease leads to missing block

2018-08-16 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-10240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582534#comment-16582534
 ] 

genericqa commented on HDFS-10240:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 326 unchanged - 0 fixed = 327 total (was 326) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-10240 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935842/HDFS-10240.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f29a03e7c06e 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 
08:53:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6df606f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24790/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24790/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24790/testReport/ |
| Max. process+thread count | 

[jira] [Comment Edited] (HDFS-13747) Statistic for list_located_status is incremented incorrectly by listStatusIterator

2018-08-16 Thread Antal Mihalyi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582456#comment-16582456
 ] 

Antal Mihalyi edited comment on HDFS-13747 at 8/16/18 1:01 PM:
---

[~gabor.bota] , [~xiaochen] , thank you for your reviews and thanks for the 
comments. I have fixed the deleted whitespaces and uploaded a new, improved 
patch.


was (Author: amihalyi):
[~gabor.bota] , [~xiaochen] , thank you for your reviews and thanks for the 
comments. I have fixed the deleted whitespaces and uploaded a new improved 
patch.

> Statistic for list_located_status is incremented incorrectly by 
> listStatusIterator
> --
>
> Key: HDFS-13747
> URL: https://issues.apache.org/jira/browse/HDFS-13747
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.3
>Reporter: Todd Lipcon
>Assignee: Antal Mihalyi
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-13747.001.patch, HDFS-13747.002.patch
>
>
> The DirListingIterator constructor calls 
> storageStatistics.incrementOpCounter(OpType.LIST_LOCATED_STATUS) 
> unconditionally even if 'needLocation' is false. It seems that if 
> needLocation is false, it should increment the LIST_STATUS counter instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13747) Statistic for list_located_status is incremented incorrectly by listStatusIterator

2018-08-16 Thread Antal Mihalyi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582456#comment-16582456
 ] 

Antal Mihalyi edited comment on HDFS-13747 at 8/16/18 1:01 PM:
---

[~gabor.bota] , [~xiaochen] , thank you for your reviews and thanks for the 
comments. I have fixed the deleted whitespaces and uploaded a new, improved 
patch.

The timeouted test looks unrelated to me too.


was (Author: amihalyi):
[~gabor.bota] , [~xiaochen] , thank you for your reviews and thanks for the 
comments. I have fixed the deleted whitespaces and uploaded a new, improved 
patch.

> Statistic for list_located_status is incremented incorrectly by 
> listStatusIterator
> --
>
> Key: HDFS-13747
> URL: https://issues.apache.org/jira/browse/HDFS-13747
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.3
>Reporter: Todd Lipcon
>Assignee: Antal Mihalyi
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-13747.001.patch, HDFS-13747.002.patch
>
>
> The DirListingIterator constructor calls 
> storageStatistics.incrementOpCounter(OpType.LIST_LOCATED_STATUS) 
> unconditionally even if 'needLocation' is false. It seems that if 
> needLocation is false, it should increment the LIST_STATUS counter instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13747) Statistic for list_located_status is incremented incorrectly by listStatusIterator

2018-08-16 Thread Antal Mihalyi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582456#comment-16582456
 ] 

Antal Mihalyi edited comment on HDFS-13747 at 8/16/18 1:00 PM:
---

[~gabor.bota] , [~xiaochen] , thank you for your reviews and thanks for the 
comments. I have fixed the deleted whitespaces and uploaded a new improved 
patch.


was (Author: amihalyi):
[~gabor.bota] , [~xiaochen] , thank you for your reviews and thanks for the 
comment. I have fixed the deleted whitespaces and uploaded a new improved patch.

> Statistic for list_located_status is incremented incorrectly by 
> listStatusIterator
> --
>
> Key: HDFS-13747
> URL: https://issues.apache.org/jira/browse/HDFS-13747
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.3
>Reporter: Todd Lipcon
>Assignee: Antal Mihalyi
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-13747.001.patch, HDFS-13747.002.patch
>
>
> The DirListingIterator constructor calls 
> storageStatistics.incrementOpCounter(OpType.LIST_LOCATED_STATUS) 
> unconditionally even if 'needLocation' is false. It seems that if 
> needLocation is false, it should increment the LIST_STATUS counter instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13747) Statistic for list_located_status is incremented incorrectly by listStatusIterator

2018-08-16 Thread Antal Mihalyi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582456#comment-16582456
 ] 

Antal Mihalyi commented on HDFS-13747:
--

[~gabor.bota] , [~xiaochen] , thank you for your reviews and thanks for the 
comment. I have fixed the deleted whitespaces and uploaded a new improved patch.

> Statistic for list_located_status is incremented incorrectly by 
> listStatusIterator
> --
>
> Key: HDFS-13747
> URL: https://issues.apache.org/jira/browse/HDFS-13747
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.3
>Reporter: Todd Lipcon
>Assignee: Antal Mihalyi
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-13747.001.patch, HDFS-13747.002.patch
>
>
> The DirListingIterator constructor calls 
> storageStatistics.incrementOpCounter(OpType.LIST_LOCATED_STATUS) 
> unconditionally even if 'needLocation' is false. It seems that if 
> needLocation is false, it should increment the LIST_STATUS counter instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13747) Statistic for list_located_status is incremented incorrectly by listStatusIterator

2018-08-16 Thread Antal Mihalyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antal Mihalyi updated HDFS-13747:
-
Attachment: HDFS-13747.002.patch

> Statistic for list_located_status is incremented incorrectly by 
> listStatusIterator
> --
>
> Key: HDFS-13747
> URL: https://issues.apache.org/jira/browse/HDFS-13747
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.3
>Reporter: Todd Lipcon
>Assignee: Antal Mihalyi
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-13747.001.patch, HDFS-13747.002.patch
>
>
> The DirListingIterator constructor calls 
> storageStatistics.incrementOpCounter(OpType.LIST_LOCATED_STATUS) 
> unconditionally even if 'needLocation' is false. It seems that if 
> needLocation is false, it should increment the LIST_STATUS counter instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13772) Erasure coding: Unnecessary NameNode Logs displaying for Enabling/Disabling Erasure coding policies which are already enabled/disabled

2018-08-16 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13772:

Attachment: HDFS-13772-03 .patch

> Erasure coding: Unnecessary NameNode Logs displaying for Enabling/Disabling 
> Erasure coding policies which are already enabled/disabled
> --
>
> Key: HDFS-13772
> URL: https://issues.apache.org/jira/browse/HDFS-13772
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
> Environment: 3 Node SuSE Linux cluster 
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Trivial
> Attachments: EC_capture1.PNG, HDFS-13772-01.patch, 
> HDFS-13772-02.patch, HDFS-13772-03 .patch
>
>
> Unnecessary NameNode Logs displaying for Enabling/Disabling Erasure coding 
> policies which are already enabled/disabled
> - Enable any Erasure coding policy like "RS-LEGACY-6-3-1024k"
> - Check that the console displays "Erasure coding policy RS-LEGACY-6-3-1024k 
> is enabled"
> - Try to enable the same policy again multiple times with "hdfs ec -enablePolicy 
> -policy RS-LEGACY-6-3-1024k"
>  instead of reporting an error such as "policy already enabled", it 
> displays the same message "Erasure coding policy RS-LEGACY-6-3-1024k is 
> enabled"
> - Also, the NameNode log shows the policy-enabled message multiple times 
> unnecessarily even though the policy is already enabled.
>  like this : 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> - While executing the erasure coding policy disable command, the same kind of 
> log entries appear multiple times even though the policy is already 
>  disabled. It should report an error such as "policy is already disabled" for 
> an already disabled policy.
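
For illustration, a minimal, self-contained sketch of the behaviour requested above, where the state is changed and logged only once per transition; the class and method shapes are assumptions, not the actual ErasureCodingPolicyManager API:

{code}
import java.util.HashMap;
import java.util.Map;

/** Sketch only: enable/disable that reports "already enabled/disabled" instead of re-logging. */
public class EcPolicyStateSketch {
  private final Map<String, Boolean> enabledByName = new HashMap<>();

  /** Returns true only when the call actually changed the policy state. */
  boolean enablePolicy(String name) {
    if (Boolean.TRUE.equals(enabledByName.get(name))) {
      System.out.println("Erasure coding policy " + name + " is already enabled");
      return false;
    }
    enabledByName.put(name, true);
    System.out.println("Enable the erasure coding policy " + name);
    return true;
  }

  /** Returns true only when the call actually changed the policy state. */
  boolean disablePolicy(String name) {
    if (!Boolean.TRUE.equals(enabledByName.get(name))) {
      System.out.println("Erasure coding policy " + name + " is already disabled");
      return false;
    }
    enabledByName.put(name, false);
    System.out.println("Disable the erasure coding policy " + name);
    return true;
  }

  public static void main(String[] args) {
    EcPolicyStateSketch manager = new EcPolicyStateSketch();
    manager.enablePolicy("RS-LEGACY-6-3-1024k");  // logs the enable once
    manager.enablePolicy("RS-LEGACY-6-3-1024k");  // reports "already enabled", no duplicate log
    manager.disablePolicy("RS-LEGACY-6-3-1024k"); // logs the disable once
    manager.disablePolicy("RS-LEGACY-6-3-1024k"); // reports "already disabled"
  }
}
{code}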



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13772) Erasure coding: Unnecessary NameNode Logs displaying for Enabling/Disabling Erasure coding policies which are already enabled/disabled

2018-08-16 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13772:

Attachment: (was: HDFS-13772-03.patch)

> Erasure coding: Unnecessary NameNode Logs displaying for Enabling/Disabling 
> Erasure coding policies which are already enabled/disabled
> --
>
> Key: HDFS-13772
> URL: https://issues.apache.org/jira/browse/HDFS-13772
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
> Environment: 3 Node SuSE Linux cluster 
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Trivial
> Attachments: EC_capture1.PNG, HDFS-13772-01.patch, HDFS-13772-02.patch
>
>
> Unnecessary NameNode Logs displaying for Enabling/Disabling Erasure coding 
> policies which are already enabled/disabled
> - Enable any Erasure coding policy like "RS-LEGACY-6-3-1024k"
> - Check that the console displays "Erasure coding policy RS-LEGACY-6-3-1024k 
> is enabled"
> - Try to enable the same policy again multiple times with "hdfs ec -enablePolicy 
> -policy RS-LEGACY-6-3-1024k"
>  instead of reporting an error such as "policy already enabled", it 
> displays the same message "Erasure coding policy RS-LEGACY-6-3-1024k is 
> enabled"
> - Also, the NameNode log shows the policy-enabled message multiple times 
> unnecessarily even though the policy is already enabled.
>  like this : 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> - While executing the erasure coding policy disable command, the same kind of 
> log entries appear multiple times even though the policy is already 
>  disabled. It should report an error such as "policy is already disabled" for 
> an already disabled policy.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-328) Support export and import of the KeyValueContainer

2018-08-16 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582442#comment-16582442
 ] 

genericqa commented on HDDS-328:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 24s{color} 
| {color:red} hadoop-hdds_container-service generated 2 new + 0 unchanged - 0 
fixed = 2 total (was 0) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-hdds/container-service: The patch 
generated 10 new + 2 unchanged - 1 fixed = 12 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
55s{color} | {color:red} hadoop-hdds/container-service generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
50s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
26s{color} | {color:red} The patch generated 10 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/container-service |
|  |  Found reliance on default encoding in 
org.apache.hadoop.ozone.container.common.impl.ContainerDataYaml.readContainer(String):in
 
org.apache.hadoop.ozone.container.common.impl.ContainerDataYaml.readContainer(String):
 String.getBytes()  At ContainerDataYaml.java:[line 128] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.ozone.container.keyvalue.TarContainerPacker.readEntryToString(TarArchiveInputStream,
 TarArchiveEntry):in 
org.apache.hadoop.ozone.container.keyvalue.TarContainerPacker.readEntryToString(TarArchiveInputStream,
 TarArchiveEntry): java.io.ByteArrayOutputStream.toString()  At 
TarContainerPacker.java:[line 220] |
|  |  Possible null pointer dereference in 
org.apache.hadoop.ozone.container.keyvalue.TarContainerPacker.extractEntry(TarArchiveInputStream,
 long, Path) due to return value of called method  Method invoked at 
TarContainerPacker.java:org.apache.hadoop.ozone.container.keyvalue.TarContainerPacker.extractEntry(TarArchiveInputStream,
 long, Path) due to return value of called method  Method invoked 
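
The two "reliance on default encoding" findings above are typically addressed by passing an explicit charset. A minimal, self-contained sketch of that pattern follows; the method names are illustrative, not the actual ContainerDataYaml/TarContainerPacker code:

{code}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

/** Sketch only: explicit-charset conversions of the kind FindBugs asks for above. */
public class CharsetSketch {

  /** Instead of String.getBytes(), which uses the platform default encoding. */
  static byte[] toBytes(String text) {
    return text.getBytes(StandardCharsets.UTF_8);
  }

  /** Instead of ByteArrayOutputStream.toString(), which uses the platform default encoding. */
  static String readEntryToString(InputStream input) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buffer = new byte[4096];
    int read;
    while ((read = input.read(buffer)) != -1) {
      out.write(buffer, 0, read);
    }
    return out.toString(StandardCharsets.UTF_8.name());
  }

  public static void main(String[] args) throws IOException {
    byte[] bytes = toBytes("containerType: KeyValueContainer");
    System.out.println(readEntryToString(new ByteArrayInputStream(bytes)));
  }
}
{code}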

[jira] [Commented] (HDDS-179) CloseContainer command should be executed only if all the prior "Write" type container requests get executed

2018-08-16 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582446#comment-16582446
 ] 

Shashikant Banerjee commented on HDDS-179:
--

Patch v12 fixes the checkstyle issues. The test failures are unrelated, as these 
tests fail without the patch as well.

Opened HDDS-353 to track them.

> CloseContainer command should be executed only if all the  prior "Write" type 
> container requests get executed
> -
>
> Key: HDDS-179
> URL: https://issues.apache.org/jira/browse/HDDS-179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-179.01.patch, HDDS-179.02.patch, HDDS-179.03.patch, 
> HDDS-179.04.patch, HDDS-179.05.patch, HDDS-179.06.patch, HDDS-179.07.patch, 
> HDDS-179.08.patch, HDDS-179.09,patch, HDDS-179.10.patch, HDDS-179.11.patch, 
> HDDS-179.12.patch
>
>
> When a CloseContainer command request comes to a Datanode (via the SCM heartbeat 
> response) through the Ratis protocol, all previously enqueued "Write" type 
> requests, such as WriteChunk, should be executed before the CloseContainer 
> request is executed. This synchronization needs to be handled in the 
> containerStateMachine. This Jira aims to address that.
>  
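
For illustration only, a minimal, self-contained sketch of one way to get that ordering, chaining the close behind all pending writes for the same container; the names here are assumptions, not the actual containerStateMachine code:

{code}
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

/** Sketch only: applies CloseContainer only after all previously enqueued writes complete. */
public class CloseAfterWritesSketch {
  // Tail of the per-container chain of applied requests.
  private final Map<Long, CompletableFuture<Void>> lastOp = new ConcurrentHashMap<>();

  CompletableFuture<Void> applyWriteChunk(long containerId, Runnable writeChunk) {
    return chain(containerId, writeChunk);
  }

  CompletableFuture<Void> applyCloseContainer(long containerId, Runnable close) {
    // The close request is queued behind every write already chained for this container.
    return chain(containerId, close);
  }

  private CompletableFuture<Void> chain(long containerId, Runnable op) {
    return lastOp.compute(containerId, (id, prev) -> {
      CompletableFuture<Void> head =
          (prev == null) ? CompletableFuture.completedFuture(null) : prev;
      return head.thenRun(op);
    });
  }

  public static void main(String[] args) {
    CloseAfterWritesSketch stateMachine = new CloseAfterWritesSketch();
    stateMachine.applyWriteChunk(1L, () -> System.out.println("WriteChunk 1"));
    stateMachine.applyWriteChunk(1L, () -> System.out.println("WriteChunk 2"));
    stateMachine.applyCloseContainer(1L, () -> System.out.println("CloseContainer")).join();
    // Output order: WriteChunk 1, WriteChunk 2, CloseContainer
  }
}
{code}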



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-353) Multiple delete Blocks tests are failing consistently

2018-08-16 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-353:


 Summary: Multiple delete Blocks tests are failing consistently
 Key: HDDS-353
 URL: https://issues.apache.org/jira/browse/HDDS-353
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager, SCM
Reporter: Shashikant Banerjee
 Fix For: 0.2.1


As per the test report here:

[https://builds.apache.org/job/PreCommit-HDDS-Build/771/testReport/], the following 
tests are failing:

1. TestStorageContainerManager#testBlockDeletionTransactions

2. TestStorageContainerManager#testBlockDeletingThrottling

3. TestBlockDeletion#testBlockDeletion



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13772) Erasure coding: Unnecessary NameNode Logs displaying for Enabling/Disabling Erasure coding policies which are already enabled/disabled

2018-08-16 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13772:

Attachment: HDFS-13772-03.patch

> Erasure coding: Unnecessary NameNode Logs displaying for Enabling/Disabling 
> Erasure coding policies which are already enabled/disabled
> --
>
> Key: HDFS-13772
> URL: https://issues.apache.org/jira/browse/HDFS-13772
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
> Environment: 3 Node SuSE Linux cluster 
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Trivial
> Attachments: EC_capture1.PNG, HDFS-13772-01.patch, 
> HDFS-13772-02.patch, HDFS-13772-03.patch
>
>
> Unnecessary NameNode Logs displaying for Enabling/Disabling Erasure coding 
> policies which are already enabled/disabled
> - Enable any Erasure coding policy like "RS-LEGACY-6-3-1024k"
> - Check that the console displays "Erasure coding policy RS-LEGACY-6-3-1024k 
> is enabled"
> - Try to enable the same policy again multiple times with "hdfs ec -enablePolicy 
> -policy RS-LEGACY-6-3-1024k"
>  instead of reporting an error such as "policy already enabled", it 
> displays the same message "Erasure coding policy RS-LEGACY-6-3-1024k is 
> enabled"
> - Also, the NameNode log shows the policy-enabled message multiple times 
> unnecessarily even though the policy is already enabled.
>  like this : 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> - While executing the erasure coding policy disable command, the same kind of 
> log entries appear multiple times even though the policy is already 
>  disabled. It should report an error such as "policy is already disabled" for 
> an already disabled policy.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-179) CloseContainer command should be executed only if all the prior "Write" type container requests get executed

2018-08-16 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-179:
-
Attachment: HDDS-179.12.patch

> CloseContainer command should be executed only if all the  prior "Write" type 
> container requests get executed
> -
>
> Key: HDDS-179
> URL: https://issues.apache.org/jira/browse/HDDS-179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-179.01.patch, HDDS-179.02.patch, HDDS-179.03.patch, 
> HDDS-179.04.patch, HDDS-179.05.patch, HDDS-179.06.patch, HDDS-179.07.patch, 
> HDDS-179.08.patch, HDDS-179.09,patch, HDDS-179.10.patch, HDDS-179.11.patch, 
> HDDS-179.12.patch
>
>
> When a CloseContainer command request comes to a Datanode (via the SCM heartbeat 
> response) through the Ratis protocol, all previously enqueued "Write" type 
> requests, such as WriteChunk, should be executed before the CloseContainer 
> request is executed. This synchronization needs to be handled in the 
> containerStateMachine. This Jira aims to address that.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


