[jira] [Commented] (HDFS-14516) RBF: Create hdfs-rbf-site.xml for RBF specific properties

2019-05-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16849357#comment-16849357
 ] 

Hadoop QA commented on HDFS-14516:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
58s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
34s{color} | {color:red} The patch generated 17 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14516 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12969959/HDFS-14516.1.patch |
| Optional Tests |  dupname  asflicense  mvnsite  unit  xml  compile  javac  
javadoc  mvninstall  shadedclient  findbugs  checkstyle  |
| uname | Linux 43bddff823a4 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 72dd790 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26849/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26849/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 974 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 

[jira] [Assigned] (HDFS-14455) Fix typo in HAState.java

2019-05-27 Thread hemanthboyina (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina reassigned HDFS-14455:


Assignee: hemanthboyina

> Fix typo in HAState.java
> 
>
> Key: HDFS-14455
> URL: https://issues.apache.org/jira/browse/HDFS-14455
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: hunshenshi
>Assignee: hemanthboyina
>Priority: Major
>
> There are some typos in HAState:
> destructuve -> destructive
> Aleady -> Already
> Transtion -> Transition



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14516) RBF: Create hdfs-rbf-site.xml for RBF specific properties

2019-05-27 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16849326#comment-16849326
 ] 

Takanobu Asanuma commented on HDFS-14516:
-

Uploaded the 1st patch.

I don't think this change breaks compatibility.

> RBF: Create hdfs-rbf-site.xml for RBF specific properties
> -
>
> Key: HDFS-14516
> URL: https://issues.apache.org/jira/browse/HDFS-14516
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-14516.1.patch
>
>
> Currently, users write RBF properties in {{hdfs-site.xml}} even though the 
> definitions are in {{hdfs-rbf-default.xml}}. Like other modules, it would be 
> better to have a module-specific configuration file, {{hdfs-rbf-site.xml}}.
> {{hdfs-rbf-default.xml}} should also be loaded when it exists in the 
> configuration directory; at the moment it serves only as documentation.
> There is an early discussion in HDFS-13215.
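
A minimal sketch of what the resource loading could look like, assuming the 
patch follows the same pattern HdfsConfiguration uses for 
hdfs-default.xml/hdfs-site.xml (the class below is illustrative, not the 
actual patch):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch: register the RBF defaults and site overrides as
// Hadoop configuration resources. Resources added later override earlier
// ones, so the site file must come after the defaults file.
public class RBFConfiguration extends Configuration {
  static {
    Configuration.addDefaultResource("hdfs-rbf-default.xml");
    Configuration.addDefaultResource("hdfs-rbf-site.xml");
  }
}
{code}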



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14516) RBF: Create hdfs-rbf-site.xml for RBF specific properties

2019-05-27 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14516:

Status: Patch Available  (was: Open)

> RBF: Create hdfs-rbf-site.xml for RBF specific properties
> -
>
> Key: HDFS-14516
> URL: https://issues.apache.org/jira/browse/HDFS-14516
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-14516.1.patch
>
>
> Currently, users write RBF properties in {{hdfs-site.xml}} even though the 
> definitions are in {{hdfs-rbf-default.xml}}. Like other modules, it would be 
> better to have a module-specific configuration file, {{hdfs-rbf-site.xml}}.
> {{hdfs-rbf-default.xml}} should also be loaded when it exists in the 
> configuration directory; at the moment it serves only as documentation.
> There is an early discussion in HDFS-13215.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14516) RBF: Create hdfs-rbf-site.xml for RBF specific properties

2019-05-27 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14516:

Attachment: HDFS-14516.1.patch

> RBF: Create hdfs-rbf-site.xml for RBF specific properties
> -
>
> Key: HDFS-14516
> URL: https://issues.apache.org/jira/browse/HDFS-14516
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-14516.1.patch
>
>
> Currently, users write RBF properties in {{hdfs-site.xml}} even though the 
> definitions are in {{hdfs-rbf-default.xml}}. Like other modules, it would be 
> better to have a module-specific configuration file, {{hdfs-rbf-site.xml}}.
> {{hdfs-rbf-default.xml}} should also be loaded when it exists in the 
> configuration directory; at the moment it serves only as documentation.
> There is an early discussion in HDFS-13215.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14516) RBF: Create hdfs-rbf-site.xml for RBF specific properties

2019-05-27 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDFS-14516:
---

 Summary: RBF: Create hdfs-rbf-site.xml for RBF specific properties
 Key: HDFS-14516
 URL: https://issues.apache.org/jira/browse/HDFS-14516
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: rbf
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma


Currently, users write RBF properties in {{hdfs-site.xml}} even though the 
definitions are in {{hdfs-rbf-default.xml}}. Like other modules, it would be 
better to have a module-specific configuration file, {{hdfs-rbf-site.xml}}.
{{hdfs-rbf-default.xml}} should also be loaded when it exists in the 
configuration directory; at the moment it serves only as documentation.

There is an early discussion in HDFS-13215.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13255) RBF: Fail when try to remove mount point paths

2019-05-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16849300#comment-16849300
 ] 

Hadoop QA commented on HDFS-13255:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
56s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 27m  
4s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-13255 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12969956/HDFS-13255-HDFS-13891-004.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 71bd3e2a0442 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 8f1f042 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26848/testReport/ |
| Max. process+thread count | 916 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26848/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Fail when try to remove mount point paths
> --
>
>

[jira] [Work logged] (HDDS-1600) Add userName and IPAddress as part of OMRequest.

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1600?focusedWorklogId=249087&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-249087
 ]

ASF GitHub Bot logged work on HDDS-1600:


Author: ASF GitHub Bot
Created on: 28/May/19 03:04
Start Date: 28/May/19 03:04
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #857: HDDS-1600. Add 
userName and IPAddress as part of OMRequest.
URL: https://github.com/apache/hadoop/pull/857#issuecomment-496348219
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 37 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 16 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | +1 | mvninstall | 527 | trunk passed |
   | +1 | compile | 266 | trunk passed |
   | +1 | checkstyle | 78 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 932 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | trunk passed |
   | 0 | spotbugs | 325 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 531 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | +1 | mvninstall | 505 | the patch passed |
   | +1 | compile | 286 | the patch passed |
   | +1 | cc | 286 | the patch passed |
   | +1 | javac | 286 | the patch passed |
   | -0 | checkstyle | 42 | hadoop-hdds: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | -0 | checkstyle | 38 | hadoop-ozone: The patch generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 717 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 78 | hadoop-ozone generated 6 new + 5 unchanged - 0 fixed = 
11 total (was 5) |
   | +1 | findbugs | 535 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 167 | hadoop-hdds in the patch failed. |
   | -1 | unit | 51 | hadoop-ozone in the patch failed. |
   | -1 | asflicense | 44 | The patch generated 17 ASF License warnings. |
   | | | 12976 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.exceptions.TestResultCodes |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/857 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle cc |
   | uname | Linux e214af0213dd 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b70d1be |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/3/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/3/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/3/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/3/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/3/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 356 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, 

[jira] [Commented] (HDDS-1534) freon should return non-zero exit code on failure

2019-05-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16849296#comment-16849296
 ] 

Hudson commented on HDDS-1534:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16612 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16612/])
HDDS-1534. freon should return non-zero exit code on failure. (msingh: rev 
72dd79015a00d29015bec30f1bfc7ededab6a2b1)
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java


> freon should return non-zero exit code on failure
> -
>
> Key: HDDS-1534
> URL: https://issues.apache.org/jira/browse/HDDS-1534
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Fix For: 0.4.1
>
> Attachments: HDDS-1534.001.patch, HDDS-1534.002.patch
>
>
> Currently, freon does not return a non-zero exit code even on failure.
> The status shows as "Failed" but the exit code is always zero.
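
The fix amounts to propagating the failed status into the process exit code. 
A sketch of the intent (not the actual RandomKeyGenerator change; 
completedSuccessfully is a hypothetical flag):

{code:java}
// Exit non-zero when the run failed, instead of always returning 0.
int exitCode = completedSuccessfully ? 0 : 1;
System.exit(exitCode);
{code}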



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-283) Need an option to list all volumes created in the cluster

2019-05-27 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16849290#comment-16849290
 ] 

Mukul Kumar Singh commented on HDDS-283:


Thanks for working on this [~nilotpalnandi]. Can you please rebase the patch 
onto the latest trunk?

> Need an option to list all volumes created in the cluster
> -
>
> Key: HDDS-283
> URL: https://issues.apache.org/jira/browse/HDDS-283
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Blocker
> Attachments: HDDS-283.001.patch
>
>
> Currently, the listVolume command gives either:
> 1) all the volumes created by a particular user, using the -user argument, or
> 2) all the volumes created by the logged-in user, if no -user argument 
> is provided.
>  
> We need an option to list all the volumes created in the cluster.
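
On the client side, a cluster-wide listing would map onto something like the 
following sketch; it assumes ObjectStore exposes a prefix-based listVolumes 
iterator alongside listVolumesByUser, so the method usage here is an 
assumption:

{code:java}
// List every volume in the cluster, regardless of owner.
// An empty prefix matches all volume names.
Iterator<? extends OzoneVolume> volumes =
    client.getObjectStore().listVolumes("");
while (volumes.hasNext()) {
  System.out.println(volumes.next().getName());
}
{code}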



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1534) freon should return non-zero exit code on failure

2019-05-27 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-1534:

   Resolution: Fixed
Fix Version/s: 0.4.1
   Status: Resolved  (was: Patch Available)

Thanks for the contribution [~nilotpalnandi]  and thanks to [~sdeka] for the 
reviews. I have committed this to trunk.

> freon should return non-zero exit code on failure
> -
>
> Key: HDDS-1534
> URL: https://issues.apache.org/jira/browse/HDDS-1534
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Fix For: 0.4.1
>
> Attachments: HDDS-1534.001.patch, HDDS-1534.002.patch
>
>
> Currently, freon does not return a non-zero exit code even on failure.
> The status shows as "Failed" but the exit code is always zero.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1600) Add userName and IPAddress as part of OMRequest.

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1600?focusedWorklogId=249079&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-249079
 ]

ASF GitHub Bot logged work on HDDS-1600:


Author: ASF GitHub Bot
Created on: 28/May/19 02:06
Start Date: 28/May/19 02:06
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #857: HDDS-1600. Add 
userName and IPAddress as part of OMRequest.
URL: https://github.com/apache/hadoop/pull/857#issuecomment-496338716
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1613 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 16 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 74 | Maven dependency ordering for branch |
   | +1 | mvninstall | 565 | trunk passed |
   | +1 | compile | 274 | trunk passed |
   | +1 | checkstyle | 73 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 872 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 146 | trunk passed |
   | 0 | spotbugs | 304 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 491 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for patch |
   | +1 | mvninstall | 509 | the patch passed |
   | +1 | compile | 278 | the patch passed |
   | +1 | cc | 278 | the patch passed |
   | +1 | javac | 278 | the patch passed |
   | -0 | checkstyle | 41 | hadoop-hdds: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | -0 | checkstyle | 44 | hadoop-ozone: The patch generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 682 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 73 | hadoop-ozone generated 6 new + 5 unchanged - 0 fixed = 
11 total (was 5) |
   | +1 | findbugs | 514 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 200 | hadoop-hdds in the patch failed. |
   | -1 | unit | 52 | hadoop-ozone in the patch failed. |
   | -1 | asflicense | 38 | The patch generated 17 ASF License warnings. |
   | | | 13655 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.exceptions.TestResultCodes |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/857 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle cc |
   | uname | Linux 072b779556db 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ec92ca6 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/1/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/1/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/1/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 412 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log 

[jira] [Created] (HDDS-1603) Handle Ratis Append Failure in Container State Machine

2019-05-27 Thread Supratim Deka (JIRA)
Supratim Deka created HDDS-1603:
---

 Summary: Handle Ratis Append Failure in Container State Machine
 Key: HDDS-1603
 URL: https://issues.apache.org/jira/browse/HDDS-1603
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Datanode, SCM
Reporter: Supratim Deka


RATIS-573 would add a notification to the State Machine on encountering a 
failure during log append.

The scope of this jira is to build on RATIS-573 and define the handling of log 
append failures in the Container State Machine:
1. Enqueue a pipeline-unhealthy action to SCM, adding a reason code to the 
message.
2. Trigger a heartbeat to SCM.
3. Notify the Datanode that the Ratis volume is unhealthy, so that the DN can 
trigger the async volume checker.

Changes in the SCM to leverage the additional failure reason code are outside 
the scope of this jira.
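
The three handling steps could take a shape like the skeleton below; every 
name in it is hypothetical, since this jira only defines the plan and the 
RATIS-573 callback does not exist yet:

{code:java}
// Purely illustrative outline of the handling enumerated above.
void onLogAppendFailure(Throwable cause) {
  // 1. Queue a pipeline-unhealthy action for SCM, tagged with a reason code.
  context.addPipelineAction(pipelineUnhealthyAction(cause));
  // 2. Trigger an immediate heartbeat so SCM learns of the failure quickly.
  context.triggerHeartbeat();
  // 3. Tell the DN the Ratis volume may be bad; it runs the async checker.
  volumeChecker.checkVolumeAsync(ratisVolume);
}
{code}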




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10210) Remove the defunct startKdc profile from hdfs

2019-05-27 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-10210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16849281#comment-16849281
 ] 

Akira Ajisaka commented on HDFS-10210:
--

Ping [~jojochuang]. If you are busy, I'd like to rebase the patch. Thanks.

> Remove the defunct startKdc profile from hdfs
> -
>
> Key: HDFS-10210
> URL: https://issues.apache.org/jira/browse/HDFS-10210
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-10210.001.patch, HDFS-10210.002.patch
>
>
> This is the corresponding HDFS jira of HADOOP-12948.
> The startKdc profile introduced in HDFS-3016 is broken, and is actually no 
> longer used at all. 
> Let's remove it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13255) RBF: Fail when try to remove mount point paths

2019-05-27 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16849280#comment-16849280
 ] 

Akira Ajisaka commented on HDFS-13255:
--

Thanks [~ayushtkn] for reviewing this. Fixed the checkstyle warning.

> RBF: Fail when try to remove mount point paths
> --
>
> Key: HDFS-13255
> URL: https://issues.apache.org/jira/browse/HDFS-13255
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wu Weiwei
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-13255-HDFS-13891-002.patch, 
> HDFS-13255-HDFS-13891-003.patch, HDFS-13255-HDFS-13891-004.patch, 
> HDFS-13255-HDFS-13891-wip-001.patch
>
>
> When deleting a ns-fed path which includes mount point paths, an error is 
> issued. Each mount point path needs to be deleted independently.
> Operation step:
> {code:java}
> [hadp@root]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source Destinations Owner Group Mode Quota/Usage
> /rm-test-all/rm-test-ns10 ns10->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, 
> SsQuota: -/-]
> /rm-test-all/rm-test-ns2 ns1->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, 
> SsQuota: -/-]
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml
> -rw-r--r-- 3 hadp supergroup 7481 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/hdfs-site.xml
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns2/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 101 2018-03-07 16:57 
> hdfs://ns-fed/rm-test-all/rm-test-ns2/NOTICE.txt
> -rw-r--r-- 3 hadp supergroup 1366 2018-03-07 16:57 
> hdfs://ns-fed/rm-test-all/rm-test-ns2/README.txt
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml
> -rw-r--r-- 3 hadp supergroup 7481 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/hdfs-site.xml
> [hadp@root]$ hdfs dfs -rm -r hdfs://ns-fed/rm-test-all/
> rm: Failed to move to trash: hdfs://ns-fed/rm-test-all. Consider using 
> -skipTrash option
> [hadp@root]$ hdfs dfs -rm -r -skipTrash hdfs://ns-fed/rm-test-all/
> rm: `hdfs://ns-fed/rm-test-all': Input/output error
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13255) RBF: Fail when try to remove mount point paths

2019-05-27 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-13255:
-
Attachment: HDFS-13255-HDFS-13891-004.patch

> RBF: Fail when try to remove mount point paths
> --
>
> Key: HDFS-13255
> URL: https://issues.apache.org/jira/browse/HDFS-13255
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wu Weiwei
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-13255-HDFS-13891-002.patch, 
> HDFS-13255-HDFS-13891-003.patch, HDFS-13255-HDFS-13891-004.patch, 
> HDFS-13255-HDFS-13891-wip-001.patch
>
>
> When deleting a ns-fed path which includes mount point paths, an error is 
> issued. Each mount point path needs to be deleted independently.
> Operation step:
> {code:java}
> [hadp@root]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source Destinations Owner Group Mode Quota/Usage
> /rm-test-all/rm-test-ns10 ns10->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, 
> SsQuota: -/-]
> /rm-test-all/rm-test-ns2 ns1->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, 
> SsQuota: -/-]
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml
> -rw-r--r-- 3 hadp supergroup 7481 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/hdfs-site.xml
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns2/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 101 2018-03-07 16:57 
> hdfs://ns-fed/rm-test-all/rm-test-ns2/NOTICE.txt
> -rw-r--r-- 3 hadp supergroup 1366 2018-03-07 16:57 
> hdfs://ns-fed/rm-test-all/rm-test-ns2/README.txt
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml
> -rw-r--r-- 3 hadp supergroup 7481 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/hdfs-site.xml
> [hadp@root]$ hdfs dfs -rm -r hdfs://ns-fed/rm-test-all/
> rm: Failed to move to trash: hdfs://ns-fed/rm-test-all. Consider using 
> -skipTrash option
> [hadp@root]$ hdfs dfs -rm -r -skipTrash hdfs://ns-fed/rm-test-all/
> rm: `hdfs://ns-fed/rm-test-all': Input/output error
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1539) Implement addAcl,removeAcl,setAcl,getAcl for Volume

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1539?focusedWorklogId=249072&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-249072
 ]

ASF GitHub Bot logged work on HDDS-1539:


Author: ASF GitHub Bot
Created on: 28/May/19 01:18
Start Date: 28/May/19 01:18
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #847: HDDS-1539. 
Implement addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#discussion_r287902348
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/protocolPB/OMPBHelper.java
 ##
 @@ -122,10 +121,9 @@ public static OzoneAcl convertOzoneAcl(OzoneAclInfo 
aclInfo) {
   throw new IllegalArgumentException("ACL type is not recognized");
 }
 
-List aclRights = new ArrayList<>();
-for (OzoneAclRights acl : aclInfo.getRightsList()) {
-  aclRights.add(ACLType.valueOf(acl.name()));
-}
+BitSet aclRights = new BitSet(aclInfo.getRightsList().size());
 
 Review comment:
   We should not use *aclInfo.getRightsList().size()* to set the BitSet size.
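
Concretely, the reviewer's point could be addressed as in this sketch, which 
assumes the OzoneAclRights names map 1:1 onto ACLType, as the surrounding 
conversion code implies:

{code:java}
// Size the BitSet by the full universe of ACL types, not by how many
// rights happen to arrive in this request, then set one bit per right.
BitSet aclRights = new BitSet(ACLType.values().length);
for (OzoneAclRights acl : aclInfo.getRightsList()) {
  aclRights.set(ACLType.valueOf(acl.name()).ordinal());
}
{code}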
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 249072)
Time Spent: 1h 10m  (was: 1h)

> Implement addAcl,removeAcl,setAcl,getAcl for Volume
> ---
>
> Key: HDDS-1539
> URL: https://issues.apache.org/jira/browse/HDDS-1539
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Implement addAcl,removeAcl,setAcl,getAcl for Volume



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=249071&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-249071
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 28/May/19 01:14
Start Date: 28/May/19 01:14
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #850: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-496330778
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 540 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 15 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 68 | Maven dependency ordering for branch |
   | +1 | mvninstall | 522 | trunk passed |
   | +1 | compile | 247 | trunk passed |
   | +1 | checkstyle | 61 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 823 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 183 | trunk passed |
   | 0 | spotbugs | 288 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 478 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 73 | Maven dependency ordering for patch |
   | +1 | mvninstall | 478 | the patch passed |
   | +1 | compile | 276 | the patch passed |
   | +1 | cc | 276 | the patch passed |
   | +1 | javac | 276 | the patch passed |
   | -0 | checkstyle | 46 | hadoop-hdds: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 663 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 69 | hadoop-ozone generated 4 new + 5 unchanged - 0 fixed = 
9 total (was 5) |
   | +1 | findbugs | 496 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 143 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1009 | hadoop-ozone in the patch failed. |
   | -1 | asflicense | 61 | The patch generated 17 ASF License warnings. |
   | | | 6590 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.impl.TestContainerPersistence |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.web.client.TestKeys |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/850 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle cc |
   | uname | Linux 9c7a1764a1e1 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ec92ca6 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/4/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/4/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/4/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/4/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/4/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 5319 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact 

[jira] [Work logged] (HDDS-1599) Fix TestReplicationManager and checkstyle issues.

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1599?focusedWorklogId=249070&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-249070
 ]

ASF GitHub Bot logged work on HDDS-1599:


Author: ASF GitHub Bot
Created on: 28/May/19 01:14
Start Date: 28/May/19 01:14
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #856: HDDS-1599. Fix 
TestReplicationManager.
URL: https://github.com/apache/hadoop/pull/856#issuecomment-496330738
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 528 | trunk passed |
   | +1 | compile | 256 | trunk passed |
   | +1 | checkstyle | 69 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 819 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 147 | trunk passed |
   | 0 | spotbugs | 286 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 467 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 489 | the patch passed |
   | +1 | compile | 271 | the patch passed |
   | +1 | javac | 271 | the patch passed |
   | +1 | checkstyle | 83 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 654 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 147 | the patch passed |
   | +1 | findbugs | 559 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 148 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1695 | hadoop-ozone in the patch failed. |
   | -1 | asflicense | 46 | The patch generated 17 ASF License warnings. |
   | | | 6621 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.container.common.impl.TestContainerPersistence |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-856/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/856 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3417dee5d6cb 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ec92ca6 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-856/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-856/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-856/2/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-856/2/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 4874 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-856/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 249070)
Time Spent: 1h 10m  (was: 1h)

> Fix TestReplicationManager and checkstyle issues.
> -
>
> Key: HDDS-1599
> URL: https://issues.apache.org/jira/browse/HDDS-1599
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat 

[jira] [Work logged] (HDDS-1539) Implement addAcl,removeAcl,setAcl,getAcl for Volume

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1539?focusedWorklogId=249069&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-249069
 ]

ASF GitHub Bot logged work on HDDS-1539:


Author: ASF GitHub Bot
Created on: 28/May/19 01:11
Start Date: 28/May/19 01:11
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #847: HDDS-1539. 
Implement addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#discussion_r287901527
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -56,8 +58,8 @@ public OzoneAcl() {
*/
   public OzoneAcl(ACLIdentityType type, String name, ACLType acl) {
 this.name = name;
-this.rights = new ArrayList<>();
-this.rights.add(acl);
+this.aclBitSet = new BitSet(ACLType.values().length);
 
 Review comment:
   Use ACLType#getNoOfAcls()?
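
In code, the suggestion reads as follows, assuming ACLType#getNoOfAcls() 
returns the number of ACL types (its use elsewhere in this patch suggests it 
is equivalent to ACLType.values().length):

{code:java}
this.aclBitSet = new BitSet(ACLType.getNoOfAcls());
{code}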
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 249069)
Time Spent: 1h  (was: 50m)

> Implement addAcl,removeAcl,setAcl,getAcl for Volume
> ---
>
> Key: HDDS-1539
> URL: https://issues.apache.org/jira/browse/HDDS-1539
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Implement addAcl,removeAcl,setAcl,getAcl for Volume



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1559) Include committedBytes to determine Out of Space in VolumeChoosingPolicy

2019-05-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16849277#comment-16849277
 ] 

Hudson commented on HDDS-1559:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16611 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16611/])
HDDS-1559. Fix TestReplicationManager. Contributed by Bharat (xyao: rev 
b70d1be685c5f9d08ab39f9ea73fc0561e037c74)
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestReplicationManager.java


> Include committedBytes to determine Out of Space in VolumeChoosingPolicy
> 
>
> Key: HDDS-1559
> URL: https://issues.apache.org/jira/browse/HDDS-1559
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> This is a follow-up from HDDS-1511 and HDDS-1535
> Currently, when creating a new Container, the DN invokes 
> RoundRobinVolumeChoosingPolicy:chooseVolume(). This routine checks for 
> (volume available space > container max size). If no eligible volume is 
> found, the policy throws a DiskOutOfSpaceException. This is the current 
> behaviour.
> However, the computation of available space does not take into consideration 
> the space that is going to be consumed by writes to existing containers which 
> are still Open and accepting chunk writes.
> This Jira proposes to enhance the space availability check in chooseVolume by 
> including committed space (committedBytes in HddsVolume) in the equation.
> The handling/management of the exception in Ratis will not be modified in 
> this Jira. That will be scoped separately as part of Datanode IO Failure 
> handling work.
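
A minimal sketch of the proposed check; the names follow the description, and 
getCommittedBytes() on HddsVolume is assumed from the committedBytes field 
mentioned above:

{code:java}
// Count space already promised to open containers as used, so a volume
// is only chosen if the remaining space can still hold a full container.
long free = volume.getAvailable() - volume.getCommittedBytes();
if (free > containerMaxSize) {
  return volume;
}
{code}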



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1539) Implement addAcl,removeAcl,setAcl,getAcl for Volume

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1539?focusedWorklogId=249068&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-249068
 ]

ASF GitHub Bot logged work on HDDS-1539:


Author: ASF GitHub Bot
Created on: 28/May/19 01:07
Start Date: 28/May/19 01:07
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #847: HDDS-1539. 
Implement addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#discussion_r287901207
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -75,9 +77,20 @@ public OzoneAcl(ACLIdentityType type, String name, ACLType 
acl) {
* @param name - Name of user
* @param acls - Rights
*/
-  public OzoneAcl(ACLIdentityType type, String name, List acls) {
+  public OzoneAcl(ACLIdentityType type, String name, BitSet acls) {
+Objects.requireNonNull(type);
+Objects.requireNonNull(acls);
+
+if(acls.cardinality() > ACLType.getNoOfAcls()) {
+  throw new IllegalArgumentException("Acl bitset passed has unexpected " +
+  "size. bitset size:" + acls.cardinality() + ", bitset:"
+  + acls.toString());
+}
+
+this.aclBitSet = new BitSet();
 
 Review comment:
   this can be simplified as this.aclBitSet = acls.clone();
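
One nit on the suggestion as written: BitSet.clone() returns Object, so the 
assignment needs a cast to compile:

{code:java}
this.aclBitSet = (BitSet) acls.clone();
{code}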
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 249068)
Time Spent: 50m  (was: 40m)

> Implement addAcl,removeAcl,setAcl,getAcl for Volume
> ---
>
> Key: HDDS-1539
> URL: https://issues.apache.org/jira/browse/HDDS-1539
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Implement addAcl,removeAcl,setAcl,getAcl for Volume



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1539) Implement addAcl,removeAcl,setAcl,getAcl for Volume

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1539?focusedWorklogId=249067&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-249067
 ]

ASF GitHub Bot logged work on HDDS-1539:


Author: ASF GitHub Bot
Created on: 28/May/19 00:59
Start Date: 28/May/19 00:59
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #847: HDDS-1539. 
Implement addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#discussion_r287900521
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/ObjectStore.java
 ##
 @@ -444,4 +446,50 @@ public String getCanonicalServiceName() {
 return proxy.getCanonicalServiceName();
   }
 
+  /**
+   * Add acl for Ozone object. Return true if acl is added successfully else
+   * false.
+   * @param obj Ozone object for which acl should be added.
+   * @param acl ozone acl to be added.
+   *
 
 Review comment:
   Can you add javadoc for the @return values? Same for the other ACL APIs.
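
For instance, the quoted addAcl javadoc above would gain a line like this 
(a sketch, not the committed wording):

{code:java}
/**
 * Add acl for Ozone object. Return true if acl is added successfully,
 * else false.
 * @param obj Ozone object for which acl should be added.
 * @param acl ozone acl to be added.
 * @return true if the acl was added successfully, false otherwise.
 */
{code}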
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 249067)
Time Spent: 40m  (was: 0.5h)

> Implement addAcl,removeAcl,setAcl,getAcl for Volume
> ---
>
> Key: HDDS-1539
> URL: https://issues.apache.org/jira/browse/HDDS-1539
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Implement addAcl,removeAcl,setAcl,getAcl for Volume



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1565) Rename k8s-dev and k8s-dev-push profiles to docker-build and docker-push

2019-05-27 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16849272#comment-16849272
 ] 

Eric Yang edited comment on HDDS-1565 at 5/28/19 12:55 AM:
---

{quote}No circular dependency is introduced{quote}

I don't think this is a good enough reason.  There is a circular dependency 
introduced in the current dist project and docker-compose mount.  Users cannot 
run the blockade tests until the release tarball is made and they change 
directory to OZONE_HOME to run them.  Having to wait for tarball creation to 
test blockade is a very tiresome process.  The same problem applies to docker 
development, which needs to wait for tarball creation and cannot work 
independently.

{quote}No significant IO usage is added (copy 500MB tar files multiple times is 
a significant IO usage){quote}

In my experience, downloading over 3 GB of hadoop-runner and related docker 
images is much more costly to a developer than local disk IO.


was (Author: eyang):
{quote}No circular dependency is introduced{quote}

I don't think this is a good enough reason.  There is a circular dependency 
introduced in the current dist mount.  Users cannot run the blockade tests 
until the release tarball is made and they change directory to OZONE_HOME to 
run them.  Having to wait for tarball creation to test blockade is a very 
tiresome process.  The same problem applies to docker development, which needs 
to wait for tarball creation and cannot work independently.

{quote}No significant IO usage is added (copy 500MB tar files multiple times is 
a significant IO usage){quote}

In my experience, downloading over 3 GB of hadoop-runner and related docker 
images is much more costly to a developer than local disk IO.

> Rename k8s-dev and k8s-dev-push profiles to docker-build and docker-push
> 
>
> Key: HDDS-1565
> URL: https://issues.apache.org/jira/browse/HDDS-1565
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Based on the feedback from [~eyang] I realized that the names of the k8s-dev 
> and k8s-dev-push profiles are not expressive enough, as the created containers 
> can be used not only with Kubernetes but with any other container 
> orchestrator.
> I propose to rename them to docker-build/docker-push.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1599) Fix TestReplicationManager and checkstyle issues.

2019-05-27 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1599:
-
   Resolution: Fixed
Fix Version/s: 0.4.1
   Status: Resolved  (was: Patch Available)

> Fix TestReplicationManager and checkstyle issues.
> -
>
> Key: HDDS-1599
> URL: https://issues.apache.org/jira/browse/HDDS-1599
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> While working on HDDS-1551, I found some test failures that are not related 
> to HDDS-1551.
> This is caused by HDDS-700. 
>  
> This was not caught by the Jenkins run because our Jenkins run does not run 
> UTs for all the sub-modules. In this case, it should have run the UTs for 
> hadoop-hdds-server-scm, as there are some changes in src/test files in that 
> module, but it still did not run them. I think the Jenkins run for the ozone 
> project is not properly set up.
> [https://ci.anzix.net/job/ozone/16895/testReport/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1565) Rename k8s-dev and k8s-dev-push profiles to docker-build and docker-push

2019-05-27 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16849272#comment-16849272
 ] 

Eric Yang commented on HDDS-1565:
-

{quote}No circular dependency is introduced{quote}

I don't think this is a good enough reason.  There is a circular dependency 
introduced in the current dist mount.  Users cannot run the blockade tests 
until the release tarball is made and they change directory to OZONE_HOME to 
run them.  Having to wait for tarball creation to test blockade is a very 
tiresome process.  The same problem applies to docker development, which needs 
to wait for tarball creation and cannot work independently.

{quote}No significant IO usage is added (copy 500MB tar files multiple times is 
a significant IO usage){quote}

In my experience, downloading over 3 GB of hadoop-runner and related docker 
images is much more costly to a developer than local disk IO.

> Rename k8s-dev and k8s-dev-push profiles to docker-build and docker-push
> 
>
> Key: HDDS-1565
> URL: https://issues.apache.org/jira/browse/HDDS-1565
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Based on the feedback from [~eyang] I realized that the names of the k8s-dev 
> and k8s-dev-push profiles are not expressive enough, as the created containers 
> can be used not only with Kubernetes but with any other container 
> orchestrator.
> I propose to rename them to docker-build/docker-push.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1599) Fix TestReplicationManager and checkstyle issues.

2019-05-27 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16849271#comment-16849271
 ] 

Xiaoyu Yao commented on HDDS-1599:
--

Good catch, [~bharatviswa]. I will suggest that the team use GitHub PRs for 
future patches. +1, I've committed the patch to trunk. 

> Fix TestReplicationManager and checkstyle issues.
> -
>
> Key: HDDS-1599
> URL: https://issues.apache.org/jira/browse/HDDS-1599
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> While working on HDDS-1551, I found some test failures that are not related 
> to HDDS-1551.
> This is caused by HDDS-700. 
>  
> This was not caught by the Jenkins run because our Jenkins run does not run 
> UTs for all the sub-modules. In this case, it should have run the UTs for 
> hadoop-hdds-server-scm, as there are some changes in src/test files in that 
> module, but it still did not run them. I think the Jenkins run for the ozone 
> project is not properly set up.
> [https://ci.anzix.net/job/ozone/16895/testReport/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1599) Fix TestReplicationManager and checkstyle issues.

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1599?focusedWorklogId=249066=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-249066
 ]

ASF GitHub Bot logged work on HDDS-1599:


Author: ASF GitHub Bot
Created on: 28/May/19 00:52
Start Date: 28/May/19 00:52
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #856: HDDS-1599. 
Fix TestReplicationManager.
URL: https://github.com/apache/hadoop/pull/856
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 249066)
Time Spent: 1h  (was: 50m)

> Fix TestReplicationManager and checkstyle issues.
> -
>
> Key: HDDS-1599
> URL: https://issues.apache.org/jira/browse/HDDS-1599
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> While working on HDDS-1551, I found some test failures that are not related 
> to HDDS-1551.
> This is caused by HDDS-700. 
>  
> This was not caught by the Jenkins run because our Jenkins run does not run 
> UTs for all the sub-modules. In this case, it should have run the UTs for 
> hadoop-hdds-server-scm, as there are some changes in src/test files in that 
> module, but it still did not run them. I think the Jenkins run for the ozone 
> project is not properly set up.
> [https://ci.anzix.net/job/ozone/16895/testReport/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1599) Fix TestReplicationManager and checkstyle issues.

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1599?focusedWorklogId=249065=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-249065
 ]

ASF GitHub Bot logged work on HDDS-1599:


Author: ASF GitHub Bot
Created on: 28/May/19 00:50
Start Date: 28/May/19 00:50
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #856: HDDS-1599. Fix 
TestReplicationManager.
URL: https://github.com/apache/hadoop/pull/856#issuecomment-496328020
 
 
   +1, thanks for fixing this, @bharatviswa504.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 249065)
Time Spent: 50m  (was: 40m)

> Fix TestReplicationManager and checkstyle issues.
> -
>
> Key: HDDS-1599
> URL: https://issues.apache.org/jira/browse/HDDS-1599
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> While working on HDDS-1551, I found some test failures that are not related 
> to HDDS-1551.
> This is caused by HDDS-700. 
>  
> This was not caught by the Jenkins run because our Jenkins run does not run 
> UTs for all the sub-modules. In this case, it should have run the UTs for 
> hadoop-hdds-server-scm, as there are some changes in src/test files in that 
> module, but it still did not run them. I think the Jenkins run for the ozone 
> project is not properly set up.
> [https://ci.anzix.net/job/ozone/16895/testReport/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1554) Create disk tests for fault injection test

2019-05-27 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16849266#comment-16849266
 ] 

Eric Yang commented on HDDS-1554:
-

1 {quote}The new tests are missing from the distribution tar file 
(hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/tests/). We agreed to support 
the execution of all the new tests from the final tar.{quote}

Yes, I remember that conversation, and I am not discounting that agreement.  
To achieve what we agreed on, the code needs to be rewritten in Python and 
moved so that it is built before the distribution project.  What we will lose 
as part of that process:
* The ability to accurately pinpoint where an exception occurs, because the 
Java stack trace may not be captured by the Python tests.
* Alignment with the Maven lifecycle: integration tests are supposed to come 
after packaging, and we would be shipping more test binaries in the release 
tarball that are irrelevant in production.
* Time wasted packaging integration-test binaries into the release tarball.

2 {quote} I am not sure why we need the normal read/write test. All of the 
smoketests and integration-tests are testing this scenario{quote}

The only difference between this version and the smoke test is that the client 
is not running in the same network as the docker containers.  This has 
actually helped us catch a few bugs, like the SCMCLI client retries and a 
protobuf versioning problem.  It also helps us test when the client JDK 
differs from the cluster JDK, and it provides a better testbed to show what 
data injection into a container cluster looks like from external clients.

3 {quote}With the Read/Only test: I don't think that we need to support 
read-only disks. The only question is if the right exception is thrown. I think 
it also can be tested from MiniOzoneCluster / real unit tests in a more 
lightweight way.{quote}

The read-only test simulates a misconfigured data directory, or a disk that is 
incorrectly mounted read-only, by preventing disk writes.  It injects faults 
into the normal workflow by changing a few docker parameters, and it is easy 
to clean up without leaving read-only debris in the build directory.  This 
area needs more expansion: we can add test cases that make the metadata disk 
or the datanode disk read-only, then measure whether the strained process has 
negative side effects on the cluster and check that replication proceeds 
correctly.  (A sketch of the lightweight in-process variant follows below for 
comparison.)
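
As a point of comparison only, here is a hedged JUnit sketch of the 
lightweight in-process variant mentioned above (the directory layout and 
expected exception type are assumptions, and setWritable() is a no-op when 
running as root):

{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import org.junit.Assert;
import org.junit.Test;

public class ReadOnlyDirSketch {
  @Test
  public void writeToReadOnlyDirIsRejected() throws Exception {
    // Hypothetical layout: a scratch directory standing in for a data volume.
    File dataDir = new File(System.getProperty("java.io.tmpdir"), "ro-data");
    Assert.assertTrue(dataDir.mkdirs() || dataDir.isDirectory());
    Assert.assertTrue("cannot revoke write permission (running as root?)",
        dataDir.setWritable(false));
    try (FileOutputStream out =
             new FileOutputStream(new File(dataDir, "chunk"))) {
      Assert.fail("write into a read-only directory should be rejected");
    } catch (IOException expected) {
      // This is the error-handling path the fault injection exercises.
    } finally {
      dataDir.setWritable(true); // leave no read-only debris behind
    }
  }
}
{code}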

4 {quote}Anu Engineer suggested multiple times to do the disk failure injection 
on the java code level where more sophisticated tests can be added (eg. 
generate corrupt read with low probability with using specific 
Input/OutputStream). Can you please explain the design consideration to use 
docker images? Why is it better than the suggested solution?{quote}

We have already done that with AspectJ in HDFS-435.  The work was not fruitful 
and was [proposed for 
removal|https://issues.apache.org/jira/browse/HDFS-6819?focusedCommentId=15235595=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-15235595].
  The key point of fault injection is to catch exceptions that may not be 
handled correctly.  By randomly adding junk to a data file or changing files 
to read-only, the tests exercise the normal routines and generate exceptions 
that may not have been tested as fully.  By using Docker-mounted volumes, we 
can generate the faults outside of the normal Java code path, which provides a 
better opportunity to create errors asynchronously.

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1554.001.patch
>
>
> The current plan for fault injection disk tests are:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1602) Fix TestContainerPersistence#testDeleteBlockTwice

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1602?focusedWorklogId=249062=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-249062
 ]

ASF GitHub Bot logged work on HDDS-1602:


Author: ASF GitHub Bot
Created on: 28/May/19 00:27
Start Date: 28/May/19 00:27
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #858: HDDS-1602. Fix 
TestContainerPersistence#testDeleteBlockTwice.
URL: https://github.com/apache/hadoop/pull/858#issuecomment-496325768
 
 
   Change LGTM, +1. We have similar logic in 
o.a.ha.o.container.keyvalue.impl.BlockManagerImpl#delete; it is a surprise that 
we handle it differently here. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 249062)
Time Spent: 20m  (was: 10m)

> Fix TestContainerPersistence#testDeleteBlockTwice
> -
>
> Key: HDDS-1602
> URL: https://issues.apache.org/jira/browse/HDDS-1602
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> [https://ci.anzix.net/job/ozone/16899/testReport/org.apache.hadoop.ozone.container.common.impl/TestContainerPersistence/testDeleteBlockTwice/]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-27 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16849256#comment-16849256
 ] 

Eric Yang commented on HDDS-1458:
-

[~elek] 1. The purpose of this change is to support the frequent use case 
where the script is moved to a location other than the relative structure of 
OZONE_HOME, for example a source-code location that differs from the binary 
package.  The patched code is written to support the most frequent scenarios 
rather than assuming the script is always fixed at a location relative to the 
docker compose file in the binary release tarball.  The Ozone 0.4.0 
documentation describes the proper way to run the script, and the code did not 
match it; the code has therefore been corrected to match the documentation.  I 
do not agree that removing os.getcwd() is the right solution.  You are 
insisting on undocumented behavior:

{code}
cd $OZONE_HOME/blockade
python -m pytest -s .
{code}

However, this is not what is documented, even though the README is located in 
the blockade directory.  The documented procedure starts from OZONE_HOME, the 
top level of the Ozone tarball.  Therefore, the code more accurately uses 
getcwd() as OZONE_HOME to locate the compose file.  If the blockade tests are 
ever moved again within the tarball structure, getcwd() as OZONE_HOME still 
references the compose file location accurately.  The script-path approach is 
actually less optimal, because any later decision to change the Python 
script's location will require code changes to discover the new relative 
location of the compose file.  It is also very common for package maintainers 
to move scripts into /usr/bin and use an environment variable to locate the 
rest of the binaries.  With the current code, a script location change is less 
work to maintain, IMHO.

2. {quote}You don't need to set both as setting MAVEN_TEST is enough. I would 
suggest to remove one of the environment variables.{quote}

I know that.  It is just for the convenience of new developers that both are 
supported, with MAVEN_TEST taking precedence over OZONE_HOME.

3. {quote}BTW, I think it would better to use one environment variable: 
FAULT_INJECTION_COMPOSE_DIR. Using OZONE_HOME or MAVEN_TEST we don't have the 
context how is it used (in fact just to locate the compose files). Using more 
meaningful env variable name can help to understand what is the goal of the 
specific environment variables.{quote}

We don't want to add an extra environment variable for a single purpose.  
When one variable can solve multiple problems, it is worth the time to define 
it; for example, it is worth defining JAVA_HOME or OZONE_HOME to locate a 
program.  Since FAULT_INJECTION_COMPOSE_DIR would point to the same compose 
files used in the release tarball, defining this unique name is a waste of 
time and labor.

4. {quote}The content of the docker-compose files in 
hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/tests/compose are changed. As we 
agreed multiple times, we should keep the current content of the docker-compose 
files. Please put back the volumes to the docker-compose files and please use 
apache/hadoop-runner image.{quote}

The removal of the ozoneblockade compose file is based on point 1 of [your 
comment|https://issues.apache.org/jira/browse/HDDS-1458?focusedCommentId=16845291=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16845291].
  I also confirmed that the tests can run with the global docker compose file 
in the release tarball.  I do not understand why you insist on using a 
development-style docker image with the release tarball.  The current patch 
made no change to the global docker compose file, so I am confused by your 
statement about putting volumes back in the docker-compose files or using the 
apache/hadoop-runner image.

I feel the conversation has not been fruitful, with you insisting on doing 
everything the old way, which has been identified as a non-scalable approach.  
Chasing each other in circles over these lengthy conversations has not been 
productive.  Please help with a breakthrough in the conversation rather than 
insisting on going back to the broken model, when I did nothing to break it 
and followed your words accurately.

5. {quote}hadoop-ozone/dist depends on the network-tests as it copies the 
files from the network-tests. This is not a hard dependency as of now, as we 
copy the files directly from the src folder (a build is not required), but I 
think it's clearer to add a provided dependency to hadoop-ozone/dist to show 
its dependency. (As I remember you also suggested to use maven instead of a 
direct copy from dist-layout stitching){quote}

That part of the conversation was dissected out to HDDS-1495, which moves the 
docker build and maven assembly changes.  It seems that the assembly and 
dependency issue will not be addressed unless the code are 

[jira] [Updated] (HDDS-1602) Fix TestContainerPersistence#testDeleteBlockTwice

2019-05-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1602:
-
Target Version/s: 0.5.0, 0.4.1

> Fix TestContainerPersistence#testDeleteBlockTwice
> -
>
> Key: HDDS-1602
> URL: https://issues.apache.org/jira/browse/HDDS-1602
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [https://ci.anzix.net/job/ozone/16899/testReport/org.apache.hadoop.ozone.container.common.impl/TestContainerPersistence/testDeleteBlockTwice/]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1602) Fix TestContainerPersistence#testDeleteBlockTwice

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1602:
-
Labels: pull-request-available  (was: )

> Fix TestContainerPersistence#testDeleteBlockTwice
> -
>
> Key: HDDS-1602
> URL: https://issues.apache.org/jira/browse/HDDS-1602
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> [https://ci.anzix.net/job/ozone/16899/testReport/org.apache.hadoop.ozone.container.common.impl/TestContainerPersistence/testDeleteBlockTwice/]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1602) Fix TestContainerPersistence#testDeleteBlockTwice

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1602?focusedWorklogId=249056=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-249056
 ]

ASF GitHub Bot logged work on HDDS-1602:


Author: ASF GitHub Bot
Created on: 27/May/19 23:04
Start Date: 27/May/19 23:04
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #858: 
HDDS-1602. Fix TestContainerPersistence#testDeleteBlockTwice.
URL: https://github.com/apache/hadoop/pull/858
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 249056)
Time Spent: 10m
Remaining Estimate: 0h

> Fix TestContainerPersistence#testDeleteBlockTwice
> -
>
> Key: HDDS-1602
> URL: https://issues.apache.org/jira/browse/HDDS-1602
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [https://ci.anzix.net/job/ozone/16899/testReport/org.apache.hadoop.ozone.container.common.impl/TestContainerPersistence/testDeleteBlockTwice/]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1602) Fix TestContainerPersistence#testDeleteBlockTwice

2019-05-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1602:
-
Status: Patch Available  (was: Open)

> Fix TestContainerPersistence#testDeleteBlockTwice
> -
>
> Key: HDDS-1602
> URL: https://issues.apache.org/jira/browse/HDDS-1602
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> [https://ci.anzix.net/job/ozone/16899/testReport/org.apache.hadoop.ozone.container.common.impl/TestContainerPersistence/testDeleteBlockTwice/]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1602) Fix TestContainerPersistence#testDeleteBlockTwice

2019-05-27 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1602:


 Summary: Fix TestContainerPersistence#testDeleteBlockTwice
 Key: HDDS-1602
 URL: https://issues.apache.org/jira/browse/HDDS-1602
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


[https://ci.anzix.net/job/ozone/16899/testReport/org.apache.hadoop.ozone.container.common.impl/TestContainerPersistence/testDeleteBlockTwice/]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1598) Fix Ozone checkstyle issues on trunk

2019-05-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16849241#comment-16849241
 ] 

Hudson commented on HDDS-1598:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16610 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16610/])
HDDS-1598. Fix Ozone checkstyle issues on trunk. Contributed by Elek, (bharat: 
rev ec92ca6575e0074ed4983fa8b34324bdbeb23499)
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementRackAware.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMContainerPlacementRackAware.java


> Fix Ozone checkstyle issues on trunk
> 
>
> Key: HDDS-1598
> URL: https://issues.apache.org/jira/browse/HDDS-1598
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Some small checkstyle issues were accidentally committed with HDDS-700.
> Trivial fixes are coming here...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1599) Fix TestReplicationManager and checkstyle issues.

2019-05-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1599:
-
Target Version/s: 0.5.0, 0.4.1  (was: 0.4.1)

> Fix TestReplicationManager and checkstyle issues.
> -
>
> Key: HDDS-1599
> URL: https://issues.apache.org/jira/browse/HDDS-1599
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> While working on HDDS-1551, I found some test failures that are not related 
> to HDDS-1551.
> This is caused by HDDS-700. 
>  
> This was not caught by the Jenkins run because our Jenkins run does not run 
> UTs for all the sub-modules. In this case, it should have run the UTs for 
> hadoop-hdds-server-scm, as there are some changes in src/test files in that 
> module, but it still did not run them. I think the Jenkins run for the ozone 
> project is not properly set up.
> [https://ci.anzix.net/job/ozone/16895/testReport/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1231) Add ChillMode metrics

2019-05-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1231:
-
Target Version/s: 0.5.0, 0.4.1  (was: 0.4.1)

> Add ChillMode metrics
> -
>
> Key: HDDS-1231
> URL: https://issues.apache.org/jira/browse/HDDS-1231
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This Jira is to add a few of the chill mode metrics (sketched below):
>  # NumberofHealthyPipelinesThreshold
>  # currentHealthyPipelinesCount
>  # NumberofPipelinesWithAtleastOneReplicaThreshold
>  # CurrentPipelinesWithAtleastOneReplicaCount
>  # ChillModeContainerWithOneReplicaReportedCutoff
>  # CurrentContainerCutoff
>  
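
A hedged sketch of how such gauges are commonly exposed through the Hadoop 
metrics2 API (the class name, registration, and setter are assumptions that 
only mirror the metric names listed above, not the actual patch):

{code}
import org.apache.hadoop.metrics2.MetricsSystem;
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableGaugeLong;

// Hypothetical holder class; field names mirror the list in this Jira.
@Metrics(about = "SCM chill mode metrics", context = "ozone")
public final class SCMChillModeMetrics {
  @Metric private MutableGaugeLong numHealthyPipelinesThreshold;
  @Metric private MutableGaugeLong currentHealthyPipelinesCount;
  @Metric private MutableGaugeLong numPipelinesWithAtleastOneReplicaThreshold;
  @Metric private MutableGaugeLong currentPipelinesWithAtleastOneReplicaCount;
  @Metric private MutableGaugeLong containerWithOneReplicaReportedCutoff;
  @Metric private MutableGaugeLong currentContainerCutoff;

  // Register the source so the @Metric fields are instantiated and published.
  public static SCMChillModeMetrics create() {
    MetricsSystem ms = DefaultMetricsSystem.instance();
    return ms.register("SCMChillModeMetrics",
        "Metrics tracking SCM chill mode exit criteria",
        new SCMChillModeMetrics());
  }

  public void setCurrentHealthyPipelinesCount(long count) {
    currentHealthyPipelinesCount.set(count);
  }
}
{code}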



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1600) Add userName and IPAddress as part of OMRequest.

2019-05-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1600:
-
Target Version/s: 0.5.0

> Add userName and IPAddress as part of OMRequest.
> 
>
> Key: HDDS-1600
> URL: https://issues.apache.org/jira/browse/HDDS-1600
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In OM HA, the actual execution of a request happens under the GRPC context, 
> so the UGI object we retrieve from ProtobufRpcEngine.Server.getRemoteUser() 
> will not be available.
> The same applies to ProtobufRpcEngine.Server.getRemoteIp().
>  
> So, during preExecute (which happens under the RPC context), extract the 
> userName and IPAddress, add them to the OMRequest, and then send the request 
> to the ratis server.
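
A hedged sketch of the flow described above (UserInfo and its builder methods 
are assumed stand-ins for whatever fields the OMRequest proto gains; only the 
two Server calls are taken from the description):

{code}
// Illustrative only: capture caller identity while still on the RPC handler
// thread, before the request is handed off to the Ratis server.
// Assumed imports: UserGroupInformation, ProtobufRpcEngine, InetAddress.
private OMRequest preExecute(OMRequest omRequest) {
  UserGroupInformation ugi = ProtobufRpcEngine.Server.getRemoteUser();
  InetAddress remoteIp = ProtobufRpcEngine.Server.getRemoteIp();
  return omRequest.toBuilder()
      .setUserInfo(UserInfo.newBuilder()        // hypothetical proto message
          .setUserName(ugi.getUserName())
          .setRemoteAddress(remoteIp.getHostAddress()))
      .build();
}
{code}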



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1601) Implement updating lastAppliedIndex after buffer flush to OM DB.

2019-05-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1601 started by Bharat Viswanadham.

> Implement updating lastAppliedIndex after buffer flush to OM DB.
> 
>
> Key: HDDS-1601
> URL: https://issues.apache.org/jira/browse/HDDS-1601
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is to implement updating lastAppliedIndex in 
> OzoneManagerStateMachine once the buffer is flushed to the OM DB. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1598) Fix Ozone checkstyle issues on trunk

2019-05-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1598:
-
   Resolution: Fixed
Fix Version/s: 0.4.1
   Status: Resolved  (was: Patch Available)

> Fix Ozone checkstyle issues on trunk
> 
>
> Key: HDDS-1598
> URL: https://issues.apache.org/jira/browse/HDDS-1598
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Some small checkstyle issues were accidentally committed with HDDS-700.
> Trivial fixes are coming here...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1598) Fix Ozone checkstyle issues on trunk

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1598?focusedWorklogId=249045=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-249045
 ]

ASF GitHub Bot logged work on HDDS-1598:


Author: ASF GitHub Bot
Created on: 27/May/19 21:40
Start Date: 27/May/19 21:40
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #854: 
HDDS-1598. Fix Ozone checkstyle issues on trunk
URL: https://github.com/apache/hadoop/pull/854
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 249045)
Time Spent: 40m  (was: 0.5h)

> Fix Ozone checkstyle issues on trunk
> 
>
> Key: HDDS-1598
> URL: https://issues.apache.org/jira/browse/HDDS-1598
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Some small checkstyle issues were accidentally committed with HDDS-700.
> Trivial fixes are coming here...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1598) Fix Ozone checkstyle issues on trunk

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1598?focusedWorklogId=249044=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-249044
 ]

ASF GitHub Bot logged work on HDDS-1598:


Author: ASF GitHub Bot
Created on: 27/May/19 21:40
Start Date: 27/May/19 21:40
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #854: HDDS-1598. Fix 
Ozone checkstyle issues on trunk
URL: https://github.com/apache/hadoop/pull/854#issuecomment-496310005
 
 
   The test failures are not related to this patch.
   Thank you @elek for the fix; I will commit this.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 249044)
Time Spent: 0.5h  (was: 20m)

> Fix Ozone checkstyle issues on trunk
> 
>
> Key: HDDS-1598
> URL: https://issues.apache.org/jira/browse/HDDS-1598
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Some small checkstyle issues were accidentally committed with HDDS-700.
> Trivial fixes are coming here...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1601) Implement updating lastAppliedIndex after buffer flush to OM DB.

2019-05-27 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1601:


 Summary: Implement updating lastAppliedIndex after buffer flush to 
OM DB.
 Key: HDDS-1601
 URL: https://issues.apache.org/jira/browse/HDDS-1601
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


This Jira is to implement updating lastAppliedIndex in OzoneManagerStateMachine 
once the buffer is flushed to the OM DB. 
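
A hedged sketch of the intent (the callback and method names are assumptions, 
not the actual implementation):

{code}
// Illustrative only: once the buffer's flush to the OM DB completes, report
// the highest flushed transaction index back to the state machine so that
// lastAppliedIndex only ever reflects persisted transactions.
private void onFlushCompleted(long lastFlushedTransactionIndex) {
  ozoneManagerStateMachine.updateLastAppliedIndex(lastFlushedTransactionIndex);
}
{code}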



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1601) Implement updating lastAppliedIndex after buffer flush to OM DB.

2019-05-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1601:
-
Issue Type: Sub-task  (was: Task)
Parent: HDDS-505

> Implement updating lastAppliedIndex after buffer flush to OM DB.
> 
>
> Key: HDDS-1601
> URL: https://issues.apache.org/jira/browse/HDDS-1601
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is to implement updating lastAppliedIndex in 
> OzoneManagerStateMachine once the buffer is flushed to the OM DB. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1600) Add userName and IPAddress as part of OMRequest.

2019-05-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1600:
-
Status: Patch Available  (was: In Progress)

> Add userName and IPAddress as part of OMRequest.
> 
>
> Key: HDDS-1600
> URL: https://issues.apache.org/jira/browse/HDDS-1600
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In OM HA, the actual execution of a request happens under the GRPC context, 
> so the UGI object we retrieve from ProtobufRpcEngine.Server.getRemoteUser() 
> will not be available.
> The same applies to ProtobufRpcEngine.Server.getRemoteIp().
>  
> So, during preExecute (which happens under the RPC context), extract the 
> userName and IPAddress, add them to the OMRequest, and then send the request 
> to the ratis server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1600) Add userName and IPAddress as part of OMRequest.

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1600:
-
Labels: pull-request-available  (was: )

> Add userName and IPAddress as part of OMRequest.
> 
>
> Key: HDDS-1600
> URL: https://issues.apache.org/jira/browse/HDDS-1600
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> In OM HA, the actual execution of a request happens under the GRPC context, 
> so the UGI object we retrieve from ProtobufRpcEngine.Server.getRemoteUser() 
> will not be available.
> The same applies to ProtobufRpcEngine.Server.getRemoteIp().
>  
> So, during preExecute (which happens under the RPC context), extract the 
> userName and IPAddress, add them to the OMRequest, and then send the request 
> to the ratis server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1600) Add userName and IPAddress as part of OMRequest.

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1600?focusedWorklogId=249037=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-249037
 ]

ASF GitHub Bot logged work on HDDS-1600:


Author: ASF GitHub Bot
Created on: 27/May/19 21:22
Start Date: 27/May/19 21:22
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #857: 
HDDS-1600. Add userName and IPAddress as part of OMRequest.
URL: https://github.com/apache/hadoop/pull/857
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 249037)
Time Spent: 10m
Remaining Estimate: 0h

> Add userName and IPAddress as part of OMRequest.
> 
>
> Key: HDDS-1600
> URL: https://issues.apache.org/jira/browse/HDDS-1600
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In OM HA, the actual execution of a request happens under the GRPC context, 
> so the UGI object we retrieve from ProtobufRpcEngine.Server.getRemoteUser() 
> will not be available.
> The same applies to ProtobufRpcEngine.Server.getRemoteIp().
>  
> So, during preExecute (which happens under the RPC context), extract the 
> userName and IPAddress, add them to the OMRequest, and then send the request 
> to the ratis server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1599) Fix TestReplicationManager and checkstyle issues.

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1599?focusedWorklogId=249030=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-249030
 ]

ASF GitHub Bot logged work on HDDS-1599:


Author: ASF GitHub Bot
Created on: 27/May/19 21:02
Start Date: 27/May/19 21:02
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #855: HDDS-1599. Fix 
TestReplicationManager and checkstyle issues.
URL: https://github.com/apache/hadoop/pull/855#issuecomment-496305106
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 56 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 538 | trunk passed |
   | +1 | compile | 252 | trunk passed |
   | +1 | checkstyle | 68 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 804 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 136 | trunk passed |
   | 0 | spotbugs | 284 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 472 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 461 | the patch passed |
   | +1 | compile | 271 | the patch passed |
   | +1 | javac | 271 | the patch passed |
   | +1 | checkstyle | 46 | hadoop-hdds: The patch generated 0 new + 0 
unchanged - 3 fixed = 0 total (was 3) |
   | +1 | checkstyle | 46 | The patch passed checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 651 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 143 | the patch passed |
   | +1 | findbugs | 492 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 178 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1455 | hadoop-ozone in the patch failed. |
   | -1 | asflicense | 45 | The patch generated 17 ASF License warnings. |
   | | | 12628 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.container.common.impl.TestContainerPersistence |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-855/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/855 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 9de41ccad2be 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 83549db |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-855/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-855/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-855/1/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-855/1/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 4748 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-855/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 249030)
Time Spent: 40m  (was: 0.5h)

> Fix TestReplicationManager and checkstyle issues.
> -
>
> Key: HDDS-1599
> URL: https://issues.apache.org/jira/browse/HDDS-1599
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>

[jira] [Work logged] (HDDS-1599) Fix TestReplicationManager and checkstyle issues.

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1599?focusedWorklogId=249029=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-249029
 ]

ASF GitHub Bot logged work on HDDS-1599:


Author: ASF GitHub Bot
Created on: 27/May/19 20:58
Start Date: 27/May/19 20:58
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #855: HDDS-1599. Fix 
TestReplicationManager and checkstyle issues.
URL: https://github.com/apache/hadoop/pull/855#issuecomment-496304562
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 37 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 531 | trunk passed |
   | +1 | compile | 281 | trunk passed |
   | +1 | checkstyle | 88 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 878 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 140 | trunk passed |
   | 0 | spotbugs | 288 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 472 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 479 | the patch passed |
   | +1 | compile | 269 | the patch passed |
   | +1 | javac | 269 | the patch passed |
   | +1 | checkstyle | 36 | hadoop-hdds: The patch generated 0 new + 0 
unchanged - 3 fixed = 0 total (was 3) |
   | +1 | checkstyle | 33 | The patch passed checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 687 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 150 | the patch passed |
   | +1 | findbugs | 524 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 145 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1141 | hadoop-ozone in the patch failed. |
   | -1 | asflicense | 42 | The patch generated 17 ASF License warnings. |
   | | | 12132 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.web.client.TestKeys |
   |   | hadoop.ozone.container.common.impl.TestContainerPersistence |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-855/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/855 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux e73825602821 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 83549db |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-855/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-855/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-855/2/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-855/2/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 5237 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-855/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 249029)
Time Spent: 0.5h  (was: 20m)

> Fix TestReplicationManager and checkstyle issues.
> -
>
> Key: HDDS-1599
> URL: https://issues.apache.org/jira/browse/HDDS-1599
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>

[jira] [Work logged] (HDDS-1599) Fix TestReplicationManager and checkstyle issues.

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1599?focusedWorklogId=249013=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-249013
 ]

ASF GitHub Bot logged work on HDDS-1599:


Author: ASF GitHub Bot
Created on: 27/May/19 19:55
Start Date: 27/May/19 19:55
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #856: HDDS-1599. Fix 
TestReplicationManager.
URL: https://github.com/apache/hadoop/pull/856#issuecomment-496294982
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 513 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 404 | hadoop-ozone in trunk failed. |
   | -1 | compile | 35 | hadoop-hdds in trunk failed. |
   | -1 | compile | 45 | hadoop-ozone in trunk failed. |
   | -0 | checkstyle | 35 | The patch fails to run checkstyle in hadoop-hdds |
   | -0 | checkstyle | 46 | The patch fails to run checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 861 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 182 | trunk passed |
   | 0 | spotbugs | 283 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 472 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 491 | the patch passed |
   | +1 | compile | 289 | the patch passed |
   | -1 | javac | 103 | hadoop-hdds generated 11 new + 0 unchanged - 0 fixed = 
11 total (was 0) |
   | +1 | checkstyle | 92 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 689 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 160 | the patch passed |
   | +1 | findbugs | 498 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 149 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1152 | hadoop-ozone in the patch failed. |
   | -1 | asflicense | 58 | The patch generated 17 ASF License warnings. |
   | | | 6538 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.web.client.TestKeys |
   |   | hadoop.ozone.container.common.impl.TestContainerPersistence |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.hdds.scm.safemode.TestSCMSafeModeWithPipelineRules |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-856/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/856 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b635f65190f3 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 83549db |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-856/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-856/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-856/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-856/1/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-856/out/maven-branch-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-856/1/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-856/out/maven-branch-checkstyle-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-856/1/artifact/out/diff-compile-javac-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-856/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-856/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-856/1/testReport/ |
   | asflicense | 

[jira] [Work logged] (HDDS-1231) Add ChillMode metrics

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1231?focusedWorklogId=249010&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-249010
 ]

ASF GitHub Bot logged work on HDDS-1231:


Author: ASF GitHub Bot
Created on: 27/May/19 19:48
Start Date: 27/May/19 19:48
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #851: HDDS-1231. Add 
ChillMode metrics.
URL: https://github.com/apache/hadoop/pull/851#issuecomment-496293876
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 473 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 537 | trunk passed |
   | +1 | compile | 264 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 937 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 150 | trunk passed |
   | 0 | spotbugs | 296 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 486 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 478 | the patch passed |
   | +1 | compile | 267 | the patch passed |
   | +1 | javac | 267 | the patch passed |
   | +1 | checkstyle | 80 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 735 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 147 | the patch passed |
   | +1 | findbugs | 512 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 168 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1232 | hadoop-ozone in the patch failed. |
   | -1 | asflicense | 45 | The patch generated 17 ASF License warnings. |
   | | | 6706 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.container.common.impl.TestContainerPersistence |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-851/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/851 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 4269c84dcf75 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 83549db |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-851/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-851/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-851/3/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-851/3/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 4630 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-851/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 249010)
Time Spent: 1h  (was: 50m)

> Add ChillMode metrics
> -
>
> Key: HDDS-1231
> URL: https://issues.apache.org/jira/browse/HDDS-1231
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time 

[jira] [Work logged] (HDDS-1598) Fix Ozone checkstyle issues on trunk

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1598?focusedWorklogId=248997&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-248997
 ]

ASF GitHub Bot logged work on HDDS-1598:


Author: ASF GitHub Bot
Created on: 27/May/19 18:52
Start Date: 27/May/19 18:52
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #854: HDDS-1598. Fix 
Ozone checkstyle issues on trunk
URL: https://github.com/apache/hadoop/pull/854#issuecomment-496285178
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 544 | trunk passed |
   | +1 | compile | 280 | trunk passed |
   | +1 | checkstyle | 85 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 890 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | trunk passed |
   | 0 | spotbugs | 290 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 482 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 488 | the patch passed |
   | +1 | compile | 289 | the patch passed |
   | +1 | javac | 289 | the patch passed |
   | +1 | checkstyle | 48 | hadoop-hdds: The patch generated 0 new + 0 
unchanged - 3 fixed = 0 total (was 3) |
   | +1 | checkstyle | 45 | The patch passed checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 682 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | the patch passed |
   | +1 | findbugs | 494 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 162 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1636 | hadoop-ozone in the patch failed. |
   | -1 | asflicense | 52 | The patch generated 17 ASF License warnings. |
   | | | 6747 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.web.client.TestKeys |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.container.common.impl.TestContainerPersistence |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-854/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/854 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a11057d3d820 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 83549db |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-854/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-854/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-854/1/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-854/1/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 4700 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-854/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 248997)
Time Spent: 20m  (was: 10m)

> Fix Ozone checkstyle issues on trunk
> 
>
> Key: HDDS-1598
> URL: https://issues.apache.org/jira/browse/HDDS-1598
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: 

[jira] [Work started] (HDDS-1600) Add userName and IPAddress as part of OMRequest.

2019-05-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1600 started by Bharat Viswanadham.

> Add userName and IPAddress as part of OMRequest.
> 
>
> Key: HDDS-1600
> URL: https://issues.apache.org/jira/browse/HDDS-1600
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> In OM HA, the actual execution of a request happens under the GRPC context, so 
> the UGI object we retrieve from ProtobufRpcEngine.Server.getRemoteUser() will 
> not be available.
> The same applies to ProtobufRpcEngine.Server.getRemoteIp().
>  
> So, during preExecute (which happens under the RPC context), extract the 
> userName and IPAddress, add them to the OMRequest, and then send the request 
> to the Ratis server.
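
A minimal sketch of that flow, for illustration only (CallerInfo is a hypothetical holder standing in for the UserInfo fields that would be added to OMRequest; only the two ProtobufRpcEngine.Server calls come from the description above):

```java
import java.net.InetAddress;
import org.apache.hadoop.ipc.ProtobufRpcEngine;
import org.apache.hadoop.security.UserGroupInformation;

/** Sketch: capture the caller's user name and IP while still on the RPC
 *  handler thread, so the values survive once the request is replayed
 *  under Ratis/GRPC. */
public final class PreExecuteSketch {

  /** Hypothetical stand-in for the UserInfo fields on OMRequest. */
  static final class CallerInfo {
    final String userName;
    final String ipAddress;
    CallerInfo(String userName, String ipAddress) {
      this.userName = userName;
      this.ipAddress = ipAddress;
    }
  }

  static CallerInfo captureCaller() {
    // Both calls only return useful values on the RPC handler thread;
    // under the Ratis/GRPC execution path they would return null, which
    // is exactly why the capture must happen during preExecute.
    UserGroupInformation ugi = ProtobufRpcEngine.Server.getRemoteUser();
    InetAddress ip = ProtobufRpcEngine.Server.getRemoteIp();
    return new CallerInfo(
        ugi != null ? ugi.getUserName() : null,
        ip != null ? ip.getHostAddress() : null);
  }
}
```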



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1600) Add userName and IPAddress as part of OMRequest.

2019-05-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1600:
-
Description: 
In OM HA, the actual execution of a request happens under the GRPC context, so 
the UGI object we retrieve from ProtobufRpcEngine.Server.getRemoteUser() will 
not be available.

The same applies to ProtobufRpcEngine.Server.getRemoteIp().

 

So, during preExecute (which happens under the RPC context), extract the 
userName and IPAddress, add them to the OMRequest, and then send the request 
to the Ratis server.

  was:
In OM HA, the actual execution of a request happens under the GRPC context, so 
the UGI object we retrieve from ProtobufRpcEngine.Server.getRemoteUser() will 
not be available.

The same applies to ProtobufRpcEngine.Server.getRemoteIp().

 

So, during preExecute extract the userName and IPAddress, add them to the 
OMRequest, and then send the request to the Ratis server.


> Add userName and IPAddress as part of OMRequest.
> 
>
> Key: HDDS-1600
> URL: https://issues.apache.org/jira/browse/HDDS-1600
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> In OM HA, the actual execution of a request happens under the GRPC context, so 
> the UGI object we retrieve from ProtobufRpcEngine.Server.getRemoteUser() will 
> not be available.
> The same applies to ProtobufRpcEngine.Server.getRemoteIp().
>  
> So, during preExecute (which happens under the RPC context), extract the 
> userName and IPAddress, add them to the OMRequest, and then send the request 
> to the Ratis server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1600) Add userName and IPAddress as part of OMRequest.

2019-05-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1600:
-
Issue Type: Sub-task  (was: Task)
Parent: HDDS-505

> Add userName and IPAddress as part of OMRequest.
> 
>
> Key: HDDS-1600
> URL: https://issues.apache.org/jira/browse/HDDS-1600
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> In OM HA, the actual execution of a request happens under the GRPC context, so 
> the UGI object we retrieve from ProtobufRpcEngine.Server.getRemoteUser() will 
> not be available.
> The same applies to ProtobufRpcEngine.Server.getRemoteIp().
>  
> So, during preExecute extract the userName and IPAddress, add them to the 
> OMRequest, and then send the request to the Ratis server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1600) Add userName and IPAddress as part of OMRequest.

2019-05-27 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1600:


 Summary: Add userName and IPAddress as part of OMRequest.
 Key: HDDS-1600
 URL: https://issues.apache.org/jira/browse/HDDS-1600
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


In OM HA, the actual execution of a request happens under the GRPC context, so 
the UGI object we retrieve from ProtobufRpcEngine.Server.getRemoteUser() will 
not be available.

The same applies to ProtobufRpcEngine.Server.getRemoteIp().

 

So, during preExecute extract the userName and IPAddress, add them to the 
OMRequest, and then send the request to the Ratis server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1599) Fix TestReplicationManager and checkstyle issues.

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1599:
-
Labels: pull-request-available  (was: )

> Fix TestReplicationManager and checkstyle issues.
> -
>
> Key: HDDS-1599
> URL: https://issues.apache.org/jira/browse/HDDS-1599
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> When working on HDDS-1551, I found some test failures which are not related to 
> HDDS-1551.
> The failures are caused by HDDS-700.
>  
> This was not caught by the Jenkins run because our Jenkins run does not run 
> UTs for all the sub-modules. In this case, it should have run the UTs for 
> hadoop-hdds-server-scm, as there are some changes in src/test files in that 
> module, but it still did not run them. I think the Jenkins run for the ozone 
> project is not properly set up.
> [https://ci.anzix.net/job/ozone/16895/testReport/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1599) Fix TestReplicationManager and checkstyle issues.

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1599?focusedWorklogId=248982&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-248982
 ]

ASF GitHub Bot logged work on HDDS-1599:


Author: ASF GitHub Bot
Created on: 27/May/19 18:05
Start Date: 27/May/19 18:05
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #855: 
HDDS-1599. Fix TestReplicationManager and checkstyle issues.
URL: https://github.com/apache/hadoop/pull/855
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 248982)
Time Spent: 10m
Remaining Estimate: 0h

> Fix TestReplicationManager and checkstyle issues.
> -
>
> Key: HDDS-1599
> URL: https://issues.apache.org/jira/browse/HDDS-1599
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When working on HDDS-1551, I found some test failures which are not related to 
> HDDS-1551.
> The failures are caused by HDDS-700.
>  
> This was not caught by the Jenkins run because our Jenkins run does not run 
> UTs for all the sub-modules. In this case, it should have run the UTs for 
> hadoop-hdds-server-scm, as there are some changes in src/test files in that 
> module, but it still did not run them. I think the Jenkins run for the ozone 
> project is not properly set up.
> [https://ci.anzix.net/job/ozone/16895/testReport/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1559) Include committedBytes to determine Out of Space in VolumeChoosingPolicy

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1559?focusedWorklogId=248981&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-248981
 ]

ASF GitHub Bot logged work on HDDS-1559:


Author: ASF GitHub Bot
Created on: 27/May/19 18:05
Start Date: 27/May/19 18:05
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #856: 
HDDS-1559. Fix TestReplicationManager.
URL: https://github.com/apache/hadoop/pull/856
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 248981)
Time Spent: 1h 20m  (was: 1h 10m)

> Include committedBytes to determine Out of Space in VolumeChoosingPolicy
> 
>
> Key: HDDS-1559
> URL: https://issues.apache.org/jira/browse/HDDS-1559
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> This is a follow-up from HDDS-1511 and HDDS-1535
> Currently, when creating a new Container, the DN invokes 
> RoundRobinVolumeChoosingPolicy:chooseVolume(). This routine checks for 
> (volume available space > container max size). If no eligible volume is 
> found, the policy throws a DiskOutOfSpaceException. This is the current 
> behaviour.
> However, the computation of available space does not take into consideration 
> the space that is going to be consumed by writes to existing containers which 
> are still Open and accepting chunk writes.
> This Jira proposes to enhance the space availability check in chooseVolume by 
> including the committed space (committedBytes in HddsVolume) in the equation.
> The handling/management of the exception in Ratis will not be modified in 
> this Jira. That will be scoped separately as part of Datanode IO Failure 
> handling work.
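
A minimal sketch of the proposed check, under stated assumptions (VolumeInfo is a hypothetical stand-in for HddsVolume, and the real chooseVolume signature differs):

```java
import java.util.List;

/** Sketch of the HDDS-1559 idea: treat committed-but-unwritten bytes as
 *  already used when deciding whether a volume can host a new container. */
final class CommittedSpaceCheck {

  /** Hypothetical stand-in for HddsVolume. */
  static final class VolumeInfo {
    long availableBytes;   // bytes currently free on the volume
    long committedBytes;   // bytes promised to still-open containers
  }

  /** Returns the first volume that can hold a new container, or null if
   *  none qualifies (the caller would throw DiskOutOfSpaceException). */
  static VolumeInfo chooseVolume(List<VolumeInfo> volumes,
                                 long containerMaxSize) {
    for (VolumeInfo v : volumes) {
      // Proposed check: free space minus already-committed space must
      // still fit a full container, not just the raw free space.
      if (v.availableBytes - v.committedBytes > containerMaxSize) {
        return v;
      }
    }
    return null;
  }
}
```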



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1599) Fix TestReplicationManager and checkstyle issues.

2019-05-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1599:
-
Description: 
When working on HDDS-1551, I found some test failures which are not related to 
HDDS-1551.

The failures are caused by HDDS-700.

 

This was not caught by the Jenkins run because our Jenkins run does not run UTs 
for all the sub-modules. In this case, it should have run the UTs for 
hadoop-hdds-server-scm, as there are some changes in src/test files in that 
module, but it still did not run them. I think the Jenkins run for the ozone 
project is not properly set up.

[https://ci.anzix.net/job/ozone/16895/testReport/]

 

  was:
When working on HDDS-1551, I found some test failures which are not related to 
HDDS-1551.

This Jira also fixes checkstyle issues caused by HDDS-700.

The test failures are caused by HDDS-700.

 

This was not caught by the Jenkins run because our Jenkins run does not run UTs 
for all the sub-modules. In this case, it should have run the UTs for 
hadoop-hdds-server-scm, as there are some changes in src/test files in that 
module, but it still did not run them. I think the Jenkins run for the ozone 
project is not properly set up.

[https://ci.anzix.net/job/ozone/16895/testReport/]

 


> Fix TestReplicationManager and checkstyle issues.
> -
>
> Key: HDDS-1599
> URL: https://issues.apache.org/jira/browse/HDDS-1599
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> When working on HDDS-1551, I found some test failures which are not related to 
> HDDS-1551.
> The failures are caused by HDDS-700.
>  
> This was not caught by the Jenkins run because our Jenkins run does not run 
> UTs for all the sub-modules. In this case, it should have run the UTs for 
> hadoop-hdds-server-scm, as there are some changes in src/test files in that 
> module, but it still did not run them. I think the Jenkins run for the ozone 
> project is not properly set up.
> [https://ci.anzix.net/job/ozone/16895/testReport/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1592) TestReplicationManager failed in pre-commit run

2019-05-27 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16849112#comment-16849112
 ] 

Bharat Viswanadham edited comment on HDDS-1592 at 5/27/19 5:57 PM:
---

Ohh, I had not seen this Jira; I fixed this as part of HDDS-1599.

Will close this as a duplicate of HDDS-1599.


was (Author: bharatviswa):
Ohh, I had not seen this Jira; I fixed this as part of HDDS-1599.

> TestReplicationManager failed in pre-commit run
> ---
>
> Key: HDDS-1592
> URL: https://issues.apache.org/jira/browse/HDDS-1592
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Priority: Major
>
> E.g. https://ci.anzix.net/job/ozone/16892/testReport/
> Exception details in comment below.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1592) TestReplicationManager failed in pre-commit run

2019-05-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-1592.
--
Resolution: Duplicate
  Assignee: Bharat Viswanadham

> TestReplicationManager failed in pre-commit run
> ---
>
> Key: HDDS-1592
> URL: https://issues.apache.org/jira/browse/HDDS-1592
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Bharat Viswanadham
>Priority: Major
>
> E.g. https://ci.anzix.net/job/ozone/16892/testReport/
> Exception details in comment below.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1592) TestReplicationManager failed in pre-commit run

2019-05-27 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16849112#comment-16849112
 ] 

Bharat Viswanadham commented on HDDS-1592:
--

Ohh, I had not seen this Jira; I fixed this as part of HDDS-1599.

> TestReplicationManager failed in pre-commit run
> ---
>
> Key: HDDS-1592
> URL: https://issues.apache.org/jira/browse/HDDS-1592
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Priority: Major
>
> E.g. https://ci.anzix.net/job/ozone/16892/testReport/
> Exception details in comment below.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1231) Add ChillMode metrics

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1231?focusedWorklogId=248976&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-248976
 ]

ASF GitHub Bot logged work on HDDS-1231:


Author: ASF GitHub Bot
Created on: 27/May/19 17:55
Start Date: 27/May/19 17:55
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #851: HDDS-1231. Add 
ChillMode metrics.
URL: https://github.com/apache/hadoop/pull/851#issuecomment-496275894
 
 
   Thank You for the review @jiwq.
   Fixed review comments and also fixed other checkstyle issues.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 248976)
Time Spent: 50m  (was: 40m)

> Add ChillMode metrics
> -
>
> Key: HDDS-1231
> URL: https://issues.apache.org/jira/browse/HDDS-1231
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> This Jira is to add a few of the chill mode metrics:
>  # NumberofHealthyPipelinesThreshold
>  # currentHealthyPipelinesCount
>  # NumberofPipelinesWithAtleastOneReplicaThreshold
>  # CurrentPipelinesWithAtleastOneReplicaCount
>  # ChillModeContainerWithOneReplicaReportedCutoff
>  # CurrentContainerCutoff
>  
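
As a hedged illustration of how such metrics are typically wired up with the Hadoop metrics2 annotations (class and field names here are illustrative, not the actual SafeModeMetrics code from the patch):

```java
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;
import org.apache.hadoop.metrics2.lib.MutableGaugeLong;

/** Illustrative sketch of exposing two of the listed chill mode metrics. */
@Metrics(about = "SCM chill mode metrics", context = "dfs")
final class ChillModeMetricsSketch {

  @Metric private MutableGaugeLong numberOfHealthyPipelinesThreshold;
  @Metric private MutableCounterLong currentHealthyPipelinesCount;

  static ChillModeMetricsSketch create() {
    // Registering the annotated object lets metrics2 instantiate the
    // @Metric fields and publish them under the given record name.
    return DefaultMetricsSystem.instance().register(
        "ChillModeMetrics", "Metrics for SCM chill mode",
        new ChillModeMetricsSketch());
  }

  void setHealthyPipelinesThreshold(long v) {
    numberOfHealthyPipelinesThreshold.set(v);
  }

  void incrCurrentHealthyPipelinesCount() {
    currentHealthyPipelinesCount.incr();
  }
}
```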



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1231) Add ChillMode metrics

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1231?focusedWorklogId=248975&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-248975
 ]

ASF GitHub Bot logged work on HDDS-1231:


Author: ASF GitHub Bot
Created on: 27/May/19 17:55
Start Date: 27/May/19 17:55
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #851: 
HDDS-1231. Add ChillMode metrics.
URL: https://github.com/apache/hadoop/pull/851#discussion_r287856341
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/SafeModeMetrics.java
 ##
 @@ -0,0 +1,89 @@
+package org.apache.hadoop.hdds.scm.safemode;
 
 Review comment:
   Fixed.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 248975)
Time Spent: 40m  (was: 0.5h)

> Add ChillMode metrics
> -
>
> Key: HDDS-1231
> URL: https://issues.apache.org/jira/browse/HDDS-1231
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This Jira is to add a few of the chill mode metrics:
>  # NumberofHealthyPipelinesThreshold
>  # currentHealthyPipelinesCount
>  # NumberofPipelinesWithAtleastOneReplicaThreshold
>  # CurrentPipelinesWithAtleastOneReplicaCount
>  # ChillModeContainerWithOneReplicaReportedCutoff
>  # CurrentContainerCutoff
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=248970&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-248970
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 27/May/19 17:40
Start Date: 27/May/19 17:40
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #850: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-496273324
 
 
   Fixed findbug and checkstyle issues.
   For test failures related to TestReplicationManager opened HDDS-1599.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 248970)
Time Spent: 3h 10m  (was: 3h)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM Cache and double buffer.
> Also, previously the OM used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs.
>  
> This Jira will add the changes to implement the bucket operations. HA and 
> non-HA will have different code paths, but once all requests are implemented 
> there will be a single code path.
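
For readers unfamiliar with the pattern, here is a minimal, generic sketch of a double buffer (an illustration of the general technique only, not the OM implementation; it assumes a single flusher thread):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Consumer;

/** Generic double buffer: writers append to the current buffer while a
 *  single flusher thread swaps buffers and persists the inactive one. */
final class DoubleBufferSketch<T> {

  private Queue<T> current = new ArrayDeque<>();
  private Queue<T> spare = new ArrayDeque<>();

  /** Request path: a cheap in-memory append. */
  synchronized void add(T entry) {
    current.add(entry);
  }

  /** Flusher path: swap under the lock, then persist without blocking
   *  writers, which keep appending to the new current buffer. */
  void flush(Consumer<T> sink) {
    Queue<T> toFlush;
    synchronized (this) {
      toFlush = current;
      current = spare;
    }
    for (T entry : toFlush) {
      sink.accept(entry);   // e.g. a batched write to the backing store
    }
    toFlush.clear();
    synchronized (this) {
      spare = toFlush;      // hand the drained buffer back for reuse
    }
  }
}
```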



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1599) Fix TestReplicationManager

2019-05-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1599:
-
Description: 
When working on HDDS-1551, I found some test failures which are not related to 
HDDS-1551.

This Jira also fixes checkstyle issues caused by HDDS-700.

The test failures are caused by HDDS-700.

 

This was not caught by the Jenkins run because our Jenkins run does not run UTs 
for all the sub-modules. In this case, it should have run the UTs for 
hadoop-hdds-server-scm, as there are some changes in src/test files in that 
module, but it still did not run them. I think the Jenkins run for the ozone 
project is not properly set up.

[https://ci.anzix.net/job/ozone/16895/testReport/]

 

  was:
When working on HDDS-1551, I found some test failures which are not related to 
HDDS-1551.

The failures are caused by HDDS-700.

 

This was not caught by the Jenkins run because our Jenkins run does not run UTs 
for all the sub-modules. In this case, it should have run the UTs for 
hadoop-hdds-server-scm, as there are some changes in src/test files in that 
module, but it still did not run them. I think the Jenkins run for the ozone 
project is not properly set up.

[https://ci.anzix.net/job/ozone/16895/testReport/]

 


> Fix TestReplicationManager
> --
>
> Key: HDDS-1599
> URL: https://issues.apache.org/jira/browse/HDDS-1599
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> When working on HDDS-1551, I found some test failures which are not related to 
> HDDS-1551.
> This Jira also fixes checkstyle issues caused by HDDS-700.
> The test failures are caused by HDDS-700.
>  
> This was not caught by the Jenkins run because our Jenkins run does not run 
> UTs for all the sub-modules. In this case, it should have run the UTs for 
> hadoop-hdds-server-scm, as there are some changes in src/test files in that 
> module, but it still did not run them. I think the Jenkins run for the ozone 
> project is not properly set up.
> [https://ci.anzix.net/job/ozone/16895/testReport/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1599) Fix TestReplicationManager and checkstyle issues.

2019-05-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1599:
-
Summary: Fix TestReplicationManager and checkstyle issues.  (was: Fix 
TestReplicationManager)

> Fix TestReplicationManager and checkstyle issues.
> -
>
> Key: HDDS-1599
> URL: https://issues.apache.org/jira/browse/HDDS-1599
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> When working on HDDS-1551, I found some test failures which are not related to 
> HDDS-1551.
> This Jira also fixes checkstyle issues caused by HDDS-700.
> The test failures are caused by HDDS-700.
>  
> This was not caught by the Jenkins run because our Jenkins run does not run 
> UTs for all the sub-modules. In this case, it should have run the UTs for 
> hadoop-hdds-server-scm, as there are some changes in src/test files in that 
> module, but it still did not run them. I think the Jenkins run for the ozone 
> project is not properly set up.
> [https://ci.anzix.net/job/ozone/16895/testReport/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1599) Fix TestReplicationManager

2019-05-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1599:
-
Status: Patch Available  (was: In Progress)

> Fix TestReplicationManager
> --
>
> Key: HDDS-1599
> URL: https://issues.apache.org/jira/browse/HDDS-1599
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> When working on HDDS-1551, I found some test failures which are not related to 
> HDDS-1551.
> The failures are caused by HDDS-700.
>  
> This was not caught by the Jenkins run because our Jenkins run does not run 
> UTs for all the sub-modules. In this case, it should have run the UTs for 
> hadoop-hdds-server-scm, as there are some changes in src/test files in that 
> module, but it still did not run them. I think the Jenkins run for the ozone 
> project is not properly set up.
> [https://ci.anzix.net/job/ozone/16895/testReport/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1559) Include committedBytes to determine Out of Space in VolumeChoosingPolicy

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1559?focusedWorklogId=248968&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-248968
 ]

ASF GitHub Bot logged work on HDDS-1559:


Author: ASF GitHub Bot
Created on: 27/May/19 17:30
Start Date: 27/May/19 17:30
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #855: 
HDDS-1559. Fix TestReplicationManager.
URL: https://github.com/apache/hadoop/pull/855
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 248968)
Time Spent: 1h 10m  (was: 1h)

> Include committedBytes to determine Out of Space in VolumeChoosingPolicy
> 
>
> Key: HDDS-1559
> URL: https://issues.apache.org/jira/browse/HDDS-1559
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> This is a follow-up from HDDS-1511 and HDDS-1535
> Currently, when creating a new Container, the DN invokes 
> RoundRobinVolumeChoosingPolicy:chooseVolume(). This routine checks for 
> (volume available space > container max size). If no eligible volume is 
> found, the policy throws a DiskOutOfSpaceException. This is the current 
> behaviour.
> However, the computation of available space does not take into consideration 
> the space that is going to be consumed by writes to existing containers which 
> are still Open and accepting chunk writes.
> This Jira proposes to enhance the space availability check in chooseVolume by 
> including the committed space (committedBytes in HddsVolume) in the equation.
> The handling/management of the exception in Ratis will not be modified in 
> this Jira. That will be scoped separately as part of Datanode IO Failure 
> handling work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-700) Support rack awared node placement policy based on network topology

2019-05-27 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16849097#comment-16849097
 ] 

Bharat Viswanadham edited comment on HDDS-700 at 5/27/19 5:28 PM:
--

This has caused some UT failures. Reported HDDS-1599 to fix this.

I think our Jenkins run is not properly running UTs for all modules. I feel it 
is better to use the PR model for HDDS jiras, since for PRs we have another CI 
which runs UTs for all modules and also smoke tests.


was (Author: bharatviswa):
This has caused some UT failures. Reported HDDS-1559 to fix this.

I think our Jenkins run is not properly running UTs for all modules. I feel it 
is better to use the PR model for HDDS jiras, since for PRs we have another CI 
which runs UTs for all modules and also smoke tests.

> Support rack awared node placement policy based on network topology
> ---
>
> Key: HDDS-700
> URL: https://issues.apache.org/jira/browse/HDDS-700
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Fix For: 0.4.1
>
> Attachments: HDDS-700.01.patch, HDDS-700.02.patch, HDDS-700.03.patch
>
>
> Implement a new container placement policy implementation based on the 
> datanode's network topology. It follows the same rule as HDFS.
> By default, with 3 replicas, two replicas will be on the same rack, and the 
> third replica and all the remaining replicas will be on different racks.
>  
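
As a toy illustration of that default rule (Node is a simplified stand-in for the real network-topology types; this is not the HDDS-700 code, and for brevity the "different racks" rule is only enforced against the first rack):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/** Toy sketch: two replicas on one rack, the rest on other racks. */
final class RackAwareSketch {

  static final class Node {
    final String name;
    final String rack;
    Node(String name, String rack) {
      this.name = name;
      this.rack = rack;
    }
  }

  static List<Node> choose(List<Node> cluster, int replicas) {
    List<Node> chosen = new ArrayList<>();
    Node first = cluster.get(new Random().nextInt(cluster.size()));
    chosen.add(first);
    // Second replica: a different node on the same rack as the first.
    for (Node n : cluster) {
      if (chosen.size() < 2 && n != first && n.rack.equals(first.rack)) {
        chosen.add(n);
      }
    }
    // Remaining replicas: nodes on racks other than the first one.
    for (Node n : cluster) {
      if (chosen.size() < replicas && !n.rack.equals(first.rack)) {
        chosen.add(n);
      }
    }
    return chosen;
  }
}
```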



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-700) Support rack awared node placement policy based on network topology

2019-05-27 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16849097#comment-16849097
 ] 

Bharat Viswanadham commented on HDDS-700:
-

This has caused some UT failures. Reported HDDS-1559 to fix this.

I think our Jenkins run is not properly running UTs for all modules. I feel it 
is better to use the PR model for HDDS jiras, since for PRs we have another CI 
which runs UTs for all modules and also smoke tests.

> Support rack awared node placement policy based on network topology
> ---
>
> Key: HDDS-700
> URL: https://issues.apache.org/jira/browse/HDDS-700
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Fix For: 0.4.1
>
> Attachments: HDDS-700.01.patch, HDDS-700.02.patch, HDDS-700.03.patch
>
>
> Implement a new container placement policy implementation based on the 
> datanode's network topology. It follows the same rule as HDFS.
> By default, with 3 replicas, two replicas will be on the same rack, and the 
> third replica and all the remaining replicas will be on different racks.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1599) Fix TestReplicationManager

2019-05-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1599 started by Bharat Viswanadham.

> Fix TestReplicationManager
> --
>
> Key: HDDS-1599
> URL: https://issues.apache.org/jira/browse/HDDS-1599
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> When working on HDDS-1551, I found some test failures which are not related to 
> HDDS-1551.
> The failures are caused by HDDS-700.
>  
> This was not caught by the Jenkins run because our Jenkins run does not run 
> UTs for all the sub-modules. In this case, it should have run the UTs for 
> hadoop-hdds-server-scm, as there are some changes in src/test files in that 
> module, but it still did not run them. I think the Jenkins run for the ozone 
> project is not properly set up.
> [https://ci.anzix.net/job/ozone/16895/testReport/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1599) Fix TestReplicationManager

2019-05-27 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1599:


 Summary: Fix TestReplicationManager
 Key: HDDS-1599
 URL: https://issues.apache.org/jira/browse/HDDS-1599
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


When working on HDDS-1551, I found some test failures which are not related to 
HDDS-1551.

The failures are caused by HDDS-700.

 

This was not caught by the Jenkins run because our Jenkins run does not run UTs 
for all the sub-modules. In this case, it should have run the UTs for 
hadoop-hdds-server-scm, as there are some changes in src/test files in that 
module, but it still did not run them. I think the Jenkins run for the ozone 
project is not properly set up.

[https://ci.anzix.net/job/ozone/16895/testReport/]

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1598) Fix Ozone checkstyle issues on trunk

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1598:
-
Labels: pull-request-available  (was: )

> Fix Ozone checkstyle issues on trunk
> 
>
> Key: HDDS-1598
> URL: https://issues.apache.org/jira/browse/HDDS-1598
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>
> Some small checkstyle issues are accidentally committed with HDDS-700.
> Trivial fixes are coming here...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1598) Fix Ozone checkstyle issues on trunk

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1598?focusedWorklogId=248958&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-248958
 ]

ASF GitHub Bot logged work on HDDS-1598:


Author: ASF GitHub Bot
Created on: 27/May/19 16:59
Start Date: 27/May/19 16:59
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #854: HDDS-1598. Fix 
Ozone checkstyle issues on trunk
URL: https://github.com/apache/hadoop/pull/854
 
 
   Some small checkstyle issues are accidentally committed with HDDS-700.
   
   Trivial fixes are coming here...
   
   
   See: https://issues.apache.org/jira/browse/HDDS-1598
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 248958)
Time Spent: 10m
Remaining Estimate: 0h

> Fix Ozone checkstyle issues on trunk
> 
>
> Key: HDDS-1598
> URL: https://issues.apache.org/jira/browse/HDDS-1598
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Some small checkstyle issues are accidentally committed with HDDS-700.
> Trivial fixes are coming here...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1598) Fix Ozone checkstyle issues on trunk

2019-05-27 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1598:
---
Status: Patch Available  (was: Open)

> Fix Ozone checkstyle issues on trunk
> 
>
> Key: HDDS-1598
> URL: https://issues.apache.org/jira/browse/HDDS-1598
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Some small checkstyle issues are accidentally committed with HDDS-700.
> Trivial fixes are coming here...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1598) Fix Ozone checkstyle issues on trunk

2019-05-27 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1598:
--

 Summary: Fix Ozone checkstyle issues on trunk
 Key: HDDS-1598
 URL: https://issues.apache.org/jira/browse/HDDS-1598
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton
Assignee: Elek, Marton


Some small checkstyle issues are accidentally committed with HDDS-700.

Trivial fixes are coming here...




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1490) Support configurable containerPlacement policy

2019-05-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16849055#comment-16849055
 ] 

Hadoop QA commented on HDDS-1490:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
59s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
55s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  6m 
12s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-hdds: The patch generated 10 new + 0 
unchanged - 0 fixed = 10 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 44s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 57s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
45s{color} | {color:red} The patch generated 17 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.TestKeyManagerImpl |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2710/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1490 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12969910/HDDS-1490.01.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 7b41a9a00a78 

[jira] [Created] (HDDS-1597) Remove hdds-server-scm dependency from ozone-common

2019-05-27 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1597:
--

 Summary: Remove hdds-server-scm dependency from ozone-common
 Key: HDDS-1597
 URL: https://issues.apache.org/jira/browse/HDDS-1597
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton
Assignee: Elek, Marton


I noticed that the hadoop-ozone/common project depends on the 
hadoop-hdds-server-scm project.

The common projects are designed to be shared artifacts between the client and 
server side. Adding an additional dependency to the common pom means that the 
dependency becomes available to all the clients as well.

We definitely don't need the SCM server dependency on the client side.

The code dependency is just one class (ScmUtils), and the shared code can 
easily be moved to common.
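
A sketch of the shape of the fix, with an illustrative target package (the 
actual patch may choose differently):

{code:java}
// Hypothetical sketch: ScmUtils relocated from the hadoop-hdds-server-scm
// module into hadoop-hdds-common, so that hadoop-ozone/common (and every
// client that pulls it in) no longer drags in the SCM server artifact.
// The package name below is illustrative, not taken from an actual patch.
package org.apache.hadoop.hdds.utils;

public final class ScmUtils {
  private ScmUtils() {
    // static helpers only, no instantiation
  }

  // Shared helper methods previously hosted in the SCM server module move
  // here unchanged; hadoop-hdds-server-scm then depends on common, not the
  // other way around.
}
{code}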



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1596) Create service endpoint to download configuration from SCM

2019-05-27 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1596:
--

 Summary: Create service endpoint to download configuration from SCM
 Key: HDDS-1596
 URL: https://issues.apache.org/jira/browse/HDDS-1596
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Elek, Marton
Assignee: Elek, Marton


As written in the design doc (see the parent issue), it was proposed that the 
other services download their configuration from the SCM.

I propose to create a separate endpoint to provide the Ozone configuration. 
/conf can't be used, as it contains *all* the configuration and we need only 
the modified configuration.

The easiest way to implement this feature is:

 * Create a simple REST endpoint which publishes all the configuration
 * Download the configuration to $HADOOP_CONF_DIR/ozone-global.xml during the 
service startup.
 * Add ozone-global.xml as an additional config source (before ozone-site.xml 
but after ozone-default.xml)
 * The download can be optional

With this approach we keep support for the existing manual configuration 
(ozone-site.xml has higher priority), but we can download the configuration to 
a separate file during startup, which will then be loaded.

There is no magic: the configuration file is saved, and it's easy to debug 
what's going on, as the OzoneConfiguration is loaded from $HADOOP_CONF_DIR as 
before.
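
A minimal sketch of such an endpoint, assuming plain javax.servlet and the 
standard Hadoop Configuration API (the class name and resource wiring are 
illustrative, not from an actual patch):

{code:java}
import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.hadoop.conf.Configuration;

// Publishes only the modified (site-level) configuration, not the defaults,
// which is what distinguishes this endpoint from the existing /conf.
public class OzoneGlobalConfServlet extends HttpServlet {
  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
      throws IOException {
    // loadDefaults=false: start from an empty Configuration and add only
    // ozone-site.xml, so the response excludes everything /conf would dump.
    Configuration modifiedOnly = new Configuration(false);
    modifiedOnly.addResource("ozone-site.xml");
    resp.setContentType("text/xml; charset=utf-8");
    modifiedOnly.writeXml(resp.getWriter());
  }
}
{code}

On the consuming side, the proposed resource order keeps the manual 
configuration authoritative, since resources added later override earlier ones:

{code:java}
// Sketch of the loading order after the optional download has written
// $HADOOP_CONF_DIR/ozone-global.xml:
Configuration conf = new Configuration(false);
conf.addResource("ozone-default.xml"); // lowest priority
conf.addResource("ozone-global.xml");  // downloaded from the SCM
conf.addResource("ozone-site.xml");    // manual config keeps the last word
{code}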

Possible follow-up steps:

 * Migrate all the other services (recon, s3g) to the new approach. (possible 
newbie jiras)
 * Improve the CLI to define the SCM address. (As of now we use ozone.scm.names)
 * Create a service/hostname registration mechanism and autofill some of the 
configuration based on the topology information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1596) Create service endpoint to download configuration from SCM

2019-05-27 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1596 started by Elek, Marton.
--
> Create service endpoint to download configuration from SCM
> --
>
> Key: HDDS-1596
> URL: https://issues.apache.org/jira/browse/HDDS-1596
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> As written in the design doc (see the parent issue), it was proposed that 
> the other services download their configuration from the SCM.
> I propose to create a separate endpoint to provide the Ozone configuration. 
> /conf can't be used, as it contains *all* the configuration and we need only 
> the modified configuration.
> The easiest way to implement this feature is:
>  * Create a simple REST endpoint which publishes all the configuration
>  * Download the configuration to $HADOOP_CONF_DIR/ozone-global.xml during 
> the service startup.
>  * Add ozone-global.xml as an additional config source (before ozone-site.xml 
> but after ozone-default.xml)
>  * The download can be optional
> With this approach we keep support for the existing manual configuration 
> (ozone-site.xml has higher priority), but we can download the configuration 
> to a separate file during startup, which will then be loaded.
> There is no magic: the configuration file is saved, and it's easy to debug 
> what's going on, as the OzoneConfiguration is loaded from $HADOOP_CONF_DIR 
> as before.
> Possible follow-up steps:
>  * Migrate all the other services (recon, s3g) to the new approach. (possible 
> newbie jiras)
>  * Improve the CLI to define the SCM address. (As of now we use 
> ozone.scm.names)
>  * Create a service/hostname registration mechanism and autofill some of the 
> configuration based on the topology information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14499) Misleading REM_QUOTA value with snapshot and trash feature enabled for a directory

2019-05-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16848964#comment-16848964
 ] 

Hadoop QA commented on HDFS-14499:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
9s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
35s{color} | {color:red} The patch generated 17 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}133m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  
org.apache.hadoop.hdfs.server.namenode.INodeReference$WithName.computeContentSummary(int,
 ContentSummaryComputationContext) uses the same code for two branches  At 
INodeReference.java:code for two branches  At INodeReference.java:[line 509] |
|  |  Self comparison of INodeReference$WithName.lastSnapshotId with itself in 
org.apache.hadoop.hdfs.server.namenode.INodeReference$WithName.computeContentSummary(int,
 ContentSummaryComputationContext)  At INodeReference.java:itself in 
org.apache.hadoop.hdfs.server.namenode.INodeReference$WithName.computeContentSummary(int,
 ContentSummaryComputationContext)  At INodeReference.java:[line 505] |
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.balancer.TestBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14499 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12969890/HDFS-14499.000.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 85933fec94be 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 

[jira] [Updated] (HDDS-1490) Support configurable containerPlacement policy

2019-05-27 Thread Sammi Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen updated HDDS-1490:
-
Assignee: Sammi Chen
  Status: Patch Available  (was: Open)

> Support configurable containerPlacement policy
> --
>
> Key: HDDS-1490
> URL: https://issues.apache.org/jira/browse/HDDS-1490
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
> Attachments: HDDS-1490.01.patch
>
>
> Support configurable containerPlacement policy to meet different requirements



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1490) Support configurable containerPlacement policy

2019-05-27 Thread Sammi Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen updated HDDS-1490:
-
Attachment: HDDS-1490.01.patch

> Support configurable containerPlacement policy
> --
>
> Key: HDDS-1490
> URL: https://issues.apache.org/jira/browse/HDDS-1490
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Priority: Major
> Attachments: HDDS-1490.01.patch
>
>
> Support configurable containerPlacement policy to meet different requirements



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1595) Handling IO Failures on the Datanode

2019-05-27 Thread Supratim Deka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Supratim Deka updated HDDS-1595:

Attachment: Handling IO Failures on the Datanode.pdf

> Handling IO Failures on the Datanode
> 
>
> Key: HDDS-1595
> URL: https://issues.apache.org/jira/browse/HDDS-1595
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Priority: Major
> Attachments: Handling IO Failures on the Datanode.pdf, Raft IO v2.svg
>
>
> This Jira covers all the changes required to handle IO Failures on the 
> Datanode. Handling an IO failure on the Datanode involves detecting failures 
> as they happen and propagating the failure to the appropriate component in 
> the system - possibly the Client and/or the SCM based on the type of failure.
> At a high level, IO failure handling has the following goals:
> 1. Prevent inconsistencies and corruption due to non-handling or mishandling 
> of failures.
> 2. Prevent any data loss - detect failures in a timely manner and propagate 
> the correct error back to the initiator, instead of silently dropping the 
> data while the client assumes the operation is committed.
> 3. Contain the disruption in the system - if a disk volume fails on a DN, 
> operations to the other nodes and volumes should not be affected.
> Details pertaining to design and changes required are covered in the attached 
> pdf document.
> A sequence diagram used to analyse the Datanode IO Path is also attached, in 
> svg format.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1595) Handling IO Failures on the Datanode

2019-05-27 Thread Supratim Deka (JIRA)
Supratim Deka created HDDS-1595:
---

 Summary: Handling IO Failures on the Datanode
 Key: HDDS-1595
 URL: https://issues.apache.org/jira/browse/HDDS-1595
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Datanode
Reporter: Supratim Deka
 Attachments: Raft IO v2.svg

This Jira covers all the changes required to handle IO Failures on the 
Datanode. Handling an IO failure on the Datanode involves detecting failures as 
they happen and propagating the failure to the appropriate component in the 
system - possibly the Client and/or the SCM based on the type of failure.

At a high level, IO failure handling has the following goals:
1. Prevent inconsistencies and corruption due to non-handling or mishandling of 
failures.
2. Prevent any data loss - detect failures in a timely manner and propagate the 
correct error back to the initiator, instead of silently dropping the data 
while the client assumes the operation is committed.
3. Contain the disruption in the system - if a disk volume fails on a DN, 
operations to the other nodes and volumes should not be affected.

Details pertaining to design and changes required are covered in the attached 
pdf document.
A sequence diagram used to analyse the Datanode IO Path is also attached, in 
svg format.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14486) The exception classes in some throw statements do not accurately describe why they are thrown

2019-05-27 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16848875#comment-16848875
 ] 

Ayush Saxena commented on HDFS-14486:
-

[~elgoiri] can you help review?

> The exception classes in some throw statements do not accurately describe why 
> they are thrown
> -
>
> Key: HDFS-14486
> URL: https://issues.apache.org/jira/browse/HDFS-14486
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: eBugs
>Assignee: Ayush Saxena
>Priority: Minor
> Attachments: HDFS-14486-01.patch, HDFS-14486-02.patch, 
> HDFS-14486-03.patch, HDFS-14486-04.patch
>
>
> Dear HDFS developers, we are developing a tool to detect exception-related 
> bugs in Java. Our prototype has spotted a few {{throw}} statements whose 
> exception class does not accurately describe why they are thrown. This can be 
> dangerous since it makes correctly handling them challenging. For example, in 
> an old bug, HDFS-8224, throwing a general {{IOException}} makes it difficult 
> to perform data recovery specifically when a metadata file is corrupted.
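
As an illustration of the pattern the report points at (the exception class 
and throw site below are hypothetical, not taken from the patch):

{code:java}
import java.io.IOException;

// Hypothetical: a dedicated exception type tells callers *why* the operation
// failed, so a corrupt metadata file can be handled specifically instead of
// being lumped in with every other IOException (cf. HDFS-8224).
class CorruptMetadataException extends IOException {
  CorruptMetadataException(String message) {
    super(message);
  }
}

class MetadataReader {
  void checkHeader(byte[] header, String file) throws IOException {
    if (header.length < 8) {
      // Specific type at the throw site, rather than a bare IOException:
      throw new CorruptMetadataException("truncated metadata header in " + file);
    }
  }
}
{code}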



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14499) Misleading REM_QUOTA value with snapshot and trash feature enabled for a directory

2019-05-27 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-14499:
---
Attachment: HDFS-14499.000.patch

> Misleading REM_QUOTA value with snapshot and trash feature enabled for a 
> directory
> --
>
> Key: HDFS-14499
> URL: https://issues.apache.org/jira/browse/HDFS-14499
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-14499.000.patch
>
>
> The following flow of steps shows a discrepancy between REM_QUOTA and a new 
> file operation failure: REM_QUOTA shows a value of 1, but the file creation 
> operation does not succeed.
> {code:java}
> hdfs@c3265-node3 root$ hdfs dfs -mkdir /dir1
> hdfs@c3265-node3 root$ hdfs dfsadmin -setQuota 2 /dir1
> hdfs@c3265-node3 root$ hdfs dfsadmin -allowSnapshot /dir1
> Allowing snaphot on /dir1 succeeded
> hdfs@c3265-node3 root$ hdfs dfs -touchz /dir1/file1
> hdfs@c3265-node3 root$ hdfs dfs -createSnapshot /dir1 snap1
> Created snapshot /dir1/.snapshot/snap1
> hdfs@c3265-node3 root$ hdfs dfs -count -v -q /dir1
> QUOTA REM_QUOTA SPACE_QUOTA REM_SPACE_QUOTA DIR_COUNT FILE_COUNT CONTENT_SIZE 
> PATHNAME
> 2 0 none inf 1 1 0 /dir1
> hdfs@c3265-node3 root$ hdfs dfs -rm /dir1/file1
> 19/03/26 11:20:25 INFO fs.TrashPolicyDefault: Moved: 
> 'hdfs://smajetinn/dir1/file1' to trash at: 
> hdfs://smajetinn/user/hdfs/.Trash/Current/dir1/file11553599225772
> hdfs@c3265-node3 root$ hdfs dfs -count -v -q /dir1
> QUOTA REM_QUOTA SPACE_QUOTA REM_SPACE_QUOTA DIR_COUNT FILE_COUNT CONTENT_SIZE 
> PATHNAME
> 2 1 none inf 1 0 0 /dir1
> hdfs@c3265-node3 root$ hdfs dfs -touchz /dir1/file1
> touchz: The NameSpace quota (directories and files) of directory /dir1 is 
> exceeded: quota=2 file count=3{code}
> The issue here is that the count command takes only files and directories 
> into account, not the inode references. When trash is enabled, deleting a 
> file inside a directory actually performs a rename, as a result of which an 
> inode reference is maintained in the deleted list of the snapshot diff. That 
> reference is taken into account while computing the namespace quota, but the 
> count command (getContentSummary()) considers just the files and directories, 
> not the referenced entity, when calculating REM_QUOTA. The referenced entity 
> is taken into account for the space quota only.
> INodeReference.java:
> ---
> {code:java}
>  @Override
> public final ContentSummaryComputationContext computeContentSummary(
> int snapshotId, ContentSummaryComputationContext summary) {
>   final int s = snapshotId < lastSnapshotId ? snapshotId : lastSnapshotId;
>   // only count storagespace for WithName
>   final QuotaCounts q = computeQuotaUsage(
>   summary.getBlockStoragePolicySuite(), getStoragePolicyID(), false, 
> s);
>   summary.getCounts().addContent(Content.DISKSPACE, q.getStorageSpace());
>   summary.getCounts().addTypeSpaces(q.getTypeSpaces());
>   return summary;
> }
> {code}
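
One possible direction for a fix, sketched against the snippet above 
(illustrative only, not necessarily what HDFS-14499.000.patch does): attribute 
the reference's namespace usage to the content summary as well, so that 
getContentSummary() and the quota check agree.

{code:java}
// Illustrative sketch inside INodeReference.WithName#computeContentSummary:
// the namespace usage of the referenced inode is added alongside the storage
// space, so -count -q derives REM_QUOTA from the same numbers the quota
// enforcement uses. Whether the count belongs under Content.FILE or should be
// split per inode type is glossed over here.
final QuotaCounts q = computeQuotaUsage(
    summary.getBlockStoragePolicySuite(), getStoragePolicyID(), false, s);
summary.getCounts().addContent(Content.DISKSPACE, q.getStorageSpace());
summary.getCounts().addTypeSpaces(q.getTypeSpaces());
// Hypothetical addition: count the referenced entity against the namespace too.
summary.getCounts().addContent(Content.FILE, q.getNameSpace());
{code}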



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14499) Misleading REM_QUOTA value with snapshot and trash feature enabled for a directory

2019-05-27 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-14499:
---
Status: Patch Available  (was: Open)

> Misleading REM_QUOTA value with snapshot and trash feature enabled for a 
> directory
> --
>
> Key: HDFS-14499
> URL: https://issues.apache.org/jira/browse/HDFS-14499
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-14499.000.patch
>
>
> The following flow of steps shows a discrepancy between REM_QUOTA and a new 
> file operation failure: REM_QUOTA shows a value of 1, but the file creation 
> operation does not succeed.
> {code:java}
> hdfs@c3265-node3 root$ hdfs dfs -mkdir /dir1
> hdfs@c3265-node3 root$ hdfs dfsadmin -setQuota 2 /dir1
> hdfs@c3265-node3 root$ hdfs dfsadmin -allowSnapshot /dir1
> Allowing snaphot on /dir1 succeeded
> hdfs@c3265-node3 root$ hdfs dfs -touchz /dir1/file1
> hdfs@c3265-node3 root$ hdfs dfs -createSnapshot /dir1 snap1
> Created snapshot /dir1/.snapshot/snap1
> hdfs@c3265-node3 root$ hdfs dfs -count -v -q /dir1
> QUOTA REM_QUOTA SPACE_QUOTA REM_SPACE_QUOTA DIR_COUNT FILE_COUNT CONTENT_SIZE 
> PATHNAME
> 2 0 none inf 1 1 0 /dir1
> hdfs@c3265-node3 root$ hdfs dfs -rm /dir1/file1
> 19/03/26 11:20:25 INFO fs.TrashPolicyDefault: Moved: 
> 'hdfs://smajetinn/dir1/file1' to trash at: 
> hdfs://smajetinn/user/hdfs/.Trash/Current/dir1/file11553599225772
> hdfs@c3265-node3 root$ hdfs dfs -count -v -q /dir1
> QUOTA REM_QUOTA SPACE_QUOTA REM_SPACE_QUOTA DIR_COUNT FILE_COUNT CONTENT_SIZE 
> PATHNAME
> 2 1 none inf 1 0 0 /dir1
> hdfs@c3265-node3 root$ hdfs dfs -touchz /dir1/file1
> touchz: The NameSpace quota (directories and files) of directory /dir1 is 
> exceeded: quota=2 file count=3{code}
> The issue here is that the count command takes only files and directories 
> into account, not the inode references. When trash is enabled, deleting a 
> file inside a directory actually performs a rename, as a result of which an 
> inode reference is maintained in the deleted list of the snapshot diff. That 
> reference is taken into account while computing the namespace quota, but the 
> count command (getContentSummary()) considers just the files and directories, 
> not the referenced entity, when calculating REM_QUOTA. The referenced entity 
> is taken into account for the space quota only.
> INodeReference.java:
> ---
> {code:java}
>  @Override
> public final ContentSummaryComputationContext computeContentSummary(
> int snapshotId, ContentSummaryComputationContext summary) {
>   final int s = snapshotId < lastSnapshotId ? snapshotId : lastSnapshotId;
>   // only count storagespace for WithName
>   final QuotaCounts q = computeQuotaUsage(
>   summary.getBlockStoragePolicySuite(), getStoragePolicyID(), false, 
> s);
>   summary.getCounts().addContent(Content.DISKSPACE, q.getStorageSpace());
>   summary.getCounts().addTypeSpaces(q.getTypeSpaces());
>   return summary;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1509) TestBlockOutputStreamWithFailures#test2DatanodesFailure fails intermittently

2019-05-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16848862#comment-16848862
 ] 

Hudson commented on HDDS-1509:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16609 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16609/])
HDDS-1509. TestBlockOutputStreamWithFailures#test2DatanodesFailure fails 
(shashikant: rev 83549dbbea4f79a51b1289590f10f43794b09c17)
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyOutputStream.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestBlockOutputStreamWithFailures.java


> TestBlockOutputStreamWithFailures#test2DatanodesFailure fails intermittently
> 
>
> Key: HDDS-1509
> URL: https://issues.apache.org/jira/browse/HDDS-1509
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The test fails because it expects the exception after 2 datanode failures to 
> be of type RaftRetryFailureException. But it might happen that the pipeline 
> gets destroyed before the actual write executes over Ratis, in which case it 
> fails with a GroupMismatchException instead.
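
An illustrative way for the test to tolerate both outcomes (a sketch of the 
pattern, not necessarily the committed change; the stream variable name is 
made up):

{code:java}
import java.io.IOException;
import org.junit.Assert;

// ...inside test2DatanodesFailure(), after shutting down two datanodes;
// "key" stands for the Ozone output stream under test.
try {
  key.close();
  Assert.fail("expected the write to fail after two datanode failures");
} catch (IOException e) {
  // Depending on how quickly the pipeline is destroyed, Ratis surfaces either
  // a retry failure or a group mismatch; the test should accept both.
  String msg = e.getMessage();
  Assert.assertTrue(msg, msg.contains("RaftRetryFailureException")
      || msg.contains("GroupMismatchException"));
}
{code}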



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1584) Fix TestFailureHandlingByClient tests

2019-05-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16848861#comment-16848861
 ] 

Hudson commented on HDDS-1584:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16609 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16609/])
HDDS-1584. Fix TestFailureHandlingByClient tests. Contributed by (shashikant: 
rev f0e44b3a3fa20b0be5b1e1c2bae7b5a8b73f4828)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestFailureHandlingByClient.java


> Fix TestFailureHandlingByClient tests
> -
>
> Key: HDDS-1584
> URL: https://issues.apache.org/jira/browse/HDDS-1584
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.1
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The test failures are caused because the test relies on 
> KeyOutputStream#getLocationList() to validate the number of preallocated 
> blocks, but that method was recently changed to exclude empty blocks. The fix 
> is mostly to use KeyOutputStream#getStreamEntries() to get the number of 
> preallocated blocks.
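
Sketched, the described change looks roughly like this (variable names are 
illustrative and exact accessor signatures may differ in the patch):

{code:java}
// Before: relied on the block-location list, which now excludes empty blocks.
// int preAllocated = keyOutputStream.getLocationList().size();

// After: count the stream entries, which still include the preallocated
// (possibly still empty) blocks. "expectedBlockCount" is illustrative.
KeyOutputStream keyOutputStream = (KeyOutputStream) key.getOutputStream();
int preAllocated = keyOutputStream.getStreamEntries().size();
Assert.assertEquals(expectedBlockCount, preAllocated);
{code}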



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13255) RBF: Fail when try to remove mount point paths

2019-05-27 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16848856#comment-16848856
 ] 

Ayush Saxena commented on HDFS-13255:
-

Thanks [~aajisaka] for the patch.
v003 seems fair enough to me.
There is a checkstyle warning regarding an unused import that you may need to 
fix.
Since we need an update to fix the checkstyle anyway, you could also drop the 
comments in the delete test, as you did in the rename one. :)
Overall LGTM.

> RBF: Fail when try to remove mount point paths
> --
>
> Key: HDFS-13255
> URL: https://issues.apache.org/jira/browse/HDFS-13255
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wu Weiwei
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-13255-HDFS-13891-002.patch, 
> HDFS-13255-HDFS-13891-003.patch, HDFS-13255-HDFS-13891-wip-001.patch
>
>
> When deleting an ns-fed path which includes mount point paths, an error is 
> issued. Each mount point path needs to be deleted independently.
> Operation steps:
> {code:java}
> [hadp@root]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source Destinations Owner Group Mode Quota/Usage
> /rm-test-all/rm-test-ns10 ns10->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, 
> SsQuota: -/-]
> /rm-test-all/rm-test-ns2 ns1->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, 
> SsQuota: -/-]
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml
> -rw-r--r-- 3 hadp supergroup 7481 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/hdfs-site.xml
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns2/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 101 2018-03-07 16:57 
> hdfs://ns-fed/rm-test-all/rm-test-ns2/NOTICE.txt
> -rw-r--r-- 3 hadp supergroup 1366 2018-03-07 16:57 
> hdfs://ns-fed/rm-test-all/rm-test-ns2/README.txt
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml
> -rw-r--r-- 3 hadp supergroup 7481 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/hdfs-site.xml
> [hadp@root]$ hdfs dfs -rm -r hdfs://ns-fed/rm-test-all/
> rm: Failed to move to trash: hdfs://ns-fed/rm-test-all. Consider using 
> -skipTrash option
> [hadp@root]$ hdfs dfs -rm -r -skipTrash hdfs://ns-fed/rm-test-all/
> rm: `hdfs://ns-fed/rm-test-all': Input/output error
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14499) Misleading REM_QUOTA value with snapshot and trash feature enabled for a directory

2019-05-27 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee reassigned HDFS-14499:
--

Assignee: Shashikant Banerjee

> Misleading REM_QUOTA value with snapshot and trash feature enabled for a 
> directory
> --
>
> Key: HDFS-14499
> URL: https://issues.apache.org/jira/browse/HDFS-14499
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>
> The following flow of steps shows a discrepancy between REM_QUOTA and a new 
> file operation failure: REM_QUOTA shows a value of 1, but the file creation 
> operation does not succeed.
> {code:java}
> hdfs@c3265-node3 root$ hdfs dfs -mkdir /dir1
> hdfs@c3265-node3 root$ hdfs dfsadmin -setQuota 2 /dir1
> hdfs@c3265-node3 root$ hdfs dfsadmin -allowSnapshot /dir1
> Allowing snaphot on /dir1 succeeded
> hdfs@c3265-node3 root$ hdfs dfs -touchz /dir1/file1
> hdfs@c3265-node3 root$ hdfs dfs -createSnapshot /dir1 snap1
> Created snapshot /dir1/.snapshot/snap1
> hdfs@c3265-node3 root$ hdfs dfs -count -v -q /dir1
> QUOTA REM_QUOTA SPACE_QUOTA REM_SPACE_QUOTA DIR_COUNT FILE_COUNT CONTENT_SIZE 
> PATHNAME
> 2 0 none inf 1 1 0 /dir1
> hdfs@c3265-node3 root$ hdfs dfs -rm /dir1/file1
> 19/03/26 11:20:25 INFO fs.TrashPolicyDefault: Moved: 
> 'hdfs://smajetinn/dir1/file1' to trash at: 
> hdfs://smajetinn/user/hdfs/.Trash/Current/dir1/file11553599225772
> hdfs@c3265-node3 root$ hdfs dfs -count -v -q /dir1
> QUOTA REM_QUOTA SPACE_QUOTA REM_SPACE_QUOTA DIR_COUNT FILE_COUNT CONTENT_SIZE 
> PATHNAME
> 2 1 none inf 1 0 0 /dir1
> hdfs@c3265-node3 root$ hdfs dfs -touchz /dir1/file1
> touchz: The NameSpace quota (directories and files) of directory /dir1 is 
> exceeded: quota=2 file count=3{code}
> The issue here is that the count command takes only files and directories 
> into account, not the inode references. When trash is enabled, deleting a 
> file inside a directory actually performs a rename, as a result of which an 
> inode reference is maintained in the deleted list of the snapshot diff. That 
> reference is taken into account while computing the namespace quota, but the 
> count command (getContentSummary()) considers just the files and directories, 
> not the referenced entity, when calculating REM_QUOTA. The referenced entity 
> is taken into account for the space quota only.
> INodeReference.java:
> ---
> {code:java}
>  @Override
> public final ContentSummaryComputationContext computeContentSummary(
> int snapshotId, ContentSummaryComputationContext summary) {
>   final int s = snapshotId < lastSnapshotId ? snapshotId : lastSnapshotId;
>   // only count storagespace for WithName
>   final QuotaCounts q = computeQuotaUsage(
>   summary.getBlockStoragePolicySuite(), getStoragePolicyID(), false, 
> s);
>   summary.getCounts().addContent(Content.DISKSPACE, q.getStorageSpace());
>   summary.getCounts().addTypeSpaces(q.getTypeSpaces());
>   return summary;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1509) TestBlockOutputStreamWithFailures#test2DatanodesFailure fails intermittently

2019-05-27 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-1509:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> TestBlockOutputStreamWithFailures#test2DatanodesFailure fails intermittently
> 
>
> Key: HDDS-1509
> URL: https://issues.apache.org/jira/browse/HDDS-1509
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The test fails because it expects the exception after 2 datanode failures to 
> be of type RaftRetryFailureException. But it might happen that the pipeline 
> gets destroyed before the actual write executes over Ratis, in which case it 
> fails with a GroupMismatchException instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1509) TestBlockOutputStreamWithFailures#test2DatanodesFailure fails intermittently

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1509?focusedWorklogId=248763=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-248763
 ]

ASF GitHub Bot logged work on HDDS-1509:


Author: ASF GitHub Bot
Created on: 27/May/19 11:03
Start Date: 27/May/19 11:03
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on issue #805: HDDS-1509. 
TestBlockOutputStreamWithFailures#test2DatanodesFailure fails intermittently
URL: https://github.com/apache/hadoop/pull/805#issuecomment-496173998
 
 
   Thanks @mukul1987 for the review. I have committed this change to trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 248763)
Time Spent: 1h 10m  (was: 1h)

> TestBlockOutputStreamWithFailures#test2DatanodesFailure fails intermittently
> 
>
> Key: HDDS-1509
> URL: https://issues.apache.org/jira/browse/HDDS-1509
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The test fails because it expects the exception after 2 datanode failures to 
> be of type RaftRetryFailureException. But it might happen that the pipeline 
> gets destroyed before the actual write executes over Ratis, in which case it 
> fails with a GroupMismatchException instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1509) TestBlockOutputStreamWithFailures#test2DatanodesFailure fails intermittently

2019-05-27 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1509?focusedWorklogId=248761=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-248761
 ]

ASF GitHub Bot logged work on HDDS-1509:


Author: ASF GitHub Bot
Created on: 27/May/19 11:02
Start Date: 27/May/19 11:02
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #805: HDDS-1509. 
TestBlockOutputStreamWithFailures#test2DatanodesFailure fails intermittently
URL: https://github.com/apache/hadoop/pull/805
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 248761)
Time Spent: 1h  (was: 50m)

> TestBlockOutputStreamWithFailures#test2DatanodesFailure fails intermittently
> 
>
> Key: HDDS-1509
> URL: https://issues.apache.org/jira/browse/HDDS-1509
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The test fails because it expects the exception after 2 datanode failures to 
> be of type RaftRetryFailureException. But it might happen that the pipeline 
> gets destroyed before the actual write executes over Ratis, in which case it 
> fails with a GroupMismatchException instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


