[jira] [Updated] (HDDS-1281) Fix the findbug issue caused by HDDS-1163

2019-03-15 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1281:
-------------------------------------
  Resolution: Fixed
Target Version/s: 0.5.0
  Status: Resolved  (was: Patch Available)

Thank you, [~avijayan], for the contribution.

I have committed this to trunk.

> Fix the findbug issue caused by HDDS-1163
> -----------------------------------------
>
> Key: HDDS-1281
> URL: https://issues.apache.org/jira/browse/HDDS-1281
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Aravindan Vijayan
>Priority: Minor
>  Labels: newbie
> Attachments: HDDS-1281-000.patch
>
>
> https://ci.anzix.net/job/ozone-nightly/30/findbugs/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1281) Fix the findbug issue caused by HDDS-1163

2019-03-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794126#comment-16794126
 ] 

Hudson commented on HDDS-1281:
------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16226 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16226/])
HDDS-1281. Fix the findbug issue caused by HDDS-1163. Contributed by (bharat: 
rev 926d548caabdfcfbf7a75dcf0657e8dde6d9710a)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainerCheck.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerCheck.java


> Fix the findbug issue caused by HDDS-1163
> -----------------------------------------
>
> Key: HDDS-1281
> URL: https://issues.apache.org/jira/browse/HDDS-1281
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Aravindan Vijayan
>Priority: Minor
>  Labels: newbie
> Fix For: 0.5.0
>
> Attachments: HDDS-1281-000.patch
>
>
> https://ci.anzix.net/job/ozone-nightly/30/findbugs/






[jira] [Commented] (HDDS-1163) Basic framework for Ozone Data Scrubber

2019-03-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794127#comment-16794127
 ] 

Hudson commented on HDDS-1163:
------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16226 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16226/])
HDDS-1281. Fix the findbug issue caused by HDDS-1163. Contributed by (bharat: 
rev 926d548caabdfcfbf7a75dcf0657e8dde6d9710a)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerCheck.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainerCheck.java


> Basic framework for Ozone Data Scrubber
> ---------------------------------------
>
> Key: HDDS-1163
> URL: https://issues.apache.org/jira/browse/HDDS-1163
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1163.000.patch, HDDS-1163.001.patch, 
> HDDS-1163.002.patch, HDDS-1163.003.patch, HDDS-1163.004.patch, 
> HDDS-1163.005.patch, HDDS-1163.006.patch, HDDS-1163.007.patch
>
>
> Included in the scope:
> 1. Background scanner thread to iterate over container set and dispatch check 
> tasks for individual containers
> 2. Fixed rate scheduling - dispatch tasks at a pre-determined rate (for 
> example 1 container/s)
> 3. Check disk layout of Container - basic check for integrity of the 
> directory hierarchy inside the container, include chunk directory and 
> metadata directories
> 4. Check container file - basic sanity checks for the container metafile
> 5. Check Block Database - iterate over entries in the container block 
> database and check for the existence and accessibility of the chunks for each 
> block.
> Not in scope (will be done as separate subtasks):
> 1. Dynamic scheduling/pacing of background scan based on system load and 
> available resources.
> 2. Detection and handling of orphan chunks
> 3. Checksum verification for Chunks
> 4. Corruption handling - reporting (to SCM) and subsequent handling of any 
> corruption detected by the scanner. The current subtask will simply log any 
> corruption which is detected.
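The fixed-rate dispatch described in scope item 2 can be sketched as below. This is a hypothetical illustration only (the class and method names are invented, not the actual HDDS-1163 patch), assuming one check task is dispatched per interval from a single-threaded scheduler:

```java
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of a fixed-rate container scan dispatcher.
class FixedRateScanner {

  // Convert a scan rate (containers per second) into a dispatch interval.
  // At the example rate of 1 container/s this yields 1000 ms.
  static long intervalMillis(double containersPerSecond) {
    return (long) (1000.0 / containersPerSecond);
  }

  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  // Dispatch one check task per interval, walking the container set once.
  void start(List<Long> containerIds, double containersPerSecond) {
    Iterator<Long> it = containerIds.iterator();
    scheduler.scheduleAtFixedRate(() -> {
      if (it.hasNext()) {
        long id = it.next();
        // Stand-in for dispatching a KeyValueContainerCheck-style task.
        System.out.println("checking container " + id);
      }
    }, 0, intervalMillis(containersPerSecond), TimeUnit.MILLISECONDS);
  }

  void stop() {
    scheduler.shutdownNow();
  }
}
```

Dynamic pacing based on system load (scoped out above) would replace the fixed interval with one recomputed between dispatches.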






[jira] [Updated] (HDDS-1281) Fix the findbug issue caused by HDDS-1163

2019-03-15 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1281:
-------------------------------------
Fix Version/s: 0.5.0

> Fix the findbug issue caused by HDDS-1163
> -----------------------------------------
>
> Key: HDDS-1281
> URL: https://issues.apache.org/jira/browse/HDDS-1281
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Aravindan Vijayan
>Priority: Minor
>  Labels: newbie
> Fix For: 0.5.0
>
> Attachments: HDDS-1281-000.patch
>
>
> https://ci.anzix.net/job/ozone-nightly/30/findbugs/






[jira] [Updated] (HDDS-1263) SCM CLI does not list container with id 1

2019-03-15 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1263:
-------------------------------------
Fix Version/s: 0.4.0

> SCM CLI does not list container with id 1
> -----------------------------------------
>
> Key: HDDS-1263
> URL: https://issues.apache.org/jira/browse/HDDS-1263
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0, 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Steps to reproduce
>  # Create two containers 
> {code:java}
> ozone scmcli create
> ozone scmcli create{code}
>  # Try to list containers
> {code:java}
> hadoop@7a73695402ae:~$ ozone scmcli list --start=0
>  Container ID should be a positive long. 0
> hadoop@7a73695402ae:~$ ozone scmcli list --start=1 
> { 
> "state" : "OPEN",
> "replicationFactor" : "ONE",
> "replicationType" : "STAND_ALONE",
> "usedBytes" : 0,
> "numberOfKeys" : 0,
> "lastUsed" : 274660388,
> "stateEnterTime" : 274646481,
> "owner" : "OZONE",
> "containerID" : 2,
> "deleteTransactionId" : 0,
> "sequenceId" : 0,
> "open" : true 
> }{code}
> There is no way to list the container with containerID 1.
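The behaviour above can be modelled minimally as follows. This is a hypothetical sketch (invented names, not the actual scmcli code): the `--start` id is validated as strictly positive and then treated as an exclusive lower bound, so the container with id 1 can never appear; one possible fix is to accept 0 as "list from the beginning" while keeping the exclusive-start semantics for pagination:

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical model of the listing behaviour described in this issue.
class ContainerLister {

  // Buggy shape: start must be positive and is exclusive, so id 1
  // is unreachable ("Container ID should be a positive long. 0").
  static List<Long> listExclusive(List<Long> ids, long start) {
    if (start <= 0) {
      throw new IllegalArgumentException(
          "Container ID should be a positive long. " + start);
    }
    return ids.stream().filter(id -> id > start).collect(Collectors.toList());
  }

  // One possible fix: allow 0 to mean "from the beginning" while keeping
  // exclusive-start semantics for subsequent pages.
  static List<Long> listFixed(List<Long> ids, long start) {
    if (start < 0) {
      throw new IllegalArgumentException(
          "Container ID should be a non-negative long. " + start);
    }
    return ids.stream().filter(id -> id > start).collect(Collectors.toList());
  }
}
```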






[jira] [Commented] (HDDS-1281) Fix the findbug issue caused by HDDS-1163

2019-03-15 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794119#comment-16794119
 ] 

Bharat Viswanadham commented on HDDS-1281:
------------------------------------------

+1 LGTM.

I will commit this shortly.

> Fix the findbug issue caused by HDDS-1163
> -----------------------------------------
>
> Key: HDDS-1281
> URL: https://issues.apache.org/jira/browse/HDDS-1281
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Aravindan Vijayan
>Priority: Minor
>  Labels: newbie
> Attachments: HDDS-1281-000.patch
>
>
> https://ci.anzix.net/job/ozone-nightly/30/findbugs/






[jira] [Comment Edited] (HDDS-1263) SCM CLI does not list container with id 1

2019-03-15 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793999#comment-16793999
 ] 

Bharat Viswanadham edited comment on HDDS-1263 at 3/16/19 3:58 AM:
-------------------------------------------------------------------

Thank you, [~vivekratnavel], for the contribution.

I have committed this to trunk and ozone-0.4.


was (Author: bharatviswa):
Thank You [~vivekratnavel] for the contribution.

I have committed this to the trunk.

> SCM CLI does not list container with id 1
> -----------------------------------------
>
> Key: HDDS-1263
> URL: https://issues.apache.org/jira/browse/HDDS-1263
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0, 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Steps to reproduce
>  # Create two containers 
> {code:java}
> ozone scmcli create
> ozone scmcli create{code}
>  # Try to list containers
> {code:java}
> hadoop@7a73695402ae:~$ ozone scmcli list --start=0
>  Container ID should be a positive long. 0
> hadoop@7a73695402ae:~$ ozone scmcli list --start=1 
> { 
> "state" : "OPEN",
> "replicationFactor" : "ONE",
> "replicationType" : "STAND_ALONE",
> "usedBytes" : 0,
> "numberOfKeys" : 0,
> "lastUsed" : 274660388,
> "stateEnterTime" : 274646481,
> "owner" : "OZONE",
> "containerID" : 2,
> "deleteTransactionId" : 0,
> "sequenceId" : 0,
> "open" : true 
> }{code}
> There is no way to list the container with containerID 1.






[jira] [Resolved] (HDDS-1292) Fix nightly run findbugs and checkstyle issues

2019-03-15 Thread Supratim Deka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Supratim Deka resolved HDDS-1292.
---------------------------------
Resolution: Duplicate

> Fix nightly run findbugs and checkstyle issues
> ----------------------------------------------
>
> Key: HDDS-1292
> URL: https://issues.apache.org/jira/browse/HDDS-1292
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Priority: Major
>
> [https://ci.anzix.net/job/ozone/3775/findbugs/]
>  
> https://ci.anzix.net/job/ozone/3775/checkstyle/






[jira] [Commented] (HDFS-14327) Support security for DNS resolving

2019-03-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794076#comment-16794076
 ] 

Hadoop QA commented on HDFS-14327:
----------------------------------

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
10s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 23s{color} | {color:orange} root: The patch generated 3 new + 6 unchanged - 
0 fixed = 9 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
8s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
57s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14327 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962663/HDFS-14327.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b85f05196ed5 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2064ca0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Work logged] (HDDS-1119) DN get OM certificate from SCM CA for block token validation

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1119?focusedWorklogId=214124&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214124
 ]

ASF GitHub Bot logged work on HDDS-1119:


Author: ASF GitHub Bot
Created on: 16/Mar/19 00:22
Start Date: 16/Mar/19 00:22
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #601: HDDS-1119. DN get 
OM certificate from SCM CA for block token validat…
URL: https://github.com/apache/hadoop/pull/601#issuecomment-473479535
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 25 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 978 | trunk passed |
   | +1 | compile | 920 | trunk passed |
   | +1 | checkstyle | 192 | trunk passed |
   | -1 | mvnsite | 41 | container-service in trunk failed. |
   | -1 | mvnsite | 43 | server-scm in trunk failed. |
   | -1 | mvnsite | 41 | integration-test in trunk failed. |
   | -1 | mvnsite | 37 | ozone-manager in trunk failed. |
   | +1 | shadedclient | 1218 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist hadoop-ozone/integration-test |
   | -1 | findbugs | 35 | container-service in trunk failed. |
   | -1 | findbugs | 36 | server-scm in trunk failed. |
   | -1 | findbugs | 36 | ozone-manager in trunk failed. |
   | +1 | javadoc | 263 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | -1 | mvninstall | 19 | dist in the patch failed. |
   | +1 | compile | 893 | the patch passed |
   | +1 | cc | 893 | the patch passed |
   | +1 | javac | 893 | the patch passed |
   | +1 | checkstyle | 186 | the patch passed |
   | +1 | mvnsite | 308 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 682 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist hadoop-ozone/integration-test |
   | -1 | findbugs | 91 | hadoop-hdds/common generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0) |
   | +1 | javadoc | 261 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 87 | common in the patch passed. |
   | -1 | unit | 72 | container-service in the patch failed. |
   | -1 | unit | 93 | server-scm in the patch failed. |
   | +1 | unit | 49 | common in the patch passed. |
   | +1 | unit | 21 | dist in the patch passed. |
   | -1 | unit | 261 | integration-test in the patch failed. |
   | +1 | unit | 43 | ozone-manager in the patch passed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 7582 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdds/common |
   |  |  Possible null pointer dereference of cert in 
org.apache.hadoop.hdds.security.x509.certificate.client.DefaultCertificateClient.loadAllCertificates()
  Dereferenced at DefaultCertificateClient.java:cert in 
org.apache.hadoop.hdds.security.x509.certificate.client.DefaultCertificateClient.loadAllCertificates()
  Dereferenced at DefaultCertificateClient.java:[line 130] |
   | Failed junit tests | hadoop.hdds.scm.node.TestSCMNodeManager |
   |   | hadoop.ozone.ozShell.TestOzoneDatanodeShell |
   |   | hadoop.ozone.web.client.TestBuckets |
   |   | hadoop.ozone.scm.TestSCMMXBean |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.TestContainerStateMachineIdempotency |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.om.TestOzoneManagerConfiguration |
   |   | hadoop.ozone.om.TestOmMetrics |
   |   | hadoop.ozone.TestContainerOperations |
   |   | hadoop.ozone.container.TestContainerReplication |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.om.TestOmAcls |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.scm.pipeline.TestSCMPipelineMetrics |
   |   | hadoop.ozone.om.TestMultipleContainerReadWrite |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.om.TestOMDbCheckpointServlet |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.scm.pipeline.TestPipelineManagerMXBean |
   |   | 
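The FindBugs finding above is the classic null-on-some-path shape: a value that may be null on one code path is dereferenced unconditionally. A hypothetical sketch of the flagged pattern and a typical guard (invented names, not the actual DefaultCertificateClient code):

```java
import java.security.cert.X509Certificate;

// Hypothetical illustration of the null-pointer-dereference pattern
// FindBugs flagged in loadAllCertificates().
class CertLoadSketch {

  // Stand-in for a decoder that returns null on malformed input.
  static X509Certificate decode(byte[] pem) {
    return null;
  }

  // Flagged shape: cert may be null, but it is dereferenced anyway.
  static String serialUnsafe(byte[] pem) {
    X509Certificate cert = decode(pem);
    return cert.getSerialNumber().toString(); // possible NPE on some paths
  }

  // Typical fix: guard the dereference and skip the unparseable entry.
  static String serialSafe(byte[] pem) {
    X509Certificate cert = decode(pem);
    if (cert == null) {
      return null; // skip (or log) the bad certificate instead of crashing
    }
    return cert.getSerialNumber().toString();
  }
}
```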

[jira] [Work logged] (HDDS-1119) DN get OM certificate from SCM CA for block token validation

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1119?focusedWorklogId=214108&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214108
 ]

ASF GitHub Bot logged work on HDDS-1119:


Author: ASF GitHub Bot
Created on: 15/Mar/19 23:46
Start Date: 15/Mar/19 23:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #601: HDDS-1119. DN get 
OM certificate from SCM CA for block token validat…
URL: https://github.com/apache/hadoop/pull/601#issuecomment-473474591
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 22 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 25 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for branch |
   | +1 | mvninstall | 979 | trunk passed |
   | +1 | compile | 970 | trunk passed |
   | +1 | checkstyle | 193 | trunk passed |
   | +1 | mvnsite | 259 | trunk passed |
   | +1 | shadedclient | 1103 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist hadoop-ozone/integration-test |
   | -1 | findbugs | 51 | hadoop-hdds/container-service in trunk has 1 extant 
Findbugs warnings. |
   | +1 | javadoc | 216 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | -1 | mvninstall | 18 | dist in the patch failed. |
   | -1 | mvninstall | 23 | integration-test in the patch failed. |
   | -1 | compile | 50 | root in the patch failed. |
   | -1 | cc | 50 | root in the patch failed. |
   | -1 | javac | 50 | root in the patch failed. |
   | +1 | checkstyle | 189 | the patch passed |
   | +1 | mvnsite | 189 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | -1 | shadedclient | 187 | patch has errors when building and testing our 
client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist hadoop-ozone/integration-test |
   | -1 | findbugs | 8 | common in the patch failed. |
   | -1 | findbugs | 11 | container-service in the patch failed. |
   | -1 | findbugs | 8 | server-scm in the patch failed. |
   | -1 | findbugs | 22 | common in the patch failed. |
   | -1 | findbugs | 22 | ozone-manager in the patch failed. |
   | -1 | javadoc | 23 | common in the patch failed. |
   | -1 | javadoc | 21 | container-service in the patch failed. |
   | -1 | javadoc | 21 | server-scm in the patch failed. |
   | -1 | javadoc | 22 | common in the patch failed. |
   | -1 | javadoc | 21 | dist in the patch failed. |
   | -1 | javadoc | 22 | integration-test in the patch failed. |
   | -1 | javadoc | 21 | ozone-manager in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 21 | common in the patch failed. |
   | -1 | unit | 21 | container-service in the patch failed. |
   | -1 | unit | 22 | server-scm in the patch failed. |
   | -1 | unit | 21 | common in the patch failed. |
   | -1 | unit | 21 | dist in the patch failed. |
   | -1 | unit | 21 | integration-test in the patch failed. |
   | -1 | unit | 22 | ozone-manager in the patch failed. |
   | +1 | asflicense | 35 | The patch does not generate ASF License warnings. |
   | | | 5086 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/22/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/601 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  yamllint  |
   | uname | Linux 36fe025e3c7a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 03f3c8a |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/22/artifact/out/branch-findbugs-hadoop-hdds_container-service-warnings.html
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/22/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/22/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/22/artifact/out/patch-compile-root.txt
 |
   | cc | 

[jira] [Commented] (HDFS-14353) Erasure Coding: metrics xmitsInProgress become to negative.

2019-03-15 Thread maobaolong (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794038#comment-16794038
 ] 

maobaolong commented on HDFS-14353:
-----------------------------------

[~ayushtkn] Please take a look at this issue; it may lead to missing blocks.

> Erasure Coding: metrics xmitsInProgress become to negative.
> -----------------------------------------------------------
>
> Key: HDFS-14353
> URL: https://issues.apache.org/jira/browse/HDFS-14353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, erasure-coding
>Affects Versions: 3.3.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14353.001, screenshot-1.png
>
>







[jira] [Commented] (HDDS-1233) Create an Ozone Manager Service provider for Recon.

2019-03-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794034#comment-16794034
 ] 

Hadoop QA commented on HDDS-1233:
---------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 40s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 23s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.common.TestDatanodeStateMachine |
|   | hadoop.ozone.web.TestOzoneWebAccess |
|   | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
|   | hadoop.ozone.scm.pipeline.TestSCMPipelineMetrics |

[jira] [Commented] (HDDS-1250) In OM HA AllocateBlock call where connecting to SCM from OM should not happen on Ratis

2019-03-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794029#comment-16794029
 ] 

Hadoop QA commented on HDDS-1250:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
28s{color} | {color:red} ozone-manager in trunk failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} ozone-manager in trunk failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-ozone: The patch generated 0 new + 1 
unchanged - 1 fixed = 1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 48s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.TestMultipleContainerReadWrite |
|   | hadoop.ozone.client.rpc.TestReadRetries |
|   | 

[jira] [Work logged] (HDDS-1250) In OM HA AllocateBlock call where connecting to SCM from OM should not happen on Ratis

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1250?focusedWorklogId=214104=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214104
 ]

ASF GitHub Bot logged work on HDDS-1250:


Author: ASF GitHub Bot
Created on: 15/Mar/19 23:38
Start Date: 15/Mar/19 23:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #591: HDDS-1250: In OM 
HA AllocateBlock call where connecting to SCM from OM should not happen on 
Ratis.
URL: https://github.com/apache/hadoop/pull/591#issuecomment-473473319
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|--------:|:--------|:--------|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 51 | Maven dependency ordering for branch |
   | +1 | mvninstall | 977 | trunk passed |
   | +1 | compile | 93 | trunk passed |
   | +1 | checkstyle | 26 | trunk passed |
   | -1 | mvnsite | 28 | ozone-manager in trunk failed. |
   | +1 | shadedclient | 741 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | -1 | findbugs | 23 | ozone-manager in trunk failed. |
   | +1 | javadoc | 69 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for patch |
   | +1 | mvninstall | 95 | the patch passed |
   | +1 | compile | 90 | the patch passed |
   | +1 | cc | 90 | the patch passed |
   | +1 | javac | 90 | the patch passed |
   | +1 | checkstyle | 24 | hadoop-ozone: The patch generated 0 new + 1 
unchanged - 1 fixed = 1 total (was 2) |
   | +1 | mvnsite | 85 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 749 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 112 | the patch passed |
   | +1 | javadoc | 71 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 37 | common in the patch passed. |
   | +1 | unit | 43 | ozone-manager in the patch passed. |
   | -1 | unit | 1248 | integration-test in the patch failed. |
   | +1 | asflicense | 35 | The patch does not generate ASF License warnings. |
   | | | 4785 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestMultipleContainerReadWrite |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.web.client.TestKeys |
   |   | hadoop.ozone.web.client.TestKeysRatis |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.om.TestScmChillMode |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.container.TestContainerReplication |
   |   | hadoop.ozone.web.client.TestOzoneClient |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerHandler
 |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.om.TestOmBlockVersioning |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestContainerReportWithKeys |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestDeleteContainerHandler
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-591/13/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/591 |
   | JIRA Issue | HDDS-1250 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
   | uname | Linux 848ec59fb1ce 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | 

[jira] [Updated] (HDFS-14327) Support security for DNS resolving

2019-03-15 Thread Fengnan Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengnan Li updated HDFS-14327:
--
Status: Patch Available  (was: Open)

> Support security for DNS resolving
> --
>
> Key: HDFS-14327
> URL: https://issues.apache.org/jira/browse/HDFS-14327
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HDFS-14327.001.patch
>
>
> With DNS resolving, clients will get the IPs of the servers (NN/Routers) and 
> use the IP addresses to access the machines. This will fail in a secure 
> environment, as Kerberos uses the domain name in the principal, so it won't 
> recognize the IP addresses.
> This task mainly adds a reverse lookup on top of the current behavior to get the 
> domain name after the IP is fetched. After that, clients will still use the 
> domain name to access the servers.
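The reverse lookup described in the issue can be sketched as follows — a minimal illustration only, not the actual HDFS-14327 patch; the class and method names are made up for the example:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ReverseDnsExample {
    // Forward-resolve the configured host to an IP, then reverse-resolve
    // the IP to a fully qualified domain name, so Kerberos principals
    // (which carry the domain name) still match.
    static String canonicalHostFor(String host) throws UnknownHostException {
        InetAddress addr = InetAddress.getByName(host);
        return addr.getCanonicalHostName();
    }

    public static void main(String[] args) throws Exception {
        // Loopback usually reverse-resolves without external DNS.
        System.out.println(canonicalHostFor("127.0.0.1"));
    }
}
```

Note that if the reverse lookup fails, getCanonicalHostName() falls back to the textual IP address, so callers need to handle that case.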



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14327) Support security for DNS resolving

2019-03-15 Thread Fengnan Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengnan Li updated HDFS-14327:
--
Attachment: HDFS-14327.001.patch

> Support security for DNS resolving
> --
>
> Key: HDFS-14327
> URL: https://issues.apache.org/jira/browse/HDFS-14327
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HDFS-14327.001.patch
>
>
> With DNS resolving, clients will get the IPs of the servers (NN/Routers) and 
> use the IP addresses to access the machines. This will fail in a secure 
> environment, as Kerberos uses the domain name in the principal, so it won't 
> recognize the IP addresses.
> This task mainly adds a reverse lookup on top of the current behavior to get the 
> domain name after the IP is fetched. After that, clients will still use the 
> domain name to access the servers.






[jira] [Commented] (HDFS-14316) RBF: Support unavailable subclusters for mount points with multiple destinations

2019-03-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794023#comment-16794023
 ] 

Íñigo Goiri commented on HDFS-14316:


{quote}
I guess we are retrying here for all exceptions encountered? Maybe we should 
restrict retrying to just certain cases and let it fail for some genuine ones, like 
AccessControlException, which is supposed to fail for all subclusters.
{quote}
Good call on the type of exception to retry.
I'll take a try there.

{quote}
The createLocation stays null, so in the above loop we end up iterating and 
checking the null entry, literally doing nothing. Got the log from the UT too:
{quote}
For the create case, yes, we would retry everything again.
We don't really know the create destination in that case, not sure if we should 
discard the first one.

{quote}
If we suppress the exception here, is there a chance we may end up creating a 
file that already exists in the other subcluster?
{quote}
Yes, that's why I proposed having the user explicitly mark the mount point to 
allow this (fault tolerant).
If the user wants to make sure that the file is non-existent, they should not set 
this flag.
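The retry policy being discussed can be sketched as follows — a hypothetical stand-alone helper, not the actual HDFS-14316 patch; SecurityException stands in for AccessControlException to keep the sketch free of Hadoop dependencies:

```java
import java.util.List;
import java.util.concurrent.Callable;

public class SubclusterRetry {
    // Try each subcluster in order; fail fast on errors that would fail
    // everywhere (access control), retry the rest on the next subcluster.
    static <T> T invokeWithRetry(List<Callable<T>> subclusters) throws Exception {
        Exception last = null;
        for (Callable<T> call : subclusters) {
            try {
                return call.call();
            } catch (SecurityException e) {
                throw e;      // would fail on every subcluster: do not retry
            } catch (Exception e) {
                last = e;     // possibly transient: try the next subcluster
            }
        }
        throw last;           // all subclusters failed
    }

    public static void main(String[] args) throws Exception {
        List<Callable<String>> calls = List.of(
            () -> { throw new RuntimeException("subcluster down"); },
            () -> "created");
        System.out.println(invokeWithRetry(calls)); // created
    }
}
```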

> RBF: Support unavailable subclusters for mount points with multiple 
> destinations
> 
>
> Key: HDFS-14316
> URL: https://issues.apache.org/jira/browse/HDFS-14316
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14316-HDFS-13891.000.patch, 
> HDFS-14316-HDFS-13891.001.patch, HDFS-14316-HDFS-13891.002.patch, 
> HDFS-14316-HDFS-13891.003.patch, HDFS-14316-HDFS-13891.004.patch, 
> HDFS-14316-HDFS-13891.005.patch, HDFS-14316-HDFS-13891.006.patch, 
> HDFS-14316-HDFS-13891.007.patch
>
>
> Currently mount points with multiple destinations (e.g., HASH_ALL) fail 
> writes when the destination subcluster is down. We need an option to allow 
> writing in other subclusters when one is down.






[jira] [Commented] (HDDS-1246) Add ozone delegation token utility subcmd for Ozone CLI

2019-03-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794017#comment-16794017
 ] 

Hudson commented on HDDS-1246:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16224 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16224/])
HDDS-1246. Add ozone delegation token utility subcmd for Ozone CLI. 
(7813154+ajayydv: rev 5cfb88a225157366e194fc7fb2e20141b1ad24db)
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/ha/OMFailoverProxyProvider.java
* (add) 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/O3fsDtFetcher.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSelector.java
* (add) 
hadoop-ozone/ozonefs/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* (add) 
hadoop-ozone/ozonefs/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
* (add) 
hadoop-ozone/ozonefs/src/main/resources/META-INF/services/org.apache.hadoop.security.token.DtFetcher
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerProtocol.java
* (edit) hadoop-ozone/common/src/main/bin/ozone


> Add ozone delegation token utility subcmd for Ozone CLI
> ---
>
> Key: HDDS-1246
> URL: https://issues.apache.org/jira/browse/HDDS-1246
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> This allows running dtutil in integration tests and dev tests to demo Ozone 
> security.
>  
>  
>  
>  






[jira] [Work logged] (HDDS-1246) Add ozone delegation token utility subcmd for Ozone CLI

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1246?focusedWorklogId=214100=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214100
 ]

ASF GitHub Bot logged work on HDDS-1246:


Author: ASF GitHub Bot
Created on: 15/Mar/19 23:08
Start Date: 15/Mar/19 23:08
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #594: HDDS-1246. Add 
ozone delegation token utility subcmd for Ozone CLI. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/594
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214100)
Time Spent: 3h 40m  (was: 3.5h)

> Add ozone delegation token utility subcmd for Ozone CLI
> ---
>
> Key: HDDS-1246
> URL: https://issues.apache.org/jira/browse/HDDS-1246
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> This allows running dtutil in integration tests and dev tests to demo Ozone 
> security.
>  
>  
>  
>  






[jira] [Work logged] (HDDS-1246) Add ozone delegation token utility subcmd for Ozone CLI

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1246?focusedWorklogId=214098=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214098
 ]

ASF GitHub Bot logged work on HDDS-1246:


Author: ASF GitHub Bot
Created on: 15/Mar/19 23:06
Start Date: 15/Mar/19 23:06
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #594: HDDS-1246. Add 
ozone delegation token utility subcmd for Ozone CLI. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/594#discussion_r266171177
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/ha/OMFailoverProxyProvider.java
 ##
 @@ -84,16 +86,23 @@ public OMFailoverProxyProvider(OzoneConfiguration 
configuration,
   public final class OMProxyInfo
   extends FailoverProxyProvider.ProxyInfo {
 private InetSocketAddress address;
+private Text dtService;
 
 Review comment:
   Oh, you mean in a follow-up jira it will be a URI. Makes sense.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214098)
Time Spent: 3.5h  (was: 3h 20m)

> Add ozone delegation token utility subcmd for Ozone CLI
> ---
>
> Key: HDDS-1246
> URL: https://issues.apache.org/jira/browse/HDDS-1246
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> This allows running dtutil in integration tests and dev tests to demo Ozone 
> security.
>  
>  
>  
>  






[jira] [Commented] (HDDS-1263) SCM CLI does not list container with id 1

2019-03-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794004#comment-16794004
 ] 

Hudson commented on HDDS-1263:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16223 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16223/])
HDDS-1263. SCM CLI does not list container with id 1. (bharat: rev 
af2dfc9f3d3661d1837a8b749882d557834c39fe)
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java


> SCM CLI does not list container with id 1
> -
>
> Key: HDDS-1263
> URL: https://issues.apache.org/jira/browse/HDDS-1263
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Steps to reproduce
>  # Create two containers 
> {code:java}
> ozone scmcli create
> ozone scmcli create{code}
>  # Try to list containers
> {code:java}
> hadoop@7a73695402ae:~$ ozone scmcli list --start=0
>  Container ID should be a positive long. 0
> hadoop@7a73695402ae:~$ ozone scmcli list --start=1 
> { 
> "state" : "OPEN",
> "replicationFactor" : "ONE",
> "replicationType" : "STAND_ALONE",
> "usedBytes" : 0,
> "numberOfKeys" : 0,
> "lastUsed" : 274660388,
> "stateEnterTime" : 274646481,
> "owner" : "OZONE",
> "containerID" : 2,
> "deleteTransactionId" : 0,
> "sequenceId" : 0,
> "open" : true 
> }{code}
> There is no way to list the container with containerID 1.
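The symptom above is consistent with the listing using an exclusive lower bound on the start id — that reading is an assumption; the sketch below uses a plain TreeMap, not the actual SCMContainerManager code:

```java
import java.util.NavigableMap;
import java.util.TreeMap;

public class StartIdExample {
    public static void main(String[] args) {
        NavigableMap<Long, String> containers = new TreeMap<>();
        containers.put(1L, "container-1");
        containers.put(2L, "container-2");

        // Exclusive start: skips the container whose id equals --start.
        System.out.println(containers.tailMap(1L, false).keySet()); // [2]
        // Inclusive start: returns it, so --start=1 can list container 1.
        System.out.println(containers.tailMap(1L, true).keySet());  // [1, 2]
    }
}
```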






[jira] [Commented] (HDFS-14374) Expose total number of delegation tokens in AbstractDelegationTokenSecretManager

2019-03-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794000#comment-16794000
 ] 

Hadoop QA commented on HDFS-14374:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
6s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14374 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962655/HDFS-14374.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c7eaeaeddb09 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ff06ef0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26486/testReport/ |
| Max. process+thread count | 1347 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26486/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Expose total number of delegation tokens in 
> AbstractDelegationTokenSecretManager
> 

[jira] [Resolved] (HDDS-1263) SCM CLI does not list container with id 1

2019-03-15 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-1263.
--
   Resolution: Fixed
Fix Version/s: 0.5.0

Thank You [~vivekratnavel] for the contribution.

I have committed this to the trunk.

> SCM CLI does not list container with id 1
> -
>
> Key: HDDS-1263
> URL: https://issues.apache.org/jira/browse/HDDS-1263
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Steps to reproduce
>  # Create two containers 
> {code:java}
> ozone scmcli create
> ozone scmcli create{code}
>  # Try to list containers
> {code:java}
> hadoop@7a73695402ae:~$ ozone scmcli list --start=0
>  Container ID should be a positive long. 0
> hadoop@7a73695402ae:~$ ozone scmcli list --start=1 
> { 
> "state" : "OPEN",
> "replicationFactor" : "ONE",
> "replicationType" : "STAND_ALONE",
> "usedBytes" : 0,
> "numberOfKeys" : 0,
> "lastUsed" : 274660388,
> "stateEnterTime" : 274646481,
> "owner" : "OZONE",
> "containerID" : 2,
> "deleteTransactionId" : 0,
> "sequenceId" : 0,
> "open" : true 
> }{code}
> There is no way to list the container with containerID 1.






[jira] [Work logged] (HDDS-1263) SCM CLI does not list container with id 1

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1263?focusedWorklogId=214085=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214085
 ]

ASF GitHub Bot logged work on HDDS-1263:


Author: ASF GitHub Bot
Created on: 15/Mar/19 22:39
Start Date: 15/Mar/19 22:39
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #613: 
HDDS-1263. SCM CLI does not list container with id 1
URL: https://github.com/apache/hadoop/pull/613
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214085)
Time Spent: 50m  (was: 40m)

> SCM CLI does not list container with id 1
> -
>
> Key: HDDS-1263
> URL: https://issues.apache.org/jira/browse/HDDS-1263
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Steps to reproduce
>  # Create two containers 
> {code:java}
> ozone scmcli create
> ozone scmcli create{code}
>  # Try to list containers
> {code:java}
> hadoop@7a73695402ae:~$ ozone scmcli list --start=0
>  Container ID should be a positive long. 0
> hadoop@7a73695402ae:~$ ozone scmcli list --start=1 
> { 
> "state" : "OPEN",
> "replicationFactor" : "ONE",
> "replicationType" : "STAND_ALONE",
> "usedBytes" : 0,
> "numberOfKeys" : 0,
> "lastUsed" : 274660388,
> "stateEnterTime" : 274646481,
> "owner" : "OZONE",
> "containerID" : 2,
> "deleteTransactionId" : 0,
> "sequenceId" : 0,
> "open" : true 
> }{code}
> There is no way to list the container with containerID 1.






[jira] [Commented] (HDDS-1250) In OM HA AllocateBlock call where connecting to SCM from OM should not happen on Ratis

2019-03-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793993#comment-16793993
 ] 

Hadoop QA commented on HDDS-1250:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
31s{color} | {color:red} ozone-manager in trunk failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
25s{color} | {color:red} ozone-manager in trunk failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-ozone: The patch generated 0 new + 1 
unchanged - 1 fixed = 1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m  3s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestStorageContainerManager |
|   | hadoop.ozone.web.client.TestKeysRatis |
|   | 

[jira] [Work logged] (HDDS-1250) In OM HA AllocateBlock call where connecting to SCM from OM should not happen on Ratis

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1250?focusedWorklogId=214078=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214078
 ]

ASF GitHub Bot logged work on HDDS-1250:


Author: ASF GitHub Bot
Created on: 15/Mar/19 22:28
Start Date: 15/Mar/19 22:28
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #591: HDDS-1250: In OM 
HA AllocateBlock call where connecting to SCM from OM should not happen on 
Ratis.
URL: https://github.com/apache/hadoop/pull/591#issuecomment-473460817
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 503 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 49 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1024 | trunk passed |
   | +1 | compile | 104 | trunk passed |
   | +1 | checkstyle | 24 | trunk passed |
   | -1 | mvnsite | 27 | ozone-manager in trunk failed. |
   | +1 | shadedclient | 723 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | -1 | findbugs | 24 | ozone-manager in trunk failed. |
   | +1 | javadoc | 79 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for patch |
   | +1 | mvninstall | 101 | the patch passed |
   | +1 | compile | 93 | the patch passed |
   | +1 | cc | 93 | the patch passed |
   | +1 | javac | 93 | the patch passed |
   | +1 | checkstyle | 23 | hadoop-ozone: The patch generated 0 new + 1 
unchanged - 1 fixed = 1 total (was 2) |
   | +1 | mvnsite | 85 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 735 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 109 | the patch passed |
   | +1 | javadoc | 73 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 37 | common in the patch passed. |
   | +1 | unit | 42 | ozone-manager in the patch passed. |
   | -1 | unit | 1043 | integration-test in the patch failed. |
   | +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
   | | | 5109 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.om.TestScmChillMode |
   |   | hadoop.ozone.web.client.TestKeysRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestDeleteContainerHandler
 |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.om.TestContainerReportWithKeys |
   |   | hadoop.ozone.om.TestMultipleContainerReadWrite |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerHandler
 |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.web.client.TestKeys |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestOmBlockVersioning |
   |   | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-591/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/591 |
   | JIRA Issue | HDDS-1250 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
   | uname | Linux 45cf5d7b3fb9 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 16b7862 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | 

[jira] [Commented] (HDDS-1281) Fix the findbug issue caused by HDDS-1163

2019-03-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793985#comment-16793985
 ] 

Hadoop QA commented on HDDS-1281:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 58s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 46s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
|   | hadoop.ozone.client.rpc.TestBCSID |
|   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
|   | hadoop.ozone.om.TestOzoneManager |
|   | hadoop.ozone.om.TestScmChillMode |
|   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2536/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1281 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962656/HDDS-1281-000.patch |
| Optional Tests | dupname asflicense compile 

[jira] [Updated] (HDDS-1233) Create an Ozone Manager Service provider for Recon.

2019-03-15 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1233:

Status: Patch Available  (was: Open)

Fixed findbug issue.

> Create an Ozone Manager Service provider for Recon.
> ---
>
> Key: HDDS-1233
> URL: https://issues.apache.org/jira/browse/HDDS-1233
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1233-000.patch, HDDS-1233-001.patch, 
> HDDS-1233-002.patch, HDDS-1233-003.patch
>
>
> * Implement an abstraction to let Recon make OM specific requests.
> * At this point of time, the only request is to get the DB snapshot. 
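The abstraction described above might take a shape like the following. This is a hypothetical sketch: the interface name, method name, and lambda-backed stand-in provider are all illustrative assumptions, not the actual Recon code.

```java
import java.io.File;
import java.io.IOException;

// Hypothetical abstraction letting Recon make OM-specific requests.
interface OmServiceProvider {
    // The only request needed at this point: a point-in-time copy of the OM DB.
    File getOmDbSnapshot() throws IOException;
}

public class ReconProviderSketch {
    public static void main(String[] args) throws IOException {
        // Stand-in provider that hands back a temp file in place of a real
        // snapshot fetched from the OM.
        OmServiceProvider provider =
            () -> File.createTempFile("om-snapshot", ".db");
        File snapshot = provider.getOmDbSnapshot();
        System.out.println(snapshot.exists());  // true
        snapshot.deleteOnExit();
    }
}
```

Keeping the transport behind an interface like this lets Recon tests swap in a local stub while the production implementation fetches the snapshot from a live OM.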






[jira] [Commented] (HDDS-1250) In OM HA AllocateBlock call where connecting to SCM from OM should not happen on Ratis

2019-03-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793995#comment-16793995
 ] 

Hadoop QA commented on HDDS-1250:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
27s{color} | {color:red} ozone-manager in trunk failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
24s{color} | {color:red} ozone-manager in trunk failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-ozone: The patch generated 0 new + 1 
unchanged - 1 fixed = 1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 23s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.ozShell.TestOzoneShell |
|   | hadoop.ozone.om.TestOzoneManager |
|   | hadoop.ozone.client.rpc.TestReadRetries |
|   

[jira] [Work logged] (HDDS-1250) In OM HA AllocateBlock call where connecting to SCM from OM should not happen on Ratis

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1250?focusedWorklogId=214077=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214077
 ]

ASF GitHub Bot logged work on HDDS-1250:


Author: ASF GitHub Bot
Created on: 15/Mar/19 22:25
Start Date: 15/Mar/19 22:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #591: HDDS-1250: In OM 
HA AllocateBlock call where connecting to SCM from OM should not happen on 
Ratis.
URL: https://github.com/apache/hadoop/pull/591#issuecomment-473460090
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 50 | Maven dependency ordering for branch |
   | +1 | mvninstall | 988 | trunk passed |
   | +1 | compile | 97 | trunk passed |
   | +1 | checkstyle | 30 | trunk passed |
   | -1 | mvnsite | 31 | ozone-manager in trunk failed. |
   | +1 | shadedclient | 801 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | -1 | findbugs | 25 | ozone-manager in trunk failed. |
   | +1 | javadoc | 84 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for patch |
   | +1 | mvninstall | 100 | the patch passed |
   | +1 | compile | 91 | the patch passed |
   | +1 | cc | 91 | the patch passed |
   | +1 | javac | 91 | the patch passed |
   | +1 | checkstyle | 22 | hadoop-ozone: The patch generated 0 new + 1 
unchanged - 1 fixed = 1 total (was 2) |
   | +1 | mvnsite | 81 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 685 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 108 | the patch passed |
   | +1 | javadoc | 62 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 39 | common in the patch passed. |
   | +1 | unit | 40 | ozone-manager in the patch passed. |
   | -1 | unit | 603 | integration-test in the patch failed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 4155 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.web.client.TestKeysRatis |
   |   | hadoop.ozone.om.TestOmBlockVersioning |
   |   | hadoop.ozone.om.TestContainerReportWithKeys |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestDeleteContainerHandler
 |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.om.TestMultipleContainerReadWrite |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerHandler
 |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestOmInit |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.om.TestScmChillMode |
   |   | hadoop.ozone.web.client.TestKeys |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-591/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/591 |
   | JIRA Issue | HDDS-1250 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
   | uname | Linux 7f096b5ed173 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 16b7862 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | mvnsite | 

[jira] [Updated] (HDDS-1233) Create an Ozone Manager Service provider for Recon.

2019-03-15 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1233:

Status: Open  (was: Patch Available)

> Create an Ozone Manager Service provider for Recon.
> ---
>
> Key: HDDS-1233
> URL: https://issues.apache.org/jira/browse/HDDS-1233
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1233-000.patch, HDDS-1233-001.patch, 
> HDDS-1233-002.patch, HDDS-1233-003.patch
>
>
> * Implement an abstraction to let Recon make OM specific requests.
> * At this point of time, the only request is to get the DB snapshot. 






[jira] [Updated] (HDDS-1233) Create an Ozone Manager Service provider for Recon.

2019-03-15 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1233:

Attachment: HDDS-1233-003.patch

> Create an Ozone Manager Service provider for Recon.
> ---
>
> Key: HDDS-1233
> URL: https://issues.apache.org/jira/browse/HDDS-1233
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1233-000.patch, HDDS-1233-001.patch, 
> HDDS-1233-002.patch, HDDS-1233-003.patch
>
>
> * Implement an abstraction to let Recon make OM specific requests.
> * At this point of time, the only request is to get the DB snapshot. 






[jira] [Work logged] (HDDS-1250) In OM HA AllocateBlock call where connecting to SCM from OM should not happen on Ratis

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1250?focusedWorklogId=214069=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214069
 ]

ASF GitHub Bot logged work on HDDS-1250:


Author: ASF GitHub Bot
Created on: 15/Mar/19 22:08
Start Date: 15/Mar/19 22:08
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on issue #591: HDDS-1250: In OM 
HA AllocateBlock call where connecting to SCM from OM should not happen on 
Ratis.
URL: https://github.com/apache/hadoop/pull/591#issuecomment-473456453
 
 
   Thanks for working on this, Bharat.
   LGTM overall. Just one comment: in addAllocateBlock(), we do not need the 
ExcludeList; it is only used while getting the key locations from SCM (in 
allocateBlock).
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214069)
Time Spent: 2h 40m  (was: 2.5h)

> In OM HA AllocateBlock call where connecting to SCM from OM should not happen 
> on Ratis
> --
>
> Key: HDDS-1250
> URL: https://issues.apache.org/jira/browse/HDDS-1250
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> In OM HA, currently when allocateBlock is called, in applyTransaction() on 
> all OM nodes we make a call to SCM and write the allocateBlock information 
> into the OM DB. The problem with this is that every OM allocates its own 
> block and appends new BlockInfo into OMKeyInfo, which is a correctness 
> issue. (All OMs should have the same block information for a key, even 
> though eventually this might be changed during key commit.)
>  
> The proposed approach is:
> 1. The call to SCM to allocate the block happens outside of Ratis; the 
> resulting block information is passed through Ratis, and only the write to 
> the DB happens via Ratis.
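The proposed flow can be sketched as below. Every class and method name here is a hypothetical stand-in, not the actual OM/SCM code: a shared counter plays the role of SCM, and plain lists play the role of each OM's DB. The point is that the SCM call happens once, before replication, so applyTransaction() is deterministic on every OM.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

public class AllocateBlockFlowSketch {
    // Stand-in for SCM's block allocator.
    static final AtomicLong scmBlockCounter = new AtomicLong();

    // Step 1: the OM leader calls SCM once, OUTSIDE the Ratis state machine,
    // so all replicas later see the same block.
    static long allocateBlockFromScm() {
        return scmBlockCounter.incrementAndGet();
    }

    // Step 2: the pre-allocated block id rides inside the Ratis request, and
    // applyTransaction() on every OM only writes it to the DB.
    static void applyTransaction(long blockId, List<Long> omDb) {
        omDb.add(blockId);  // deterministic: no external SCM call here
    }

    public static void main(String[] args) {
        List<Long> om1 = new ArrayList<>();
        List<Long> om2 = new ArrayList<>();
        List<Long> om3 = new ArrayList<>();

        long blockId = allocateBlockFromScm();          // outside Ratis
        for (List<Long> db : List.of(om1, om2, om3)) {  // replicated apply
            applyTransaction(blockId, db);
        }
        // All OMs record the identical block information.
        System.out.println(om1.equals(om2) && om2.equals(om3));  // true
    }
}
```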






[jira] [Updated] (HDDS-1289) get Key failed on SCM restart

2019-03-15 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-1289:
---
Priority: Blocker  (was: Critical)

> get Key failed on SCM restart
> -
>
> Key: HDDS-1289
> URL: https://issues.apache.org/jira/browse/HDDS-1289
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Blocker
> Attachments: 
> hadoop-hdfs-scm-ctr-e139-1542663976389-86524-01-03.log
>
>
> Seeing ContainerNotFoundException in the SCM log when a get-key operation is 
> tried after SCM restart.
> scm.log:
> [^hadoop-hdfs-scm-ctr-e139-1542663976389-86524-01-03.log]
>  
> {noformat}
>  
>  
> ozone version :
> 
> Source code repository g...@github.com:hortonworks/ozone.git -r 
> 67b7c4fd071b3f557bdb54be2a266b8a611cbce6
> Compiled by jenkins on 2019-03-06T22:02Z
> Compiled with protoc 2.5.0
> From source with checksum 65be9a337d178cd3855f5c5a2f111
> Using HDDS 0.4.0.3.0.100.0-348
> Source code repository g...@github.com:hortonworks/ozone.git -r 
> 67b7c4fd071b3f557bdb54be2a266b8a611cbce6
> Compiled by jenkins on 2019-03-06T22:01Z
> Compiled with protoc 2.5.0
> From source with checksum 324109cb3e8b188c1b89dc0b328c3a
> root@ctr-e139-1542663976389-86524-01-06 hdfs# hadoop version
> Hadoop 3.1.1.3.0.100.0-348
> Source code repository g...@github.com:hortonworks/hadoop.git -r 
> 484434b1c2480bdc9314a7ee1ade8a0f4db1758f
> Compiled by jenkins on 2019-03-06T22:14Z
> Compiled with protoc 2.5.0
> From source with checksum ba6aad94c14256ef3ad8634e3b5086
> This command was run using 
> /usr/hdp/3.0.100.0-348/hadoop/hadoop-common-3.1.1.3.0.100.0-348.jar
> {noformat}
>  
>  
>  
> {noformat}
> 2019-03-13 17:00:54,348 ERROR container.ContainerReportHandler 
> (ContainerReportHandler.java:processContainerReplicas(173)) - Received 
> container report for an unknown container 22 from datanode 
> 80f046cb-6fe2-4a05-bb67-9bf46f48723b{ip: 172.27.69.155, host: 
> ctr-e139-1542663976389-86524-01-05.hwx.site} {} 
> org.apache.hadoop.hdds.scm.container.ContainerNotFoundException: #22 at 
> org.apache.hadoop.hdds.scm.container.states.ContainerStateMap.checkIfContainerExist(ContainerStateMap.java:543)
>  at 
> org.apache.hadoop.hdds.scm.container.states.ContainerStateMap.updateContainerReplica(ContainerStateMap.java:230)
>  at 
> org.apache.hadoop.hdds.scm.container.ContainerStateManager.updateContainerReplica(ContainerStateManager.java:565)
>  at 
> org.apache.hadoop.hdds.scm.container.SCMContainerManager.updateContainerReplica(SCMContainerManager.java:393)
>  at 
> org.apache.hadoop.hdds.scm.container.ReportHandlerHelper.processContainerReplica(ReportHandlerHelper.java:74)
>  at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.processContainerReplicas(ContainerReportHandler.java:159)
>  at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:110)
>  at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:51)
>  at 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:85)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748) 2019-03-13 17:00:54,349 ERROR 
> container.ContainerReportHandler 
> (ContainerReportHandler.java:processContainerReplicas(173)) - Received 
> container report for an unknown container 23 from datanode 
> 80f046cb-6fe2-4a05-bb67-9bf46f48723b{ip: 172.27.69.155, host: 
> ctr-e139-1542663976389-86524-01-05.hwx.site} {} 
> org.apache.hadoop.hdds.scm.container.ContainerNotFoundException: #23 at 
> org.apache.hadoop.hdds.scm.container.states.ContainerStateMap.checkIfContainerExist(ContainerStateMap.java:543)
>  at 
> org.apache.hadoop.hdds.scm.container.states.ContainerStateMap.updateContainerReplica(ContainerStateMap.java:230)
>  at 
> org.apache.hadoop.hdds.scm.container.ContainerStateManager.updateContainerReplica(ContainerStateManager.java:565)
>  at 
> org.apache.hadoop.hdds.scm.container.SCMContainerManager.updateContainerReplica(SCMContainerManager.java:393)
>  at 
> org.apache.hadoop.hdds.scm.container.ReportHandlerHelper.processContainerReplica(ReportHandlerHelper.java:74)
>  at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.processContainerReplicas(ContainerReportHandler.java:159)
>  at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:110)
>  at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:51)
>  at 
> 

[jira] [Updated] (HDDS-1289) get Key failed on SCM restart

2019-03-15 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-1289:
---
Target Version/s: 0.4.0

> get Key failed on SCM restart
> -
>
> Key: HDDS-1289
> URL: https://issues.apache.org/jira/browse/HDDS-1289
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Critical
> Attachments: 
> hadoop-hdfs-scm-ctr-e139-1542663976389-86524-01-03.log
>
>

[jira] [Commented] (HDDS-1284) Adjust default values of pipline recovery for more resilient service restart

2019-03-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793976#comment-16793976
 ] 

Hudson commented on HDDS-1284:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16221 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16221/])
HDDS-1284. Adjust default values of pipline recovery for more resilient 
(7813154+ajayydv: rev 44b8451821c392dd59ee84153c98547ae9ce7042)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml


> Adjust default values of pipline recovery for more resilient service restart
> 
>
> Key: HDDS-1284
> URL: https://issues.apache.org/jira/browse/HDDS-1284
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> As of now we have the following algorithm to handle node failures:
> 1. In case of a missing node, the leader of the pipeline or the SCM can 
> detect the missing heartbeats.
> 2. SCM will start to close the pipeline (CLOSING state) and try to close the 
> containers with the remaining nodes in the pipeline.
> 3. After 5 minutes the pipeline will be destroyed (CLOSED) and a new pipeline 
> can be created from the healthy nodes (one node can be part of only one 
> pipeline at a time).
> While this algorithm can work well on a big cluster, it doesn't provide very 
> good usability on small clusters:
> Use case 1:
> Given 3 nodes, in case of a service restart, if the restart takes more than 
> 90s, the pipeline will be moved to the CLOSING state. For the next 5 minutes 
> (ozone.scm.pipeline.destroy.timeout) the container will remain in the CLOSING 
> state. As there are no more nodes and we can't assign the same node to two 
> different pipelines, the cluster will be unavailable for 5 minutes.
> Use case 2:
> Given 90 nodes and 30 pipelines where all the pipelines are spread across 3 
> racks, let's stop one rack. As all the pipelines are affected, all the 
> pipelines will be moved to the CLOSING state. We have no free nodes, 
> therefore we need to wait for 5 minutes to write any data to the cluster.
> These problems can be solved in multiple ways:
> 1.) Instead of waiting 5 minutes, destroy the pipeline when all the 
> containers are reported to be closed. (Most of the time this is enough, but 
> some container reports can be missing.)
> 2.) Support multi-raft and open a pipeline as soon as we have enough nodes 
> (even if the nodes already have CLOSING pipelines).
> Both options require more work on the pipeline management side. For 0.4.0 
> we can adjust the following parameters to get a better user experience:
> {code}
> <property>
>   <name>ozone.scm.pipeline.destroy.timeout</name>
>   <value>60s</value>
>   <tag>OZONE, SCM, PIPELINE</tag>
>   <description>
>     Once a pipeline is closed, SCM should wait for the above configured time
>     before destroying a pipeline.
>   </description>
> </property>
> <property>
>   <name>ozone.scm.stale.node.interval</name>
>   <value>90s</value>
>   <tag>OZONE, MANAGEMENT</tag>
>   <description>
>     The interval for stale node flagging. Please
>     see ozone.scm.heartbeat.thread.interval before changing this value.
>   </description>
> </property>
> {code}
> First of all, we can be more optimistic and mark a node as stale only after 
> 5 minutes instead of 90s. 5 minutes should be enough most of the time to 
> recover the nodes.
> Second, we can decrease the time of ozone.scm.pipeline.destroy.timeout. 
> Ideally the close command is sent by the SCM to the datanode with a HB. 
> Between two HBs we have enough time to close all the containers via Ratis. 
> With the next HB, the datanode can report the successful close. (If the 
> containers cannot be closed, the SCM can manage the QUASI_CLOSED containers.)
> We need to wait 29 seconds (worst case) for the next HB, and 29+30 seconds 
> for the confirmation. --> 66 seconds seems to be a safe choice (assuming that 
> 6 seconds is enough to process the report about the successful closing).
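The worst-case arithmetic above can be written out explicitly. This is just the timing calculation under the assumptions stated in the issue (a 30s heartbeat interval and a 6s processing margin), not Ozone code:

```java
public class PipelineTimeoutMath {

    // Worst-case seconds until SCM has processed the "containers closed"
    // report, given the heartbeat-driven close protocol described above.
    static int worstCaseCloseConfirmation(int hbIntervalSec, int processingSec) {
        // Close command just misses a heartbeat, so it rides the next one.
        int waitForCloseCommand = hbIntervalSec - 1;             // 29s
        // Confirmation arrives with the heartbeat after the close completes.
        int waitForReport = waitForCloseCommand + hbIntervalSec; // 29 + 30 = 59s
        // Plus the margin for SCM to process the report.
        return waitForReport + processingSec;
    }

    public static void main(String[] args) {
        // 29 + 30 + 6 = 65; the issue rounds this up to a 66s destroy timeout.
        System.out.println(worstCaseCloseConfirmation(30, 6));
    }
}
```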



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1291) Set OmKeyArgs#refreshPipeline flag properly when client reads a stale pipeline

2019-03-15 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1291:
-
Priority: Blocker  (was: Major)

> Set OmKeyArgs#refreshPipeline flag properly when client reads a stale pipeline
> --
>
> Key: HDDS-1291
> URL: https://issues.apache.org/jira/browse/HDDS-1291
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Blocker
>
> After HDDS-1138, the OM client will not talk to SCM directly to fetch the 
> pipeline info. Instead, the pipeline info is returned as part of the 
> keyLocation cached by OM. 
>  
> In case an SCM pipeline is changed, e.g. closed, the client may get an 
> invalid pipeline exception. In this case, the client needs to call 
> getKeyLocation with OmKeyArgs#refreshPipeline = true to force OM to update 
> its pipeline cache for this key. 
>  
> An optimization could be to queue a background task to update all the 
> keyLocations that are affected when OM does a refreshPipeline. (This part 
> can be done in 0.5.)
> {code:java}
> oldpipeline->newpipeline{code}
>  
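The client-side handling described above can be sketched as a simple retry loop. `KeyLookup` and `StalePipelineException` below are hypothetical stand-ins for the OM key-lookup RPC and the invalid-pipeline failure, not real Ozone classes; the sketch only shows the shape of the fast path plus the single refreshPipeline retry:

```java
public class RefreshPipelineSketch {

    // Hypothetical stand-in for the OM key-lookup RPC.
    interface KeyLookup {
        String getKeyLocation(String key, boolean refreshPipeline);
    }

    // Hypothetical stand-in for the invalid-pipeline failure.
    static class StalePipelineException extends RuntimeException {
    }

    // Fast path uses OM's cached pipeline; on a stale-pipeline failure,
    // retry once with refreshPipeline = true so OM refreshes from SCM.
    static String readKey(KeyLookup om, String key) {
        try {
            return om.getKeyLocation(key, false);
        } catch (StalePipelineException e) {
            return om.getKeyLocation(key, true);
        }
    }

    public static void main(String[] args) {
        final int[] calls = {0};
        KeyLookup om = (key, refresh) -> {
            calls[0]++;
            if (!refresh) {
                throw new StalePipelineException(); // cached pipeline was closed
            }
            return "pipeline-2"; // location after OM refreshed its cache
        };
        System.out.println(readKey(om, "vol/bucket/key"));
    }
}
```

The single-retry shape keeps the common case cheap: the refresh flag is only set after the cached pipeline has demonstrably failed, so OM's cache is not invalidated on every read.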



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1255) Split security robot tests in multiple robot test files for better modularity

2019-03-15 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1255:
-
Target Version/s: 0.5.0  (was: 0.4.0)

> Split security robot tests in multiple robot test files for better modularity
> -
>
> Key: HDDS-1255
> URL: https://issues.apache.org/jira/browse/HDDS-1255
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
> Split security robot tests in multiple robot test files for better modularity



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-595) Add robot test for OM Delegation Token

2019-03-15 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar resolved HDDS-595.
-
Resolution: Won't Fix

> Add robot test for OM Delegation Token 
> ---
>
> Key: HDDS-595
> URL: https://issues.apache.org/jira/browse/HDDS-595
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1215) Change hadoop-runner and apache/hadoop base image to use Java8

2019-03-15 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1215:
-
Priority: Blocker  (was: Major)

> Change hadoop-runner and apache/hadoop base image to use Java8
> --
>
> Key: HDDS-1215
> URL: https://issues.apache.org/jira/browse/HDDS-1215
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Elek, Marton
>Priority: Blocker
>
> {code}
> kms_1           | Exception in thread "main" java.lang.NoClassDefFoundError: 
> javax/activation/DataSource
> kms_1           | at 
> com.sun.xml.bind.v2.model.impl.RuntimeBuiltinLeafInfoImpl.(RuntimeBuiltinLeafInfoImpl.java:457)
> kms_1           | at 
> com.sun.xml.bind.v2.model.impl.RuntimeTypeInfoSetImpl.(RuntimeTypeInfoSetImpl.java:65)
> kms_1           | at 
> com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.createTypeInfoSet(RuntimeModelBuilder.java:133)
> kms_1           | at 
> com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.createTypeInfoSet(RuntimeModelBuilder.java:85)
> kms_1           | at 
> com.sun.xml.bind.v2.model.impl.ModelBuilder.(ModelBuilder.java:156)
> kms_1           | at 
> com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.(RuntimeModelBuilder.java:93)
> kms_1           | at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:473)
> kms_1           | at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl.(JAXBContextImpl.java:319)
> kms_1           | at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170)
> kms_1           | at 
> com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:145)
> kms_1           | at 
> com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:236)
> kms_1           | at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> kms_1           | at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> kms_1           | at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> kms_1           | at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> kms_1           | at 
> javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:186)
> kms_1           | at 
> javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:146)
> kms_1           | at javax.xml.bind.ContextFinder.find(ContextFinder.java:350)
> kms_1           | at 
> javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:446)
> kms_1           | at 
> javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:409)
> kms_1           | at 
> com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.(WadlApplicationContextImpl.java:103)
> kms_1           | at 
> com.sun.jersey.server.impl.wadl.WadlFactory.init(WadlFactory.java:100)
> kms_1           | at 
> com.sun.jersey.server.impl.application.RootResourceUriRules.initWadl(RootResourceUriRules.java:169)
> kms_1           | at 
> com.sun.jersey.server.impl.application.RootResourceUriRules.(RootResourceUriRules.java:106)
> kms_1           | at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._initiate(WebApplicationImpl.java:1359)
> kms_1           | at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.access$700(WebApplicationImpl.java:180)
> kms_1           | at 
> com.sun.jersey.server.impl.application.WebApplicationImpl$13.f(WebApplicationImpl.java:799)
> kms_1           | at 
> com.sun.jersey.server.impl.application.WebApplicationImpl$13.f(WebApplicationImpl.java:795)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-600) Mapreduce example fails with java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported character

2019-03-15 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar resolved HDDS-600.
-
Resolution: Not A Problem

> Mapreduce example fails with java.lang.IllegalArgumentException: Bucket or 
> Volume name has an unsupported character
> ---
>
> Key: HDDS-600
> URL: https://issues.apache.org/jira/browse/HDDS-600
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Namit Maheshwari
>Assignee: Hanisha Koneru
>Priority: Blocker
>  Labels: app-compat, test-badlands
>
> Set up a hadoop cluster where ozone is also installed. Ozone can be 
> referenced via o3://xx.xx.xx.xx:9889
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# ozone sh bucket list 
> o3://xx.xx.xx.xx:9889/volume1/
> 2018-10-09 07:21:24,624 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
> "volumeName" : "volume1",
> "bucketName" : "bucket1",
> "createdOn" : "Tue, 09 Oct 2018 06:48:02 GMT",
> "acls" : [ {
> "type" : "USER",
> "name" : "root",
> "rights" : "READ_WRITE"
> }, {
> "type" : "GROUP",
> "name" : "root",
> "rights" : "READ_WRITE"
> } ],
> "versioning" : "DISABLED",
> "storageType" : "DISK"
> } ]
> [root@ctr-e138-1518143905142-510793-01-02 ~]# ozone sh key list 
> o3://xx.xx.xx.xx:9889/volume1/bucket1
> 2018-10-09 07:21:54,500 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Tue, 09 Oct 2018 06:58:32 GMT",
> "modifiedOn" : "Tue, 09 Oct 2018 06:58:32 GMT",
> "size" : 0,
> "keyName" : "mr_job_dir"
> } ]
> [root@ctr-e138-1518143905142-510793-01-02 ~]#{code}
> Hdfs is also set fine as below
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# hdfs dfs -ls 
> /tmp/mr_jobs/input/
> Found 1 items
> -rw-r--r-- 3 root hdfs 215755 2018-10-09 06:37 
> /tmp/mr_jobs/input/wordcount_input_1.txt
> [root@ctr-e138-1518143905142-510793-01-02 ~]#{code}
> Now try to run Mapreduce example job against ozone o3:
> {code:java}
> [root@ctr-e138-1518143905142-510793-01-02 ~]# 
> /usr/hdp/current/hadoop-client/bin/hadoop jar 
> /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
> wordcount /tmp/mr_jobs/input/ 
> o3://xx.xx.xx.xx:9889/volume1/bucket1/mr_job_dir/output
> 18/10/09 07:15:38 INFO conf.Configuration: Removed undeclared tags:
> java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported 
> character : :
> at 
> org.apache.hadoop.hdds.scm.client.HddsClientUtils.verifyResourceName(HddsClientUtils.java:143)
> at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.getVolumeDetails(RpcClient.java:231)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
> at com.sun.proxy.$Proxy16.getVolumeDetails(Unknown Source)
> at org.apache.hadoop.ozone.client.ObjectStore.getVolume(ObjectStore.java:92)
> at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:121)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(FileOutputFormat.java:178)
> at org.apache.hadoop.examples.WordCount.main(WordCount.java:85)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
> at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> 

[jira] [Commented] (HDDS-1281) Fix the findbug issue caused by HDDS-1163

2019-03-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793968#comment-16793968
 ] 

Hadoop QA commented on HDDS-1281:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 44s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 58s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.common.TestDatanodeStateMachine |
|   | hadoop.ozone.client.rpc.TestBCSID |
|   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
|   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
|   | hadoop.ozone.om.TestScmChillMode |
|   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2535/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1281 

[jira] [Resolved] (HDDS-859) Fix NPE ServerUtils#getOzoneMetaDirPath

2019-03-15 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar resolved HDDS-859.
-
Resolution: Not A Problem

> Fix NPE ServerUtils#getOzoneMetaDirPath
> ---
>
> Key: HDDS-859
> URL: https://issues.apache.org/jira/browse/HDDS-859
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: test-badlands
>
> This can be reproduced with "mvn test" under the hadoop-ozone project, but 
> not with an individual test run under IntelliJ.
>  
> {code:java}
> Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.33 s <<< 
> FAILURE! - in org.apache.hadoop.ozone.TestOmUtils
> testNoOmDbDirConfigured(org.apache.hadoop.ozone.TestOmUtils)  Time elapsed: 
> 0.028 s  <<< FAILURE!
> java.lang.AssertionError:
>  
> Expected: an instance of java.lang.IllegalArgumentException
>      but:  is a java.lang.NullPointerException
> Stacktrace was: java.lang.NullPointerException
>         at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
>         at 
> org.apache.hadoop.hdds.server.ServerUtils.getOzoneMetaDirPath(ServerUtils.java:130)
>         at org.apache.hadoop.ozone.OmUtils.getOmDbDir(OmUtils.java:141)
>         at 
> org.apache.hadoop.ozone.TestOmUtils.testNoOmDbDirConfigured(TestOmUtils.java:89)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
>  
> {code}
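The assertion mismatch above comes down to which Preconditions method throws. Guava's Preconditions.checkNotNull throws NullPointerException, while checkArgument throws IllegalArgumentException. A minimal, self-contained Java sketch of that difference follows; the helpers below only mirror Guava's semantics, and getMetaDirPath is a hypothetical stand-in for ServerUtils#getOzoneMetaDirPath, not the actual code:

```java
public class MetaDirPathCheck {

  // Mirrors Guava Preconditions.checkNotNull: throws NPE on null.
  static <T> T checkNotNull(T ref, String msg) {
    if (ref == null) {
      throw new NullPointerException(msg);
    }
    return ref;
  }

  // Mirrors Guava Preconditions.checkArgument: throws IAE on false.
  static void checkArgument(boolean ok, String msg) {
    if (!ok) {
      throw new IllegalArgumentException(msg);
    }
  }

  // Hypothetical stand-in for the meta-dir lookup when no dir is configured.
  static String getMetaDirPath(String configuredDir) {
    // A checkNotNull here would surface a NullPointerException to the caller;
    // an explicit argument check yields the exception the test expects.
    checkArgument(configuredDir != null,
        "ozone metadata dir must be configured");
    return configuredDir;
  }

  public static void main(String[] args) {
    try {
      checkNotNull(null, "missing config");
    } catch (NullPointerException e) {
      System.out.println("checkNotNull -> NullPointerException");
    }
    try {
      getMetaDirPath(null);
    } catch (IllegalArgumentException e) {
      System.out.println("checkArgument -> IllegalArgumentException");
    }
  }
}
```

Which exception a test sees therefore depends only on which precondition style the production code uses, which is consistent with the stack trace ending in Preconditions.checkNotNull.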



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1119) DN get OM certificate from SCM CA for block token validation

2019-03-15 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1119:
-
Priority: Blocker  (was: Major)

> DN get OM certificate from SCM CA for block token validation
> 
>
> Key: HDDS-1119
> URL: https://issues.apache.org/jira/browse/HDDS-1119
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10h
>  Remaining Estimate: 0h
>
> This is needed when the DN receives a block token signed by OM and it does 
> not yet have that OM's certificate.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1284) Adjust default values of pipeline recovery for more resilient service restart

2019-03-15 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1284:
-
Fix Version/s: 0.4.0

> Adjust default values of pipeline recovery for more resilient service restart
> 
>
> Key: HDDS-1284
> URL: https://issues.apache.org/jira/browse/HDDS-1284
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> As of now we have the following algorithm to handle node failures:
> 1. In case of a missing node, the leader of the pipeline or the SCM can 
> detect the missing heartbeats.
> 2. SCM will start to close the pipeline (CLOSING state) and try to close the 
> containers with the remaining nodes in the pipeline.
> 3. After 5 minutes the pipeline will be destroyed (CLOSED) and a new pipeline 
> can be created from the healthy nodes (one node can be part of only one 
> pipeline at a time).
> While this algorithm can work well on a big cluster, it doesn't provide very 
> good usability on small clusters:
> Use case 1:
> Given 3 nodes, in case of a service restart, if the restart takes more than 
> 90s, the pipeline will be moved to the CLOSING state. For the next 5 minutes 
> (ozone.scm.pipeline.destroy.timeout) the container will remain in the CLOSING 
> state. As there are no more nodes and we can't assign the same node to two 
> different pipelines, the cluster will be unavailable for 5 minutes.
> Use case 2:
> Given 90 nodes and 30 pipelines where all the pipelines are spread across 3 
> racks. Let's stop one rack. As all the pipelines are affected, all the 
> pipelines will be moved to the CLOSING state. We have no free nodes, 
> therefore we need to wait for 5 minutes to write any data to the cluster.
> These problems can be solved in multiple ways:
> 1.) Instead of waiting 5 minutes, destroy the pipeline when all the 
> containers are reported to be closed. (Most of the time this is enough, but 
> some container reports can be missing.)
> 2.) Support multi-raft and open a pipeline as soon as we have enough nodes 
> (even if the nodes already have CLOSING pipelines).
> Both options require more work on the pipeline management side. For 0.4.0 
> we can adjust the following parameters to get a better user experience:
> {code}
>   <property>
>     <name>ozone.scm.pipeline.destroy.timeout</name>
>     <value>60s</value>
>     <tag>OZONE, SCM, PIPELINE</tag>
>     <description>
>       Once a pipeline is closed, SCM should wait for the above configured time
>       before destroying a pipeline.
>     </description>
>   </property>
>   <property>
>     <name>ozone.scm.stale.node.interval</name>
>     <value>90s</value>
>     <tag>OZONE, MANAGEMENT</tag>
>     <description>
>       The interval for stale node flagging. Please
>       see ozone.scm.heartbeat.thread.interval before changing this value.
>     </description>
>   </property>
> {code}
> First of all, we can be more optimistic and mark a node as stale only after 
> 5 mins instead of 90s. 5 mins should be enough most of the time to recover 
> the nodes.
> Second: we can decrease ozone.scm.pipeline.destroy.timeout. 
> Ideally the close command is sent by the scm to the datanode with a HB. 
> Between two HBs we have enough time to close all the containers via ratis. 
> With the next HB, the datanode can report the successful close. (If the 
> containers can't be closed, the scm can manage the QUASI_CLOSED containers.)
> We need to wait 29 seconds (worst case) for the next HB, and 29+30 seconds 
> for the confirmation. --> 66 seconds seems to be a safe choice (assuming that 
> 6 seconds is enough to process the report about the successful closing).
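The timing argument in the last paragraph can be sketched as simple arithmetic. The 30s heartbeat interval and the 6s report-processing margin are assumptions taken from the description above, not values read from the code:

```java
public class DestroyTimeoutEstimate {

  static int safeTimeoutSeconds() {
    int hbIntervalSec = 30;                      // assumed heartbeat interval
    int worstWaitForNextHb = hbIntervalSec - 1;  // 29s until the close command rides a HB
    // The confirmation report arrives on the heartbeat after that one.
    int worstConfirmation = worstWaitForNextHb + hbIntervalSec; // 59s
    int processingMarginSec = 6;                 // assumed time to process the report
    return worstConfirmation + processingMarginSec; // 65s
  }

  public static void main(String[] args) {
    // The description rounds this up to 66s as a safe choice.
    System.out.println(safeTimeoutSeconds() + "s worst case");
  }
}
```

Under these assumptions the worst case is 65 seconds, so the 66-second figure quoted in the description leaves a small margin on top of the computed bound.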



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1284) Adjust default values of pipeline recovery for more resilient service restart

2019-03-15 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1284:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Adjust default values of pipeline recovery for more resilient service restart
> 
>
> Key: HDDS-1284
> URL: https://issues.apache.org/jira/browse/HDDS-1284
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> As of now we have the following algorithm to handle node failures:
> 1. In case of a missing node, the leader of the pipeline or the SCM can 
> detect the missing heartbeats.
> 2. SCM will start to close the pipeline (CLOSING state) and try to close the 
> containers with the remaining nodes in the pipeline.
> 3. After 5 minutes the pipeline will be destroyed (CLOSED) and a new pipeline 
> can be created from the healthy nodes (one node can be part of only one 
> pipeline at a time).
> While this algorithm can work well on a big cluster, it doesn't provide very 
> good usability on small clusters:
> Use case 1:
> Given 3 nodes, in case of a service restart, if the restart takes more than 
> 90s, the pipeline will be moved to the CLOSING state. For the next 5 minutes 
> (ozone.scm.pipeline.destroy.timeout) the container will remain in the CLOSING 
> state. As there are no more nodes and we can't assign the same node to two 
> different pipelines, the cluster will be unavailable for 5 minutes.
> Use case 2:
> Given 90 nodes and 30 pipelines where all the pipelines are spread across 3 
> racks. Let's stop one rack. As all the pipelines are affected, all the 
> pipelines will be moved to the CLOSING state. We have no free nodes, 
> therefore we need to wait for 5 minutes to write any data to the cluster.
> These problems can be solved in multiple ways:
> 1.) Instead of waiting 5 minutes, destroy the pipeline when all the 
> containers are reported to be closed. (Most of the time this is enough, but 
> some container reports can be missing.)
> 2.) Support multi-raft and open a pipeline as soon as we have enough nodes 
> (even if the nodes already have CLOSING pipelines).
> Both options require more work on the pipeline management side. For 0.4.0 
> we can adjust the following parameters to get a better user experience:
> {code}
>   <property>
>     <name>ozone.scm.pipeline.destroy.timeout</name>
>     <value>60s</value>
>     <tag>OZONE, SCM, PIPELINE</tag>
>     <description>
>       Once a pipeline is closed, SCM should wait for the above configured time
>       before destroying a pipeline.
>     </description>
>   </property>
>   <property>
>     <name>ozone.scm.stale.node.interval</name>
>     <value>90s</value>
>     <tag>OZONE, MANAGEMENT</tag>
>     <description>
>       The interval for stale node flagging. Please
>       see ozone.scm.heartbeat.thread.interval before changing this value.
>     </description>
>   </property>
> {code}
> First of all, we can be more optimistic and mark a node as stale only after 
> 5 mins instead of 90s. 5 mins should be enough most of the time to recover 
> the nodes.
> Second: we can decrease ozone.scm.pipeline.destroy.timeout. 
> Ideally the close command is sent by the scm to the datanode with a HB. 
> Between two HBs we have enough time to close all the containers via ratis. 
> With the next HB, the datanode can report the successful close. (If the 
> containers can't be closed, the scm can manage the QUASI_CLOSED containers.)
> We need to wait 29 seconds (worst case) for the next HB, and 29+30 seconds 
> for the confirmation. --> 66 seconds seems to be a safe choice (assuming that 
> 6 seconds is enough to process the report about the successful closing).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1284) Adjust default values of pipeline recovery for more resilient service restart

2019-03-15 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793963#comment-16793963
 ] 

Ajay Kumar commented on HDDS-1284:
--

+1, [~elek] thanks for the contribution. Committed to trunk and 0.4.

> Adjust default values of pipeline recovery for more resilient service restart
> 
>
> Key: HDDS-1284
> URL: https://issues.apache.org/jira/browse/HDDS-1284
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> As of now we have the following algorithm to handle node failures:
> 1. In case of a missing node, the leader of the pipeline or the SCM can 
> detect the missing heartbeats.
> 2. SCM will start to close the pipeline (CLOSING state) and try to close the 
> containers with the remaining nodes in the pipeline.
> 3. After 5 minutes the pipeline will be destroyed (CLOSED) and a new pipeline 
> can be created from the healthy nodes (one node can be part of only one 
> pipeline at a time).
> While this algorithm can work well on a big cluster, it doesn't provide very 
> good usability on small clusters:
> Use case 1:
> Given 3 nodes, in case of a service restart, if the restart takes more than 
> 90s, the pipeline will be moved to the CLOSING state. For the next 5 minutes 
> (ozone.scm.pipeline.destroy.timeout) the container will remain in the CLOSING 
> state. As there are no more nodes and we can't assign the same node to two 
> different pipelines, the cluster will be unavailable for 5 minutes.
> Use case 2:
> Given 90 nodes and 30 pipelines where all the pipelines are spread across 3 
> racks. Let's stop one rack. As all the pipelines are affected, all the 
> pipelines will be moved to the CLOSING state. We have no free nodes, 
> therefore we need to wait for 5 minutes to write any data to the cluster.
> These problems can be solved in multiple ways:
> 1.) Instead of waiting 5 minutes, destroy the pipeline when all the 
> containers are reported to be closed. (Most of the time this is enough, but 
> some container reports can be missing.)
> 2.) Support multi-raft and open a pipeline as soon as we have enough nodes 
> (even if the nodes already have CLOSING pipelines).
> Both options require more work on the pipeline management side. For 0.4.0 
> we can adjust the following parameters to get a better user experience:
> {code}
>   <property>
>     <name>ozone.scm.pipeline.destroy.timeout</name>
>     <value>60s</value>
>     <tag>OZONE, SCM, PIPELINE</tag>
>     <description>
>       Once a pipeline is closed, SCM should wait for the above configured time
>       before destroying a pipeline.
>     </description>
>   </property>
>   <property>
>     <name>ozone.scm.stale.node.interval</name>
>     <value>90s</value>
>     <tag>OZONE, MANAGEMENT</tag>
>     <description>
>       The interval for stale node flagging. Please
>       see ozone.scm.heartbeat.thread.interval before changing this value.
>     </description>
>   </property>
> {code}
> First of all, we can be more optimistic and mark a node as stale only after 
> 5 mins instead of 90s. 5 mins should be enough most of the time to recover 
> the nodes.
> Second: we can decrease ozone.scm.pipeline.destroy.timeout. 
> Ideally the close command is sent by the scm to the datanode with a HB. 
> Between two HBs we have enough time to close all the containers via ratis. 
> With the next HB, the datanode can report the successful close. (If the 
> containers can't be closed, the scm can manage the QUASI_CLOSED containers.)
> We need to wait 29 seconds (worst case) for the next HB, and 29+30 seconds 
> for the confirmation. --> 66 seconds seems to be a safe choice (assuming that 
> 6 seconds is enough to process the report about the successful closing).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1284) Adjust default values of pipeline recovery for more resilient service restart

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1284?focusedWorklogId=214061=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214061
 ]

ASF GitHub Bot logged work on HDDS-1284:


Author: ASF GitHub Bot
Created on: 15/Mar/19 21:51
Start Date: 15/Mar/19 21:51
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #608: HDDS-1284. 
Adjust default values of pipeline recovery for more resilient service restart
URL: https://github.com/apache/hadoop/pull/608
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214061)
Time Spent: 0.5h  (was: 20m)

> Adjust default values of pipeline recovery for more resilient service restart
> 
>
> Key: HDDS-1284
> URL: https://issues.apache.org/jira/browse/HDDS-1284
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> As of now we have the following algorithm to handle node failures:
> 1. In case of a missing node, the leader of the pipeline or the SCM can 
> detect the missing heartbeats.
> 2. SCM will start to close the pipeline (CLOSING state) and try to close the 
> containers with the remaining nodes in the pipeline.
> 3. After 5 minutes the pipeline will be destroyed (CLOSED) and a new pipeline 
> can be created from the healthy nodes (one node can be part of only one 
> pipeline at a time).
> While this algorithm can work well on a big cluster, it doesn't provide very 
> good usability on small clusters:
> Use case 1:
> Given 3 nodes, in case of a service restart, if the restart takes more than 
> 90s, the pipeline will be moved to the CLOSING state. For the next 5 minutes 
> (ozone.scm.pipeline.destroy.timeout) the container will remain in the CLOSING 
> state. As there are no more nodes and we can't assign the same node to two 
> different pipelines, the cluster will be unavailable for 5 minutes.
> Use case 2:
> Given 90 nodes and 30 pipelines where all the pipelines are spread across 3 
> racks. Let's stop one rack. As all the pipelines are affected, all the 
> pipelines will be moved to the CLOSING state. We have no free nodes, 
> therefore we need to wait for 5 minutes to write any data to the cluster.
> These problems can be solved in multiple ways:
> 1.) Instead of waiting 5 minutes, destroy the pipeline when all the 
> containers are reported to be closed. (Most of the time this is enough, but 
> some container reports can be missing.)
> 2.) Support multi-raft and open a pipeline as soon as we have enough nodes 
> (even if the nodes already have CLOSING pipelines).
> Both options require more work on the pipeline management side. For 0.4.0 
> we can adjust the following parameters to get a better user experience:
> {code}
>   <property>
>     <name>ozone.scm.pipeline.destroy.timeout</name>
>     <value>60s</value>
>     <tag>OZONE, SCM, PIPELINE</tag>
>     <description>
>       Once a pipeline is closed, SCM should wait for the above configured time
>       before destroying a pipeline.
>     </description>
>   </property>
>   <property>
>     <name>ozone.scm.stale.node.interval</name>
>     <value>90s</value>
>     <tag>OZONE, MANAGEMENT</tag>
>     <description>
>       The interval for stale node flagging. Please
>       see ozone.scm.heartbeat.thread.interval before changing this value.
>     </description>
>   </property>
> {code}
> First of all, we can be more optimistic and mark a node as stale only after 
> 5 mins instead of 90s. 5 mins should be enough most of the time to recover 
> the nodes.
> Second: we can decrease ozone.scm.pipeline.destroy.timeout. 
> Ideally the close command is sent by the scm to the datanode with a HB. 
> Between two HBs we have enough time to close all the containers via ratis. 
> With the next HB, the datanode can report the successful close. (If the 
> containers can't be closed, the scm can manage the QUASI_CLOSED containers.)
> We need to wait 29 seconds (worst case) for the next HB, and 29+30 seconds 
> for the confirmation. --> 66 seconds seems to be a safe choice (assuming that 
> 6 seconds is enough to process the report about the successful closing).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1164) Add New blockade Tests to test Replica Manager

2019-03-15 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1164:
-
Labels: postpone-to-craterlake  (was: )

> Add New blockade Tests to test Replica Manager
> --
>
> Key: HDDS-1164
> URL: https://issues.apache.org/jira/browse/HDDS-1164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
>  Labels: postpone-to-craterlake
> Attachments: HDDS-1164.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1164) Add New blockade Tests to test Replica Manager

2019-03-15 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1164:
-
Target Version/s: 0.5.0  (was: 0.4.0)

> Add New blockade Tests to test Replica Manager
> --
>
> Key: HDDS-1164
> URL: https://issues.apache.org/jira/browse/HDDS-1164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Attachments: HDDS-1164.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1164) Add New blockade Tests to test Replica Manager

2019-03-15 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1164:
-
Sprint:   (was: HDDS BadLands)

> Add New blockade Tests to test Replica Manager
> --
>
> Key: HDDS-1164
> URL: https://issues.apache.org/jira/browse/HDDS-1164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
>  Labels: postpone-to-craterlake
> Attachments: HDDS-1164.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1233) Create an Ozone Manager Service provider for Recon.

2019-03-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793960#comment-16793960
 ] 

Hadoop QA commented on HDDS-1233:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 47s{color} 
| {color:red} hadoop-ozone generated 4 new + 0 unchanged - 0 fixed = 4 total 
(was 0) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdds hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 29s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 59s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.common.TestDatanodeStateMachine |
|   | hadoop.ozone.om.TestOzoneManager |
|   | hadoop.ozone.om.TestScmChillMode |
|   | 

[jira] [Work logged] (HDDS-1264) Remove Parametrized in TestOzoneShell

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1264?focusedWorklogId=214055=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214055
 ]

ASF GitHub Bot logged work on HDDS-1264:


Author: ASF GitHub Bot
Created on: 15/Mar/19 21:38
Start Date: 15/Mar/19 21:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #614: HDDS-1264. Remove 
Parametrized in TestOzoneShell
URL: https://github.com/apache/hadoop/pull/614#issuecomment-473449370
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 521 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 982 | trunk passed |
   | -1 | compile | 23 | integration-test in trunk failed. |
   | +1 | checkstyle | 21 | trunk passed |
   | -1 | mvnsite | 28 | integration-test in trunk failed. |
   | +1 | shadedclient | 696 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 0 | trunk passed |
   | +1 | javadoc | 18 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 24 | integration-test in the patch failed. |
   | -1 | compile | 24 | integration-test in the patch failed. |
   | -1 | javac | 24 | integration-test in the patch failed. |
   | +1 | checkstyle | 15 | the patch passed |
   | -1 | mvnsite | 23 | integration-test in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 741 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 0 | the patch passed |
   | +1 | javadoc | 17 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 27 | integration-test in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3268 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-614/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/614 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 3bd7f220a862 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ff06ef0 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-614/1/artifact/out/branch-compile-hadoop-ozone_integration-test.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-614/1/artifact/out/branch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-614/1/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-614/1/artifact/out/patch-compile-hadoop-ozone_integration-test.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-614/1/artifact/out/patch-compile-hadoop-ozone_integration-test.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-614/1/artifact/out/patch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-614/1/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-614/1/testReport/ |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-614/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214055)
Time Spent: 0.5h  (was: 20m)

> Remove 

[jira] [Commented] (HDDS-1283) Fix the dynamic documentation of basic s3 client usage

2019-03-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793949#comment-16793949
 ] 

Hudson commented on HDDS-1283:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16220 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16220/])
HDDS-1283. Fix the dynamic documentation of basic s3 client usage. 
(7813154+ajayydv: rev 16b78622ccf641e3805a0d78be9c1c3e20f97f6a)
* (edit) hadoop-ozone/s3gateway/src/main/resources/webapps/static/index.html


> Fix the dynamic documentation of basic s3 client usage
> --
>
> Key: HDDS-1283
> URL: https://issues.apache.org/jira/browse/HDDS-1283
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> S3 gateway has a default web page to display a generic message if you open 
> the endpoint in the browser:
> http://localhost:9878/static/
> It also contains a simple example to use the endpoint:
> {code}
> This is an endpoint of Apache Hadoop Ozone S3 gateway. Use it with any AWS S3 
> compatible tool with setting this url as an endpoint
> For example with aws-cli:
> aws s3api --endpoint http://localhost:9878/static/ create-bucket 
> --bucket=wordcount
> For more information, please check the documentation. 
> {code}
> Unfortunately the endpoint is wrong here; the /static part should be removed 
> from the url.
> The trivial fix is to move the ) in the js code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1281) Fix the findbug issue caused by HDDS-1163

2019-03-15 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1281:

Attachment: HDDS-1281-000.patch
Status: Patch Available  (was: Open)

> Fix the findbug issue caused by HDDS-1163
> -
>
> Key: HDDS-1281
> URL: https://issues.apache.org/jira/browse/HDDS-1281
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Aravindan Vijayan
>Priority: Minor
>  Labels: newbie
> Attachments: HDDS-1281-000.patch
>
>
> https://ci.anzix.net/job/ozone-nightly/30/findbugs/






[jira] [Updated] (HDDS-1283) Fix the dynamic documentation of basic s3 client usage

2019-03-15 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1283:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Fix the dynamic documentation of basic s3 client usage
> --
>
> Key: HDDS-1283
> URL: https://issues.apache.org/jira/browse/HDDS-1283
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> S3 gateway has a default web page to display a generic message if you open 
> the endpoint in the browser:
> http://localhost:9878/static/
> It also contains a simple example to use the endpoint:
> {code}
> This is an endpoint of Apache Hadoop Ozone S3 gateway. Use it with any AWS S3 
> compatible tool with setting this url as an endpoint
> For example with aws-cli:
> aws s3api --endpoint http://localhost:9878/static/ create-bucket 
> --bucket=wordcount
> For more information, please check the documentation. 
> {code}
> Unfortunately the endpoint is wrong here; the /static part should be removed 
> from the url.
> The trivial fix is to move the ) in the js code.






[jira] [Updated] (HDDS-1283) Fix the dynamic documentation of basic s3 client usage

2019-03-15 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1283:
-
Fix Version/s: 0.4.0

> Fix the dynamic documentation of basic s3 client usage
> --
>
> Key: HDDS-1283
> URL: https://issues.apache.org/jira/browse/HDDS-1283
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> S3 gateway has a default web page to display a generic message if you open 
> the endpoint in the browser:
> http://localhost:9878/static/
> It also contains a simple example to use the endpoint:
> {code}
> This is an endpoint of Apache Hadoop Ozone S3 gateway. Use it with any AWS S3 
> compatible tool with setting this url as an endpoint
> For example with aws-cli:
> aws s3api --endpoint http://localhost:9878/static/ create-bucket 
> --bucket=wordcount
> For more information, please check the documentation. 
> {code}
> Unfortunately the endpoint is wrong here; the /static part should be removed 
> from the url.
> The trivial fix is to move the ) in the js code.






[jira] [Work logged] (HDDS-1263) SCM CLI does not list container with id 1

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1263?focusedWorklogId=214041=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214041
 ]

ASF GitHub Bot logged work on HDDS-1263:


Author: ASF GitHub Bot
Created on: 15/Mar/19 21:06
Start Date: 15/Mar/19 21:06
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #613: HDDS-1263. SCM 
CLI does not list container with id 1
URL: https://github.com/apache/hadoop/pull/613#issuecomment-473441191
 
 
   I will commit this shortly.
 



Issue Time Tracking
---

Worklog Id: (was: 214041)
Time Spent: 40m  (was: 0.5h)

> SCM CLI does not list container with id 1
> -
>
> Key: HDDS-1263
> URL: https://issues.apache.org/jira/browse/HDDS-1263
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Steps to reproduce
>  # Create two containers 
> {code:java}
> ozone scmcli create
> ozone scmcli create{code}
>  # Try to list containers
> {code:java}
> hadoop@7a73695402ae:~$ ozone scmcli list --start=0
>  Container ID should be a positive long. 0
> hadoop@7a73695402ae:~$ ozone scmcli list --start=1 
> { 
> "state" : "OPEN",
> "replicationFactor" : "ONE",
> "replicationType" : "STAND_ALONE",
> "usedBytes" : 0,
> "numberOfKeys" : 0,
> "lastUsed" : 274660388,
> "stateEnterTime" : 274646481,
> "owner" : "OZONE",
> "containerID" : 2,
> "deleteTransactionId" : 0,
> "sequenceId" : 0,
> "open" : true 
> }{code}
> There is no way to list the container with containerID 1.
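A possible shape of the fix (a sketch with illustrative names, not the actual SCM CLI patch) is to treat {{--start}} as the first id to include, so that {{--start=1}} reaches the very first container:

```java
// Hypothetical sketch of inclusive start-id handling for a "list" command.
// Names are illustrative, not taken from the real SCM CLI code.
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class ContainerList {
    // Return up to `count` container ids with id >= startId (inclusive),
    // so startId=1 includes the container with id 1.
    static List<Long> list(TreeMap<Long, String> containers,
                           long startId, int count) {
        List<Long> ids = new ArrayList<>();
        for (Long id : containers.tailMap(startId, true).keySet()) {
            if (ids.size() >= count) {
                break;
            }
            ids.add(id);
        }
        return ids;
    }

    public static void main(String[] args) {
        TreeMap<Long, String> c = new TreeMap<>();
        c.put(1L, "OPEN");
        c.put(2L, "OPEN");
        List<Long> out = list(c, 1L, 10);
        if (out.size() != 2 || out.get(0) != 1L) {
            throw new AssertionError();
        }
        System.out.println(out); // [1, 2]
    }
}
```

With exclusive semantics ({{tailMap(startId, false)}}) the first container is unreachable, which is exactly the reported behavior.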






[jira] [Commented] (HDDS-1283) Fix the dynamic documentation of basic s3 client usage

2019-03-15 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793947#comment-16793947
 ] 

Ajay Kumar commented on HDDS-1283:
--

[~elek] thanks for the contribution; committed to trunk and ozone 0.4.

> Fix the dynamic documentation of basic s3 client usage
> --
>
> Key: HDDS-1283
> URL: https://issues.apache.org/jira/browse/HDDS-1283
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> S3 gateway has a default web page to display a generic message if you open 
> the endpoint in the browser:
> http://localhost:9878/static/
> It also contains a simple example to use the endpoint:
> {code}
> This is an endpoint of Apache Hadoop Ozone S3 gateway. Use it with any AWS S3 
> compatible tool with setting this url as an endpoint
> For example with aws-cli:
> aws s3api --endpoint http://localhost:9878/static/ create-bucket 
> --bucket=wordcount
> For more information, please check the documentation. 
> {code}
> Unfortunately the endpoint is wrong here; the /static part should be removed 
> from the url.
> The trivial fix is to move the ) in the js code.






[jira] [Work logged] (HDDS-1283) Fix the dynamic documentation of basic s3 client usage

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1283?focusedWorklogId=214044=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214044
 ]

ASF GitHub Bot logged work on HDDS-1283:


Author: ASF GitHub Bot
Created on: 15/Mar/19 21:12
Start Date: 15/Mar/19 21:12
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #605: HDDS-1283. Fix 
the dynamic documentation of basic s3 client usage
URL: https://github.com/apache/hadoop/pull/605
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 214044)
Time Spent: 40m  (was: 0.5h)

> Fix the dynamic documentation of basic s3 client usage
> --
>
> Key: HDDS-1283
> URL: https://issues.apache.org/jira/browse/HDDS-1283
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> S3 gateway has a default web page to display a generic message if you open 
> the endpoint in the browser:
> http://localhost:9878/static/
> It also contains a simple example to use the endpoint:
> {code}
> This is an endpoint of Apache Hadoop Ozone S3 gateway. Use it with any AWS S3 
> compatible tool with setting this url as an endpoint
> For example with aws-cli:
> aws s3api --endpoint http://localhost:9878/static/ create-bucket 
> --bucket=wordcount
> For more information, please check the documentation. 
> {code}
> Unfortunately the endpoint is wrong here; the /static part should be removed 
> from the url.
> The trivial fix is to move the ) in the js code.






[jira] [Commented] (HDDS-1250) In OM HA AllocateBlock call where connecting to SCM from OM should not happen on Ratis

2019-03-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793940#comment-16793940
 ] 

Hadoop QA commented on HDDS-1250:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} https://github.com/apache/hadoop/pull/591 does not apply to 
trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hadoop/pull/591 |
| JIRA Issue | HDDS-1250 |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-591/10/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> In OM HA AllocateBlock call where connecting to SCM from OM should not happen 
> on Ratis
> --
>
> Key: HDDS-1250
> URL: https://issues.apache.org/jira/browse/HDDS-1250
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> In OM HA, currently when allocateBlock is called, in applyTransaction() on 
> all OM nodes, we make a call to SCM and write the allocateBlock information 
> into OM DB. The problem with this is that every OM calls allocateBlock and 
> appends new BlockInfo into OMKeyInfo, which is also a correctness issue. (All 
> OMs should have the same block information for a key, even though this might 
> eventually be changed during key commit.)
>  
> The proposed approach is:
> 1. Calling SCM for allocation of block will happen outside of ratis, and this 
> block information is passed and writing to DB will happen via Ratis.
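The proposed two-phase flow can be sketched as follows (hypothetical names, not OM source code): the non-deterministic SCM call happens before the Ratis round, and only the deterministic DB write is replicated, so every OM stores identical block information.

```java
// Illustrative model of the proposed flow: allocate the block from SCM
// *outside* Ratis, then replicate only the deterministic DB write.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class AllocateBlockFlow {
    // Stand-in for the SCM call: non-deterministic, so it must stay
    // off the replicated Ratis path.
    static long allocateFromScm(Supplier<Long> scm) {
        return scm.get();
    }

    // Stand-in for the Ratis apply step: a deterministic write of the
    // already-chosen block id into every OM replica's DB.
    static void applyViaRatis(List<Map<String, Long>> omDbs,
                              String key, long blockId) {
        for (Map<String, Long> db : omDbs) {
            db.put(key, blockId);
        }
    }

    public static void main(String[] args) {
        List<Map<String, Long>> omDbs = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            omDbs.add(new ConcurrentHashMap<>());
        }
        long blockId = allocateFromScm(() -> 42L);       // 1. outside Ratis
        applyViaRatis(omDbs, "vol/bucket/key", blockId); // 2. via Ratis
        // All three OMs now agree on the same block id.
        for (Map<String, Long> db : omDbs) {
            if (db.get("vol/bucket/key") != 42L) {
                throw new AssertionError();
            }
        }
        System.out.println("all OMs consistent: " + blockId);
    }
}
```

If instead each OM called SCM inside applyTransaction(), each replica could record a different block, which is the divergence the proposal avoids.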






[jira] [Updated] (HDDS-1281) Fix the findbug issue caused by HDDS-1163

2019-03-15 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1281:

Status: Open  (was: Patch Available)

> Fix the findbug issue caused by HDDS-1163
> -
>
> Key: HDDS-1281
> URL: https://issues.apache.org/jira/browse/HDDS-1281
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Aravindan Vijayan
>Priority: Minor
>  Labels: newbie
>
> https://ci.anzix.net/job/ozone-nightly/30/findbugs/






[jira] [Commented] (HDFS-14366) Improve HDFS append performance

2019-03-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793943#comment-16793943
 ] 

Íñigo Goiri commented on HDFS-14366:


In addition to trunk, I committed to branch-2, branch-2.9, branch-3.0, 
branch-3.1, and branch-3.2.
It applied with no issues.


> Improve HDFS append performance
> ---
>
> Key: HDFS-14366
> URL: https://issues.apache.org/jira/browse/HDFS-14366
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14366.000.patch, HDFS-14366.001.patch, 
> append-flamegraph.png
>
>
> In our HDFS cluster we observed that the {{append}} operation can take as much 
> as 10X the write lock time of other write operations. By collecting a 
> flamegraph on the namenode (see attachment: append-flamegraph.png), we found 
> that most of
> the append call is spent on {{getNumLiveDataNodes()}}:
> {code}
>   /** @return the number of live datanodes. */
>   public int getNumLiveDataNodes() {
> int numLive = 0;
> synchronized (this) {
>   for(DatanodeDescriptor dn : datanodeMap.values()) {
> if (!isDatanodeDead(dn) ) {
>   numLive++;
> }
>   }
> }
> return numLive;
>   }
> {code}
> This method synchronizes on the {{DatanodeManager}}, which is particularly 
> expensive in large clusters since {{datanodeMap}} is modified in many places, 
> such as when processing DN heartbeats.
> For {{append}} operation, {{getNumLiveDataNodes()}} is invoked in 
> {{isSufficientlyReplicated}}:
> {code}
>   /**
>* Check if a block is replicated to at least the minimum replication.
>*/
>   public boolean isSufficientlyReplicated(BlockInfo b) {
> // Compare against the lesser of the minReplication and number of live 
> DNs.
> final int replication =
> Math.min(minReplication, getDatanodeManager().getNumLiveDataNodes());
> return countNodes(b).liveReplicas() >= replication;
>   }
> {code}
> The way {{replication}} is calculated is suboptimal: it calls 
> {{getNumLiveDataNodes()}} _every time_, even though {{minReplication}} is 
> usually much smaller than the live datanode count.
>  
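One way the check could be reordered (a self-contained sketch under assumed names, not the committed HDFS-14366 patch) is to short-circuit on the cheap replica count before paying for the live-datanode scan:

```java
// Sketch of the short-circuit idea: the expensive live-node scan is only
// needed when liveReplicas falls short of minReplication, which is rare on
// a healthy cluster. Names here are stand-ins, not HDFS source.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SufficientReplicationCheck {
    static final int MIN_REPLICATION = 1;
    // Stand-in for datanodeMap: datanode id -> alive flag.
    static final Map<Integer, Boolean> DATANODES = new ConcurrentHashMap<>();

    // Expensive path: scans every datanode under a lock, like the
    // getNumLiveDataNodes() shown above.
    static synchronized int numLiveDataNodes() {
        int live = 0;
        for (boolean alive : DATANODES.values()) {
            if (alive) {
                live++;
            }
        }
        return live;
    }

    // Optimized check: short-circuit before the expensive scan.
    static boolean isSufficientlyReplicated(int liveReplicas) {
        if (liveReplicas >= MIN_REPLICATION) {
            return true; // common case: no datanodeMap scan at all
        }
        // Only tiny clusters with fewer live DNs than minReplication
        // ever reach this fallback.
        return liveReplicas >= Math.min(MIN_REPLICATION, numLiveDataNodes());
    }

    public static void main(String[] args) {
        DATANODES.put(1, true);
        DATANODES.put(2, false);
        if (!isSufficientlyReplicated(1)) {
            throw new AssertionError();
        }
        if (isSufficientlyReplicated(0)) {
            throw new AssertionError();
        }
        System.out.println("checks passed");
    }
}
```

The reordering preserves the original result while skipping the synchronized scan on the common path.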






[jira] [Commented] (HDFS-14374) Expose total number of delegation tokens in AbstractDelegationTokenSecretManager

2019-03-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793942#comment-16793942
 ] 

Íñigo Goiri commented on HDFS-14374:


Thanks [~crh] for the patch; a few comments:
* Can you do a static import for assertEquals()? The rest may use the full 
syntax, but I think we should start using just assertEquals().
* The expected value should be the first parameter for assertEquals().
* For the initialization of {{TestDelegationTokenSecretManager()}} can we use 
values like: {{TimeUnit.DAYS.toMillis(7)}}?
* In the javadoc for {{getCurrentTokensSize()}} we should say that cancelled 
tokens don't count. Basically describe what we are testing in 
{{testDelegationTokenCount()}}.
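The counting behavior under discussion can be sketched as follows (a hypothetical minimal model, not the actual AbstractDelegationTokenSecretManager patch): issuing a token adds it to the current-token map and cancelling removes it, so cancelled tokens never contribute to the count.

```java
// Minimal model of an active-token counter; names are illustrative.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class TokenCounter {
    private final Map<Long, String> currentTokens = new ConcurrentHashMap<>();
    private final AtomicLong nextId = new AtomicLong();

    // Issue a token: it becomes part of the active set.
    long issueToken(String owner) {
        long id = nextId.incrementAndGet();
        currentTokens.put(id, owner);
        return id;
    }

    // Cancel a token: it leaves the active set immediately.
    void cancelToken(long id) {
        currentTokens.remove(id);
    }

    // Mirrors the proposed getCurrentTokensSize(): active tokens only,
    // so cancelled tokens don't count.
    int getCurrentTokensSize() {
        return currentTokens.size();
    }

    public static void main(String[] args) {
        TokenCounter m = new TokenCounter();
        long a = m.issueToken("alice");
        m.issueToken("bob");
        m.cancelToken(a);
        if (m.getCurrentTokensSize() != 1) {
            throw new AssertionError();
        }
        System.out.println("active tokens: " + m.getCurrentTokensSize());
    }
}
```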

> Expose total number of delegation tokens in 
> AbstractDelegationTokenSecretManager
> 
>
> Key: HDFS-14374
> URL: https://issues.apache.org/jira/browse/HDFS-14374
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14374.001.patch, HDFS-14374.002.patch
>
>
> AbstractDelegationTokenSecretManager should expose total number of active 
> delegation tokens for specific implementations to track for observability.






[jira] [Commented] (HDFS-14374) Expose total number of delegation tokens in AbstractDelegationTokenSecretManager

2019-03-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793933#comment-16793933
 ] 

Hadoop QA commented on HDFS-14374:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 94 unchanged - 0 fixed = 96 total (was 94) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
5s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14374 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962641/HDFS-14374.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 89cdbb3200db 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ff06ef0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26484/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26484/testReport/ |
| Max. process+thread count | 1347 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26484/console |
| Powered by | Apache Yetus 0.8.0   

[jira] [Commented] (HDFS-14366) Improve HDFS append performance

2019-03-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793937#comment-16793937
 ] 

Íñigo Goiri commented on HDFS-14366:


I guess it is worthwhile.
Let me see if it applies.

> Improve HDFS append performance
> ---
>
> Key: HDFS-14366
> URL: https://issues.apache.org/jira/browse/HDFS-14366
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14366.000.patch, HDFS-14366.001.patch, 
> append-flamegraph.png
>
>
> In our HDFS cluster we observed that the {{append}} operation can take as much 
> as 10X the write lock time of other write operations. By collecting a 
> flamegraph on the namenode (see attachment: append-flamegraph.png), we found 
> that most of
> the append call is spent on {{getNumLiveDataNodes()}}:
> {code}
>   /** @return the number of live datanodes. */
>   public int getNumLiveDataNodes() {
> int numLive = 0;
> synchronized (this) {
>   for(DatanodeDescriptor dn : datanodeMap.values()) {
> if (!isDatanodeDead(dn) ) {
>   numLive++;
> }
>   }
> }
> return numLive;
>   }
> {code}
> This method synchronizes on the {{DatanodeManager}}, which is particularly 
> expensive in large clusters since {{datanodeMap}} is modified in many places, 
> such as when processing DN heartbeats.
> For {{append}} operation, {{getNumLiveDataNodes()}} is invoked in 
> {{isSufficientlyReplicated}}:
> {code}
>   /**
>* Check if a block is replicated to at least the minimum replication.
>*/
>   public boolean isSufficientlyReplicated(BlockInfo b) {
> // Compare against the lesser of the minReplication and number of live 
> DNs.
> final int replication =
> Math.min(minReplication, getDatanodeManager().getNumLiveDataNodes());
> return countNodes(b).liveReplicas() >= replication;
>   }
> {code}
> The way {{replication}} is calculated is suboptimal: it calls 
> {{getNumLiveDataNodes()}} _every time_, even though {{minReplication}} is 
> usually much smaller than the live datanode count.
>  






[jira] [Updated] (HDFS-14374) Expose total number of delegation tokens in AbstractDelegationTokenSecretManager

2019-03-15 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-14374:
---
Attachment: HDFS-14374.002.patch

> Expose total number of delegation tokens in 
> AbstractDelegationTokenSecretManager
> 
>
> Key: HDFS-14374
> URL: https://issues.apache.org/jira/browse/HDFS-14374
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14374.001.patch, HDFS-14374.002.patch
>
>
> AbstractDelegationTokenSecretManager should expose total number of active 
> delegation tokens for specific implementations to track for observability.






[jira] [Updated] (HDDS-1281) Fix the findbug issue caused by HDDS-1163

2019-03-15 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1281:

Attachment: (was: HDDS-1281-000.patch)

> Fix the findbug issue caused by HDDS-1163
> -
>
> Key: HDDS-1281
> URL: https://issues.apache.org/jira/browse/HDDS-1281
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Aravindan Vijayan
>Priority: Minor
>  Labels: newbie
>
> https://ci.anzix.net/job/ozone-nightly/30/findbugs/






[jira] [Work logged] (HDDS-1250) In OM HA AllocateBlock call where connecting to SCM from OM should not happen on Ratis

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1250?focusedWorklogId=214033=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214033
 ]

ASF GitHub Bot logged work on HDDS-1250:


Author: ASF GitHub Bot
Created on: 15/Mar/19 20:58
Start Date: 15/Mar/19 20:58
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #591: HDDS-1250: 
Initial patch with the proposed fix.
URL: https://github.com/apache/hadoop/pull/591#issuecomment-473438864
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 7 | https://github.com/apache/hadoop/pull/591 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/591 |
   | JIRA Issue | HDDS-1250 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-591/10/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 214033)
Time Spent: 2.5h  (was: 2h 20m)

> In OM HA AllocateBlock call where connecting to SCM from OM should not happen 
> on Ratis
> --
>
> Key: HDDS-1250
> URL: https://issues.apache.org/jira/browse/HDDS-1250
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> In OM HA, currently when allocateBlock is called, in applyTransaction() on 
> all OM nodes, we make a call to SCM and write the allocateBlock information 
> into OM DB. The problem with this is that every OM calls allocateBlock and 
> appends new BlockInfo into OMKeyInfo, which is also a correctness issue. (All 
> OMs should have the same block information for a key, even though this might 
> eventually be changed during key commit.)
>  
> The proposed approach is:
> 1. Calling SCM for allocation of block will happen outside of ratis, and this 
> block information is passed and writing to DB will happen via Ratis.






[jira] [Work logged] (HDDS-1250) In OM HA AllocateBlock call where connecting to SCM from OM should not happen on Ratis

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1250?focusedWorklogId=214032=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214032
 ]

ASF GitHub Bot logged work on HDDS-1250:


Author: ASF GitHub Bot
Created on: 15/Mar/19 20:57
Start Date: 15/Mar/19 20:57
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #591: HDDS-1250: 
Initial patch with the proposed fix.
URL: https://github.com/apache/hadoop/pull/591#issuecomment-473438659
 
 
   Thank You @hanishakoneru  for the offline discussion.
   Addressed the suggestions.
 



Issue Time Tracking
---

Worklog Id: (was: 214032)
Time Spent: 2h 20m  (was: 2h 10m)

> In OM HA AllocateBlock call where connecting to SCM from OM should not happen 
> on Ratis
> --
>
> Key: HDDS-1250
> URL: https://issues.apache.org/jira/browse/HDDS-1250
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> In OM HA, currently when allocateBlock is called, in applyTransaction() on 
> all OM nodes, we make a call to SCM and write the allocateBlock information 
> into OM DB. The problem with this is that every OM calls allocateBlock and 
> appends new BlockInfo into OMKeyInfo, which is also a correctness issue. (All 
> OMs should have the same block information for a key, even though this might 
> eventually be changed during key commit.)
>  
> The proposed approach is:
> 1. Calling SCM for allocation of block will happen outside of ratis, and this 
> block information is passed and writing to DB will happen via Ratis.
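The two-phase flow proposed above can be sketched as follows. This is a minimal illustration only: the class and interface names (ScmClient, RatisClient, AllocatedBlock) are hypothetical stand-ins, not the actual Ozone APIs; the point is that the SCM call happens once, before Ratis, so every OM applies the same block info.

```java
import java.util.ArrayList;
import java.util.List;

public class AllocateBlockFlow {

  static class AllocatedBlock {
    final long containerId;
    final long localId;
    AllocatedBlock(long containerId, long localId) {
      this.containerId = containerId;
      this.localId = localId;
    }
  }

  /** Stand-in for the SCM RPC; invoked once, outside of Ratis. */
  interface ScmClient {
    AllocatedBlock allocateBlock(long size);
  }

  /** Stand-in for the Ratis submission; replicates only the DB write. */
  interface RatisClient {
    void submit(AllocatedBlock block, List<AllocatedBlock> omDb);
  }

  /**
   * Leader-side handler: allocate from SCM first, then replicate the
   * resulting block info through Ratis so all OMs write identical state.
   */
  static AllocatedBlock handleAllocateBlock(ScmClient scm, RatisClient ratis,
      List<AllocatedBlock> omDb, long size) {
    AllocatedBlock block = scm.allocateBlock(size);   // outside Ratis
    ratis.submit(block, omDb);                        // replicated DB write
    return block;
  }

  public static void main(String[] args) {
    List<AllocatedBlock> db = new ArrayList<>();
    ScmClient scm = size -> new AllocatedBlock(1L, 100L);
    RatisClient ratis = (blk, omDb) -> omDb.add(blk);
    AllocatedBlock b = handleAllocateBlock(scm, ratis, db, 4096);
    System.out.println(b.containerId + ":" + b.localId + " entries=" + db.size());
  }
}
```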



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1264) Remove Parametrized in TestOzoneShell

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1264?focusedWorklogId=214024&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214024
 ]

ASF GitHub Bot logged work on HDDS-1264:


Author: ASF GitHub Bot
Created on: 15/Mar/19 20:43
Start Date: 15/Mar/19 20:43
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #614: HDDS-1264. 
Remove Parametrized in TestOzoneShell
URL: https://github.com/apache/hadoop/pull/614#issuecomment-473434768
 
 
   @bharatviswa504 Please review this when you find time
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214024)
Time Spent: 20m  (was: 10m)

> Remove Parametrized in TestOzoneShell
> -
>
> Key: HDDS-1264
> URL: https://issues.apache.org/jira/browse/HDDS-1264
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> HDDS-1068 removed RestClient from TestOzoneShell.java.
> So the test no longer needs to be parameterized; we can test directly with 
> RpcClient.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1281) Fix the findbug issue caused by HDDS-1163

2019-03-15 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1281:

Attachment: HDDS-1281-000.patch
Status: Patch Available  (was: Open)

Fixed the findbugs issue. 

> Fix the findbug issue caused by HDDS-1163
> -
>
> Key: HDDS-1281
> URL: https://issues.apache.org/jira/browse/HDDS-1281
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Aravindan Vijayan
>Priority: Minor
>  Labels: newbie
> Attachments: HDDS-1281-000.patch
>
>
> https://ci.anzix.net/job/ozone-nightly/30/findbugs/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1250) In OM HA AllocateBlock call where connecting to SCM from OM should not happen on Ratis

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1250?focusedWorklogId=214026&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214026
 ]

ASF GitHub Bot logged work on HDDS-1250:


Author: ASF GitHub Bot
Created on: 15/Mar/19 20:48
Start Date: 15/Mar/19 20:48
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #591: HDDS-1250: 
Initial patch with the proposed fix.
URL: https://github.com/apache/hadoop/pull/591#issuecomment-473436151
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 6 | https://github.com/apache/hadoop/pull/591 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/591 |
   | JIRA Issue | HDDS-1250 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-591/9/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214026)
Time Spent: 2h 10m  (was: 2h)

> In OM HA AllocateBlock call where connecting to SCM from OM should not happen 
> on Ratis
> --
>
> Key: HDDS-1250
> URL: https://issues.apache.org/jira/browse/HDDS-1250
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> In OM HA, currently when allocateBlock is called, in applyTransaction() on 
> all OM nodes, we make a call to SCM and write the allocateBlock information 
> into the OM DB. The problem with this is that every OM calls allocateBlock 
> and appends new BlockInfo into OMKeyInfo, which is also a correctness issue. 
> (All OMs should have the same block information for a key, even though 
> eventually this might be changed during key commit.)
>  
> The proposed approach is:
> 1. The call to SCM for block allocation will happen outside of Ratis; the 
> returned block information is passed through Ratis, and the write to the DB 
> happens via Ratis.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1250) In OM HA AllocateBlock call where connecting to SCM from OM should not happen on Ratis

2019-03-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793934#comment-16793934
 ] 

Hadoop QA commented on HDDS-1250:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} https://github.com/apache/hadoop/pull/591 does not apply to 
trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hadoop/pull/591 |
| JIRA Issue | HDDS-1250 |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-591/9/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> In OM HA AllocateBlock call where connecting to SCM from OM should not happen 
> on Ratis
> --
>
> Key: HDDS-1250
> URL: https://issues.apache.org/jira/browse/HDDS-1250
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In OM HA, currently when allocateBlock is called, in applyTransaction() on 
> all OM nodes, we make a call to SCM and write the allocateBlock information 
> into the OM DB. The problem with this is that every OM calls allocateBlock 
> and appends new BlockInfo into OMKeyInfo, which is also a correctness issue. 
> (All OMs should have the same block information for a key, even though 
> eventually this might be changed during key commit.)
>  
> The proposed approach is:
> 1. The call to SCM for block allocation will happen outside of Ratis; the 
> returned block information is passed through Ratis, and the write to the DB 
> happens via Ratis.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1281) Fix the findbug issue caused by HDDS-1163

2019-03-15 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan reassigned HDDS-1281:
---

Assignee: Aravindan Vijayan

> Fix the findbug issue caused by HDDS-1163
> -
>
> Key: HDDS-1281
> URL: https://issues.apache.org/jira/browse/HDDS-1281
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Aravindan Vijayan
>Priority: Minor
>  Labels: newbie
>
> https://ci.anzix.net/job/ozone-nightly/30/findbugs/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1281) Fix the findbug issue caused by HDDS-1163

2019-03-15 Thread Aravindan Vijayan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793932#comment-16793932
 ] 

Aravindan Vijayan commented on HDDS-1281:
-

[~bharatviswa] I can work on this. 

> Fix the findbug issue caused by HDDS-1163
> -
>
> Key: HDDS-1281
> URL: https://issues.apache.org/jira/browse/HDDS-1281
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Minor
>  Labels: newbie
>
> https://ci.anzix.net/job/ozone-nightly/30/findbugs/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1264) Remove Parametrized in TestOzoneShell

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1264?focusedWorklogId=214023&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214023
 ]

ASF GitHub Bot logged work on HDDS-1264:


Author: ASF GitHub Bot
Created on: 15/Mar/19 20:42
Start Date: 15/Mar/19 20:42
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #614: 
HDDS-1264. Remove Parametrized in TestOzoneShell
URL: https://github.com/apache/hadoop/pull/614
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214023)
Time Spent: 10m
Remaining Estimate: 0h

> Remove Parametrized in TestOzoneShell
> -
>
> Key: HDDS-1264
> URL: https://issues.apache.org/jira/browse/HDDS-1264
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDDS-1068 removed RestClient from TestOzoneShell.java.
> So the test no longer needs to be parameterized; we can test directly with 
> RpcClient.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1264) Remove Parametrized in TestOzoneShell

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1264:
-
Labels: newbie pull-request-available  (was: newbie)

> Remove Parametrized in TestOzoneShell
> -
>
> Key: HDDS-1264
> URL: https://issues.apache.org/jira/browse/HDDS-1264
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>
> HDDS-1068 removed RestClient from TestOzoneShell.java.
> So the test no longer needs to be parameterized; we can test directly with 
> RpcClient.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1263) SCM CLI does not list container with id 1

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1263?focusedWorklogId=214022&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214022
 ]

ASF GitHub Bot logged work on HDDS-1263:


Author: ASF GitHub Bot
Created on: 15/Mar/19 20:40
Start Date: 15/Mar/19 20:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #613: HDDS-1263. SCM 
CLI does not list container with id 1
URL: https://github.com/apache/hadoop/pull/613#issuecomment-473433734
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 992 | trunk passed |
   | +1 | compile | 51 | trunk passed |
   | +1 | checkstyle | 23 | trunk passed |
   | +1 | mvnsite | 33 | trunk passed |
   | +1 | shadedclient | 720 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 37 | trunk passed |
   | +1 | javadoc | 24 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 33 | the patch passed |
   | +1 | compile | 22 | the patch passed |
   | +1 | javac | 22 | the patch passed |
   | +1 | checkstyle | 12 | the patch passed |
   | +1 | mvnsite | 25 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 698 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 46 | the patch passed |
   | +1 | javadoc | 21 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 92 | server-scm in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 2965 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-613/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/613 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux d02eec570390 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ff06ef0 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-613/1/testReport/ |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-613/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214022)
Time Spent: 0.5h  (was: 20m)

> SCM CLI does not list container with id 1
> -
>
> Key: HDDS-1263
> URL: https://issues.apache.org/jira/browse/HDDS-1263
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Steps to reproduce
>  # Create two containers 
> {code:java}
> ozone scmcli create
> ozone scmcli create{code}
>  # Try to list containers
> {code:java}
> hadoop@7a73695402ae:~$ ozone scmcli list --start=0
>  Container ID should be a positive long. 0
> hadoop@7a73695402ae:~$ ozone scmcli list --start=1 
> { 
> "state" : "OPEN",
> "replicationFactor" : "ONE",
> "replicationType" : "STAND_ALONE",
> "usedBytes" : 0,
> "numberOfKeys" : 0,
> "lastUsed" : 274660388,
> "stateEnterTime" : 274646481,
> "owner" : "OZONE",
> "containerID" : 2,
> "deleteTransactionId" : 0,
> "sequenceId" : 0,
> "open" : true 
> }{code}
> There is no way to list the container with containerID 1.

[jira] [Updated] (HDDS-1233) Create an Ozone Manager Service provider for Recon.

2019-03-15 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1233:

Attachment: HDDS-1233-002.patch
Status: Patch Available  (was: Open)

Thanks for the review [~swagle] and [~linyiqun]. 

Addressed Review comments

* "recon.om.connection.timeout", specify millis, same for other configs - 
*Fixed*
* ReconUtils seems to provide configuration key, should probably be named 
appropriately - *Added more methods here. So, the class name now makes sense.*
* untarCheckpointFile, increase READ/WRITE buffer sizes. - *Increased to 50 KB.*
* Don't see the need for ReconOMMetadataManagerProvider. - *Removed*
* ReconContainerDBProvider might return null - *No straightforward way to fix 
this since the RocksDB constructor itself throws IOException. We can revisit 
this later.*
* omMetadataManager.start(configuration) Should rename if it does not start a 
thread - *Fixed*

OzoneManagerServiceProviderImpl
Line 98: Format string lacks one '{}' -
Line 78: We can make these time configs support with time-unit suffixes. Like 
ozone.om.save.metrics.interval did. After that, we should use 
Configuration#getTimeDuration to get the value.
*Fixed both*

ReconOMHelper
Line 215: The GzipCompressorInputStream should be closed before throwing the 
exception, otherwise it will lead to a resource leak.
*Fixed*
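The usual cure for the stream leak flagged above is try-with-resources, which closes the stream on both the normal and the exceptional path. A minimal sketch follows; it uses java.util.zip.GZIPInputStream as a self-contained stand-in for commons-compress's GzipCompressorInputStream, and the method name is illustrative, not the actual ReconOMHelper API.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipCloseSketch {

  /**
   * Decompress a gzipped byte array. Both streams are declared in the
   * try-with-resources header, so they are closed even if read() throws,
   * which is exactly the leak scenario from the review comment.
   */
  static byte[] decompress(byte[] gzipped) throws IOException {
    try (InputStream in = new GZIPInputStream(new ByteArrayInputStream(gzipped));
         ByteArrayOutputStream out = new ByteArrayOutputStream()) {
      byte[] buf = new byte[8192];
      int n;
      while ((n = in.read(buf)) != -1) {
        out.write(buf, 0, n);
      }
      return out.toByteArray();
    } // streams closed here on every exit path
  }

  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
      gz.write("hello recon".getBytes("UTF-8"));
    }
    System.out.println(new String(decompress(bos.toByteArray()), "UTF-8"));
  }
}
```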

OMConfigKeys
Line 56: This setting already exists as OzoneConfigKeys#OZONE_SECURITY_ENABLED_KEY 
under hadoop-hdds-common.
*Fixed*

Unit test
Can we add the prefix 'test' for all added test cases (excluding setup/teardown 
methods)? E.g. {{getOMMetadataManagerInstance}} to 
{{testGetOMMetadataManagerInstance}}.
*Fixed*
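The review also recommends reading the interval via Configuration#getTimeDuration so values like "30s" or "5m" are accepted. As a rough illustration of what that suffix handling does, here is a minimal, hypothetical re-implementation; the real Hadoop method additionally handles defaults, more units, and negative values.

```java
import java.util.concurrent.TimeUnit;

public class TimeDurationSketch {

  /**
   * Parse a duration string with an optional unit suffix into milliseconds,
   * mimicking the suffix handling of Hadoop's Configuration#getTimeDuration.
   * Bare numbers are treated as milliseconds; only a few units are covered.
   */
  static long parseMillis(String value) {
    String v = value.trim().toLowerCase();
    TimeUnit unit = TimeUnit.MILLISECONDS;
    if (v.endsWith("ms")) {
      v = v.substring(0, v.length() - 2);
    } else if (v.endsWith("s")) {
      unit = TimeUnit.SECONDS;
      v = v.substring(0, v.length() - 1);
    } else if (v.endsWith("m")) {
      unit = TimeUnit.MINUTES;
      v = v.substring(0, v.length() - 1);
    } else if (v.endsWith("h")) {
      unit = TimeUnit.HOURS;
      v = v.substring(0, v.length() - 1);
    }
    return unit.toMillis(Long.parseLong(v.trim()));
  }

  public static void main(String[] args) {
    System.out.println(parseMillis("30s"));   // 30000
    System.out.println(parseMillis("5m"));    // 300000
    System.out.println(parseMillis("100ms")); // 100
  }
}
```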

> Create an Ozone Manager Service provider for Recon.
> ---
>
> Key: HDDS-1233
> URL: https://issues.apache.org/jira/browse/HDDS-1233
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1233-000.patch, HDDS-1233-001.patch, 
> HDDS-1233-002.patch
>
>
> * Implement an abstraction to let Recon make OM specific requests.
> * At this point of time, the only request is to get the DB snapshot. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1233) Create an Ozone Manager Service provider for Recon.

2019-03-15 Thread Aravindan Vijayan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793923#comment-16793923
 ] 

Aravindan Vijayan edited comment on HDDS-1233 at 3/15/19 8:33 PM:
--

Thanks for the review [~swagle] and [~linyiqun]. 

Addressed Review comments

* "recon.om.connection.timeout", specify millis, same for other configs - 
*Fixed*
* ReconUtils seems to provide configuration key, should probably be named 
appropriately -
 *Added more methods here. So, the class name now makes sense.*
* untarCheckpointFile, increase READ/WRITE buffer sizes. - *Increased to 50Kb.*
* Don't see the need for ReconOMMetadataManagerProvider. - *Removed*
* ReconContainerDBProvider might return null - 
*No straightforward way to fix this since RocksDB constructor itself throws 
IOException. We can revisit this later.*
* omMetadataManager.start(configuration) Should rename if it does not start a 
thread - *Fixed*

OzoneManagerServiceProviderImpl
Line 98: Format string lacks one '{}' -
Line 78: We can make these time configs support with time-unit suffixes. Like 
ozone.om.save.metrics.interval did. After that, we should use 
Configuration#getTimeDuration to get the value.
*Fixed both*

ReconOMHelper
Line 215: The GzipCompressorInputStream should be closed before throwing the 
exception, otherwise it will lead to a resource leak.
*Fixed*

OMConfigKeys
Line 56: This setting already exists as OzoneConfigKeys#OZONE_SECURITY_ENABLED_KEY 
under hadoop-hdds-common.
*Fixed*

Unit test
Can we add the prefix 'test' for all added test cases (excluding setup/teardown 
methods)? E.g. {{getOMMetadataManagerInstance}} to 
{{testGetOMMetadataManagerInstance}}.
*Fixed*


was (Author: avijayan):
Thanks for the review [~swagle] and [~linyiqun]. 

Addressed Review comments

* "recon.om.connection.timeout", specify millis, same for other configs - 
*Fixed*
* ReconUtils seems to provide configuration key, should probably be named 
appropriately - *Added more methods here. So, the class name now makes sense. *
* untarCheckpointFile, increase READ/WRITE buffer sizes. - Increased to 50Kb.
* Don't see the need for ReconOMMetadataManagerProvider. - *Removed*
* ReconContainerDBProvider might return null - *No straightforward way to fix 
this since RocksDB constructor itself throws IOException. We can revisit this 
later. *
* omMetadataManager.start(configuration) Should rename if it does not start a 
thread - *Fixed*

OzoneManagerServiceProviderImpl
Line 98: Format string lacks one '{}' -
Line 78: We can make these time configs support with time-unit suffixes. Like 
ozone.om.save.metrics.interval did. After that, we should use 
Configuration#getTimeDuration to get the value.
*Fixed both*

ReconOMHelper
Line 215: The GzipCompressorInputStream should be closed before throwing the 
exception, otherwise it will lead the resource leak.
*Fixed*

OMConfigKeys
Line 56: This setting is existed in OzoneConfigKeys#OZONE_SECURITY_ENABLED_KEY 
under hadoop-hdds-common.
*Fixed*

Unit test
Can we add the prefix 'test'}}for all added test cases (exclude setup/teardown 
method)? E.g. {{getOMMetadataManagerInstance to 
testGetOMMetadataManagerInstance.
*Fixed*

> Create an Ozone Manager Service provider for Recon.
> ---
>
> Key: HDDS-1233
> URL: https://issues.apache.org/jira/browse/HDDS-1233
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1233-000.patch, HDDS-1233-001.patch, 
> HDDS-1233-002.patch
>
>
> * Implement an abstraction to let Recon make OM specific requests.
> * At this point of time, the only request is to get the DB snapshot. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14351) RBF: Optimize configuration item resolving for monitor namenode

2019-03-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16793918#comment-16793918
 ] 

Hadoop QA commented on HDFS-14351:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 1s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 24m 
18s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14351 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962640/HDFS-14351-HDFS-13891.006.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c6ff993a00ff 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / c359a52 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26485/testReport/ |
| Max. process+thread count | 996 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26485/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Optimize configuration item resolving for monitor namenode
> 

[jira] [Work started] (HDDS-1263) SCM CLI does not list container with id 1

2019-03-15 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1263 started by Vivek Ratnavel Subramanian.

> SCM CLI does not list container with id 1
> -
>
> Key: HDDS-1263
> URL: https://issues.apache.org/jira/browse/HDDS-1263
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Steps to reproduce
>  # Create two containers 
> {code:java}
> ozone scmcli create
> ozone scmcli create{code}
>  # Try to list containers
> {code:java}
> hadoop@7a73695402ae:~$ ozone scmcli list --start=0
>  Container ID should be a positive long. 0
> hadoop@7a73695402ae:~$ ozone scmcli list --start=1 
> { 
> "state" : "OPEN",
> "replicationFactor" : "ONE",
> "replicationType" : "STAND_ALONE",
> "usedBytes" : 0,
> "numberOfKeys" : 0,
> "lastUsed" : 274660388,
> "stateEnterTime" : 274646481,
> "owner" : "OZONE",
> "containerID" : 2,
> "deleteTransactionId" : 0,
> "sequenceId" : 0,
> "open" : true 
> }{code}
> There is no way to list the container with containerID 1.
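The symptom above comes from treating --start both as exclusive and as requiring a positive value, which makes container id 1 unreachable. A sketch of one plausible repair, accepting 0 (or an omitted --start) as "list from the beginning" with an inclusive lower bound, is shown below; this is illustrative only and does not reflect the actual SCM ContainerManager/CLI code.

```java
import java.util.ArrayList;
import java.util.List;

public class ListContainersSketch {

  /**
   * List up to {@code count} container ids, starting from {@code start}
   * inclusively; start == 0 means "from the beginning", so container 1 is
   * reachable. Only negative values are rejected.
   */
  static List<Long> list(List<Long> ids, long start, int count) {
    if (start < 0) {
      throw new IllegalArgumentException(
          "Container ID cannot be negative: " + start);
    }
    long first = Math.max(start, 1);  // inclusive lower bound
    List<Long> out = new ArrayList<>();
    for (long id : ids) {
      if (id >= first && out.size() < count) {
        out.add(id);
      }
    }
    return out;
  }

  public static void main(String[] args) {
    List<Long> ids = List.of(1L, 2L, 3L);
    System.out.println(list(ids, 0, 10));  // container 1 is now listable
  }
}
```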



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1233) Create an Ozone Manager Service provider for Recon.

2019-03-15 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1233:

Status: Open  (was: Patch Available)

> Create an Ozone Manager Service provider for Recon.
> ---
>
> Key: HDDS-1233
> URL: https://issues.apache.org/jira/browse/HDDS-1233
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1233-000.patch, HDDS-1233-001.patch
>
>
> * Implement an abstraction to let Recon make OM specific requests.
> * At this point of time, the only request is to get the DB snapshot. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1263) SCM CLI does not list container with id 1

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1263?focusedWorklogId=214013&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214013
 ]

ASF GitHub Bot logged work on HDDS-1263:


Author: ASF GitHub Bot
Created on: 15/Mar/19 20:18
Start Date: 15/Mar/19 20:18
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #613: HDDS-1263. SCM 
CLI does not list container with id 1
URL: https://github.com/apache/hadoop/pull/613#issuecomment-473427544
 
 
   @bharatviswa504 Yes, they will be added as part of HDDS-711 as robot tests.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 214013)
Time Spent: 20m  (was: 10m)

> SCM CLI does not list container with id 1
> -
>
> Key: HDDS-1263
> URL: https://issues.apache.org/jira/browse/HDDS-1263
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Steps to reproduce
>  # Create two containers 
> {code:java}
> ozone scmcli create
> ozone scmcli create{code}
>  # Try to list containers
> {code:java}
> hadoop@7a73695402ae:~$ ozone scmcli list --start=0
>  Container ID should be a positive long. 0
> hadoop@7a73695402ae:~$ ozone scmcli list --start=1 
> { 
> "state" : "OPEN",
> "replicationFactor" : "ONE",
> "replicationType" : "STAND_ALONE",
> "usedBytes" : 0,
> "numberOfKeys" : 0,
> "lastUsed" : 274660388,
> "stateEnterTime" : 274646481,
> "owner" : "OZONE",
> "containerID" : 2,
> "deleteTransactionId" : 0,
> "sequenceId" : 0,
> "open" : true 
> }{code}
> There is no way to list the container with containerID 1.






[jira] [Commented] (HDDS-1250) In OM HA AllocateBlock call where connecting to SCM from OM should not happen on Ratis

2019-03-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793907#comment-16793907
 ] 

Hadoop QA commented on HDDS-1250:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
42s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
39s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
27s{color} | {color:red} ozone-manager in trunk failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
28s{color} | {color:red} integration-test in trunk failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
25s{color} | {color:red} ozone-manager in trunk failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
20s{color} | {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
24s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
37s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 37s{color} | 
{color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 37s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 21s{color} | {color:orange} hadoop-ozone: The patch generated 3 new + 2 
unchanged - 0 fixed = 5 total (was 2) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
23s{color} | {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
25s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
24s{color} | {color:red} ozone-manager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 24s{color} 
| {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 23s{color} 
| {color:red} 

[jira] [Work logged] (HDDS-1250) In OM HA AllocateBlock call where connecting to SCM from OM should not happen on Ratis

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1250?focusedWorklogId=214000=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-214000
 ]

ASF GitHub Bot logged work on HDDS-1250:


Author: ASF GitHub Bot
Created on: 15/Mar/19 20:05
Start Date: 15/Mar/19 20:05
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #591: HDDS-1250: 
Initial patch with the proposed fix.
URL: https://github.com/apache/hadoop/pull/591#issuecomment-473423858
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 46 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1122 | trunk passed |
   | -1 | compile | 39 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 27 | trunk passed |
   | -1 | mvnsite | 27 | ozone-manager in trunk failed. |
   | -1 | mvnsite | 28 | integration-test in trunk failed. |
   | +1 | shadedclient | 828 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | -1 | findbugs | 25 | ozone-manager in trunk failed. |
   | +1 | javadoc | 72 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for patch |
   | -1 | mvninstall | 20 | ozone-manager in the patch failed. |
   | -1 | mvninstall | 24 | integration-test in the patch failed. |
   | -1 | compile | 37 | hadoop-ozone in the patch failed. |
   | -1 | cc | 37 | hadoop-ozone in the patch failed. |
   | -1 | javac | 37 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 21 | hadoop-ozone: The patch generated 3 new + 2 
unchanged - 0 fixed = 5 total (was 2) |
   | -1 | mvnsite | 23 | ozone-manager in the patch failed. |
   | -1 | mvnsite | 25 | integration-test in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 784 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | -1 | findbugs | 24 | ozone-manager in the patch failed. |
   | +1 | javadoc | 66 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 38 | common in the patch passed. |
   | -1 | unit | 24 | ozone-manager in the patch failed. |
   | -1 | unit | 23 | integration-test in the patch failed. |
   | +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
   | | | 3658 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-591/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/591 |
   | JIRA Issue | HDDS-1250 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
   | uname | Linux 188bdc237bd8 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ff06ef0 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-591/8/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-591/8/artifact/out/branch-mvnsite-hadoop-ozone_ozone-manager.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-591/8/artifact/out/branch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | findbugs | v3.1.0-RC1 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-591/8/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-591/8/artifact/out/patch-mvninstall-hadoop-ozone_ozone-manager.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-591/8/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-591/8/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-591/8/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-591/8/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 

[jira] [Commented] (HDFS-14366) Improve HDFS append performance

2019-03-15 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793906#comment-16793906
 ] 

Chao Sun commented on HDFS-14366:
-

Thanks [~elgoiri]! Do you think we should backport this to other branches such 
as branch-2 as well?

> Improve HDFS append performance
> ---
>
> Key: HDFS-14366
> URL: https://issues.apache.org/jira/browse/HDFS-14366
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14366.000.patch, HDFS-14366.001.patch, 
> append-flamegraph.png
>
>
> In our HDFS cluster we observed that {{append}} operation can take as much as 
> 10X write lock time than other write operations. By collecting flamegraph on 
> the namenode (see attachment: append-flamegraph.png), we found that most of 
> the append call is spent on {{getNumLiveDataNodes()}}:
> {code}
>   /** @return the number of live datanodes. */
>   public int getNumLiveDataNodes() {
> int numLive = 0;
> synchronized (this) {
>   for(DatanodeDescriptor dn : datanodeMap.values()) {
> if (!isDatanodeDead(dn) ) {
>   numLive++;
> }
>   }
> }
> return numLive;
>   }
> {code}
> this method synchronizes on the {{DatanodeManager}} which is particularly 
> expensive in large clusters since {{datanodeMap}} is being modified in many 
> places such as processing DN heartbeats.
> For {{append}} operation, {{getNumLiveDataNodes()}} is invoked in 
> {{isSufficientlyReplicated}}:
> {code}
>   /**
>* Check if a block is replicated to at least the minimum replication.
>*/
>   public boolean isSufficientlyReplicated(BlockInfo b) {
> // Compare against the lesser of the minReplication and number of live 
> DNs.
> final int replication =
> Math.min(minReplication, getDatanodeManager().getNumLiveDataNodes());
> return countNodes(b).liveReplicas() >= replication;
>   }
> {code}
> The way {{replication}} is calculated is not optimal: it calls 
> {{getNumLiveDataNodes()}} _every time_, even though {{minReplication}} is 
> usually much smaller than the live-node count. 
>  
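A possible shape of the optimization discussed above is to bound the live-node scan at {{minReplication}}, so the common case touches only a handful of map entries instead of the whole datanodeMap. The following is a hedged sketch with simplified types (a plain map of node-to-liveness instead of DatanodeManager's map of DatanodeDescriptor), not the actual HDFS-14366 patch:

```java
import java.util.Map;

/**
 * Sketch: stop counting live datanodes once the minReplication bound is
 * reached, instead of scanning the whole datanodeMap on every append.
 * Simplified stand-in types; not the committed fix.
 */
public class BoundedLiveCount {

    /** Counts live nodes, stopping early once 'bound' have been found. */
    static int countLiveUpTo(Map<String, Boolean> datanodeMap, int bound) {
        int numLive = 0;
        synchronized (datanodeMap) {
            for (boolean alive : datanodeMap.values()) {
                if (alive && ++numLive >= bound) {
                    break; // enough live nodes; no need to scan further
                }
            }
        }
        return numLive;
    }

    /** Sketch of isSufficientlyReplicated using the bounded count. */
    static boolean isSufficientlyReplicated(Map<String, Boolean> datanodeMap,
                                            int minReplication,
                                            int liveReplicas) {
        int replication = Math.min(minReplication,
                countLiveUpTo(datanodeMap, minReplication));
        return liveReplicas >= replication;
    }
}
```

Since the result of the min() only matters up to minReplication, scanning past that bound is wasted time under the lock.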






[jira] [Updated] (HDDS-1263) SCM CLI does not list container with id 1

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1263:
-
Labels: pull-request-available  (was: )

> SCM CLI does not list container with id 1
> -
>
> Key: HDDS-1263
> URL: https://issues.apache.org/jira/browse/HDDS-1263
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>
> Steps to reproduce
>  # Create two containers 
> {code:java}
> ozone scmcli create
> ozone scmcli create{code}
>  # Try to list containers
> {code:java}
> hadoop@7a73695402ae:~$ ozone scmcli list --start=0
>  Container ID should be a positive long. 0
> hadoop@7a73695402ae:~$ ozone scmcli list --start=1 
> { 
> "state" : "OPEN",
> "replicationFactor" : "ONE",
> "replicationType" : "STAND_ALONE",
> "usedBytes" : 0,
> "numberOfKeys" : 0,
> "lastUsed" : 274660388,
> "stateEnterTime" : 274646481,
> "owner" : "OZONE",
> "containerID" : 2,
> "deleteTransactionId" : 0,
> "sequenceId" : 0,
> "open" : true 
> }{code}
> There is no way to list the container with containerID 1.






[jira] [Work logged] (HDDS-1263) SCM CLI does not list container with id 1

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1263?focusedWorklogId=213991=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-213991
 ]

ASF GitHub Bot logged work on HDDS-1263:


Author: ASF GitHub Bot
Created on: 15/Mar/19 19:49
Start Date: 15/Mar/19 19:49
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #613: 
HDDS-1263. SCM CLI does not list container with id 1
URL: https://github.com/apache/hadoop/pull/613
 
 
   "ozone scmcli list --start=1" lists containers starting from container ID 2.
   There is no way to list the container with containerID 1.
   
   This PR fixes this behavior.
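The thread does not include the patch itself, so the following is a hedged sketch of the likely off-by-one, not the committed fix: if `--start` is treated as exclusive while 0 is rejected as invalid, container 1 can never be returned; treating the start as inclusive makes it reachable. All names here are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of the HDDS-1263 off-by-one. Exclusive-start semantics plus a
 * "start must be positive" check make container 1 unreachable; an
 * inclusive start fixes that. Illustrative code, not the actual CLI.
 */
public class ListContainers {

    /** Buggy variant: returns IDs strictly greater than start. */
    static List<Long> listExclusive(List<Long> ids, long start) {
        if (start <= 0) {
            throw new IllegalArgumentException(
                "Container ID should be a positive long. " + start);
        }
        List<Long> out = new ArrayList<>();
        for (long id : ids) {
            if (id > start) {      // start itself is skipped
                out.add(id);
            }
        }
        return out;
    }

    /** Fixed variant: start is inclusive, so ID 1 is reachable. */
    static List<Long> listInclusive(List<Long> ids, long start) {
        if (start <= 0) {
            throw new IllegalArgumentException(
                "Container ID should be a positive long. " + start);
        }
        List<Long> out = new ArrayList<>();
        for (long id : ids) {
            if (id >= start) {     // start itself is included
                out.add(id);
            }
        }
        return out;
    }
}
```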
 



Issue Time Tracking
---

Worklog Id: (was: 213991)
Time Spent: 10m
Remaining Estimate: 0h

> SCM CLI does not list container with id 1
> -
>
> Key: HDDS-1263
> URL: https://issues.apache.org/jira/browse/HDDS-1263
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Steps to reproduce
>  # Create two containers 
> {code:java}
> ozone scmcli create
> ozone scmcli create{code}
>  # Try to list containers
> {code:java}
> hadoop@7a73695402ae:~$ ozone scmcli list --start=0
>  Container ID should be a positive long. 0
> hadoop@7a73695402ae:~$ ozone scmcli list --start=1 
> { 
> "state" : "OPEN",
> "replicationFactor" : "ONE",
> "replicationType" : "STAND_ALONE",
> "usedBytes" : 0,
> "numberOfKeys" : 0,
> "lastUsed" : 274660388,
> "stateEnterTime" : 274646481,
> "owner" : "OZONE",
> "containerID" : 2,
> "deleteTransactionId" : 0,
> "sequenceId" : 0,
> "open" : true 
> }{code}
> There is no way to list the container with containerID 1.






[jira] [Updated] (HDFS-14371) Improve Logging in FSNamesystem by adding parameterized logging

2019-03-15 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-14371:
--
Description: 
Remove several instances of the check for debug logging being enabled in 
FSNamesystem; one such example is:
{code}
if (LOG.isDebugEnabled()) {
LOG.debug("list corrupt file blocks returned: " + count);
}
{code}

This can be replaced by using parameterized logging.

  was:
Remove several instances of check for debug log enabled in FSNamesystem one 
such example is as:
{code}
if (LOG.isDebugEnabled()) {
  LOG.debug("NameNode metadata after re-processing " +
  "replication and invalidation queues during failover:\n" +
  metaSaveAsString());
}
{code}

This can be replaced by using parameterized logging.


> Improve Logging in FSNamesystem by adding parameterized logging
> ---
>
> Key: HDFS-14371
> URL: https://issues.apache.org/jira/browse/HDFS-14371
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.3.0
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HDFS-14371.001.patch
>
>
> Remove several instances of the check for debug logging being enabled in 
> FSNamesystem; one such example is:
> {code}
> if (LOG.isDebugEnabled()) {
> LOG.debug("list corrupt file blocks returned: " + count);
> }
> {code}
> This can be replaced by using parameterized logging.
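To illustrate the replacement, here is a minimal sketch using a stub logger that mimics SLF4J-style "{}" placeholders (the stub is the editor's illustration, not SLF4J itself). The point: with parameterized logging, the message string is only built when debug is enabled, so the explicit isDebugEnabled() guard becomes unnecessary.

```java
/**
 * Illustration of parameterized logging. The StubLogger mimics the
 * SLF4J "{}" placeholder style so the behavior can be observed:
 * when debug is off, the message is never formatted.
 */
public class ParamLogDemo {

    static class StubLogger {
        private final boolean debugEnabled;
        int formatCount; // how many messages were actually built

        StubLogger(boolean debugEnabled) { this.debugEnabled = debugEnabled; }

        /** Formats lazily: no string concatenation if debug is off. */
        void debug(String template, Object arg) {
            if (!debugEnabled) {
                return; // placeholder never substituted, no work done
            }
            formatCount++;
            System.out.println(template.replace("{}", String.valueOf(arg)));
        }
    }

    /** Equivalent of: LOG.debug("list corrupt file blocks returned: {}", count); */
    static int logReturned(StubLogger log, int count) {
        log.debug("list corrupt file blocks returned: {}", count);
        return log.formatCount;
    }
}
```

One caveat worth noting: parameterized logging still evaluates the argument expressions eagerly, so a guard can remain useful when computing the argument itself is expensive (as with metaSaveAsString() in the older example above).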






[jira] [Comment Edited] (HDFS-14316) RBF: Support unavailable subclusters for mount points with multiple destinations

2019-03-15 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793857#comment-16793857
 ] 

Ayush Saxena edited comment on HDFS-14316 at 3/15/19 6:49 PM:
--

Thanks [~elgoiri] for the patch.

Had a quick look at this!!!
 * I guess we are retrying here for all exceptions encountered? Maybe we 
should restrict retrying to certain cases and let it fail for some genuine 
ones like AccessControlException, which are supposed to fail for all subclusters.
 * 
{code:java}
  final List<RemoteLocation> locations = new ArrayList<>();
  for (RemoteLocation loc : rpcServer.getLocationsForPath(src, true)) {
    if (!loc.equals(createLocation)) {
  locations.add(loc);
    }
{code}
I guess this isn't working as intended if the namenode is in StandbyState and 
isn't able to give the block locations, thus throwing the exception at:
{code:java}
  createLocation = rpcServer.getCreateLocation(src);
{code}
The createLocation stays null, so the loop above ends up comparing every 
location against null and filtering nothing out. The log from the UT confirms this:

 
{noformat}
2019-03-15 23:57:47,751 [IPC Server handler 6 on default port 38833] ERROR 
router.RouterClientProtocol (RouterClientProtocol.java:create(253)) - Cannot 
create /HASH_ALL-failsubcluster/dir100/file5.txt in null: No namenode available 
to invoke getBlockLocations [/HASH_ALL-failsubcluster/dir100/file5.txt, 0, 
1]{noformat}
 
 * 
{code:java}
    // Check if this file already exists in other subclusters
    LocatedBlocks existingLocation = getBlockLocations(src, 0, 1);
{code}
If we supress the exception here. Is there a chance we may land up creating a 
file that already existed in the other subCluster?

 


was (Author: ayushtkn):
Thanx [~elgoiri] for the patch.

Had a quick look at this!!!
 * I guess we are retrying here for all exceptions encountered? Might be we 
should restrict retrying to just certain cases and let fail for some genuine 
ones like AccessControlException,Which are supposed to fail for all subclusters.
 * 
{code:java}
  final List<RemoteLocation> locations = new ArrayList<>();
  for (RemoteLocation loc : rpcServer.getLocationsForPath(src, true)) {
    if (!loc.equals(createLocation)) {
  locations.add(loc);
    }
{code}
I guess this isn't working as intended, if in case the namenode is in 
StandbyState and isn't able to give the block locations.Thus throwing the 
exception at :
{code:java}
  createLocation = rpcServer.getCreateLocation(src);
{code}
The createLocation stays null. So in the above Loop we land up iterating 
checking no null entry.Literally doing nothing Got the Log from the UT too as :

 
{noformat}
2019-03-15 23:57:47,751 [IPC Server handler 6 on default port 38833] ERROR 
router.RouterClientProtocol (RouterClientProtocol.java:create(253)) - Cannot 
create /HASH_ALL-failsubcluster/dir100/file5.txt in null: No namenode available 
to invoke getBlockLocations [/HASH_ALL-failsubcluster/dir100/file5.txt, 0, 
1]{noformat}
 

 

> RBF: Support unavailable subclusters for mount points with multiple 
> destinations
> 
>
> Key: HDFS-14316
> URL: https://issues.apache.org/jira/browse/HDFS-14316
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14316-HDFS-13891.000.patch, 
> HDFS-14316-HDFS-13891.001.patch, HDFS-14316-HDFS-13891.002.patch, 
> HDFS-14316-HDFS-13891.003.patch, HDFS-14316-HDFS-13891.004.patch, 
> HDFS-14316-HDFS-13891.005.patch, HDFS-14316-HDFS-13891.006.patch, 
> HDFS-14316-HDFS-13891.007.patch
>
>
> Currently mount points with multiple destinations (e.g., HASH_ALL) fail 
> writes when the destination subcluster is down. We need an option to allow 
> writing in other subclusters when one is down.






[jira] [Work logged] (HDDS-1119) DN get OM certificate from SCM CA for block token validation

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1119?focusedWorklogId=213948=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-213948
 ]

ASF GitHub Bot logged work on HDDS-1119:


Author: ASF GitHub Bot
Created on: 15/Mar/19 18:39
Start Date: 15/Mar/19 18:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #601: HDDS-1119. DN get 
OM certificate from SCM CA for block token validat…
URL: https://github.com/apache/hadoop/pull/601#issuecomment-473399445
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 25 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 59 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1025 | trunk passed |
   | +1 | compile | 951 | trunk passed |
   | +1 | checkstyle | 191 | trunk passed |
   | +1 | mvnsite | 356 | trunk passed |
   | +1 | shadedclient | 1280 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist hadoop-ozone/integration-test |
   | -1 | findbugs | 60 | hadoop-hdds/container-service in trunk has 1 extant 
Findbugs warnings. |
   | +1 | javadoc | 267 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | -1 | mvninstall | 19 | dist in the patch failed. |
   | -1 | mvninstall | 24 | integration-test in the patch failed. |
   | +1 | compile | 916 | the patch passed |
   | +1 | cc | 916 | the patch passed |
   | +1 | javac | 916 | the patch passed |
   | +1 | checkstyle | 194 | the patch passed |
   | +1 | mvnsite | 310 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 705 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist hadoop-ozone/integration-test |
   | -1 | findbugs | 86 | hadoop-hdds/common generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0) |
   | +1 | javadoc | 266 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 89 | common in the patch passed. |
   | -1 | unit | 73 | container-service in the patch failed. |
   | +1 | unit | 114 | server-scm in the patch passed. |
   | +1 | unit | 48 | common in the patch passed. |
   | +1 | unit | 35 | dist in the patch passed. |
   | -1 | unit | 690 | integration-test in the patch failed. |
   | +1 | unit | 55 | ozone-manager in the patch passed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 8352 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdds/common |
   |  |  Possible null pointer dereference of cert in 
org.apache.hadoop.hdds.security.x509.certificate.client.DefaultCertificateClient.loadAllCertificates()
  Dereferenced at DefaultCertificateClient.java:cert in 
org.apache.hadoop.hdds.security.x509.certificate.client.DefaultCertificateClient.loadAllCertificates()
  Dereferenced at DefaultCertificateClient.java:[line 130] |
   | Failed junit tests | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.om.TestScmChillMode |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/20/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/601 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  yamllint  |
   | uname | Linux 7cf37c6f3d64 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a7f5e74 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/20/artifact/out/branch-findbugs-hadoop-hdds_container-service-warnings.html
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/20/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | mvninstall | 

[jira] [Commented] (HDFS-14316) RBF: Support unavailable subclusters for mount points with multiple destinations

2019-03-15 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793857#comment-16793857
 ] 

Ayush Saxena commented on HDFS-14316:
-

Thanks [~elgoiri] for the patch.

Had a quick look at this!!!
 * I guess we are retrying here for all exceptions encountered? Maybe we 
should restrict retrying to certain cases and let it fail for some genuine 
ones like AccessControlException, which are supposed to fail for all subclusters.
 * 
{code:java}
  final List<RemoteLocation> locations = new ArrayList<>();
  for (RemoteLocation loc : rpcServer.getLocationsForPath(src, true)) {
    if (!loc.equals(createLocation)) {
  locations.add(loc);
    }
{code}
I guess this isn't working as intended if the namenode is in StandbyState and 
isn't able to give the block locations, thus throwing the exception at:
{code:java}
  createLocation = rpcServer.getCreateLocation(src);
{code}
The createLocation stays null, so the loop above ends up comparing every 
location against null and filtering nothing out. The log from the UT confirms this:

 
{noformat}
2019-03-15 23:57:47,751 [IPC Server handler 6 on default port 38833] ERROR 
router.RouterClientProtocol (RouterClientProtocol.java:create(253)) - Cannot 
create /HASH_ALL-failsubcluster/dir100/file5.txt in null: No namenode available 
to invoke getBlockLocations [/HASH_ALL-failsubcluster/dir100/file5.txt, 0, 
1]{noformat}
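A minimal sketch of what an explicit null guard for this case could look like. This is an editor's illustration with simplified stand-in types (String instead of RemoteLocation), not the HDFS-14316 patch:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of the case discussed above: when getCreateLocation() throws
 * and createLocation stays null, loc.equals(null) is always false, so
 * the original loop adds every location and filters nothing. Making
 * the null case explicit keeps that behavior intentional rather than
 * accidental. Simplified stand-in types; not the actual RBF code.
 */
public class RetryLocationFilter {

    static List<String> remainingLocations(List<String> all, String createLocation) {
        List<String> locations = new ArrayList<>();
        if (createLocation == null) {
            // Initial create never resolved a target; retry everywhere.
            locations.addAll(all);
            return locations;
        }
        for (String loc : all) {
            if (!loc.equals(createLocation)) {
                locations.add(loc); // skip the subcluster already tried
            }
        }
        return locations;
    }
}
```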
 

 

> RBF: Support unavailable subclusters for mount points with multiple 
> destinations
> 
>
> Key: HDFS-14316
> URL: https://issues.apache.org/jira/browse/HDFS-14316
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14316-HDFS-13891.000.patch, 
> HDFS-14316-HDFS-13891.001.patch, HDFS-14316-HDFS-13891.002.patch, 
> HDFS-14316-HDFS-13891.003.patch, HDFS-14316-HDFS-13891.004.patch, 
> HDFS-14316-HDFS-13891.005.patch, HDFS-14316-HDFS-13891.006.patch, 
> HDFS-14316-HDFS-13891.007.patch
>
>
> Currently mount points with multiple destinations (e.g., HASH_ALL) fail 
> writes when the destination subcluster is down. We need an option to allow 
> writing in other subclusters when one is down.






[jira] [Commented] (HDDS-1163) Basic framework for Ozone Data Scrubber

2019-03-15 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793854#comment-16793854
 ] 

Xiaoyu Yao commented on HDDS-1163:
--

Thanks [~sdeka] for the work. I just have a question about the purpose of 
inMemContainerData in KeyValueContainerCheck. I see it is passed in by the 
production code and unit tests with a non-null value but never gets used in the 
actual checking process. This was flagged as a checkstyle warning in our 
nightly run. Maybe we can remove it for now and add it back when it is ready to 
be consumed? Feel free to reassign 
https://issues.apache.org/jira/browse/HDDS-1292 for the fix. 

> Basic framework for Ozone Data Scrubber
> ---
>
> Key: HDDS-1163
> URL: https://issues.apache.org/jira/browse/HDDS-1163
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1163.000.patch, HDDS-1163.001.patch, 
> HDDS-1163.002.patch, HDDS-1163.003.patch, HDDS-1163.004.patch, 
> HDDS-1163.005.patch, HDDS-1163.006.patch, HDDS-1163.007.patch
>
>
> Included in the scope:
> 1. Background scanner thread to iterate over container set and dispatch check 
> tasks for individual containers
> 2. Fixed rate scheduling - dispatch tasks at a pre-determined rate (for 
> example 1 container/s)
> 3. Check disk layout of Container - basic check for integrity of the 
> directory hierarchy inside the container, include chunk directory and 
> metadata directories
> 4. Check container file - basic sanity checks for the container metafile
> 5. Check Block Database - iterate over entries in the container block 
> database and check for the existence and accessibility of the chunks for each 
> block.
> Not in scope (will be done as separate subtasks):
> 1. Dynamic scheduling/pacing of background scan based on system load and 
> available resources.
> 2. Detection and handling of orphan chunks
> 3. Checksum verification for Chunks
> 4. Corruption handling - reporting (to SCM) and subsequent handling of any 
> corruption detected by the scanner. The current subtask will simply log any 
> corruption which is detected.
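The fixed-rate dispatch described in item 2 of the scope can be sketched with a ScheduledExecutorService. This is illustrative only; the container IDs, the "check", and the threading are stand-ins for the real KeyValueContainerCheck logic, not Ozone's actual classes:

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a background scrubber dispatching one container check per period.
public class ScrubberSketch {
  public static int scanOnce(List<Long> containerIds) {
    ScheduledExecutorService exec =
        Executors.newSingleThreadScheduledExecutor();
    AtomicInteger checked = new AtomicInteger();
    AtomicInteger next = new AtomicInteger();
    CountDownLatch done = new CountDownLatch(containerIds.size());
    // Fixed-rate scheduling: one dispatch every 5 ms here stands in for
    // the pre-determined 1 container/s rate described above.
    exec.scheduleAtFixedRate(() -> {
      int i = next.getAndIncrement();
      if (i < containerIds.size()) {
        checked.incrementAndGet(); // real code: layout + metadata checks
        done.countDown();
      }
    }, 0, 5, TimeUnit.MILLISECONDS);
    try {
      done.await();
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    exec.shutdownNow();
    return checked.get();
  }
}
```

The pacing knob (the period) is what the out-of-scope item 1 would later make dynamic based on system load.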






[jira] [Commented] (HDFS-14366) Improve HDFS append performance

2019-03-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793848#comment-16793848
 ] 

Hudson commented on HDFS-14366:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16219 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16219/])
HDFS-14366. Improve HDFS append performance. Contributed by Chao Sun. 
(inigoiri: rev ff06ef0631cb8a0f67bbc39b5b5a1b0a81ca3b3c)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


> Improve HDFS append performance
> ---
>
> Key: HDFS-14366
> URL: https://issues.apache.org/jira/browse/HDFS-14366
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14366.000.patch, HDFS-14366.001.patch, 
> append-flamegraph.png
>
>
> In our HDFS cluster we observed that the {{append}} operation can take as 
> much as 10X the write-lock time of other write operations. By collecting a 
> flamegraph on the namenode (see attachment: append-flamegraph.png), we found 
> that most of the append call is spent in {{getNumLiveDataNodes()}}:
> {code}
>   /** @return the number of live datanodes. */
>   public int getNumLiveDataNodes() {
> int numLive = 0;
> synchronized (this) {
>   for(DatanodeDescriptor dn : datanodeMap.values()) {
> if (!isDatanodeDead(dn) ) {
>   numLive++;
> }
>   }
> }
> return numLive;
>   }
> {code}
> This method synchronizes on the {{DatanodeManager}}, which is particularly 
> expensive in large clusters since {{datanodeMap}} is modified in many places, 
> such as when processing DN heartbeats.
> For {{append}} operation, {{getNumLiveDataNodes()}} is invoked in 
> {{isSufficientlyReplicated}}:
> {code}
>   /**
>* Check if a block is replicated to at least the minimum replication.
>*/
>   public boolean isSufficientlyReplicated(BlockInfo b) {
> // Compare against the lesser of the minReplication and number of live 
> DNs.
> final int replication =
> Math.min(minReplication, getDatanodeManager().getNumLiveDataNodes());
> return countNodes(b).liveReplicas() >= replication;
>   }
> {code}
> The way {{replication}} is calculated is not optimal: it calls 
> {{getNumLiveDataNodes()}} _every time_, even though {{minReplication}} is 
> usually much smaller than the number of live datanodes. 
>  
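One way to avoid the full map scan, sketched here under my own assumptions and not necessarily the change that was committed, is to stop counting live datanodes once the {{minReplication}} bound is reached:

```java
import java.util.Map;

// Sketch: short-circuit the live-datanode count at a small bound so the
// common case never walks the whole datanodeMap. The Boolean map is a
// stand-in for DatanodeManager's datanodeMap and isDatanodeDead().
public class LiveCountSketch {
  public static int countLiveUpTo(Map<String, Boolean> datanodeMap, int limit) {
    int live = 0;
    for (boolean alive : datanodeMap.values()) {
      if (alive && ++live >= limit) {
        break; // enough live DNs for the replication check; stop early
      }
    }
    return live;
  }
}
```

Since {{minReplication}} is typically 1, this usually returns after inspecting a single entry instead of the whole cluster's datanode map.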






[jira] [Work logged] (HDDS-1119) DN get OM certificate from SCM CA for block token validation

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1119?focusedWorklogId=213937=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-213937
 ]

ASF GitHub Bot logged work on HDDS-1119:


Author: ASF GitHub Bot
Created on: 15/Mar/19 18:17
Start Date: 15/Mar/19 18:17
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #601: HDDS-1119. DN get OM 
certificate from SCM CA for block token validat…
URL: https://github.com/apache/hadoop/pull/601#issuecomment-473392540
 
 
   Thanks @ajayydv  for the update. +1 the latest push, pending fix of the 
related findbugs issue and unit test results.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 213937)
Time Spent: 9h 50m  (was: 9h 40m)

> DN get OM certificate from SCM CA for block token validation
> 
>
> Key: HDDS-1119
> URL: https://issues.apache.org/jira/browse/HDDS-1119
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 9h 50m
>  Remaining Estimate: 0h
>
> This is needed when the DN receives a block token signed by the OM but does 
> not have that OM's certificate.






[jira] [Work logged] (HDDS-1119) DN get OM certificate from SCM CA for block token validation

2019-03-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1119?focusedWorklogId=213933=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-213933
 ]

ASF GitHub Bot logged work on HDDS-1119:


Author: ASF GitHub Bot
Created on: 15/Mar/19 18:17
Start Date: 15/Mar/19 18:17
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #601: HDDS-1119. DN 
get OM certificate from SCM CA for block token validat…
URL: https://github.com/apache/hadoop/pull/601#discussion_r266097008
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/DefaultCertificateClient.java
 ##
 @@ -131,34 +187,72 @@ public PublicKey getPublicKey() {
   }
 
   /**
-   * Returns the certificate  of the specified component if it exists on the
-   * local system.
+   * Returns the default certificate of given client if it exists.
*
* @return certificate or Null if there is no data.
*/
   @Override
   public X509Certificate getCertificate() {
-if(x509Certificate != null){
+if (x509Certificate != null) {
   return x509Certificate;
 }
 
-Path certPath = securityConfig.getCertificateLocation();
-if (OzoneSecurityUtil.checkIfFileExist(certPath,
-securityConfig.getCertificateFileName())) {
-  CertificateCodec certificateCodec =
-  new CertificateCodec(securityConfig);
-  try {
-X509CertificateHolder x509CertificateHolder =
-certificateCodec.readCertificate();
-x509Certificate =
-CertificateCodec.getX509Certificate(x509CertificateHolder);
-  } catch (java.security.cert.CertificateException | IOException e) {
-getLogger().error("Error reading certificate.", e);
-  }
+if (certSerialId == null) {
+  getLogger().error("Default certificate serial id is not set. Can't " +
+  "locate the default certificate for this client.");
+  return null;
+}
+// Refresh the cache from file system.
+loadAllCertificates();
 
 Review comment:
Let's discuss and file follow-up JIRAs. I'm OK with the current approach 
as-is.
 



Issue Time Tracking
---

Worklog Id: (was: 213933)
Time Spent: 9h 40m  (was: 9.5h)

> DN get OM certificate from SCM CA for block token validation
> 
>
> Key: HDDS-1119
> URL: https://issues.apache.org/jira/browse/HDDS-1119
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 9h 40m
>  Remaining Estimate: 0h
>
> This is needed when the DN receives a block token signed by the OM but does 
> not have that OM's certificate.






[jira] [Commented] (HDFS-14359) Inherited ACL permissions masked when parent directory does not exist (mkdir -p)

2019-03-15 Thread Stephen O'Donnell (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793839#comment-16793839
 ] 

Stephen O'Donnell commented on HDFS-14359:
--

Failures seem unrelated to this patch and all passed when I ran them locally.

> Inherited ACL permissions masked when parent directory does not exist (mkdir 
> -p)
> 
>
> Key: HDFS-14359
> URL: https://issues.apache.org/jira/browse/HDFS-14359
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14359.001.patch, HDFS-14359.002.patch, 
> HDFS-14359.003.patch
>
>
> There appears to be an issue with ACL inheritance if you 'mkdir' a directory 
> such that the parent directories need to be created (ie mkdir -p).
> If you have a folder /tmp2/testacls as:
> {code}
> hadoop fs -mkdir /tmp2
> hadoop fs -mkdir /tmp2/testacls
> hadoop fs -setfacl -m default:user:hive:rwx /tmp2/testacls
> hadoop fs -setfacl -m default:user:flume:rwx /tmp2/testacls
> hadoop fs -setfacl -m user:hive:rwx /tmp2/testacls
> hadoop fs -setfacl -m user:flume:rwx /tmp2/testacls
> hadoop fs -getfacl -R /tmp2/testacls
> # file: /tmp2/testacls
> # owner: kafka
> # group: supergroup
> user::rwx
> user:flume:rwx
> user:hive:rwx
> group::r-x
> mask::rwx
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> {code}
> Then create a sub-directory in it, the ACLs are as expected:
> {code}
> hadoop fs -mkdir /tmp2/testacls/dir_from_mkdir
> # file: /tmp2/testacls/dir_from_mkdir
> # owner: sodonnell
> # group: supergroup
> user::rwx
> user:flume:rwx
> user:hive:rwx
> group::r-x
> mask::rwx
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> {code}
> However if you mkdir -p a directory, the situation is not the same:
> {code}
> hadoop fs -mkdir -p /tmp2/testacls/dir_with_subdirs/sub1/sub2
> # file: /tmp2/testacls/dir_with_subdirs
> # owner: sodonnell
> # group: supergroup
> user::rwx
> user:flume:rwx#effective:r-x
> user:hive:rwx #effective:r-x
> group::r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> # file: /tmp2/testacls/dir_with_subdirs/sub1
> # owner: sodonnell
> # group: supergroup
> user::rwx
> user:flume:rwx#effective:r-x
> user:hive:rwx #effective:r-x
> group::r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> # file: /tmp2/testacls/dir_with_subdirs/sub1/sub2
> # owner: sodonnell
> # group: supergroup
> user::rwx
> user:flume:rwx
> user:hive:rwx
> group::r-x
> mask::rwx
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> {code}
> Notice that the leaf folder "sub2" is correct, but the two ancestor folders 
> have their permissions masked. I believe this is a regression from the fix 
> for HDFS-6962 with dfs.namenode.posix.acl.inheritance.enabled set to true, as 
> the code has changed significantly from the earlier 2.6 / 2.8 branch.
> I will submit a patch for this.
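For reference, the {{#effective:r-x}} annotations in the output above follow the POSIX ACL rule that a named entry's effective permissions are its own permissions ANDed with the mask. A toy calculation, illustrative only and not HDFS's implementation:

```java
// Toy POSIX-ACL effective-permission calculation: each of r, w, x is
// effective only if both the named entry and the mask grant it.
public class AclMaskSketch {
  public static String effective(String entryPerms, String mask) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < 3; i++) {
      char p = entryPerms.charAt(i);
      // A dash in either the entry or the mask masks the permission out.
      sb.append(p != '-' && mask.charAt(i) != '-' ? p : '-');
    }
    return sb.toString();
  }
}
```

With the masked ancestors above, {{user:hive:rwx}} against {{mask::r-x}} yields exactly the reported effective {{r-x}}, which is why restoring the default {{rwx}} mask on the intermediate directories matters.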





