[jira] [Commented] (HDFS-14390) Provide kerberos support for AliasMap service used by Provided storage

2019-03-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802501#comment-16802501
 ] 

Hadoop QA commented on HDFS-14390:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 15m 24s{color} 
| {color:red} root generated 1 new + 1482 unchanged - 0 fixed = 1483 total (was 
1482) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
14s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 35s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}188m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14390 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12963824/HDFS-14390.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fd2e6344f4fc 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revisi

[jira] [Updated] (HDDS-1146) Adding container related metrics in SCM

2019-03-26 Thread Supratim Deka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Supratim Deka updated HDDS-1146:

Attachment: HDDS-1146.002.patch

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Supratim Deka
>Priority: Major
> Attachments: HDDS-1146.000.patch, HDDS-1146.001.patch, 
> HDDS-1146.002.patch
>
>
> This jira aims to add more container-related metrics to SCM.
> The following metrics will be added as part of this jira (a sketch follows 
> the list):
> * Number of containers
> * Number of open containers
> * Number of closed containers
> * Number of quasi closed containers
> * Number of closing containers
> * Number of successful create container calls
> * Number of failed create container calls
> * Number of successful delete container calls
> * Number of failed delete container calls
> * Number of successful container report processing
> * Number of failed container report processing
> * Number of successful incremental container report processing
> * Number of failed incremental container report processing
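
For illustration, a minimal sketch of how such counters are typically wired up 
with Hadoop's metrics2 library; the class name and metric names below are 
hypothetical stand-ins, not the patch's actual code:

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;
import org.apache.hadoop.metrics2.lib.MutableGaugeLong;

// Hypothetical metrics source; real class/metric names may differ.
@Metrics(about = "SCM container metrics", context = "scm")
public final class SCMContainerMetricsSketch {
  @Metric private MutableGaugeLong numContainers;
  @Metric private MutableCounterLong numSuccessfulCreateContainers;
  @Metric private MutableCounterLong numFailedCreateContainers;

  public static SCMContainerMetricsSketch create() {
    // Registers the annotated fields with the default metrics system.
    return DefaultMetricsSystem.instance().register(
        "SCMContainerMetricsSketch", "Container metrics in SCM",
        new SCMContainerMetricsSketch());
  }

  public void setNumContainers(long value) {
    numContainers.set(value);
  }

  public void incSuccessfulCreateContainers() {
    numSuccessfulCreateContainers.incr();
  }

  public void incFailedCreateContainers() {
    numFailedCreateContainers.incr();
  }
}
{code}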



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1340) Add List Containers API for Recon

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1340?focusedWorklogId=219148&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-219148
 ]

ASF GitHub Bot logged work on HDDS-1340:


Author: ASF GitHub Bot
Created on: 27/Mar/19 05:42
Start Date: 27/Mar/19 05:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #648: HDDS-1340. Add 
List Containers API for Recon
URL: https://github.com/apache/hadoop/pull/648#issuecomment-476985391
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1086 | trunk passed |
   | +1 | compile | 50 | trunk passed |
   | +1 | checkstyle | 14 | trunk passed |
   | +1 | mvnsite | 24 | trunk passed |
   | +1 | shadedclient | 677 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 35 | trunk passed |
   | +1 | javadoc | 22 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 30 | the patch passed |
   | +1 | compile | 22 | the patch passed |
   | +1 | javac | 22 | the patch passed |
   | -0 | checkstyle | 14 | hadoop-ozone/ozone-recon: The patch generated 2 new 
+ 0 unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 23 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 732 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 41 | the patch passed |
   | +1 | javadoc | 20 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 36 | ozone-recon in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 2992 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-648/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/648 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 21608414ace0 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / eef8cae |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-648/2/artifact/out/diff-checkstyle-hadoop-ozone_ozone-recon.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-648/2/testReport/ |
   | Max. process+thread count | 441 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-recon U: hadoop-ozone/ozone-recon |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-648/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 219148)
Time Spent: 1h 20m  (was: 1h 10m)

> Add List Containers API for Recon
> -
>
> Key: HDDS-1340
> URL: https://issues.apache.org/jira/browse/HDDS-1340
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Recon server should support a "/containers" API that lists all the containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1264) Remove Parametrized in TestOzoneShell

2019-03-26 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1264:
-
Fix Version/s: 0.5.0

> Remove Parametrized in TestOzoneShell
> -
>
> Key: HDDS-1264
> URL: https://issues.apache.org/jira/browse/HDDS-1264
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> HDDS-1068 removed RestClient from TestOzoneShell.java.
> The test no longer needs to be parameterized; we can test directly with 
> RpcClient.
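
A minimal sketch of the cleanup, with hypothetical names standing in for the 
real test and client classes; the point is that a single fixed client replaces 
the @Parameterized.Parameters factory:

{code:java}
import static org.junit.Assert.assertNotNull;

import org.junit.BeforeClass;
import org.junit.Test;

// Hypothetical shape of the de-parameterized test.
public class RpcOnlyShellTestSketch {
  private static Object rpcClient; // stand-in for the real RpcClient

  @BeforeClass
  public static void setup() {
    // Before HDDS-1068 this was chosen per parameter ({RestClient,
    // RpcClient}); now one client is enough.
    rpcClient = new Object();
  }

  @Test
  public void testShellAgainstRpcClient() {
    assertNotNull(rpcClient);
  }
}
{code}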



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1264) Remove Parametrized in TestOzoneShell

2019-03-26 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802456#comment-16802456
 ] 

Hudson commented on HDDS-1264:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16293 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16293/])
HDDS-1264. Remove Parametrized in TestOzoneShell. (#614) (bharat: rev 
b2269581f74df4045cb169a4ce328957c26062ae)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java


> Remove Parametrized in TestOzoneShell
> -
>
> Key: HDDS-1264
> URL: https://issues.apache.org/jira/browse/HDDS-1264
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> HDDS-1068 removed RestClient from TestOzoneShell.java.
> The test no longer needs to be parameterized; we can test directly with 
> RpcClient.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1264) Remove Parametrized in TestOzoneShell

2019-03-26 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1264:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank you, [~vivekratnavel], for the contribution.

I have committed this to trunk.

> Remove Parametrized in TestOzoneShell
> -
>
> Key: HDDS-1264
> URL: https://issues.apache.org/jira/browse/HDDS-1264
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> HDDS-1068 removed RestClient from TestOzoneShell.java.
> The test no longer needs to be parameterized; we can test directly with 
> RpcClient.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1264) Remove Parametrized in TestOzoneShell

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1264?focusedWorklogId=219134&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-219134
 ]

ASF GitHub Bot logged work on HDDS-1264:


Author: ASF GitHub Bot
Created on: 27/Mar/19 05:07
Start Date: 27/Mar/19 05:07
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #614: 
HDDS-1264. Remove Parametrized in TestOzoneShell
URL: https://github.com/apache/hadoop/pull/614
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 219134)
Time Spent: 50m  (was: 40m)

> Remove Parametrized in TestOzoneShell
> -
>
> Key: HDDS-1264
> URL: https://issues.apache.org/jira/browse/HDDS-1264
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> HDDS-1068 removed RestClient from TestOzoneShell.java.
> The test no longer needs to be parameterized; we can test directly with 
> RpcClient.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1264) Remove Parametrized in TestOzoneShell

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1264?focusedWorklogId=219133&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-219133
 ]

ASF GitHub Bot logged work on HDDS-1264:


Author: ASF GitHub Bot
Created on: 27/Mar/19 05:06
Start Date: 27/Mar/19 05:06
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #614: HDDS-1264. 
Remove Parametrized in TestOzoneShell
URL: https://github.com/apache/hadoop/pull/614#issuecomment-476977165
 
 
   +1 LGTM.
   I will commit this shortly.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 219133)
Time Spent: 40m  (was: 0.5h)

> Remove Parametrized in TestOzoneShell
> -
>
> Key: HDDS-1264
> URL: https://issues.apache.org/jira/browse/HDDS-1264
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> HDDS-1068 removed RestClient from TestOzoneShell.java.
> The test no longer needs to be parameterized; we can test directly with 
> RpcClient.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1262) In OM HA OpenKey call Should happen only leader OM

2019-03-26 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802452#comment-16802452
 ] 

Hudson commented on HDDS-1262:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16292 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16292/])
HDDS-1262. In OM HA OpenKey call Should happen only leader OM. (#626) (github: 
rev eef8cae7cf42c2d1622970e177d699546351587f)
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerStateMachine.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/audit/OMAction.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/exceptions/OMException.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerHAProtocol.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManager.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisClient.java


> In OM HA OpenKey call Should happen only leader OM
> --
>
> Key: HDDS-1262
> URL: https://issues.apache.org/jira/browse/HDDS-1262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-1262.01.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Currently in OM HA, when openKey is called, applyTransaction() on every OM 
> node makes a call to SCM, writes the allocateBlock information into the OM 
> DB, and generates its own clientID.
>  
> The proposed approach is:
> 1. In startTransaction, call openKey; use the returned response to create a 
> new OmRequest object and set it in the transaction context. Also modify 
> OzoneManager and KeyManagerImpl to handle the code paths with and without 
> Ratis.
>  
> This Jira also implements HDDS-1319.
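
A conceptual sketch of the proposed split, with all names hypothetical and the 
Ratis plumbing and real OM types elided:

{code:java}
// Not the patch's API; illustrates leader-only decisions being replicated.
final class OpenKeyLeaderFlowSketch {
  static final class OmRequest { String blockInfo; String clientId; }

  // Leader only, inside startTransaction(): resolve everything that must
  // be decided exactly once (SCM block allocation, clientID), then embed
  // the result in the request that Ratis replicates.
  OmRequest prepareOnLeader(OmRequest original) {
    OmRequest rewritten = new OmRequest();
    rewritten.blockInfo = allocateBlocksFromScm(original); // one SCM call
    rewritten.clientId = "client-1";                       // decided once
    return rewritten;
  }

  // Every node, inside applyTransaction(): no SCM call, no fresh clientID;
  // just persist what the leader already decided.
  void applyOnAllNodes(OmRequest replicated) {
    writeToOmDb(replicated.blockInfo, replicated.clientId);
  }

  private String allocateBlocksFromScm(OmRequest r) { return "blocks"; }
  private void writeToOmDb(String blocks, String clientId) { /* persist */ }
}
{code}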



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14390) Provide kerberos support for AliasMap service used by Provided storage

2019-03-26 Thread Virajith Jalaparti (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802450#comment-16802450
 ] 

Virajith Jalaparti commented on HDFS-14390:
---

Thanks for reporting and working on this, [~ashvin]. The patch looks good to 
me. My only concern is that, for this to work, the namenode and datanode 
principals have to be the same if {{InMemoryAliasMapClient.java}} is used by 
both.

> Provide kerberos support for AliasMap service used by Provided storage
> --
>
> Key: HDFS-14390
> URL: https://issues.apache.org/jira/browse/HDFS-14390
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ashvin
>Priority: Major
> Attachments: HDFS-14390.001.patch
>
>
> With {{PROVIDED}} storage (HDFS-9806), HDFS can address data stored in 
> external storage systems. This feature is not supported in a secure HDFS 
> cluster: the {{AliasMap}} service does not support Kerberos, so the cluster 
> nodes will fail to communicate with it. This JIRA is to enable Kerberos 
> support for the {{AliasMap}} service.
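
For illustration, a minimal sketch of the usual Kerberos login pattern in a 
Hadoop service; the two configuration keys below are hypothetical placeholders, 
not the keys this patch introduces:

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.SecurityUtil;
import org.apache.hadoop.security.UserGroupInformation;

public final class AliasMapLoginSketch {
  // Placeholder keys; the patch's actual property names may differ.
  static final String KEYTAB_KEY = "dfs.provided.aliasmap.keytab.file";
  static final String PRINCIPAL_KEY =
      "dfs.provided.aliasmap.kerberos.principal";

  public static void loginIfSecure(Configuration conf, String host)
      throws IOException {
    UserGroupInformation.setConfiguration(conf);
    if (UserGroupInformation.isSecurityEnabled()) {
      // Resolves _HOST in the principal and logs in from the keytab.
      SecurityUtil.login(conf, KEYTAB_KEY, PRINCIPAL_KEY, host);
    }
  }
}
{code}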



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1340) Add List Containers API for Recon

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1340?focusedWorklogId=219120&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-219120
 ]

ASF GitHub Bot logged work on HDDS-1340:


Author: ASF GitHub Bot
Created on: 27/Mar/19 04:52
Start Date: 27/Mar/19 04:52
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #648: 
HDDS-1340. Add List Containers API for Recon
URL: https://github.com/apache/hadoop/pull/648#discussion_r269404999
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/api/ContainerKeyService.java
 ##
 @@ -51,6 +51,23 @@
   @Inject
   private ReconOMMetadataManager omMetadataManager;
 
+  /**
+   * Return list of container IDs for all the containers
+   *
+   * @return {@link Response}
+   */
+  @GET
+  public Response getContainerIDList() {
+List containerIDs;
 
 Review comment:
   Initialization is redundant here since the initialized value will never be 
used. 
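
A minimal sketch of the endpoint shape being reviewed, with the service and 
injected provider reduced to stand-ins; the list is declared without a 
redundant initializer, per the comment above:

{code:java}
import java.io.IOException;
import java.util.List;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Path("/containers")
public class ContainersEndpointSketch {
  /** Stand-in for the injected ContainerDBServiceProvider. */
  public interface ContainerIdSource {
    List<Long> getContainerIDList() throws IOException;
  }

  private final ContainerIdSource source;

  public ContainersEndpointSketch(ContainerIdSource source) {
    this.source = source;
  }

  @GET
  public Response getContainerIDList() {
    List<Long> containerIDs; // no initializer: assigned on every path below
    try {
      containerIDs = source.getContainerIDList();
    } catch (IOException e) {
      return Response.serverError().build();
    }
    return Response.ok(containerIDs).build();
  }
}
{code}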
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 219120)
Time Spent: 1h 10m  (was: 1h)

> Add List Containers API for Recon
> -
>
> Key: HDDS-1340
> URL: https://issues.apache.org/jira/browse/HDDS-1340
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Recon server should support a "/containers" API that lists all the containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1262) In OM HA OpenKey call Should happen only leader OM

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1262?focusedWorklogId=219119&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-219119
 ]

ASF GitHub Bot logged work on HDDS-1262:


Author: ASF GitHub Bot
Created on: 27/Mar/19 04:48
Start Date: 27/Mar/19 04:48
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #626: 
HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only 
leader OM.
URL: https://github.com/apache/hadoop/pull/626
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 219119)
Time Spent: 4h 50m  (was: 4h 40m)

> In OM HA OpenKey call Should happen only leader OM
> --
>
> Key: HDDS-1262
> URL: https://issues.apache.org/jira/browse/HDDS-1262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-1262.01.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Currently in OM HA, when openKey is called, applyTransaction() on every OM 
> node makes a call to SCM, writes the allocateBlock information into the OM 
> DB, and generates its own clientID.
>  
> The proposed approach is:
> 1. In startTransaction, call openKey; use the returned response to create a 
> new OmRequest object and set it in the transaction context. Also modify 
> OzoneManager and KeyManagerImpl to handle the code paths with and without 
> Ratis.
>  
> This Jira also implements HDDS-1319.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1262) In OM HA OpenKey call Should happen only leader OM

2019-03-26 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1262:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank you, [~hanishakoneru], for the review.

I have committed this to trunk.

> In OM HA OpenKey call Should happen only leader OM
> --
>
> Key: HDDS-1262
> URL: https://issues.apache.org/jira/browse/HDDS-1262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1262.01.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Currently in OM HA, when openKey is called, applyTransaction() on every OM 
> node makes a call to SCM, writes the allocateBlock information into the OM 
> DB, and generates its own clientID.
>  
> The proposed approach is:
> 1. In startTransaction, call openKey; use the returned response to create a 
> new OmRequest object and set it in the transaction context. Also modify 
> OzoneManager and KeyManagerImpl to handle the code paths with and without 
> Ratis.
>  
> This Jira also implements HDDS-1319.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1262) In OM HA OpenKey call Should happen only leader OM

2019-03-26 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1262:
-
Fix Version/s: 0.5.0

> In OM HA OpenKey call Should happen only leader OM
> --
>
> Key: HDDS-1262
> URL: https://issues.apache.org/jira/browse/HDDS-1262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-1262.01.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Currently in OM HA, when openKey is called, applyTransaction() on every OM 
> node makes a call to SCM, writes the allocateBlock information into the OM 
> DB, and generates its own clientID.
>  
> The proposed approach is:
> 1. In startTransaction, call openKey; use the returned response to create a 
> new OmRequest object and set it in the transaction context. Also modify 
> OzoneManager and KeyManagerImpl to handle the code paths with and without 
> Ratis.
>  
> This Jira also implements HDDS-1319.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1340) Add List Containers API for Recon

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1340?focusedWorklogId=219116&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-219116
 ]

ASF GitHub Bot logged work on HDDS-1340:


Author: ASF GitHub Bot
Created on: 27/Mar/19 04:40
Start Date: 27/Mar/19 04:40
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #648: 
HDDS-1340. Add List Containers API for Recon
URL: https://github.com/apache/hadoop/pull/648#discussion_r269403451
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestContainerKeyService.java
 ##
 @@ -198,6 +198,38 @@ public void testGetKeysForContainer() throws Exception {
 assertTrue(keyMetadataList.isEmpty());
   }
 
+  @Test
+  public void testGetContainerIDList() throws Exception {
+//Take snapshot of OM DB and copy over to Recon OM DB.
+DBCheckpoint checkpoint = omMetadataManager.getStore()
 
 Review comment:
   Writes to OM DB are moved to the setup phase in `@Before`.
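
A minimal sketch of that test structure (all names hypothetical, the OM DB and 
snapshot reduced to plain maps), with the shared writes in the @Before method:

{code:java}
import static org.junit.Assert.assertFalse;

import java.util.HashMap;
import java.util.Map;

import org.junit.Before;
import org.junit.Test;

public class SnapshotSetupSketch {
  private final Map<String, Long> omDb = new HashMap<>(); // stand-in OM DB

  @Before
  public void writeTestKeys() {
    // Shared writes now live here instead of in each individual test.
    omDb.put("/vol/bucket/key1", 1L);
  }

  @Test
  public void testGetContainerIDList() {
    // The test itself only takes the snapshot and asserts.
    Map<String, Long> snapshot = new HashMap<>(omDb);
    assertFalse(snapshot.isEmpty());
  }
}
{code}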
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 219116)
Time Spent: 1h  (was: 50m)

> Add List Containers API for Recon
> -
>
> Key: HDDS-1340
> URL: https://issues.apache.org/jira/browse/HDDS-1340
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Recon server should support a "/containers" API that lists all the containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14304) High lock contention on hdfsHashMutex in libhdfs

2019-03-26 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802430#comment-16802430
 ] 

Hudson commented on HDFS-14304:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16290 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16290/])
HDFS-14304: High lock contention on hdfsHashMutex in libhdfs (todd: rev 
18c57cf0464f4d1fa95899d75b2f59cae33c7c33)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/os/posix/thread_local_storage.c
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/common/htable.c
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/CMakeLists.txt
* (add) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jclasses.h
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/native_mini_dfs.c
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/os/mutexes.h
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_htable.c
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/os/windows/mutexes.c
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/CMakeLists.txt
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/os/posix/mutexes.c
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/common/htable.h
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.h
* (add) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jclasses.c
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/exception.c


> High lock contention on hdfsHashMutex in libhdfs
> 
>
> Key: HDFS-14304
> URL: https://issues.apache.org/jira/browse/HDFS-14304
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Fix For: 3.3.0
>
>
> While doing some performance profiling of an application using libhdfs, we 
> noticed a high amount of lock contention on the {{hdfsHashMutex}} defined in 
> {{hadoop-hdfs-native-client/src/main/native/libhdfs/os/mutexes.h}}.
> The issue is that every JNI method invocation done by {{hdfs.c}} goes through 
> a helper method called {{invokeMethod}}. {{invokeMethod}} calls 
> {{globalClassReference}}, which acquires {{hdfsHashMutex}} while performing a 
> lookup in a {{htable}} (a custom hash table that lives in {{libhdfs/common}}); 
> the lock is acquired for both reads and writes. The hash table maps {{char 
> *className}} to {{jclass}} objects; it seems the goal of the hash table is to 
> avoid repeatedly creating {{jclass}} objects for each JNI call.
> For multi-threaded applications, this lock severely limits the rate at which 
> Java methods can be invoked. pstacks show a lot of time being spent on 
> {{hdfsHashMutex}}:
> {code:java}
> #0  0x7fba2dbc242d in __lll_lock_wait () from /lib64/libpthread.so.0
> #1  0x7fba2dbbddcb in _L_lock_812 () from /lib64/libpthread.so.0
> #2  0x7fba2dbbdc98 in pthread_mutex_lock () from /lib64/libpthread.so.0
> #3  0x027d8386 in mutexLock ()
> #4  0x027d0e7b in globalClassReference ()
> #5  0x027d1160 in invokeMethod ()
> #6  0x027d4176 in readDirect ()
> #7  0x027d4325 in hdfsRead ()
> {code}
> Same with {{perf report}}
> {code:java}
> +   63.36% 0.01%  [k] system_call_fastpath
> +   61.60% 0.12%  [k] sys_futex 
> +   61.45% 0.13%  [k] do_futex 
> +   57.54% 0.49%  [k] _raw_qspin_lock
> +   57.07% 0.01%  [k] queued_spin_lock_slowpath
> +   55.47%55.47%  [k] native_queued_spin_lock_slowpath
> -   35.68% 0.00%  [k] 0x6f6f6461682f6568
>- 0x6f6f6461682f6568 
>   - 30.55% __lll_lock_wait   
>  - 29.40% system_call_fastpath  
> - 29.39% sys_futex  
>- 29.35% do_futex   
>   - 29.27% futex_wait 
>  - 28.17% futex_wait_setup
> - 27.05% _raw_qspin_lock 
>- 27.05% queued_spin_lock_slowpath
> 26.30% native_queued_spin_lock_slowpath 
>   + 0.67% ret_from_intr 
>  + 0.71% futex_wait_queue_me
>   - 2.00% methodIdFromClass
>  - 1.94% jni_GetMethodID  
> - 1.71% get_method_id   
>  0.96% SymbolTable::lookup_only 
>   - 1.61% invokeMethod
>  - 0.62% jni_CallLongMethodV 
>   0.52% jni_invoke_nonstatic 
> 0.75% pthread_mutex_lock
> {code}
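
The committed change above adds {{jclasses.h}}/{{jclasses.c}} and deletes the 
{{htable}}, which suggests the mutex-guarded runtime lookup was replaced by 
references resolved up front. The real fix is native C code; as a hedged Java 
analogue of the same pattern, resolve the cache once at initialization so the 
hot path reads it without a shared lock:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class ClassCacheSketch {
  // Populated once during initialization; reads afterwards are lock-free.
  private static final Map<String, Class<?>> CACHE = new ConcurrentHashMap<>();

  public static void init(String... classNames) throws ClassNotFoundException {
    for (String name : classNames) {
      CACHE.put(name, Class.forName(name));
    }
  }

  // Hot path: no mutex shared by every calling thread.
  public static Class<?> get(String name) {
    return CACHE.get(name);
  }
}
{code}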

[jira] [Work logged] (HDDS-1340) Add List Containers API for Recon

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1340?focusedWorklogId=219105&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-219105
 ]

ASF GitHub Bot logged work on HDDS-1340:


Author: ASF GitHub Bot
Created on: 27/Mar/19 03:59
Start Date: 27/Mar/19 03:59
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #648: HDDS-1340. 
Add List Containers API for Recon
URL: https://github.com/apache/hadoop/pull/648#discussion_r269382496
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestContainerKeyService.java
 ##
 @@ -198,6 +198,38 @@ public void testGetKeysForContainer() throws Exception {
 assertTrue(keyMetadataList.isEmpty());
   }
 
+  @Test
+  public void testGetContainerIDList() throws Exception {
+//Take snapshot of OM DB and copy over to Recon OM DB.
+DBCheckpoint checkpoint = omMetadataManager.getStore()
 
 Review comment:
   Why are we taking a DB snapshot if we are not writing anything new to OM DB?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 219105)
Time Spent: 50m  (was: 40m)

> Add List Containers API for Recon
> -
>
> Key: HDDS-1340
> URL: https://issues.apache.org/jira/browse/HDDS-1340
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Recon server should support a "/containers" API that lists all the containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1340) Add List Containers API for Recon

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1340?focusedWorklogId=219104&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-219104
 ]

ASF GitHub Bot logged work on HDDS-1340:


Author: ASF GitHub Bot
Created on: 27/Mar/19 03:59
Start Date: 27/Mar/19 03:59
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #648: HDDS-1340. 
Add List Containers API for Recon
URL: https://github.com/apache/hadoop/pull/648#discussion_r269382718
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/ContainerDBServiceProvider.java
 ##
 @@ -66,4 +67,12 @@ Integer getCountForForContainerKeyPrefix(
*/
   Map getKeyPrefixesForContainer(long containerId)
   throws IOException;
+
+  /**
+   * Get a list of all Container IDs.
+   *
+   * @return List of Container IDs.
+   * @throws IOException
+   */
+  List getContainerIDList() throws IOException;
 
 Review comment:
   API can return Set instead of List.
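
A sketch of the suggested signature, with a stand-in interface name; returning 
a Set makes the no-duplicates contract explicit:

{code:java}
import java.io.IOException;
import java.util.Set;

/** Stand-in name for the provider interface under review. */
public interface ContainerIdProviderSketch {
  /** @return all known container IDs, each appearing exactly once. */
  Set<Long> getContainerIDs() throws IOException;
}
{code}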
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 219104)
Time Spent: 40m  (was: 0.5h)

> Add List Containers API for Recon
> -
>
> Key: HDDS-1340
> URL: https://issues.apache.org/jira/browse/HDDS-1340
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Recon server should support a "/containers" API that lists all the containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14304) High lock contention on hdfsHashMutex in libhdfs

2019-03-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802427#comment-16802427
 ] 

Hadoop QA commented on HDFS-14304:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
31m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
30s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
51s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/7/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/595 |
| JIRA Issue | HDFS-14304 |
| Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 77c238b5cbc6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / f426b7c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
|  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/7/testReport/ |
| Max. process+thread count | 412 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/7/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> High lock contention on hdfsHashMutex in libhdfs
> 
>
> Key: HDFS-14304
> URL: https://issues.apache.org/jira/browse/HDFS-14304
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Fix For: 3.3.0
>
>
> While doing some performance profiling of an application using libhdfs, we 
> noticed a high amount of lock contention on the {{hdfsHashMutex}} defined in 
> {{hadoop-hdfs-native-client/src/main/native/libhdfs/os/mutexes.h}}.
> The issue is that every JNI method invocation done by {{hdfs.c}} goes 
> through a helper method called {{invokeMethod}}.

[jira] [Work logged] (HDDS-1340) Add List Containers API for Recon

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1340?focusedWorklogId=219103&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-219103
 ]

ASF GitHub Bot logged work on HDDS-1340:


Author: ASF GitHub Bot
Created on: 27/Mar/19 03:59
Start Date: 27/Mar/19 03:59
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #648: HDDS-1340. 
Add List Containers API for Recon
URL: https://github.com/apache/hadoop/pull/648#discussion_r269382028
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/api/ContainerKeyService.java
 ##
 @@ -51,6 +51,23 @@
   @Inject
   private ReconOMMetadataManager omMetadataManager;
 
+  /**
+   * Return list of container IDs for all the containers
+   *
+   * @return {@link Response}
+   */
+  @GET
+  public Response getContainerIDList() {
+List containerIDs;
 
 Review comment:
   (Minor) Initialize to empty list.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 219103)
Time Spent: 0.5h  (was: 20m)

> Add List Containers API for Recon
> -
>
> Key: HDDS-1340
> URL: https://issues.apache.org/jira/browse/HDDS-1340
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Recon server should support a "/containers" API that lists all the containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14304) High lock contention on hdfsHashMutex in libhdfs

2019-03-26 Thread Todd Lipcon (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-14304:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> High lock contention on hdfsHashMutex in libhdfs
> 
>
> Key: HDFS-14304
> URL: https://issues.apache.org/jira/browse/HDFS-14304
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Fix For: 3.3.0
>
>
> While doing some performance profiling of an application using libhdfs, we 
> noticed a high amount of lock contention on the {{hdfsHashMutex}} defined in 
> {{hadoop-hdfs-native-client/src/main/native/libhdfs/os/mutexes.h}}.
> The issue is that every JNI method invocation done by {{hdfs.c}} goes through 
> a helper method called {{invokeMethod}}. {{invokeMethod}} calls 
> {{globalClassReference}}, which acquires {{hdfsHashMutex}} while performing a 
> lookup in a {{htable}} (a custom hash table that lives in {{libhdfs/common}}); 
> the lock is acquired for both reads and writes. The hash table maps {{char 
> *className}} to {{jclass}} objects; it seems the goal of the hash table is to 
> avoid repeatedly creating {{jclass}} objects for each JNI call.
> For multi-threaded applications, this lock severely limits the rate at which 
> Java methods can be invoked. pstacks show a lot of time being spent on 
> {{hdfsHashMutex}}:
> {code:java}
> #0  0x7fba2dbc242d in __lll_lock_wait () from /lib64/libpthread.so.0
> #1  0x7fba2dbbddcb in _L_lock_812 () from /lib64/libpthread.so.0
> #2  0x7fba2dbbdc98 in pthread_mutex_lock () from /lib64/libpthread.so.0
> #3  0x027d8386 in mutexLock ()
> #4  0x027d0e7b in globalClassReference ()
> #5  0x027d1160 in invokeMethod ()
> #6  0x027d4176 in readDirect ()
> #7  0x027d4325 in hdfsRead ()
> {code}
> Same with {{perf report}}
> {code:java}
> +   63.36% 0.01%  [k] system_call_fastpath
> +   61.60% 0.12%  [k] sys_futex 
> +   61.45% 0.13%  [k] do_futex 
> +   57.54% 0.49%  [k] _raw_qspin_lock
> +   57.07% 0.01%  [k] queued_spin_lock_slowpath
> +   55.47%55.47%  [k] native_queued_spin_lock_slowpath
> -   35.68% 0.00%  [k] 0x6f6f6461682f6568
>- 0x6f6f6461682f6568 
>   - 30.55% __lll_lock_wait   
>  - 29.40% system_call_fastpath  
> - 29.39% sys_futex  
>- 29.35% do_futex   
>   - 29.27% futex_wait 
>  - 28.17% futex_wait_setup
> - 27.05% _raw_qspin_lock 
>- 27.05% queued_spin_lock_slowpath
> 26.30% native_queued_spin_lock_slowpath 
>   + 0.67% ret_from_intr 
>  + 0.71% futex_wait_queue_me
>   - 2.00% methodIdFromClass
>  - 1.94% jni_GetMethodID  
> - 1.71% get_method_id   
>  0.96% SymbolTable::lookup_only 
>   - 1.61% invokeMethod
>  - 0.62% jni_CallLongMethodV 
>   0.52% jni_invoke_nonstatic 
> 0.75% pthread_mutex_lock
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14348) Fix JNI exception handling issues in libhdfs

2019-03-26 Thread Todd Lipcon (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-14348:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> Fix JNI exception handling issues in libhdfs
> 
>
> Key: HDFS-14348
> URL: https://issues.apache.org/jira/browse/HDFS-14348
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Fix For: 3.3.0
>
>
> During some manual digging through the libhdfs code, we found several places 
> where we are not handling exceptions properly.
> Specifically, there seem to be some violations of the following guidance from 
> the Oracle JNI docs 
> (https://docs.oracle.com/javase/8/docs/technotes/guides/jni/spec/design.html#exceptions_and_error_codes):
> {quote}
> *Exceptions and Error Codes*
> Certain JNI functions use the Java exception mechanism to report error 
> conditions. In most cases, JNI functions report error conditions by returning 
> an error code and throwing a Java exception. The error code is usually a 
> special return value (such as NULL) that is outside of the range of normal 
> return values. Therefore, the programmer can quickly check the return value 
> of the last JNI call to determine if an error has occurred, and call a 
> function, ExceptionOccurred(), to obtain the exception object that contains a 
> more detailed description of the error condition.
> There are two cases where the programmer needs to check for exceptions 
> without being able to first check an error code:
> [1] The JNI functions that invoke a Java method return the result of the Java 
> method. The programmer must call ExceptionOccurred() to check for possible 
> exceptions that occurred during the execution of the Java method.
> [2] Some of the JNI array access functions do not return an error code, but 
> may throw an ArrayIndexOutOfBoundsException or ArrayStoreException.
> In all other cases, a non-error return value guarantees that no exceptions 
> have been thrown.
> {quote}
> Here is a running list of issues:
> * {{classNameOfObject}} in {{jni_helper.c}} calls {{CallObjectMethod}} but 
> does not check if an exception has occurred, it only checks if the result of 
> the method (in this case {{Class#getName(String)}}) returns {{NULL}}
> * Exception handling in {{get_current_thread_id}} (both 
> {{posix/thread_local_storage.c}} and {{windows/thread_local_storage.c}}) 
> seems to have several issues; lots of JNI methods are called without checking 
> for exceptions
> * Most of the calls to {{GetObjectArrayElement}} and {{GetByteArrayRegion}} 
> in {{hdfs.c}} do not check for exceptions properly
> ** e.g. for {{GetObjectArrayElement}} they only check if the result of the 
> operation is {{NULL}}, but they should call {{ExceptionOccurred}} to look for 
> pending exceptions as well



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14390) Provide kerberos support for AliasMap service used by Provided storage

2019-03-26 Thread Ashvin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashvin updated HDFS-14390:
--
Attachment: (was: HDFS-14390.001.patch)

> Provide kerberos support for AliasMap service used by Provided storage
> --
>
> Key: HDFS-14390
> URL: https://issues.apache.org/jira/browse/HDFS-14390
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ashvin
>Priority: Major
> Attachments: HDFS-14390.001.patch
>
>
> With {{PROVIDED}} storage (HDFS-9806), HDFS can address data stored in 
> external storage systems. This feature is not supported in a secure HDFS 
> cluster: the {{AliasMap}} service does not support Kerberos, so the cluster 
> nodes will fail to communicate with it. This JIRA is to enable Kerberos 
> support for the {{AliasMap}} service.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14390) Provide kerberos support for AliasMap service used by Provided storage

2019-03-26 Thread Ashvin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashvin updated HDFS-14390:
--
Attachment: HDFS-14390.001.patch
Status: Patch Available  (was: Open)

> Provide kerberos support for AliasMap service used by Provided storage
> --
>
> Key: HDFS-14390
> URL: https://issues.apache.org/jira/browse/HDFS-14390
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ashvin
>Priority: Major
> Attachments: HDFS-14390.001.patch, HDFS-14390.001.patch
>
>
> With {{PROVIDED}} storage (HDFS-9806), HDFS can address data stored in 
> external storage systems. This feature is not supported in a secure HDFS 
> cluster: the {{AliasMap}} service does not support Kerberos, so the cluster 
> nodes will fail to communicate with it. This JIRA is to enable Kerberos 
> support for the {{AliasMap}} service.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14390) Provide kerberos support for AliasMap service used by Provided storage

2019-03-26 Thread Ashvin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashvin updated HDFS-14390:
--
Attachment: HDFS-14390.001.patch

> Provide kerberos support for AliasMap service used by Provided storage
> --
>
> Key: HDFS-14390
> URL: https://issues.apache.org/jira/browse/HDFS-14390
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ashvin
>Priority: Major
> Attachments: HDFS-14390.001.patch
>
>
> With {{PROVIDED}} storage (HDFS-9806), HDFS can address data stored in
> external storage systems. This feature is not supported in a secure HDFS
> cluster. The {{AliasMap}} service does not support Kerberos, and as a result
> the cluster nodes will fail to communicate with it. This JIRA is to enable
> Kerberos support for the {{AliasMap}} service.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14304) High lock contention on hdfsHashMutex in libhdfs

2019-03-26 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HDFS-14304:

Status: Open  (was: Patch Available)

> High lock contention on hdfsHashMutex in libhdfs
> 
>
> Key: HDFS-14304
> URL: https://issues.apache.org/jira/browse/HDFS-14304
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
>
> While doing some performance profiling of an application using libhdfs, we
> noticed a high amount of lock contention on the {{hdfsHashMutex}} defined in
> {{hadoop-hdfs-native-client/src/main/native/libhdfs/os/mutexes.h}}.
> The issue is that every JNI method invocation done by {{hdfs.c}} goes through
> a helper method called {{invokeMethod}}. {{invokeMethod}} calls
> {{globalClassReference}}, which acquires {{hdfsHashMutex}} while performing a
> lookup in an {{htable}} (a custom hash table that lives in
> {{libhdfs/common}}); the lock is acquired for both reads and writes. The hash
> table maps {{char *className}} to {{jclass}} objects; its goal seems to be to
> avoid repeatedly creating {{jclass}} objects for each JNI call.
> For multi-threaded applications, this lock severely limits the rate at which
> Java methods can be invoked. pstacks show a lot of time being spent on
> {{hdfsHashMutex}}:
> {code:java}
> #0  0x7fba2dbc242d in __lll_lock_wait () from /lib64/libpthread.so.0
> #1  0x7fba2dbbddcb in _L_lock_812 () from /lib64/libpthread.so.0
> #2  0x7fba2dbbdc98 in pthread_mutex_lock () from /lib64/libpthread.so.0
> #3  0x027d8386 in mutexLock ()
> #4  0x027d0e7b in globalClassReference ()
> #5  0x027d1160 in invokeMethod ()
> #6  0x027d4176 in readDirect ()
> #7  0x027d4325 in hdfsRead ()
> {code}
> Same with {{perf report}}
> {code:java}
> +   63.36% 0.01%  [k] system_call_fastpath
> +   61.60% 0.12%  [k] sys_futex 
> +   61.45% 0.13%  [k] do_futex 
> +   57.54% 0.49%  [k] _raw_qspin_lock
> +   57.07% 0.01%  [k] queued_spin_lock_slowpath
> +   55.47%55.47%  [k] native_queued_spin_lock_slowpath
> -   35.68% 0.00%  [k] 0x6f6f6461682f6568
>- 0x6f6f6461682f6568 
>   - 30.55% __lll_lock_wait   
>  - 29.40% system_call_fastpath  
> - 29.39% sys_futex  
>- 29.35% do_futex   
>   - 29.27% futex_wait 
>  - 28.17% futex_wait_setup
> - 27.05% _raw_qspin_lock 
>- 27.05% queued_spin_lock_slowpath
> 26.30% native_queued_spin_lock_slowpath 
>   + 0.67% ret_from_intr 
>  + 0.71% futex_wait_queue_me
>   - 2.00% methodIdFromClass
>  - 1.94% jni_GetMethodID  
> - 1.71% get_method_id   
>  0.96% SymbolTable::lookup_only 
>   - 1.61% invokeMethod
>  - 0.62% jni_CallLongMethodV 
>   0.52% jni_invoke_nonstatic 
> 0.75% pthread_mutex_lock
> {code}
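>
> For reference, the contended pattern boils down to something like the
> following (a minimal C sketch; the {{htable}} declarations and names are
> hypothetical stand-ins, not the actual libhdfs code):
> {code}
> #include <jni.h>
> #include <pthread.h>
>
> /* hypothetical stand-ins for the hash table in libhdfs/common */
> struct htable;
> extern struct htable *classTable;
> extern void *htableGet(struct htable *t, const char *key);
> extern int htablePut(struct htable *t, const char *key, void *val);
>
> static pthread_mutex_t hashMutex = PTHREAD_MUTEX_INITIALIZER;
>
> /* Every invokeMethod() call lands here, so this single mutex serializes
>  * all threads, even though post-warm-up lookups are read-only. */
> static jclass globalClassRef(JNIEnv *env, const char *name)
> {
>     pthread_mutex_lock(&hashMutex);          /* held for reads and writes */
>     jclass clazz = (jclass) htableGet(classTable, name);
>     if (clazz == NULL) {
>         jclass local = (*env)->FindClass(env, name);
>         if (local != NULL) {
>             clazz = (jclass) (*env)->NewGlobalRef(env, local);
>             (*env)->DeleteLocalRef(env, local);
>             htablePut(classTable, name, (void *) clazz);
>         }
>     }
>     pthread_mutex_unlock(&hashMutex);
>     return clazz;
> }
> {code}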



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14304) High lock contention on hdfsHashMutex in libhdfs

2019-03-26 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HDFS-14304:

Status: Patch Available  (was: Open)

> High lock contention on hdfsHashMutex in libhdfs
> 
>
> Key: HDFS-14304
> URL: https://issues.apache.org/jira/browse/HDFS-14304
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
>
> While doing some performance profiling of an application using libhdfs, we
> noticed a high amount of lock contention on the {{hdfsHashMutex}} defined in
> {{hadoop-hdfs-native-client/src/main/native/libhdfs/os/mutexes.h}}.
> The issue is that every JNI method invocation done by {{hdfs.c}} goes through
> a helper method called {{invokeMethod}}. {{invokeMethod}} calls
> {{globalClassReference}}, which acquires {{hdfsHashMutex}} while performing a
> lookup in an {{htable}} (a custom hash table that lives in
> {{libhdfs/common}}); the lock is acquired for both reads and writes. The hash
> table maps {{char *className}} to {{jclass}} objects; its goal seems to be to
> avoid repeatedly creating {{jclass}} objects for each JNI call.
> For multi-threaded applications, this lock severely limits the rate at which
> Java methods can be invoked. pstacks show a lot of time being spent on
> {{hdfsHashMutex}}:
> {code:java}
> #0  0x7fba2dbc242d in __lll_lock_wait () from /lib64/libpthread.so.0
> #1  0x7fba2dbbddcb in _L_lock_812 () from /lib64/libpthread.so.0
> #2  0x7fba2dbbdc98 in pthread_mutex_lock () from /lib64/libpthread.so.0
> #3  0x027d8386 in mutexLock ()
> #4  0x027d0e7b in globalClassReference ()
> #5  0x027d1160 in invokeMethod ()
> #6  0x027d4176 in readDirect ()
> #7  0x027d4325 in hdfsRead ()
> {code}
> Same with {{perf report}}
> {code:java}
> +   63.36% 0.01%  [k] system_call_fastpath
> +   61.60% 0.12%  [k] sys_futex 
> +   61.45% 0.13%  [k] do_futex 
> +   57.54% 0.49%  [k] _raw_qspin_lock
> +   57.07% 0.01%  [k] queued_spin_lock_slowpath
> +   55.47%55.47%  [k] native_queued_spin_lock_slowpath
> -   35.68% 0.00%  [k] 0x6f6f6461682f6568
>- 0x6f6f6461682f6568 
>   - 30.55% __lll_lock_wait   
>  - 29.40% system_call_fastpath  
> - 29.39% sys_futex  
>- 29.35% do_futex   
>   - 29.27% futex_wait 
>  - 28.17% futex_wait_setup
> - 27.05% _raw_qspin_lock 
>- 27.05% queued_spin_lock_slowpath
> 26.30% native_queued_spin_lock_slowpath 
>   + 0.67% ret_from_intr 
>  + 0.71% futex_wait_queue_me
>   - 2.00% methodIdFromClass
>  - 1.94% jni_GetMethodID  
> - 1.71% get_method_id   
>  0.96% SymbolTable::lookup_only 
>   - 1.61% invokeMethod
>  - 0.62% jni_CallLongMethodV 
>   0.52% jni_invoke_nonstatic 
> 0.75% pthread_mutex_lock
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=219058&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-219058
 ]

ASF GitHub Bot logged work on HDDS-1255:


Author: ASF GitHub Bot
Created on: 27/Mar/19 02:21
Start Date: 27/Mar/19 02:21
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #632: HDDS-1255. 
Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/632#discussion_r269384452
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/commonlib.robot
 ##
 @@ -35,3 +41,51 @@ Compare files
     ${checksumbefore} =    Execute    md5sum ${file1} | awk '{print $1}'
     ${checksumafter} =     Execute    md5sum ${file2} | awk '{print $1}'
     Should Be Equal    ${checksumbefore}    ${checksumafter}
+Execute AWSS3APICli
+    [Arguments]    ${command}
+    ${output} =    Execute    aws s3api --endpoint-url ${ENDPOINT_URL} ${command}
+    [return]    ${output}
+
+Execute AWSS3APICli and checkrc
+    [Arguments]    ${command}    ${expected_error_code}
+    ${output} =    Execute and checkrc    aws s3api --endpoint-url ${ENDPOINT_URL} ${command}    ${expected_error_code}
+    [return]    ${output}
+
+Execute AWSS3Cli
+    [Arguments]    ${command}
+    ${output} =    Execute    aws s3 --endpoint-url ${ENDPOINT_URL} ${command}
+    [return]    ${output}
+
+Install aws cli s3 centos
+    Execute    sudo yum install -y awscli
+
+Install aws cli s3 debian
+    Execute    sudo apt-get install -y awscli
+
+Install aws cli
+    ${rc}    ${output} =    Run And Return Rc And Output    which apt-get
+    Run Keyword if    '${rc}' == '0'    Install aws cli s3 debian
+    ${rc}    ${output} =    Run And Return Rc And Output    yum --help
+    Run Keyword if    '${rc}' == '0'    Install aws cli s3 centos
+
+Kinit test user
+    ${hostname} =    Execute    hostname
+    Set Suite Variable    ${TEST_USER}    testuser/${hostname}@EXAMPLE.COM
+    Execute    kinit -k ${TEST_USER} -t /etc/security/keytabs/testuser.keytab
+
+Setup secure credentials
+    Run Keyword    Install aws cli
 
 Review comment:
   you are right, moved s3 part to s3 commonlib.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 219058)
Time Spent: 2.5h  (was: 2h 20m)

> Refactor ozone acceptance test to allow run in secure mode
> --
>
> Key: HDDS-1255
> URL: https://issues.apache.org/jira/browse/HDDS-1255
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Refactor ozone acceptance test to allow run in secure mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=219057&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-219057
 ]

ASF GitHub Bot logged work on HDDS-1255:


Author: ASF GitHub Bot
Created on: 27/Mar/19 02:20
Start Date: 27/Mar/19 02:20
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #632: HDDS-1255. 
Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/632#discussion_r269384368
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/commonlib.robot
 ##
 @@ -35,3 +41,51 @@ Compare files
     ${checksumbefore} =    Execute    md5sum ${file1} | awk '{print $1}'
     ${checksumafter} =     Execute    md5sum ${file2} | awk '{print $1}'
     Should Be Equal    ${checksumbefore}    ${checksumafter}
+Execute AWSS3APICli
+    [Arguments]    ${command}
+    ${output} =    Execute    aws s3api --endpoint-url ${ENDPOINT_URL} ${command}
+    [return]    ${output}
+
+Execute AWSS3APICli and checkrc
+    [Arguments]    ${command}    ${expected_error_code}
+    ${output} =    Execute and checkrc    aws s3api --endpoint-url ${ENDPOINT_URL} ${command}    ${expected_error_code}
+    [return]    ${output}
+
+Execute AWSS3Cli
+    [Arguments]    ${command}
+    ${output} =    Execute    aws s3 --endpoint-url ${ENDPOINT_URL} ${command}
+    [return]    ${output}
+
+Install aws cli s3 centos
+    Execute    sudo yum install -y awscli
+
+Install aws cli s3 debian
+    Execute    sudo apt-get install -y awscli
+
+Install aws cli
+    ${rc}    ${output} =    Run And Return Rc And Output    which apt-get
+    Run Keyword if    '${rc}' == '0'    Install aws cli s3 debian
+    ${rc}    ${output} =    Run And Return Rc And Output    yum --help
+    Run Keyword if    '${rc}' == '0'    Install aws cli s3 centos
+
+Kinit test user
+    ${hostname} =    Execute    hostname
+    Set Suite Variable    ${TEST_USER}    testuser/${hostname}@EXAMPLE.COM
+    Execute    kinit -k ${TEST_USER} -t /etc/security/keytabs/testuser.keytab
+
+Setup secure credentials
+    Run Keyword    Install aws cli
+    Run Keyword    Kinit test user
+    ${result} =    Execute    ozone s3 getsecret
+    ${accessKey} =    Get Regexp Matches    ${result}    (?<=awsAccessKey=).*
+    ${secret} =    Get Regexp Matches    ${result}    (?<=awsSecret=).*
+    Execute    aws configure set default.s3.signature_version s3v4
+    Execute    aws configure set aws_access_key_id ${accessKey[0]}
+    Execute    aws configure set aws_secret_access_key ${secret[0]}
+    Execute    aws configure set region us-west-1
+
+Setup incorrect credentials for S3
 
 Review comment:
   done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 219057)
Time Spent: 2h 20m  (was: 2h 10m)

> Refactor ozone acceptance test to allow run in secure mode
> --
>
> Key: HDDS-1255
> URL: https://issues.apache.org/jira/browse/HDDS-1255
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Refactor ozone acceptance test to allow run in secure mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-139) Output of createVolume can be improved

2019-03-26 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802364#comment-16802364
 ] 

Hudson commented on HDDS-139:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16289 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16289/])
HDDS-139. Output of createVolume can be improved. Contributed by Shweta. (arp: 
rev f426b7ce8fb33d57e4187484448b9e0bfc04ccfa)
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java


> Output of createVolume can be improved
> --
>
> Key: HDDS-139
> URL: https://issues.apache.org/jira/browse/HDDS-139
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.2.1
>Reporter: Arpit Agarwal
>Assignee: Shweta
>Priority: Major
>  Labels: newbie, usability
> Fix For: 0.4.0
>
> Attachments: HDDS-139.001.patch
>
>
> The output of {{createVolume}} includes a huge number (1 Exabyte) when the
> quota is not specified. This number could either be printed in a friendly
> format or omitted when the user did not use the {{-quota}} option.
> {code:java}
>     2018-05-31 20:35:56 INFO  RpcClient:210 - Creating Volume: vol2, with 
> hadoop as owner and quota set to 1152921504606846976 bytes.{code}
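>
> Purely to illustrate the "friendly format" suggestion (a minimal C sketch of
> binary-unit formatting, not the actual {{RpcClient}} change):
> {code}
> #include <stdio.h>
> #include <stdint.h>
>
> /* Renders a byte count with a binary-unit suffix, so the default quota
>  * prints as "1.0 EB" instead of 1152921504606846976. */
> static void friendlyBytes(uint64_t bytes, char *buf, size_t len)
> {
>     const char *units[] = { "B", "KB", "MB", "GB", "TB", "PB", "EB" };
>     int i = 0;
>     double v = (double) bytes;
>     while (v >= 1024.0 && i < 6) {
>         v /= 1024.0;
>         i++;
>     }
>     snprintf(buf, len, "%.1f %s", v, units[i]);
> }
> {code}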



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-139) Output of createVolume can be improved

2019-03-26 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-139:
---
  Resolution: Fixed
   Fix Version/s: 0.4.0
Target Version/s:   (was: 0.5.0)
  Status: Resolved  (was: Patch Available)

I've committed this. Thanks for the contribution [~shwetayakkali]! The test 
failures look unrelated.

> Output of createVolume can be improved
> --
>
> Key: HDDS-139
> URL: https://issues.apache.org/jira/browse/HDDS-139
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.2.1
>Reporter: Arpit Agarwal
>Assignee: Shweta
>Priority: Major
>  Labels: newbie, usability
> Fix For: 0.4.0
>
> Attachments: HDDS-139.001.patch
>
>
> The output of {{createVolume}} includes a huge number (1 Exabyte) when the
> quota is not specified. This number could either be printed in a friendly
> format or omitted when the user did not use the {{-quota}} option.
> {code:java}
>     2018-05-31 20:35:56 INFO  RpcClient:210 - Creating Volume: vol2, with 
> hadoop as owner and quota set to 1152921504606846976 bytes.{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=219047&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-219047
 ]

ASF GitHub Bot logged work on HDDS-1255:


Author: ASF GitHub Bot
Created on: 27/Mar/19 01:51
Start Date: 27/Mar/19 01:51
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #632: HDDS-1255. 
Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/632#issuecomment-476930753
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 5 | https://github.com/apache/hadoop/pull/632 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/632 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-632/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 219047)
Time Spent: 2h 10m  (was: 2h)

> Refactor ozone acceptance test to allow run in secure mode
> --
>
> Key: HDDS-1255
> URL: https://issues.apache.org/jira/browse/HDDS-1255
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Refactor ozone acceptance test to allow run in secure mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-139) Output of createVolume can be improved

2019-03-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802345#comment-16802345
 ] 

Hadoop QA commented on HDDS-139:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 44s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 21s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 59s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeysRatis |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2587/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-139 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12963798/HDDS-139.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 846ca262e999 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / fe29b39 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit

[jira] [Commented] (HDDS-139) Output of createVolume can be improved

2019-03-26 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802340#comment-16802340
 ] 

Arpit Agarwal commented on HDDS-139:


+1 pending Jenkins.

> Output of createVolume can be improved
> --
>
> Key: HDDS-139
> URL: https://issues.apache.org/jira/browse/HDDS-139
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.2.1
>Reporter: Arpit Agarwal
>Assignee: Shweta
>Priority: Major
>  Labels: newbie, usability
> Attachments: HDDS-139.001.patch
>
>
> The output of {{createVolume}} includes a huge number (1 Exabyte) when the
> quota is not specified. This number could either be printed in a friendly
> format or omitted when the user did not use the {{-quota}} option.
> {code:java}
>     2018-05-31 20:35:56 INFO  RpcClient:210 - Creating Volume: vol2, with 
> hadoop as owner and quota set to 1152921504606846976 bytes.{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1262) In OM HA OpenKey call Should happen only leader OM

2019-03-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802335#comment-16802335
 ] 

Hadoop QA commented on HDDS-1262:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 17s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m  6s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.common.TestDatanodeStateMachine |
|   | hadoop.ozone.TestStorageContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2586/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1262 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12963806/HDDS-1262.01.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle cc |
| uname | Linux 595ac71f4955 4.4.0-138-generic #164

[jira] [Work logged] (HDDS-1332) Add some logging for flaky test testStartStopDatanodeStateMachine

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1332?focusedWorklogId=219040&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-219040
 ]

ASF GitHub Bot logged work on HDDS-1332:


Author: ASF GitHub Bot
Created on: 27/Mar/19 01:28
Start Date: 27/Mar/19 01:28
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #649: HDDS-1332. Add 
some logging for flaky test testStartStopDatanodeState…
URL: https://github.com/apache/hadoop/pull/649#issuecomment-476924855
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1076 | trunk passed |
   | +1 | compile | 40 | trunk passed |
   | +1 | checkstyle | 18 | trunk passed |
   | +1 | mvnsite | 34 | trunk passed |
   | +1 | shadedclient | 784 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 51 | trunk passed |
   | +1 | javadoc | 28 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 36 | the patch passed |
   | +1 | compile | 26 | the patch passed |
   | +1 | javac | 26 | the patch passed |
   | -0 | checkstyle | 14 | hadoop-hdds/container-service: The patch generated 
1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 29 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 808 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 54 | the patch passed |
   | +1 | javadoc | 25 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 81 | container-service in the patch failed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 3257 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.TestDatanodeStateMachine |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-649/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/649 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 5e9db77ed771 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / fe29b39 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-649/1/artifact/out/diff-checkstyle-hadoop-hdds_container-service.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-649/1/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-649/1/testReport/ |
   | Max. process+thread count | 403 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-649/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 219040)
Time Spent: 20m  (was: 10m)

> Add some logging for flaky test testStartStopDatanodeStateMachine
> -
>
> Key: HDDS-1332
> URL: https://issues.apache.org/jira/browse/HDDS-1332
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> testStartStopDatanodeStateMachine fails frequently in Jenkins. It also seems
> to have a timing issue, which may be different from the Jenkins failure.
> E.g., if I add a 10-second sleep as below, I can get the test to fail 100% of
> the time.
> {code}
> @@ -163,

[jira] [Commented] (HDDS-1262) In OM HA OpenKey call Should happen only leader OM

2019-03-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802332#comment-16802332
 ] 

Hadoop QA commented on HDDS-1262:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
55s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m  2s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.ozShell.TestOzoneShell |
|   | hadoop.ozone.om.TestScmChillMode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2585/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1262 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12963806/HDDS-1262.01.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle cc |
| uname | Linux 38b15867d7f2 4.4.0-138-generic #164~14.04.1-Ubuntu SMP

[jira] [Commented] (HDFS-14304) High lock contention on hdfsHashMutex in libhdfs

2019-03-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802330#comment-16802330
 ] 

Hadoop QA commented on HDFS-14304:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
28m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-hdfs-native-client in the patch failed. {color} 
|
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 34s{color} | 
{color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 34s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 35s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/6/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/595 |
| JIRA Issue | HDFS-14304 |
| Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux fc6e44fb6578 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / fe29b39 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/6/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
| cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/6/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
| javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/6/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
| unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/6/testReport/ |
| Max. process+thread count | 411 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/6/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> High lock contention on hdfsHashMutex in libhdfs
> 

[jira] [Work logged] (HDDS-1340) Add List Containers API for Recon

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1340?focusedWorklogId=219027&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-219027
 ]

ASF GitHub Bot logged work on HDDS-1340:


Author: ASF GitHub Bot
Created on: 27/Mar/19 01:02
Start Date: 27/Mar/19 01:02
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #648: HDDS-1340. Add 
List Containers API for Recon
URL: https://github.com/apache/hadoop/pull/648#issuecomment-476918265
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 531 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 993 | trunk passed |
   | +1 | compile | 50 | trunk passed |
   | +1 | checkstyle | 19 | trunk passed |
   | +1 | mvnsite | 29 | trunk passed |
   | +1 | shadedclient | 731 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 34 | trunk passed |
   | +1 | javadoc | 22 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 27 | the patch passed |
   | +1 | compile | 20 | the patch passed |
   | +1 | javac | 20 | the patch passed |
   | -0 | checkstyle | 10 | hadoop-ozone/ozone-recon: The patch generated 2 new 
+ 0 unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 19 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 691 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 39 | the patch passed |
   | +1 | javadoc | 16 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 31 | ozone-recon in the patch passed. |
   | +1 | asflicense | 22 | The patch does not generate ASF License warnings. |
   | | | 3367 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-648/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/648 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux b1271d539ce3 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / fe29b39 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-648/1/artifact/out/diff-checkstyle-hadoop-ozone_ozone-recon.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-648/1/testReport/ |
   | Max. process+thread count | 440 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-recon U: hadoop-ozone/ozone-recon |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-648/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 219027)
Time Spent: 20m  (was: 10m)

> Add List Containers API for Recon
> -
>
> Key: HDDS-1340
> URL: https://issues.apache.org/jira/browse/HDDS-1340
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Recon server should support "/containers" API that lists all the containers



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14304) High lock contention on hdfsHashMutex in libhdfs

2019-03-26 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HDFS-14304:

Status: Patch Available  (was: Open)

> High lock contention on hdfsHashMutex in libhdfs
> 
>
> Key: HDFS-14304
> URL: https://issues.apache.org/jira/browse/HDFS-14304
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
>
> While doing some performance profiling of an application using libhdfs, we
> noticed a high amount of lock contention on the {{hdfsHashMutex}} defined in
> {{hadoop-hdfs-native-client/src/main/native/libhdfs/os/mutexes.h}}.
> The issue is that every JNI method invocation done by {{hdfs.c}} goes through
> a helper method called {{invokeMethod}}. {{invokeMethod}} calls
> {{globalClassReference}}, which acquires {{hdfsHashMutex}} while performing a
> lookup in an {{htable}} (a custom hash table that lives in
> {{libhdfs/common}}); the lock is acquired for both reads and writes. The hash
> table maps {{char *className}} to {{jclass}} objects; its goal seems to be to
> avoid repeatedly creating {{jclass}} objects for each JNI call.
> For multi-threaded applications, this lock severely limits the rate at which
> Java methods can be invoked. pstacks show a lot of time being spent on
> {{hdfsHashMutex}}:
> {code:java}
> #0  0x7fba2dbc242d in __lll_lock_wait () from /lib64/libpthread.so.0
> #1  0x7fba2dbbddcb in _L_lock_812 () from /lib64/libpthread.so.0
> #2  0x7fba2dbbdc98 in pthread_mutex_lock () from /lib64/libpthread.so.0
> #3  0x027d8386 in mutexLock ()
> #4  0x027d0e7b in globalClassReference ()
> #5  0x027d1160 in invokeMethod ()
> #6  0x027d4176 in readDirect ()
> #7  0x027d4325 in hdfsRead ()
> {code}
> Same with {{perf report}}
> {code:java}
> +   63.36% 0.01%  [k] system_call_fastpath
> +   61.60% 0.12%  [k] sys_futex 
> +   61.45% 0.13%  [k] do_futex 
> +   57.54% 0.49%  [k] _raw_qspin_lock
> +   57.07% 0.01%  [k] queued_spin_lock_slowpath
> +   55.47%55.47%  [k] native_queued_spin_lock_slowpath
> -   35.68% 0.00%  [k] 0x6f6f6461682f6568
>- 0x6f6f6461682f6568 
>   - 30.55% __lll_lock_wait   
>  - 29.40% system_call_fastpath  
> - 29.39% sys_futex  
>- 29.35% do_futex   
>   - 29.27% futex_wait 
>  - 28.17% futex_wait_setup
> - 27.05% _raw_qspin_lock 
>- 27.05% queued_spin_lock_slowpath
> 26.30% native_queued_spin_lock_slowpath 
>   + 0.67% ret_from_intr 
>  + 0.71% futex_wait_queue_me
>   - 2.00% methodIdFromClass
>  - 1.94% jni_GetMethodID  
> - 1.71% get_method_id   
>  0.96% SymbolTable::lookup_only 
>   - 1.61% invokeMethod
>  - 0.62% jni_CallLongMethodV 
>   0.52% jni_invoke_nonstatic 
> 0.75% pthread_mutex_lock
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14304) High lock contention on hdfsHashMutex in libhdfs

2019-03-26 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HDFS-14304:

Status: Open  (was: Patch Available)

> High lock contention on hdfsHashMutex in libhdfs
> 
>
> Key: HDFS-14304
> URL: https://issues.apache.org/jira/browse/HDFS-14304
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
>
> While doing some performance profiling of an application using libhdfs, we
> noticed a high amount of lock contention on the {{hdfsHashMutex}} defined in
> {{hadoop-hdfs-native-client/src/main/native/libhdfs/os/mutexes.h}}.
> The issue is that every JNI method invocation done by {{hdfs.c}} goes through
> a helper method called {{invokeMethod}}. {{invokeMethod}} calls
> {{globalClassReference}}, which acquires {{hdfsHashMutex}} while performing a
> lookup in an {{htable}} (a custom hash table that lives in
> {{libhdfs/common}}); the lock is acquired for both reads and writes. The hash
> table maps {{char *className}} to {{jclass}} objects; its goal seems to be to
> avoid repeatedly creating {{jclass}} objects for each JNI call.
> For multi-threaded applications, this lock severely limits the rate at which
> Java methods can be invoked. pstacks show a lot of time being spent on
> {{hdfsHashMutex}}:
> {code:java}
> #0  0x7fba2dbc242d in __lll_lock_wait () from /lib64/libpthread.so.0
> #1  0x7fba2dbbddcb in _L_lock_812 () from /lib64/libpthread.so.0
> #2  0x7fba2dbbdc98 in pthread_mutex_lock () from /lib64/libpthread.so.0
> #3  0x027d8386 in mutexLock ()
> #4  0x027d0e7b in globalClassReference ()
> #5  0x027d1160 in invokeMethod ()
> #6  0x027d4176 in readDirect ()
> #7  0x027d4325 in hdfsRead ()
> {code}
> Same with {{perf report}}
> {code:java}
> +   63.36% 0.01%  [k] system_call_fastpath
> +   61.60% 0.12%  [k] sys_futex 
> +   61.45% 0.13%  [k] do_futex 
> +   57.54% 0.49%  [k] _raw_qspin_lock
> +   57.07% 0.01%  [k] queued_spin_lock_slowpath
> +   55.47%55.47%  [k] native_queued_spin_lock_slowpath
> -   35.68% 0.00%  [k] 0x6f6f6461682f6568
>- 0x6f6f6461682f6568 
>   - 30.55% __lll_lock_wait   
>  - 29.40% system_call_fastpath  
> - 29.39% sys_futex  
>- 29.35% do_futex   
>   - 29.27% futex_wait 
>  - 28.17% futex_wait_setup
> - 27.05% _raw_qspin_lock 
>- 27.05% queued_spin_lock_slowpath
> 26.30% native_queued_spin_lock_slowpath 
>   + 0.67% ret_from_intr 
>  + 0.71% futex_wait_queue_me
>   - 2.00% methodIdFromClass
>  - 1.94% jni_GetMethodID  
> - 1.71% get_method_id   
>  0.96% SymbolTable::lookup_only 
>   - 1.61% invokeMethod
>  - 0.62% jni_CallLongMethodV 
>   0.52% jni_invoke_nonstatic 
> 0.75% pthread_mutex_lock
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1332) Add some logging for flaky test testStartStopDatanodeStateMachine

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1332?focusedWorklogId=219017&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-219017
 ]

ASF GitHub Bot logged work on HDDS-1332:


Author: ASF GitHub Bot
Created on: 27/Mar/19 00:33
Start Date: 27/Mar/19 00:33
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #649: HDDS-1332. Add 
some logging for flaky test testStartStopDatanodeState…
URL: https://github.com/apache/hadoop/pull/649
 
 
   …Machine. Contributed by Arpit Agarwal.
   
   Change-Id: I4f9dc6aeff7f4502956d160e35f2c4caadccb246
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 219017)
Time Spent: 10m
Remaining Estimate: 0h

> Add some logging for flaky test testStartStopDatanodeStateMachine
> -
>
> Key: HDDS-1332
> URL: https://issues.apache.org/jira/browse/HDDS-1332
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> testStartStopDatanodeStateMachine fails frequently in Jenkins. It also seems 
> to have a timing issue, which may be different from the Jenkins failure.
> E.g., if I add a 10-second sleep as below, I can get the test to fail 100% of the time.
> {code}
> @@ -163,6 +163,7 @@ public void testStartStopDatanodeStateMachine() throws 
> IOException,
>  try (DatanodeStateMachine stateMachine =
>  new DatanodeStateMachine(getNewDatanodeDetails(), conf, null)) {
>stateMachine.startDaemon();
> +  Thread.sleep(10_000L);
> {code}
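As an aside, a self-contained sketch (helper and names hypothetical, not the committed change) of turning a blind sleep into a bounded, logged poll, so a slow daemon start produces a diagnosable timeout instead of a silent flake:

{code:java}
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

public final class PollUtil {
  // Poll a condition with logging instead of sleeping for a fixed time.
  public static void waitFor(Supplier<Boolean> check,
                             long intervalMs, long timeoutMs)
      throws InterruptedException, TimeoutException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!check.get()) {
      if (System.currentTimeMillis() > deadline) {
        throw new TimeoutException(
            "condition not met within " + timeoutMs + " ms");
      }
      // The extra logging this Jira asks for: record each poll attempt.
      System.out.println("still waiting for state machine state change...");
      Thread.sleep(intervalMs);
    }
  }
}
{code}

A test could then wait for the expected state with, e.g., {{waitFor(() -> stateMachine.isDaemonStarted(), 100, 30_000)}} (the getter is hypothetical).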



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1332) Add some logging for flaky test testStartStopDatanodeStateMachine

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1332:
-
Labels: pull-request-available  (was: )

> Add some logging for flaky test testStartStopDatanodeStateMachine
> -
>
> Key: HDDS-1332
> URL: https://issues.apache.org/jira/browse/HDDS-1332
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
>
> testStartStopDatanodeStateMachine fails frequently in Jenkins. It also seems 
> to have a timing issue, which may be different from the Jenkins failure.
> E.g., if I add a 10-second sleep as below, I can get the test to fail 100% of the time.
> {code}
> @@ -163,6 +163,7 @@ public void testStartStopDatanodeStateMachine() throws 
> IOException,
>  try (DatanodeStateMachine stateMachine =
>  new DatanodeStateMachine(getNewDatanodeDetails(), conf, null)) {
>stateMachine.startDaemon();
> +  Thread.sleep(10_000L);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1332) Add some logging for flaky test testStartStopDatanodeStateMachine

2019-03-26 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDDS-1332:
---

Assignee: Arpit Agarwal

> Add some logging for flaky test testStartStopDatanodeStateMachine
> -
>
> Key: HDDS-1332
> URL: https://issues.apache.org/jira/browse/HDDS-1332
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>
> testStartStopDatanodeStateMachine fails frequently in Jenkins. It also seems 
> to have a timing issue, which may be different from the Jenkins failure.
> E.g., if I add a 10-second sleep as below, I can get the test to fail 100% of the time.
> {code}
> @@ -163,6 +163,7 @@ public void testStartStopDatanodeStateMachine() throws 
> IOException,
>  try (DatanodeStateMachine stateMachine =
>  new DatanodeStateMachine(getNewDatanodeDetails(), conf, null)) {
>stateMachine.startDaemon();
> +  Thread.sleep(10_000L);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1332) Add some logging for flaky test testStartStopDatanodeStateMachine

2019-03-26 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1332:

Summary: Add some logging for flaky test testStartStopDatanodeStateMachine  
(was: Skip flaky test - testStartStopDatanodeStateMachine)

> Add some logging for flaky test testStartStopDatanodeStateMachine
> -
>
> Key: HDDS-1332
> URL: https://issues.apache.org/jira/browse/HDDS-1332
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Priority: Major
>
> testStartStopDatanodeStateMachine fails frequently in Jenkins. It also seems 
> to have a timing issue, which may be different from the Jenkins failure.
> E.g., if I add a 10-second sleep as below, I can get the test to fail 100% of the time.
> {code}
> @@ -163,6 +163,7 @@ public void testStartStopDatanodeStateMachine() throws 
> IOException,
>  try (DatanodeStateMachine stateMachine =
>  new DatanodeStateMachine(getNewDatanodeDetails(), conf, null)) {
>stateMachine.startDaemon();
> +  Thread.sleep(10_000L);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-139) Output of createVolume can be improved

2019-03-26 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-139:
---
Status: Patch Available  (was: Open)

> Output of createVolume can be improved
> --
>
> Key: HDDS-139
> URL: https://issues.apache.org/jira/browse/HDDS-139
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.2.1
>Reporter: Arpit Agarwal
>Assignee: Shweta
>Priority: Major
>  Labels: newbie, usability
> Attachments: HDDS-139.001.patch
>
>
> The output of {{createVolume}} includes a huge number (1 Exabyte) when the 
> quota is not specified. This number could either be rendered in a friendly 
> format or omitted when the user did not use the {{-quota}} option.
> {code:java}
>     2018-05-31 20:35:56 INFO  RpcClient:210 - Creating Volume: vol2, with 
> hadoop as owner and quota set to 1152921504606846976 bytes.{code}
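For illustration, a minimal sketch (assuming Hadoop's {{org.apache.hadoop.util.StringUtils}} is available; the class and message text here are hypothetical) of rendering the quota in a friendly format:

{code:java}
import org.apache.hadoop.util.StringUtils;

public class FriendlyQuota {
  public static void main(String[] args) {
    long quotaBytes = 1152921504606846976L;  // the 1-Exabyte default
    // byteDesc turns 1152921504606846976 into a human-readable "1 EB"
    System.out.println("Creating Volume: vol2, with hadoop as owner"
        + " and quota set to " + StringUtils.byteDesc(quotaBytes));
  }
}
{code}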



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1262) In OM HA OpenKey call Should happen only leader OM

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1262?focusedWorklogId=219014&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-219014
 ]

ASF GitHub Bot logged work on HDDS-1262:


Author: ASF GitHub Bot
Created on: 27/Mar/19 00:07
Start Date: 27/Mar/19 00:07
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #626: HDDS-1262. In 
OM HA OpenKey and initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#issuecomment-476904363
 
 
   Not sure why yetus is throwing mvn install errors for this PR.
   I am able to compile locally on my dev machine.
   Posted a patch to the jira.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 219014)
Time Spent: 4h 40m  (was: 4.5h)

> In OM HA OpenKey call Should happen only leader OM
> --
>
> Key: HDDS-1262
> URL: https://issues.apache.org/jira/browse/HDDS-1262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1262.01.patch
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> In OM HA, currently, when openKey is called, applyTransaction() on all OM 
> nodes makes a call to SCM and writes the allocateBlock information into the 
> OM DB; a clientID is also generated by each OM node.
>  
> The proposed approach is:
> 1. In startTransaction, call openKey; the returned response should be used 
> to create a new OmRequest object, which is then used to set the transaction 
> context. Also modify OzoneManager and KeyManagerImpl to handle the code 
> paths with and without Ratis.
>  
> This Jira also implements HDDS-1319. 
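For illustration, a self-contained sketch (all names hypothetical; it stands in for the Ratis plumbing) of the proposal's core idea: the leader performs the non-deterministic work once in startTransaction, and only the result is replicated, so applyTransaction stays deterministic on every OM:

{code:java}
import java.util.concurrent.atomic.AtomicLong;

public class LeaderOnlyOpenKey {
  private static final AtomicLong CLIENT_IDS = new AtomicLong();

  static class OpenKeyResult {
    final long clientId;
    final String blockInfo;
    OpenKeyResult(long clientId, String blockInfo) {
      this.clientId = clientId;
      this.blockInfo = blockInfo;
    }
  }

  // Leader only, inside startTransaction: produce the values up front.
  static OpenKeyResult startTransaction(String volume, String key) {
    long clientId = CLIENT_IDS.incrementAndGet();          // non-deterministic
    String blockInfo = "block-for-" + volume + "/" + key;  // stands in for the SCM call
    return new OpenKeyResult(clientId, blockInfo);
  }

  // Every OM replica: write only the pre-computed, replicated result.
  static void applyTransaction(OpenKeyResult replicated) {
    System.out.println("OM DB put: clientId=" + replicated.clientId
        + ", blockInfo=" + replicated.blockInfo);
  }

  public static void main(String[] args) {
    OpenKeyResult r = startTransaction("vol1", "key1");  // on the leader
    applyTransaction(r);  // identical on leader and followers
  }
}
{code}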



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1340) Add List Containers API for Recon

2019-03-26 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-1340:
-
Status: Patch Available  (was: In Progress)

> Add List Containers API for Recon
> -
>
> Key: HDDS-1340
> URL: https://issues.apache.org/jira/browse/HDDS-1340
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Recon server should support "/containers" API that lists all the containers



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1340) Add List Containers API for Recon

2019-03-26 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-1340:
-
Description: Recon server should support "/containers" API that lists all 
the containers  (was: Recon API should support "/containers" that lists all the 
containers)

> Add List Containers API for Recon
> -
>
> Key: HDDS-1340
> URL: https://issues.apache.org/jira/browse/HDDS-1340
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Recon server should support "/containers" API that lists all the containers



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1340) Add List Containers API for Recon

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1340?focusedWorklogId=219013&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-219013
 ]

ASF GitHub Bot logged work on HDDS-1340:


Author: ASF GitHub Bot
Created on: 27/Mar/19 00:04
Start Date: 27/Mar/19 00:04
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #648: 
HDDS-1340. Add List Containers API for Recon
URL: https://github.com/apache/hadoop/pull/648
 
 
   Recon server should support "/containers" API that lists all the containers
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 219013)
Time Spent: 10m
Remaining Estimate: 0h

> Add List Containers API for Recon
> -
>
> Key: HDDS-1340
> URL: https://issues.apache.org/jira/browse/HDDS-1340
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Recon API should support "/containers" that lists all the containers



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1340) Add List Containers API for Recon

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1340:
-
Labels: pull-request-available  (was: )

> Add List Containers API for Recon
> -
>
> Key: HDDS-1340
> URL: https://issues.apache.org/jira/browse/HDDS-1340
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>
> Recon API should support "/containers" that lists all the containers



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1262) In OM HA OpenKey call Should happen only leader OM

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1262?focusedWorklogId=219011&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-219011
 ]

ASF GitHub Bot logged work on HDDS-1262:


Author: ASF GitHub Bot
Created on: 27/Mar/19 00:03
Start Date: 27/Mar/19 00:03
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #626: HDDS-1262. In OM 
HA OpenKey and initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#issuecomment-476903348
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for branch |
   | +1 | mvninstall | 979 | trunk passed |
   | +1 | compile | 91 | trunk passed |
   | +1 | checkstyle | 25 | trunk passed |
   | -1 | mvnsite | 24 | ozone-manager in trunk failed. |
   | -1 | mvnsite | 23 | integration-test in trunk failed. |
   | +1 | shadedclient | 703 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | -1 | findbugs | 22 | ozone-manager in trunk failed. |
   | +1 | javadoc | 66 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for patch |
   | -1 | mvninstall | 19 | integration-test in the patch failed. |
   | +1 | compile | 89 | the patch passed |
   | +1 | cc | 89 | the patch passed |
   | +1 | javac | 89 | the patch passed |
   | +1 | checkstyle | 20 | the patch passed |
   | -1 | mvnsite | 20 | integration-test in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 670 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 109 | the patch passed |
   | +1 | javadoc | 103 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 37 | common in the patch passed. |
   | +1 | unit | 39 | ozone-manager in the patch passed. |
   | -1 | unit | 25 | integration-test in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 3387 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/626 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
   | uname | Linux f90876374467 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ce4bafd |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/12/artifact/out/branch-mvnsite-hadoop-ozone_ozone-manager.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/12/artifact/out/branch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | findbugs | v3.1.0-RC1 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/12/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/12/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/12/artifact/out/patch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/12/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/12/testReport/ |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/12/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[jira] [Work started] (HDDS-1340) Add List Containers API for Recon

2019-03-26 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1340 started by Vivek Ratnavel Subramanian.

> Add List Containers API for Recon
> -
>
> Key: HDDS-1340
> URL: https://issues.apache.org/jira/browse/HDDS-1340
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> Recon API should support "/containers" that lists all the containers



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1262) In OM HA OpenKey call Should happen only leader OM

2019-03-26 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1262:
-
Attachment: HDDS-1262.01.patch

> In OM HA OpenKey call Should happen only leader OM
> --
>
> Key: HDDS-1262
> URL: https://issues.apache.org/jira/browse/HDDS-1262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1262.01.patch
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> In OM HA, currently, when openKey is called, applyTransaction() on all OM 
> nodes makes a call to SCM and writes the allocateBlock information into the 
> OM DB; a clientID is also generated by each OM node.
>  
> The proposed approach is:
> 1. In startTransaction, call openKey; the returned response should be used 
> to create a new OmRequest object, which is then used to set the transaction 
> context. Also modify OzoneManager and KeyManagerImpl to handle the code 
> paths with and without Ratis.
>  
> This Jira also implements HDDS-1319. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1262) In OM HA OpenKey call Should happen only leader OM

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1262?focusedWorklogId=219008&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-219008
 ]

ASF GitHub Bot logged work on HDDS-1262:


Author: ASF GitHub Bot
Created on: 26/Mar/19 23:57
Start Date: 26/Mar/19 23:57
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #626: HDDS-1262. In OM 
HA OpenKey and initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#issuecomment-476901803
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for branch |
   | +1 | mvninstall | 983 | trunk passed |
   | +1 | compile | 89 | trunk passed |
   | +1 | checkstyle | 25 | trunk passed |
   | -1 | mvnsite | 23 | ozone-manager in trunk failed. |
   | -1 | mvnsite | 23 | integration-test in trunk failed. |
   | +1 | shadedclient | 712 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | -1 | findbugs | 21 | ozone-manager in trunk failed. |
   | +1 | javadoc | 68 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for patch |
   | -1 | mvninstall | 19 | integration-test in the patch failed. |
   | +1 | compile | 93 | the patch passed |
   | +1 | cc | 93 | the patch passed |
   | +1 | javac | 93 | the patch passed |
   | +1 | checkstyle | 22 | the patch passed |
   | -1 | mvnsite | 20 | integration-test in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 665 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 102 | the patch passed |
   | -1 | javadoc | 28 | hadoop-ozone_common generated 1 new + 1 unchanged - 0 
fixed = 2 total (was 1) |
   ||| _ Other Tests _ |
   | +1 | unit | 31 | common in the patch passed. |
   | +1 | unit | 38 | ozone-manager in the patch passed. |
   | -1 | unit | 22 | integration-test in the patch failed. |
   | +1 | asflicense | 22 | The patch does not generate ASF License warnings. |
   | | | 3329 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/626 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
   | uname | Linux db25df154125 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ce4bafd |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/11/artifact/out/branch-mvnsite-hadoop-ozone_ozone-manager.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/11/artifact/out/branch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | findbugs | v3.1.0-RC1 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/11/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/11/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/11/artifact/out/patch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/11/artifact/out/diff-javadoc-javadoc-hadoop-ozone_common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/11/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/11/testReport/ |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/11/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.

[jira] [Commented] (HDFS-13699) Add DFSClient sending handshake token to DataNode, and allow DataNode overwrite downstream QOP

2019-03-26 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802284#comment-16802284
 ] 

Konstantin Shvachko commented on HDFS-13699:


Thanks, [~vagarychen] for the patch. So my main question is
 * What is the upgrade story here? During the upgrade from the old version to 
the new one, we will have old and new clients and DNs communicating with each 
other; some of them will have {{SaslMessageWithHandshake}} and some won't.

Other comments below:
 # SaslDataTransferServer
 ** Should use {{assert :}} instead of
{{Preconditions.checkArgument(secret != null && bpid != null);}} (a minimal 
sketch follows below)
 ** {{"HmacSHA1"}} should either reuse {{DEFAULT_HMAC_ALGORITHM}} or be defined 
as a local constant
 # Unused imports: {{DataXceiver}}, {{TestMultipleNNPortQOP}}
 # Naming of the constants for the new config properties is not in sync with 
the property names. They should also move to the right section in 
{{DFSConfigKeys}}.
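For illustration, a minimal sketch of the first suggestion (the surrounding method is hypothetical; only the checked condition comes from the quoted code):

{code:java}
class SaslPreconditionExample {
  void handle(byte[] secret, String bpid) {
    // Suggested: an assert with a message (enabled with -ea in test JVMs)
    // instead of Preconditions.checkArgument(secret != null && bpid != null);
    assert secret != null && bpid != null : "secret and bpid must be non-null";
  }
}
{code}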

> Add DFSClient sending handshake token to DataNode, and allow DataNode 
> overwrite downstream QOP
> --
>
> Key: HDFS-13699
> URL: https://issues.apache.org/jira/browse/HDFS-13699
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13699.001.patch, HDFS-13699.002.patch, 
> HDFS-13699.003.patch, HDFS-13699.004.patch, HDFS-13699.005.patch, 
> HDFS-13699.006.patch, HDFS-13699.007.patch, HDFS-13699.WIP.001.patch
>
>
> Given the other Jiras under HDFS-13541, this Jira is to allow DFSClient to 
> redirect the encrypt secret to DataNode. The encrypted message is the QOP 
> that client and NameNode have used. DataNode decrypts the message and enforce 
> the QOP for the client connection. Also, this Jira will also include 
> overwriting downstream QOP, as mentioned in the HDFS-13541 design doc. 
> Namely, this is to allow inter-DN QOP that is different from client-DN QOP.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14348) Fix JNI exception handling issues in libhdfs

2019-03-26 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802282#comment-16802282
 ] 

Hudson commented on HDFS-14348:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16288 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16288/])
HDFS-14348: Fix JNI exception handling issues in libhdfs (todd: rev 
fe29b3901be1b06db92379c7b7fac4954253e6e2)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/os/posix/thread_local_storage.c
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c


> Fix JNI exception handling issues in libhdfs
> 
>
> Key: HDFS-14348
> URL: https://issues.apache.org/jira/browse/HDFS-14348
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
>
> During some manual digging through the libhdfs code, we found several places 
> where we are not handling exceptions properly.
> Specifically, there seem to be some violations of the following snippet from 
> the JNI Oracle docs 
> (https://docs.oracle.com/javase/8/docs/technotes/guides/jni/spec/design.html#exceptions_and_error_codes):
> {quote}
> *Exceptions and Error Codes*
> Certain JNI functions use the Java exception mechanism to report error 
> conditions. In most cases, JNI functions report error conditions by returning 
> an error code and throwing a Java exception. The error code is usually a 
> special return value (such as NULL) that is outside of the range of normal 
> return values. Therefore, the programmer can quickly check the return value 
> of the last JNI call to determine if an error has occurred, and call a 
> function, ExceptionOccurred(), to obtain the exception object that contains a 
> more detailed description of the error condition.
> There are two cases where the programmer needs to check for exceptions 
> without being able to first check an error code:
> [1] The JNI functions that invoke a Java method return the result of the Java 
> method. The programmer must call ExceptionOccurred() to check for possible 
> exceptions that occurred during the execution of the Java method.
> [2] Some of the JNI array access functions do not return an error code, but 
> may throw an ArrayIndexOutOfBoundsException or ArrayStoreException.
> In all other cases, a non-error return value guarantees that no exceptions 
> have been thrown.
> {quote}
> Here is a running list of issues:
> * {{classNameOfObject}} in {{jni_helper.c}} calls {{CallObjectMethod}} but 
> does not check whether an exception has occurred; it only checks whether the 
> result of the method (in this case {{Class#getName(String)}}) is {{NULL}}
> * Exception handling in {{get_current_thread_id}} (both 
> {{posix/thread_local_storage.c}} and {{windows/thread_local_storage.c}}) 
> seems to have several issues; lots of JNI methods are called without checking 
> for exceptions
> * Most of the calls to {{GetObjectArrayElement}} and {{GetByteArrayRegion}} 
> in {{hdfs.c}} do not check for exceptions properly
> ** e.g. for {{GetObjectArrayElement}} they only check if the result of the 
> operation is {{NULL}}, but they should call {{ExceptionOccurred}} to look for 
> pending exceptions as well
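For illustration, a minimal sketch (a hypothetical helper, not the committed fix) of the exception-checking pattern the spec calls for after invoking a Java method; the pending-exception state must be consulted, not just the return value:

{code:c}
#include <jni.h>

/* Hypothetical helper: invoke Class#getName() and return the result only
 * if no exception is pending. midGetName is assumed to be a valid
 * jmethodID for java/lang/Class#getName(). */
static jstring callGetNameChecked(JNIEnv *env, jobject klass,
                                  jmethodID midGetName)
{
    jstring name = (jstring)(*env)->CallObjectMethod(env, klass, midGetName);
    if ((*env)->ExceptionCheck(env)) {
        /* Per the JNI spec, the return value of a method invocation is
         * meaningless when an exception is pending, so this check must
         * come first, not a NULL test on the result. */
        (*env)->ExceptionDescribe(env);  /* log the pending exception */
        (*env)->ExceptionClear(env);     /* clear before further JNI calls */
        return NULL;
    }
    return name;
}
{code}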



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-139) Output of createVolume can be improved

2019-03-26 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDDS-139:

Attachment: HDDS-139.001.patch

> Output of createVolume can be improved
> --
>
> Key: HDDS-139
> URL: https://issues.apache.org/jira/browse/HDDS-139
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.2.1
>Reporter: Arpit Agarwal
>Assignee: Shweta
>Priority: Major
>  Labels: newbie, usability
> Attachments: HDDS-139.001.patch
>
>
> The output of {{createVolume}} includes a huge number (1 Exabyte) when the 
> quota is not specified. This number could either be rendered in a friendly 
> format or omitted when the user did not use the {{-quota}} option.
> {code:java}
>     2018-05-31 20:35:56 INFO  RpcClient:210 - Creating Volume: vol2, with 
> hadoop as owner and quota set to 1152921504606846976 bytes.{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1262) In OM HA OpenKey call Should happen only leader OM

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1262?focusedWorklogId=218990&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218990
 ]

ASF GitHub Bot logged work on HDDS-1262:


Author: ASF GitHub Bot
Created on: 26/Mar/19 22:54
Start Date: 26/Mar/19 22:54
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #626: HDDS-1262. In OM 
HA OpenKey and initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#issuecomment-476884407
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for branch |
   | +1 | mvninstall | 991 | trunk passed |
   | +1 | compile | 97 | trunk passed |
   | +1 | checkstyle | 30 | trunk passed |
   | -1 | mvnsite | 29 | integration-test in trunk failed. |
   | +1 | shadedclient | 768 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 96 | trunk passed |
   | +1 | javadoc | 79 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for patch |
   | -1 | mvninstall | 22 | integration-test in the patch failed. |
   | +1 | compile | 94 | the patch passed |
   | +1 | cc | 94 | the patch passed |
   | +1 | javac | 94 | the patch passed |
   | +1 | checkstyle | 24 | the patch passed |
   | -1 | mvnsite | 23 | integration-test in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 722 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 112 | the patch passed |
   | -1 | javadoc | 34 | hadoop-ozone_common generated 1 new + 1 unchanged - 0 
fixed = 2 total (was 1) |
   ||| _ Other Tests _ |
   | +1 | unit | 35 | common in the patch passed. |
   | +1 | unit | 42 | ozone-manager in the patch passed. |
   | -1 | unit | 26 | integration-test in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 3572 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/626 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
   | uname | Linux 7032c47b14f8 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ce4bafd |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/10/artifact/out/branch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/10/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/10/artifact/out/patch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/10/artifact/out/diff-javadoc-javadoc-hadoop-ozone_common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/10/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/10/testReport/ |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/10/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
--

[jira] [Work logged] (HDDS-1262) In OM HA OpenKey call Should happen only leader OM

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1262?focusedWorklogId=218972&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218972
 ]

ASF GitHub Bot logged work on HDDS-1262:


Author: ASF GitHub Bot
Created on: 26/Mar/19 22:17
Start Date: 26/Mar/19 22:17
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on issue #626: HDDS-1262. In OM 
HA OpenKey and initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#issuecomment-476874883
 
 
   LGTM. +1 pending Jenkins/ CI.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218972)
Time Spent: 4h  (was: 3h 50m)

> In OM HA OpenKey call Should happen only leader OM
> --
>
> Key: HDDS-1262
> URL: https://issues.apache.org/jira/browse/HDDS-1262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> In OM HA, currently, when openKey is called, applyTransaction() on all OM 
> nodes makes a call to SCM and writes the allocateBlock information into the 
> OM DB; a clientID is also generated by each OM node.
>  
> The proposed approach is:
> 1. In startTransaction, call openKey; the returned response should be used 
> to create a new OmRequest object, which is then used to set the transaction 
> context. Also modify OzoneManager and KeyManagerImpl to handle the code 
> paths with and without Ratis.
>  
> This Jira also implements HDDS-1319. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-961) Send command execution metrics from Datanode to SCM

2019-03-26 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802223#comment-16802223
 ] 

Nanda kumar commented on HDDS-961:
--

[~swagle], +1 on reporting the metrics from datanode itself.
{quote}Alternatively, could we include completion time in the CommandStatus 
object? Should we create something like SCMCommandMetrics per command type and 
it stores these global counters for the average completion time of the specific 
command?
{quote}
We don't send the status of all commands to SCM, and there is also a 
proposal (HDDS-895) to remove the command watcher from Replication Manager (we 
won't need CommandStatus for the Replicate and Delete commands if this is 
done). After these changes, the number of commands for which a datanode sends 
its status to SCM will be very small (only the Delete Block command). Maintaining 
a global counter in SCM for just one command type feels like overkill. The 
metrics exposed from the datanode should be sufficient.

 

I will change the Jira summary and the description.

> Send command execution metrics from Datanode to SCM
> ---
>
> Key: HDDS-961
> URL: https://issues.apache.org/jira/browse/HDDS-961
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, SCM
>Reporter: Nanda kumar
>Priority: Major
>
> The CommandHandlers in datanode calculates and tracks the time taken to 
> execute each command that is sent by SCM. It would be nice to report these 
> values to SCM so that we can build average time, std dev etc for those 
> operations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1340) Add List Containers API for Recon

2019-03-26 Thread Vivek Ratnavel Subramanian (JIRA)
Vivek Ratnavel Subramanian created HDDS-1340:


 Summary: Add List Containers API for Recon
 Key: HDDS-1340
 URL: https://issues.apache.org/jira/browse/HDDS-1340
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Recon
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


Recon API should support "/containers" that lists all the containers



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1262) In OM HA OpenKey call Should happen only leader OM

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1262?focusedWorklogId=218961&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218961
 ]

ASF GitHub Bot logged work on HDDS-1262:


Author: ASF GitHub Bot
Created on: 26/Mar/19 21:58
Start Date: 26/Mar/19 21:58
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #626: 
HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only 
leader OM.
URL: https://github.com/apache/hadoop/pull/626#discussion_r269330512
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
 ##
 @@ -181,6 +297,17 @@ private TransactionContext handleAllocateBlock(
 
   }
 
+  /**
+   * Construct IOException message for failed requests in StartTransaction.
+   * @param omResponse
+   * @return
+   */
+  private IOException constructExceptionForFailedRequest(
+  OMResponse omResponse) {
+return new IOException(omResponse.getMessage() + " " +
+STATUS_CODE + omResponse.getStatus());
+  }
 
 Review comment:
   I tried that way; since this gets converted to an IOException somewhere on 
the Ratis end, I am not able to do that. Initially I tried the approach you 
suggested and found that it does not work, because from Ratis we get a 
StateMachineException.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218961)
Time Spent: 3h 50m  (was: 3h 40m)

> In OM HA OpenKey call Should happen only leader OM
> --
>
> Key: HDDS-1262
> URL: https://issues.apache.org/jira/browse/HDDS-1262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> In OM HA, currently, when openKey is called, applyTransaction() on all OM 
> nodes makes a call to SCM and writes the allocateBlock information into the 
> OM DB; a clientID is also generated by each OM node.
>  
> The proposed approach is:
> 1. In startTransaction, call openKey; the returned response should be used 
> to create a new OmRequest object, which is then used to set the transaction 
> context. Also modify OzoneManager and KeyManagerImpl to handle the code 
> paths with and without Ratis.
>  
> This Jira also implements HDDS-1319. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1105) Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone Manager.

2019-03-26 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1105:

Summary: Add mechanism in Recon to obtain DB snapshot 'delta' updates from 
Ozone Manager.  (was: Create an OM API that takes in a RocksDB sequence number 
and attempts to return all transactions after that.)

> Add mechanism in Recon to obtain DB snapshot 'delta' updates from Ozone 
> Manager.
> 
>
> Key: HDDS-1105
> URL: https://issues.apache.org/jira/browse/HDDS-1105
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>
> *Some context*
> The FSCK server will periodically invoke this OM API, passing in the most 
> recent sequence number of its own RocksDB instance. The OM will use the 
> RocksDB getUpdatesSince() API to answer this query. Since getUpdatesSince() 
> only works against the RocksDB WAL, we have to configure the OM RocksDB WAL 
> (https://github.com/facebook/rocksdb/wiki/Write-Ahead-Log) with a sufficient 
> max size to make this API useful. If the OM cannot return all transactions 
> since the given sequence number (due to WAL flushing), it can error out. In 
> that case the FSCK server can fall back to fetching the entire checkpoint 
> snapshot implemented in HDDS-1085.
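For illustration, a minimal sketch (path and sequence number hypothetical) of the delta mechanism using RocksJava's {{getUpdatesSince()}}, which iterates WAL entries starting at a given sequence number; the catch branch models the fallback described above:

{code:java}
import org.rocksdb.*;

public class OmDeltaReader {
  public static void main(String[] args) throws RocksDBException {
    RocksDB.loadLibrary();
    try (Options opts = new Options().setCreateIfMissing(true);
         RocksDB db = RocksDB.open(opts, "/tmp/om-db")) {
      long lastSeq = 42L;  // Recon's own most recent sequence number
      try (TransactionLogIterator it = db.getUpdatesSince(lastSeq)) {
        while (it.isValid()) {
          TransactionLogIterator.BatchResult batch = it.getBatch();
          System.out.println("replaying batch at seq=" + batch.sequenceNumber());
          it.next();
        }
      } catch (RocksDBException e) {
        // The WAL may already be flushed past lastSeq; fall back to the
        // full checkpoint snapshot from HDDS-1085.
        System.err.println("Delta unavailable, falling back: " + e.getMessage());
      }
    }
  }
}
{code}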



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1262) In OM HA OpenKey call Should happen only leader OM

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1262?focusedWorklogId=218958&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218958
 ]

ASF GitHub Bot logged work on HDDS-1262:


Author: ASF GitHub Bot
Created on: 26/Mar/19 21:55
Start Date: 26/Mar/19 21:55
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #626: 
HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only 
leader OM.
URL: https://github.com/apache/hadoop/pull/626#discussion_r269330114
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -2474,6 +2524,28 @@ public String getOzoneBucketMapping(String s3BucketName)
 }
   }
 
+
+  @Override
+  public OmMultipartInfo applyInitiateMultipartUpload(OmKeyArgs keyArgs,
+  String multipartUploadID) throws IOException {
+OmMultipartInfo multipartInfo;
+metrics.incNumInitiateMultipartUploads();
+try {
+  multipartInfo = keyManager.applyInitiateMultipartUpload(keyArgs,
+  multipartUploadID);
+  AUDIT.logWriteSuccess(buildAuditMessageForSuccess(
+  OMAction.INITIATE_MULTIPART_UPLOAD, (keyArgs == null) ? null :
+  keyArgs.toAuditMap()));
+} catch (IOException ex) {
+  AUDIT.logWriteFailure(buildAuditMessageForFailure(
+  OMAction.INITIATE_MULTIPART_UPLOAD,
+  (keyArgs == null) ? null : keyArgs.toAuditMap(), ex));
+  metrics.incNumInitiateMultipartUploadFails();
 
 Review comment:
   In the HA case, initiateMultipartUpload will not be called, so it will not be 
updated twice.
   In startTransaction, we are not calling initiateMultipartUpload. (It 
generates a random id as the multipartUploadID.) This is the reason for not 
having a new type here.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218958)
Time Spent: 3.5h  (was: 3h 20m)

> In OM HA OpenKey call Should happen only leader OM
> --
>
> Key: HDDS-1262
> URL: https://issues.apache.org/jira/browse/HDDS-1262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> In OM HA, currently, when openKey is called, applyTransaction() on all OM 
> nodes makes a call to SCM and writes the allocateBlock information into the 
> OM DB; a clientID is also generated by each OM node.
>  
> The proposed approach is:
> 1. In startTransaction, call openKey; the returned response should be used 
> to create a new OmRequest object, which is then used to set the transaction 
> context. Also modify OzoneManager and KeyManagerImpl to handle the code 
> paths with and without Ratis.
>  
> This Jira also implements HDDS-1319. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1262) In OM HA OpenKey call Should happen only leader OM

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1262?focusedWorklogId=218957&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218957
 ]

ASF GitHub Bot logged work on HDDS-1262:


Author: ASF GitHub Bot
Created on: 26/Mar/19 21:55
Start Date: 26/Mar/19 21:55
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #626: 
HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only 
leader OM.
URL: https://github.com/apache/hadoop/pull/626#discussion_r269330114
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -2474,6 +2524,28 @@ public String getOzoneBucketMapping(String s3BucketName)
 }
   }
 
+
+  @Override
+  public OmMultipartInfo applyInitiateMultipartUpload(OmKeyArgs keyArgs,
+  String multipartUploadID) throws IOException {
+OmMultipartInfo multipartInfo;
+metrics.incNumInitiateMultipartUploads();
+try {
+  multipartInfo = keyManager.applyInitiateMultipartUpload(keyArgs,
+  multipartUploadID);
+  AUDIT.logWriteSuccess(buildAuditMessageForSuccess(
+  OMAction.INITIATE_MULTIPART_UPLOAD, (keyArgs == null) ? null :
+  keyArgs.toAuditMap()));
+} catch (IOException ex) {
+  AUDIT.logWriteFailure(buildAuditMessageForFailure(
+  OMAction.INITIATE_MULTIPART_UPLOAD,
+  (keyArgs == null) ? null : keyArgs.toAuditMap(), ex));
+  metrics.incNumInitiateMultipartUploadFails();
 
 Review comment:
   In the HA case, initiateMultipartUpload will not be called, so it will not be 
updated twice.
   In startTransaction, we are not calling initiateMultipartUpload. (It 
generates a random id as the multipartUploadID.)
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218957)
Time Spent: 3h 20m  (was: 3h 10m)

> In OM HA OpenKey call Should happen only leader OM
> --
>
> Key: HDDS-1262
> URL: https://issues.apache.org/jira/browse/HDDS-1262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> In OM HA, currently, when openKey is called, applyTransaction() on all OM 
> nodes makes a call to SCM and writes the allocateBlock information into the 
> OM DB; a clientID is also generated by each OM node.
>  
> The proposed approach is:
> 1. In startTransaction, call openKey; the returned response should be used 
> to create a new OmRequest object, which is then used to set the transaction 
> context. Also modify OzoneManager and KeyManagerImpl to handle the code 
> paths with and without Ratis.
>  
> This Jira also implements HDDS-1319. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1262) In OM HA OpenKey call Should happen only leader OM

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1262?focusedWorklogId=218959&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218959
 ]

ASF GitHub Bot logged work on HDDS-1262:


Author: ASF GitHub Bot
Created on: 26/Mar/19 21:55
Start Date: 26/Mar/19 21:55
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #626: 
HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only 
leader OM.
URL: https://github.com/apache/hadoop/pull/626#discussion_r269330512
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
 ##
 @@ -181,6 +297,17 @@ private TransactionContext handleAllocateBlock(
 
   }
 
+  /**
+   * Construct IOException message for failed requests in StartTransaction.
+   * @param omResponse
+   * @return
+   */
+  private IOException constructExceptionForFailedRequest(
+  OMResponse omResponse) {
+return new IOException(omResponse.getMessage() + " " +
+STATUS_CODE + omResponse.getStatus());
+  }
 
 Review comment:
   I tried that way; since this gets converted to an IOException somewhere on 
the Ratis end, I am not able to do that. Initially I tried the approach you 
suggested and found that it does not work.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218959)
Time Spent: 3h 40m  (was: 3.5h)

> In OM HA OpenKey call Should happen only leader OM
> --
>
> Key: HDDS-1262
> URL: https://issues.apache.org/jira/browse/HDDS-1262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> In OM HA, currently, when openKey is called, applyTransaction() on all OM 
> nodes makes a call to SCM and writes the allocateBlock information into the 
> OM DB; a clientID is also generated by each OM node.
>  
> The proposed approach is:
> 1. In startTransaction, call openKey; the returned response should be used 
> to create a new OmRequest object, which is then used to set the transaction 
> context. Also modify OzoneManager and KeyManagerImpl to handle the code 
> paths with and without Ratis.
>  
> This Jira also implements HDDS-1319. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1262) In OM HA OpenKey call Should happen only leader OM

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1262?focusedWorklogId=218956&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218956
 ]

ASF GitHub Bot logged work on HDDS-1262:


Author: ASF GitHub Bot
Created on: 26/Mar/19 21:54
Start Date: 26/Mar/19 21:54
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #626: 
HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only 
leader OM.
URL: https://github.com/apache/hadoop/pull/626#discussion_r269331803
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -1985,6 +1990,51 @@ public OpenKeySession openKey(OmKeyArgs args) throws 
IOException {
 }
   }
 
+  @Override
+  public void applyOpenKey(KeyArgs omKeyArgs, KeyInfo keyInfo, long clientID)
+      throws IOException {
+    // Do we need to check again Acl's for apply OpenKey call?
+    if (isAclEnabled) {
+      checkAcls(ResourceType.KEY, StoreType.OZONE, ACLType.READ,
+          omKeyArgs.getVolumeName(), omKeyArgs.getBucketName(),
+          omKeyArgs.getKeyName());
+    }
+    boolean auditSuccess = true;
+    try {
+      keyManager.applyOpenKey(omKeyArgs, keyInfo, clientID);
+    } catch (Exception ex) {
+      metrics.incNumKeyAllocateFails();
+      auditSuccess = false;
+      AUDIT.logWriteFailure(buildAuditMessageForFailure(
+          OMAction.APPLY_ALLOCATE_KEY,
+          (omKeyArgs == null) ? null : toAuditMap(omKeyArgs), ex));
+      throw ex;
+    } finally {
+      if (auditSuccess) {
+        AUDIT.logWriteSuccess(buildAuditMessageForSuccess(
+            OMAction.ALLOCATE_KEY, (omKeyArgs == null) ? null :
 
 Review comment:
   Done
 



Issue Time Tracking
---

Worklog Id: (was: 218956)
Time Spent: 3h 10m  (was: 3h)

> In OM HA OpenKey call Should happen only leader OM
> --
>
> Key: HDDS-1262
> URL: https://issues.apache.org/jira/browse/HDDS-1262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> In OM HA, currently, when openKey is called, applyTransaction() on all OM 
> nodes makes a call to SCM and writes the allocateBlock information into the 
> OM DB; a clientID is also generated independently by each OM node.
>  
> The proposed approach is:
> 1. In startTransaction, call openKey; the response returned should be used 
> to create a new OmRequest object, which is then used to set the transaction 
> context. Also modify OzoneManager and KeyManagerImpl to handle the code 
> paths with and without Ratis.
>  
> This Jira also implements HDDS-1319. 






[jira] [Work started] (HDDS-1330) Add a docker compose for Ozone deployment with Recon.

2019-03-26 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1330 started by Aravindan Vijayan.
---
> Add a docker compose for Ozone deployment with Recon.
> -
>
> Key: HDDS-1330
> URL: https://issues.apache.org/jira/browse/HDDS-1330
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
>
> * Add a docker compose for Ozone deployment with Recon.
> * Test out Recon container key service. 






[jira] [Updated] (HDDS-1330) Add a docker compose for Ozone deployment with Recon.

2019-03-26 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1330:

Summary: Add a docker compose for Ozone deployment with Recon.  (was: Test 
out Recon Container service endpoint.)

> Add a docker compose for Ozone deployment with Recon.
> -
>
> Key: HDDS-1330
> URL: https://issues.apache.org/jira/browse/HDDS-1330
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
>
> * Add a docker compose for Ozone deployment with Recon.
> * Test out Recon container key service. 






[jira] [Updated] (HDDS-1260) Create Recon Server lifecycle integration with Ozone.

2019-03-26 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-1260:
-
Status: Patch Available  (was: Open)

> Create Recon Server lifecycle integration with Ozone.
> 
>
> Key: HDDS-1260
> URL: https://issues.apache.org/jira/browse/HDDS-1260
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> * Create the lifecycle scripts (start/stop) for Recon Server, along with a 
> shell interface like the other components.
>  * Verify that configurations are picked up by the Recon Server on startup.






[jira] [Work logged] (HDDS-1262) In OM HA OpenKey call Should happen only leader OM

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1262?focusedWorklogId=218950&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218950
 ]

ASF GitHub Bot logged work on HDDS-1262:


Author: ASF GitHub Bot
Created on: 26/Mar/19 21:50
Start Date: 26/Mar/19 21:50
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #626: 
HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only 
leader OM.
URL: https://github.com/apache/hadoop/pull/626#discussion_r269330512
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
 ##
 @@ -181,6 +297,17 @@ private TransactionContext handleAllocateBlock(
 
   }
 
+  /**
+   * Construct IOException message for failed requests in StartTransaction.
+   * @param omResponse
+   * @return
+   */
+  private IOException constructExceptionForFailedRequest(
+      OMResponse omResponse) {
+    return new IOException(omResponse.getMessage() + " " +
+        STATUS_CODE + omResponse.getStatus());
+  }
 
 Review comment:
  I tried that way; since this is converted to an IOException somewhere on 
the Ratis end, I am not able to do that. I initially tried the approach you 
suggested and found that it does not work.
 



Issue Time Tracking
---

Worklog Id: (was: 218950)
Time Spent: 3h  (was: 2h 50m)

> In OM HA OpenKey call Should happen only leader OM
> --
>
> Key: HDDS-1262
> URL: https://issues.apache.org/jira/browse/HDDS-1262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> In OM HA, currently, when openKey is called, applyTransaction() on all OM 
> nodes makes a call to SCM and writes the allocateBlock information into the 
> OM DB; a clientID is also generated independently by each OM node.
>  
> The proposed approach is:
> 1. In startTransaction, call openKey; the response returned should be used 
> to create a new OmRequest object, which is then used to set the transaction 
> context. Also modify OzoneManager and KeyManagerImpl to handle the code 
> paths with and without Ratis.
>  
> This Jira also implements HDDS-1319. 






[jira] [Work logged] (HDDS-1262) In OM HA OpenKey call Should happen only leader OM

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1262?focusedWorklogId=218947&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218947
 ]

ASF GitHub Bot logged work on HDDS-1262:


Author: ASF GitHub Bot
Created on: 26/Mar/19 21:49
Start Date: 26/Mar/19 21:49
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #626: 
HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only 
leader OM.
URL: https://github.com/apache/hadoop/pull/626#discussion_r269330114
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -2474,6 +2524,28 @@ public String getOzoneBucketMapping(String s3BucketName)
 }
   }
 
+
+  @Override
+  public OmMultipartInfo applyInitiateMultipartUpload(OmKeyArgs keyArgs,
+      String multipartUploadID) throws IOException {
+    OmMultipartInfo multipartInfo;
+    metrics.incNumInitiateMultipartUploads();
+    try {
+      multipartInfo = keyManager.applyInitiateMultipartUpload(keyArgs,
+          multipartUploadID);
+      AUDIT.logWriteSuccess(buildAuditMessageForSuccess(
+          OMAction.INITIATE_MULTIPART_UPLOAD, (keyArgs == null) ? null :
+          keyArgs.toAuditMap()));
+    } catch (IOException ex) {
+      AUDIT.logWriteFailure(buildAuditMessageForFailure(
+          OMAction.INITIATE_MULTIPART_UPLOAD,
+          (keyArgs == null) ? null : keyArgs.toAuditMap(), ex));
+      metrics.incNumInitiateMultipartUploadFails();
 
 Review comment:
  In the HA case, initiateMultipartUpload will not be called, so it will not 
be updated twice. In startTransaction, we are not calling 
initiateMultipartUpload. (It generates a random id as the multipartUploadID.)
 



Issue Time Tracking
---

Worklog Id: (was: 218947)
Time Spent: 2h 50m  (was: 2h 40m)

> In OM HA OpenKey call Should happen only leader OM
> --
>
> Key: HDDS-1262
> URL: https://issues.apache.org/jira/browse/HDDS-1262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> In OM HA, currently, when openKey is called, applyTransaction() on all OM 
> nodes makes a call to SCM and writes the allocateBlock information into the 
> OM DB; a clientID is also generated independently by each OM node.
>  
> The proposed approach is:
> 1. In startTransaction, call openKey; the response returned should be used 
> to create a new OmRequest object, which is then used to set the transaction 
> context. Also modify OzoneManager and KeyManagerImpl to handle the code 
> paths with and without Ratis.
>  
> This Jira also implements HDDS-1319. 






[jira] [Work logged] (HDDS-1262) In OM HA OpenKey call Should happen only leader OM

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1262?focusedWorklogId=218943&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218943
 ]

ASF GitHub Bot logged work on HDDS-1262:


Author: ASF GitHub Bot
Created on: 26/Mar/19 21:46
Start Date: 26/Mar/19 21:46
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #626: 
HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only 
leader OM.
URL: https://github.com/apache/hadoop/pull/626#discussion_r268885709
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
 ##
 @@ -181,6 +297,17 @@ private TransactionContext handleAllocateBlock(
 
   }
 
+  /**
+   * Construct IOException message for failed requests in StartTransaction.
+   * @param omResponse
+   * @return
+   */
+  private IOException constructExceptionForFailedRequest(
+      OMResponse omResponse) {
+    return new IOException(omResponse.getMessage() + " " +
+        STATUS_CODE + omResponse.getStatus());
+  }
 
 Review comment:
   Instead of creating an IOException and then parsing the status code back at 
the client, can we use OMException instead? We can add the Status parameter to 
OMException.
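A rough sketch of this suggestion, assuming OMException gains a 
Status-bearing constructor (hypothetical; the actual OMException constructors 
may differ):

{code}
// Hypothetical: carry the protobuf status on the exception itself instead
// of encoding it into the message and parsing it back at the client.
public class OMException extends IOException {
  private final OzoneManagerProtocolProtos.Status status;

  public OMException(String message,
      OzoneManagerProtocolProtos.Status status) {
    super(message);
    this.status = status;
  }

  public OzoneManagerProtocolProtos.Status getStatus() {
    return status;  // client reads the code directly, no string parsing
  }
}
{code}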
 



Issue Time Tracking
---

Worklog Id: (was: 218943)
Time Spent: 2h 20m  (was: 2h 10m)

> In OM HA OpenKey call Should happen only leader OM
> --
>
> Key: HDDS-1262
> URL: https://issues.apache.org/jira/browse/HDDS-1262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> In OM HA, currently, when openKey is called, applyTransaction() on all OM 
> nodes makes a call to SCM and writes the allocateBlock information into the 
> OM DB; a clientID is also generated independently by each OM node.
>  
> The proposed approach is:
> 1. In startTransaction, call openKey; the response returned should be used 
> to create a new OmRequest object, which is then used to set the transaction 
> context. Also modify OzoneManager and KeyManagerImpl to handle the code 
> paths with and without Ratis.
>  
> This Jira also implements HDDS-1319. 






[jira] [Work logged] (HDDS-1262) In OM HA OpenKey call Should happen only leader OM

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1262?focusedWorklogId=218945&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218945
 ]

ASF GitHub Bot logged work on HDDS-1262:


Author: ASF GitHub Bot
Created on: 26/Mar/19 21:46
Start Date: 26/Mar/19 21:46
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #626: 
HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only 
leader OM.
URL: https://github.com/apache/hadoop/pull/626#discussion_r268886537
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -1985,6 +1990,51 @@ public OpenKeySession openKey(OmKeyArgs args) throws 
IOException {
 }
   }
 
+  @Override
+  public void applyOpenKey(KeyArgs omKeyArgs, KeyInfo keyInfo, long clientID)
+      throws IOException {
+    // Do we need to check again Acl's for apply OpenKey call?
+    if (isAclEnabled) {
+      checkAcls(ResourceType.KEY, StoreType.OZONE, ACLType.READ,
+          omKeyArgs.getVolumeName(), omKeyArgs.getBucketName(),
+          omKeyArgs.getKeyName());
+    }
+    boolean auditSuccess = true;
+    try {
+      keyManager.applyOpenKey(omKeyArgs, keyInfo, clientID);
+    } catch (Exception ex) {
+      metrics.incNumKeyAllocateFails();
+      auditSuccess = false;
+      AUDIT.logWriteFailure(buildAuditMessageForFailure(
+          OMAction.APPLY_ALLOCATE_KEY,
+          (omKeyArgs == null) ? null : toAuditMap(omKeyArgs), ex));
+      throw ex;
+    } finally {
+      if (auditSuccess) {
+        AUDIT.logWriteSuccess(buildAuditMessageForSuccess(
+            OMAction.ALLOCATE_KEY, (omKeyArgs == null) ? null :
 
 Review comment:
   OMAction should be APPLY_ALLOCATE_KEY. 
 



Issue Time Tracking
---

Worklog Id: (was: 218945)
Time Spent: 2h 40m  (was: 2.5h)

> In OM HA OpenKey call Should happen only leader OM
> --
>
> Key: HDDS-1262
> URL: https://issues.apache.org/jira/browse/HDDS-1262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> In OM HA, currently, when openKey is called, applyTransaction() on all OM 
> nodes makes a call to SCM and writes the allocateBlock information into the 
> OM DB; a clientID is also generated independently by each OM node.
>  
> The proposed approach is:
> 1. In startTransaction, call openKey; the response returned should be used 
> to create a new OmRequest object, which is then used to set the transaction 
> context. Also modify OzoneManager and KeyManagerImpl to handle the code 
> paths with and without Ratis.
>  
> This Jira also implements HDDS-1319. 






[jira] [Work logged] (HDDS-1262) In OM HA OpenKey call Should happen only leader OM

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1262?focusedWorklogId=218944&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218944
 ]

ASF GitHub Bot logged work on HDDS-1262:


Author: ASF GitHub Bot
Created on: 26/Mar/19 21:46
Start Date: 26/Mar/19 21:46
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #626: 
HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only 
leader OM.
URL: https://github.com/apache/hadoop/pull/626#discussion_r268887028
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -2474,6 +2524,28 @@ public String getOzoneBucketMapping(String s3BucketName)
 }
   }
 
+
+  @Override
+  public OmMultipartInfo applyInitiateMultipartUpload(OmKeyArgs keyArgs,
+      String multipartUploadID) throws IOException {
+    OmMultipartInfo multipartInfo;
+    metrics.incNumInitiateMultipartUploads();
+    try {
+      multipartInfo = keyManager.applyInitiateMultipartUpload(keyArgs,
+          multipartUploadID);
+      AUDIT.logWriteSuccess(buildAuditMessageForSuccess(
+          OMAction.INITIATE_MULTIPART_UPLOAD, (keyArgs == null) ? null :
+          keyArgs.toAuditMap()));
+    } catch (IOException ex) {
+      AUDIT.logWriteFailure(buildAuditMessageForFailure(
+          OMAction.INITIATE_MULTIPART_UPLOAD,
+          (keyArgs == null) ? null : keyArgs.toAuditMap(), ex));
+      metrics.incNumInitiateMultipartUploadFails();
 
 Review comment:
  The metrics and audit log would be updated twice with the same OMAction 
(INITIATE_MULTIPART_UPLOAD). Can we create a new OMAction for this method as 
well?
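For example, the fix could introduce a separate action constant for the 
apply path. APPLY_INITIATE_MULTIPART_UPLOAD below is illustrative, not the 
committed name:

{code}
// Illustrative sketch (the real OMAction enum also implements AuditAction):
// distinct constants keep the leader-side call and the Ratis apply path
// from double-counting the same audit action.
public enum OMAction {
  ALLOCATE_KEY,
  APPLY_ALLOCATE_KEY,
  INITIATE_MULTIPART_UPLOAD,
  APPLY_INITIATE_MULTIPART_UPLOAD  // hypothetical new constant
}
{code}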
 



Issue Time Tracking
---

Worklog Id: (was: 218944)
Time Spent: 2.5h  (was: 2h 20m)

> In OM HA OpenKey call Should happen only leader OM
> --
>
> Key: HDDS-1262
> URL: https://issues.apache.org/jira/browse/HDDS-1262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> In OM HA, currently, when openKey is called, applyTransaction() on all OM 
> nodes makes a call to SCM and writes the allocateBlock information into the 
> OM DB; a clientID is also generated independently by each OM node.
>  
> The proposed approach is:
> 1. In startTransaction, call openKey; the response returned should be used 
> to create a new OmRequest object, which is then used to set the transaction 
> context. Also modify OzoneManager and KeyManagerImpl to handle the code 
> paths with and without Ratis.
>  
> This Jira also implements HDDS-1319. 






[jira] [Work logged] (HDDS-1260) Create Recon Server lifecycle integration with Ozone.

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1260?focusedWorklogId=218939&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218939
 ]

ASF GitHub Bot logged work on HDDS-1260:


Author: ASF GitHub Bot
Created on: 26/Mar/19 21:44
Start Date: 26/Mar/19 21:44
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #643: HDDS-1260. Create 
Recon Server lifecycle integration with Ozone.
URL: https://github.com/apache/hadoop/pull/643#issuecomment-476864906
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 7 | https://github.com/apache/hadoop/pull/643 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/643 |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-643/7/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 218939)
Time Spent: 2h  (was: 1h 50m)

> Create Recon Server lifecycle integration with Ozone.
> 
>
> Key: HDDS-1260
> URL: https://issues.apache.org/jira/browse/HDDS-1260
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> * Create the lifecycle scripts (start/stop) for Recon Server, along with a 
> shell interface like the other components.
>  * Verify that configurations are picked up by the Recon Server on startup.






[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=218936&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218936
 ]

ASF GitHub Bot logged work on HDDS-1255:


Author: ASF GitHub Bot
Created on: 26/Mar/19 21:41
Start Date: 26/Mar/19 21:41
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #632: HDDS-1255. 
Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/632#discussion_r269327752
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/commonlib.robot
 ##
 @@ -35,3 +41,51 @@ Compare files
     ${checksumbefore} =    Execute    md5sum ${file1} | awk '{print $1}'
     ${checksumafter} =     Execute    md5sum ${file2} | awk '{print $1}'
     Should Be Equal    ${checksumbefore}    ${checksumafter}
+Execute AWSS3APICli
+    [Arguments]    ${command}
+    ${output} =    Execute    aws s3api --endpoint-url ${ENDPOINT_URL} ${command}
+    [return]    ${output}
+
+Execute AWSS3APICli and checkrc
+    [Arguments]    ${command}    ${expected_error_code}
+    ${output} =    Execute and checkrc    aws s3api --endpoint-url ${ENDPOINT_URL} ${command}    ${expected_error_code}
+    [return]    ${output}
+
+Execute AWSS3Cli
+    [Arguments]    ${command}
+    ${output} =    Execute    aws s3 --endpoint-url ${ENDPOINT_URL} ${command}
+    [return]    ${output}
+
+Install aws cli s3 centos
+    Execute    sudo yum install -y awscli
+
+Install aws cli s3 debian
+    Execute    sudo apt-get install -y awscli
+
+Install aws cli
+    ${rc}    ${output} =    Run And Return Rc And Output    which apt-get
+    Run Keyword if    '${rc}' == '0'    Install aws cli s3 debian
+    ${rc}    ${output} =    Run And Return Rc And Output    yum --help
+    Run Keyword if    '${rc}' == '0'    Install aws cli s3 centos
+
+Kinit test user
+    ${hostname} =    Execute    hostname
+    Set Suite Variable    ${TEST_USER}    testuser/${hostname}@EXAMPLE.COM
+    Execute    kinit -k ${TEST_USER} -t /etc/security/keytabs/testuser.keytab
+
+Setup secure credentials
+    Run Keyword    Install aws cli
+    Run Keyword    Kinit test user
+    ${result} =    Execute    ozone s3 getsecret
+    ${accessKey} =    Get Regexp Matches    ${result}    (?<=awsAccessKey=).*
+    ${secret} =    Get Regexp Matches    ${result}    (?<=awsSecret=).*
+    Execute    aws configure set default.s3.signature_version s3v4
+    Execute    aws configure set aws_access_key_id ${accessKey[0]}
+    Execute    aws configure set aws_secret_access_key ${secret[0]}
+    Execute    aws configure set region us-west-1
+
+Setup incorrect credentials for S3
 
 Review comment:
   shall we move this to commonawslib.robot?
 



Issue Time Tracking
---

Worklog Id: (was: 218936)
Time Spent: 2h  (was: 1h 50m)

> Refactor ozone acceptance test to allow run in secure mode
> --
>
> Key: HDDS-1255
> URL: https://issues.apache.org/jira/browse/HDDS-1255
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Refactor ozone acceptance test to allow run in secure mode.






[jira] [Commented] (HDFS-14359) Inherited ACL permissions masked when parent directory does not exist (mkdir -p)

2019-03-26 Thread Stephen O'Donnell (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802193#comment-16802193
 ] 

Stephen O'Donnell commented on HDFS-14359:
--

Yes, I think it would make sense to bring this back to the earlier 3.x 
branches too. I quickly checked the source for 3.0.0 and it has the same 
issue. Therefore we should be able to apply this patch to any version that 
includes HDFS-6962, since that change added the FsCreateModes class.
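For context, a simplified illustration of the pattern involved, assuming the 
FsCreateModes API introduced by HDFS-6962 (this is not the literal patch):

{code}
// When POSIX ACL inheritance is enabled, implicitly created ancestor
// directories (mkdir -p) must carry the unmasked mode as well; otherwise
// the inherited default ACL entries get clamped by the mask, which is
// exactly the symptom reported in this issue.
FsPermission masked = new FsPermission((short) 0755);
FsPermission unmasked = new FsPermission((short) 0777);
FsCreateModes modes = FsCreateModes.create(masked, unmasked);
// mkdirs should pass 'modes' (not just 'masked') for every ancestor it
// implicitly creates, so ACL inheritance can see the unmasked bits.
{code}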

> Inherited ACL permissions masked when parent directory does not exist (mkdir 
> -p)
> 
>
> Key: HDFS-14359
> URL: https://issues.apache.org/jira/browse/HDFS-14359
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14359.001.patch, HDFS-14359.002.patch, 
> HDFS-14359.003.patch
>
>
> There appears to be an issue with ACL inheritance if you 'mkdir' a directory 
> such that the parent directories need to be created (i.e. mkdir -p).
> If you have a folder /tmp2/testacls as:
> {code}
> hadoop fs -mkdir /tmp2
> hadoop fs -mkdir /tmp2/testacls
> hadoop fs -setfacl -m default:user:hive:rwx /tmp2/testacls
> hadoop fs -setfacl -m default:user:flume:rwx /tmp2/testacls
> hadoop fs -setfacl -m user:hive:rwx /tmp2/testacls
> hadoop fs -setfacl -m user:flume:rwx /tmp2/testacls
> hadoop fs -getfacl -R /tmp2/testacls
> # file: /tmp2/testacls
> # owner: kafka
> # group: supergroup
> user::rwx
> user:flume:rwx
> user:hive:rwx
> group::r-x
> mask::rwx
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> {code}
> Then create a sub-directory in it, the ACLs are as expected:
> {code}
> hadoop fs -mkdir /tmp2/testacls/dir_from_mkdir
> # file: /tmp2/testacls/dir_from_mkdir
> # owner: sodonnell
> # group: supergroup
> user::rwx
> user:flume:rwx
> user:hive:rwx
> group::r-x
> mask::rwx
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> {code}
> However if you mkdir -p a directory, the situation is not the same:
> {code}
> hadoop fs -mkdir -p /tmp2/testacls/dir_with_subdirs/sub1/sub2
> # file: /tmp2/testacls/dir_with_subdirs
> # owner: sodonnell
> # group: supergroup
> user::rwx
> user:flume:rwx#effective:r-x
> user:hive:rwx #effective:r-x
> group::r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> # file: /tmp2/testacls/dir_with_subdirs/sub1
> # owner: sodonnell
> # group: supergroup
> user::rwx
> user:flume:rwx#effective:r-x
> user:hive:rwx #effective:r-x
> group::r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> # file: /tmp2/testacls/dir_with_subdirs/sub1/sub2
> # owner: sodonnell
> # group: supergroup
> user::rwx
> user:flume:rwx
> user:hive:rwx
> group::r-x
> mask::rwx
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> {code}
> Notice that the leaf folder "sub2" is correct, but the two ancestor folders 
> have their permissions masked. I believe this is a regression from the fix 
> for HDFS-6962 with dfs.namenode.posix.acl.inheritance.enabled set to true, as 
> the code has changed significantly from the earlier 2.6 / 2.8 branch.
> I will submit a patch for this.






[jira] [Work logged] (HDDS-1318) Fix MalformedTracerStateStringException on DN logs

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1318?focusedWorklogId=218921&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218921
 ]

ASF GitHub Bot logged work on HDDS-1318:


Author: ASF GitHub Bot
Created on: 26/Mar/19 20:54
Start Date: 26/Mar/19 20:54
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #641: HDDS-1318. 
Fix MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/641#discussion_r269311177
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/tracing/StringCodec.java
 ##
 @@ -25,12 +25,15 @@
 import io.jaegertracing.internal.exceptions.TraceIdOutOfBoundException;
 import io.jaegertracing.spi.Codec;
 import io.opentracing.propagation.Format;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * A jaeger codec to save the current tracing context t a string.
 
 Review comment:
   sure, will fix it in next commit.
 



Issue Time Tracking
---

Worklog Id: (was: 218921)
Time Spent: 1h 20m  (was: 1h 10m)

> Fix MalformedTracerStateStringException on DN logs
> --
>
> Key: HDDS-1318
> URL: https://issues.apache.org/jira/browse/HDDS-1318
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> We have seen many of these warnings in DN logs. This ticket is opened to 
> track the investigation and the fix.
> {code}
> 2019-03-20 19:01:33 WARN 
> PropagationRegistry$ExceptionCatchingExtractorDecorator:60 - Error when 
> extracting SpanContext from carrier. Handling gracefully.
> io.jaegertracing.internal.exceptions.MalformedTracerStateStringException: 
> String does not match tracer state format: 
> 2c919331-9a51-4bc4-acee-df57a8dcecf0
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:42)
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:32)
>  at 
> io.jaegertracing.internal.PropagationRegistry$ExceptionCatchingExtractorDecorator.extract(PropagationRegistry.java:57)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:208)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:61)
>  at io.opentracing.util.GlobalTracer.extract(GlobalTracer.java:143)
>  at 
> org.apache.hadoop.hdds.tracing.TracingUtil.importAndCreateScope(TracingUtil.java:96)
>  at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:73)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:61)
>  at 
> org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:248)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.ratis.thirdparty.io.grpc.Contexts$ContextualizedServerCallListener.onMessage(Contexts.java:76)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.hadoop.hdds.tracing.GrpcServerInterceptor$1.onMessage(GrpcServerInterceptor.java:46)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:263)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:686)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> {code}
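The string Jaeger expects here is a four-part 
"traceId:spanId:parentSpanId:flags" token, and the UUID above clearly does 
not match, which is why extract() throws. A guard along these lines 
(illustrative only, not the committed fix) would let the codec skip such 
strings gracefully:

{code}
// Illustrative only: treat a carrier string as Jaeger trace state only
// when it has the expected "traceId:spanId:parentSpanId:flags" shape.
static boolean looksLikeJaegerState(String value) {
  return value != null && value.split(":").length == 4;
}
{code}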




[jira] [Work logged] (HDDS-1318) Fix MalformedTracerStateStringException on DN logs

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1318?focusedWorklogId=218920&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218920
 ]

ASF GitHub Bot logged work on HDDS-1318:


Author: ASF GitHub Bot
Created on: 26/Mar/19 20:54
Start Date: 26/Mar/19 20:54
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #641: HDDS-1318. 
Fix MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/641#discussion_r269311007
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
 ##
 @@ -217,7 +220,21 @@ public XceiverClientReply sendCommand(
       ContainerCommandRequestProto request, List excludeDns)
       throws IOException {
     Preconditions.checkState(HddsUtils.isReadOnly(request));
-    return sendCommandWithRetry(request, excludeDns);
+    return sendCommandWithTraceIDAndRetry(request, excludeDns);
 
 Review comment:
  Unfortunately, I'm not aware of a switch to turn tracing off globally. That 
would be a much bigger change, beyond the scope of this ticket.
 



Issue Time Tracking
---

Worklog Id: (was: 218920)
Time Spent: 1h 10m  (was: 1h)

> Fix MalformedTracerStateStringException on DN logs
> --
>
> Key: HDDS-1318
> URL: https://issues.apache.org/jira/browse/HDDS-1318
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We have seen many of these warnings in DN logs. This ticket is opened to 
> track the investigation and the fix.
> {code}
> 2019-03-20 19:01:33 WARN 
> PropagationRegistry$ExceptionCatchingExtractorDecorator:60 - Error when 
> extracting SpanContext from carrier. Handling gracefully.
> io.jaegertracing.internal.exceptions.MalformedTracerStateStringException: 
> String does not match tracer state format: 
> 2c919331-9a51-4bc4-acee-df57a8dcecf0
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:42)
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:32)
>  at 
> io.jaegertracing.internal.PropagationRegistry$ExceptionCatchingExtractorDecorator.extract(PropagationRegistry.java:57)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:208)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:61)
>  at io.opentracing.util.GlobalTracer.extract(GlobalTracer.java:143)
>  at 
> org.apache.hadoop.hdds.tracing.TracingUtil.importAndCreateScope(TracingUtil.java:96)
>  at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:73)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:61)
>  at 
> org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:248)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.ratis.thirdparty.io.grpc.Contexts$ContextualizedServerCallListener.onMessage(Contexts.java:76)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.hadoop.hdds.tracing.GrpcServerInterceptor$1.onMessage(GrpcServerInterceptor.java:46)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:263)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:686)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> {code}





[jira] [Work logged] (HDDS-1318) Fix MalformedTracerStateStringException on DN logs

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1318?focusedWorklogId=218919&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218919
 ]

ASF GitHub Bot logged work on HDDS-1318:


Author: ASF GitHub Bot
Created on: 26/Mar/19 20:53
Start Date: 26/Mar/19 20:53
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #641: HDDS-1318. 
Fix MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/641#discussion_r269310591
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
 ##
 @@ -919,13 +926,19 @@ public void testGetKey() throws Exception {
     bucket.createKey(keyName, dataStr.length());
     keyOutputStream.write(dataStr.getBytes());
     keyOutputStream.close();
+    assertFalse("put key without malformed tracing",
+        logs.getOutput().contains("MalformedTracerStateString"));
+    logs.clearOutput();
 
     String tmpPath = baseDir.getAbsolutePath() + "/testfile-"
         + UUID.randomUUID().toString();
     String[] args = new String[] {"key", "get",
         url + "/" + volumeName + "/" + bucketName + "/" + keyName,
         tmpPath};
     execute(shell, args);
+    assertFalse("get key without malformed tracing",
 
 Review comment:
  The malformed trace can easily be reproduced without the production-code 
fix when getKey is called (e.g., in this test).
 



Issue Time Tracking
---

Worklog Id: (was: 218919)
Time Spent: 1h  (was: 50m)

> Fix MalformedTracerStateStringException on DN logs
> --
>
> Key: HDDS-1318
> URL: https://issues.apache.org/jira/browse/HDDS-1318
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We have seen many of these warnings in DN logs. This ticket is opened to 
> track the investigation and the fix.
> {code}
> 2019-03-20 19:01:33 WARN 
> PropagationRegistry$ExceptionCatchingExtractorDecorator:60 - Error when 
> extracting SpanContext from carrier. Handling gracefully.
> io.jaegertracing.internal.exceptions.MalformedTracerStateStringException: 
> String does not match tracer state format: 
> 2c919331-9a51-4bc4-acee-df57a8dcecf0
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:42)
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:32)
>  at 
> io.jaegertracing.internal.PropagationRegistry$ExceptionCatchingExtractorDecorator.extract(PropagationRegistry.java:57)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:208)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:61)
>  at io.opentracing.util.GlobalTracer.extract(GlobalTracer.java:143)
>  at 
> org.apache.hadoop.hdds.tracing.TracingUtil.importAndCreateScope(TracingUtil.java:96)
>  at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:73)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:61)
>  at 
> org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:248)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.ratis.thirdparty.io.grpc.Contexts$ContextualizedServerCallListener.onMessage(Contexts.java:76)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.hadoop.hdds.tracing.GrpcServerInterceptor$1.onMessage(GrpcServerInterceptor.java:46)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:263)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:686)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java

[jira] [Work started] (HDDS-1189) Recon Aggregate DB schema and ORM

2019-03-26 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1189 started by Siddharth Wagle.
-
> Recon Aggregate DB schema and ORM
> -
>
> Key: HDDS-1189
> URL: https://issues.apache.org/jira/browse/HDDS-1189
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
>
> _Objectives_
> - Define V1 of the DB schema for the Recon service
> - The current proposal is to use jOOQ as the ORM for SQL interaction, for 
> two main reasons: a) a powerful query DSL that abstracts out SQL dialects, 
> and b) seamless code-to-schema and schema-to-code transitions, which are 
> critical for creating DDL through code and for unit testing across versions 
> of the application.
> - Add an e2e unit test suite for Recon entities, based on the design doc
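As a flavor of what the jOOQ DSL buys, a hypothetical Recon query 
(CONTAINER_HISTORY is an assumed generated table, not the real Recon schema):

{code}
// The generated table classes give a typesafe query DSL and abstract the
// underlying SQL dialect away.
DSLContext dsl = DSL.using(connection, SQLDialect.SQLITE);
Result<Record2<Long, Long>> rows = dsl
    .select(CONTAINER_HISTORY.CONTAINER_ID, CONTAINER_HISTORY.KEY_COUNT)
    .from(CONTAINER_HISTORY)
    .where(CONTAINER_HISTORY.KEY_COUNT.gt(0L))
    .fetch();
{code}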






[jira] [Commented] (HDFS-14359) Inherited ACL permissions masked when parent directory does not exist (mkdir -p)

2019-03-26 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802150#comment-16802150
 ] 

Erik Krogen commented on HDFS-14359:


Hi [~sodonnell], it looks like HDFS-6962 went into 3.0.0; should this fix be 
backported to the 3.0.x, 3.1.x, and 3.2.x lines?

> Inherited ACL permissions masked when parent directory does not exist (mkdir 
> -p)
> 
>
> Key: HDFS-14359
> URL: https://issues.apache.org/jira/browse/HDFS-14359
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14359.001.patch, HDFS-14359.002.patch, 
> HDFS-14359.003.patch
>
>
> There appears to be an issue with ACL inheritance if you 'mkdir' a directory 
> such that the parent directories need to be created (i.e. mkdir -p).
> If you have a folder /tmp2/testacls as:
> {code}
> hadoop fs -mkdir /tmp2
> hadoop fs -mkdir /tmp2/testacls
> hadoop fs -setfacl -m default:user:hive:rwx /tmp2/testacls
> hadoop fs -setfacl -m default:user:flume:rwx /tmp2/testacls
> hadoop fs -setfacl -m user:hive:rwx /tmp2/testacls
> hadoop fs -setfacl -m user:flume:rwx /tmp2/testacls
> hadoop fs -getfacl -R /tmp2/testacls
> # file: /tmp2/testacls
> # owner: kafka
> # group: supergroup
> user::rwx
> user:flume:rwx
> user:hive:rwx
> group::r-x
> mask::rwx
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> {code}
> Then create a sub-directory in it, the ACLs are as expected:
> {code}
> hadoop fs -mkdir /tmp2/testacls/dir_from_mkdir
> # file: /tmp2/testacls/dir_from_mkdir
> # owner: sodonnell
> # group: supergroup
> user::rwx
> user:flume:rwx
> user:hive:rwx
> group::r-x
> mask::rwx
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> {code}
> However if you mkdir -p a directory, the situation is not the same:
> {code}
> hadoop fs -mkdir -p /tmp2/testacls/dir_with_subdirs/sub1/sub2
> # file: /tmp2/testacls/dir_with_subdirs
> # owner: sodonnell
> # group: supergroup
> user::rwx
> user:flume:rwx#effective:r-x
> user:hive:rwx #effective:r-x
> group::r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> # file: /tmp2/testacls/dir_with_subdirs/sub1
> # owner: sodonnell
> # group: supergroup
> user::rwx
> user:flume:rwx#effective:r-x
> user:hive:rwx #effective:r-x
> group::r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> # file: /tmp2/testacls/dir_with_subdirs/sub1/sub2
> # owner: sodonnell
> # group: supergroup
> user::rwx
> user:flume:rwx
> user:hive:rwx
> group::r-x
> mask::rwx
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> {code}
> Notice that the leaf folder "sub2" is correct, but the two ancestor folders 
> have their permissions masked. I believe this is a regression from the fix 
> for HDFS-6962 with dfs.namenode.posix.acl.inheritance.enabled set to true, as 
> the code has changed significantly from the earlier 2.6 / 2.8 branch.
> I will submit a patch for this.






[jira] [Work logged] (HDDS-1318) Fix MalformedTracerStateStringException on DN logs

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1318?focusedWorklogId=218899&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218899
 ]

ASF GitHub Bot logged work on HDDS-1318:


Author: ASF GitHub Bot
Created on: 26/Mar/19 20:09
Start Date: 26/Mar/19 20:09
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #641: HDDS-1318. Fix 
MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/641#discussion_r269293268
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
 ##
 @@ -217,7 +220,21 @@ public XceiverClientReply sendCommand(
       ContainerCommandRequestProto request, List excludeDns)
       throws IOException {
     Preconditions.checkState(HddsUtils.isReadOnly(request));
-    return sendCommandWithRetry(request, excludeDns);
+    return sendCommandWithTraceIDAndRetry(request, excludeDns);
 
 Review comment:
  Shall we do this only when tracing is enabled?
 



Issue Time Tracking
---

Worklog Id: (was: 218899)
Time Spent: 50m  (was: 40m)

> Fix MalformedTracerStateStringException on DN logs
> --
>
> Key: HDDS-1318
> URL: https://issues.apache.org/jira/browse/HDDS-1318
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> We have seen many of these warnings in DN logs. This ticket is opened to 
> track the investigation and the fix.
> {code}
> 2019-03-20 19:01:33 WARN 
> PropagationRegistry$ExceptionCatchingExtractorDecorator:60 - Error when 
> extracting SpanContext from carrier. Handling gracefully.
> io.jaegertracing.internal.exceptions.MalformedTracerStateStringException: 
> String does not match tracer state format: 
> 2c919331-9a51-4bc4-acee-df57a8dcecf0
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:42)
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:32)
>  at 
> io.jaegertracing.internal.PropagationRegistry$ExceptionCatchingExtractorDecorator.extract(PropagationRegistry.java:57)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:208)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:61)
>  at io.opentracing.util.GlobalTracer.extract(GlobalTracer.java:143)
>  at 
> org.apache.hadoop.hdds.tracing.TracingUtil.importAndCreateScope(TracingUtil.java:96)
>  at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:73)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:61)
>  at 
> org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:248)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.ratis.thirdparty.io.grpc.Contexts$ContextualizedServerCallListener.onMessage(Contexts.java:76)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.hadoop.hdds.tracing.GrpcServerInterceptor$1.onMessage(GrpcServerInterceptor.java:46)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:263)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:686)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> {code}




[jira] [Work logged] (HDDS-1318) Fix MalformedTracerStateStringException on DN logs

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1318?focusedWorklogId=218898&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218898
 ]

ASF GitHub Bot logged work on HDDS-1318:


Author: ASF GitHub Bot
Created on: 26/Mar/19 20:09
Start Date: 26/Mar/19 20:09
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #641: HDDS-1318. Fix 
MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/641#discussion_r269292509
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
 ##
 @@ -919,13 +926,19 @@ public void testGetKey() throws Exception {
     bucket.createKey(keyName, dataStr.length());
     keyOutputStream.write(dataStr.getBytes());
     keyOutputStream.close();
+    assertFalse("put key without malformed tracing",
+        logs.getOutput().contains("MalformedTracerStateString"));
+    logs.clearOutput();
 
     String tmpPath = baseDir.getAbsolutePath() + "/testfile-"
         + UUID.randomUUID().toString();
     String[] args = new String[] {"key", "get",
         url + "/" + volumeName + "/" + bucketName + "/" + keyName,
         tmpPath};
     execute(shell, args);
+    assertFalse("get key without malformed tracing",
 
 Review comment:
   Shall we check the case when it is malformed?
 



Issue Time Tracking
---

Worklog Id: (was: 218898)
Time Spent: 40m  (was: 0.5h)

> Fix MalformedTracerStateStringException on DN logs
> --
>
> Key: HDDS-1318
> URL: https://issues.apache.org/jira/browse/HDDS-1318
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Have seen many warnings on DN logs. This ticket is opened to track the 
> investigation and fix for this.
> {code}
> 2019-03-20 19:01:33 WARN 
> PropagationRegistry$ExceptionCatchingExtractorDecorator:60 - Error when 
> extracting SpanContext from carrier. Handling gracefully.
> io.jaegertracing.internal.exceptions.MalformedTracerStateStringException: 
> String does not match tracer state format: 
> 2c919331-9a51-4bc4-acee-df57a8dcecf0
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:42)
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:32)
>  at 
> io.jaegertracing.internal.PropagationRegistry$ExceptionCatchingExtractorDecorator.extract(PropagationRegistry.java:57)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:208)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:61)
>  at io.opentracing.util.GlobalTracer.extract(GlobalTracer.java:143)
>  at 
> org.apache.hadoop.hdds.tracing.TracingUtil.importAndCreateScope(TracingUtil.java:96)
>  at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:73)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:61)
>  at 
> org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:248)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.ratis.thirdparty.io.grpc.Contexts$ContextualizedServerCallListener.onMessage(Contexts.java:76)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.hadoop.hdds.tracing.GrpcServerInterceptor$1.onMessage(GrpcServerInterceptor.java:46)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:263)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:686)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> {code}

[jira] [Work logged] (HDDS-1318) Fix MalformedTracerStateStringException on DN logs

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1318?focusedWorklogId=218895&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218895
 ]

ASF GitHub Bot logged work on HDDS-1318:


Author: ASF GitHub Bot
Created on: 26/Mar/19 20:06
Start Date: 26/Mar/19 20:06
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #641: HDDS-1318. Fix 
MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/641#discussion_r269291995
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/tracing/StringCodec.java
 ##
 @@ -25,12 +25,15 @@
 import io.jaegertracing.internal.exceptions.TraceIdOutOfBoundException;
 import io.jaegertracing.spi.Codec;
 import io.opentracing.propagation.Format;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * A jaeger codec to save the current tracing context t a string.
 
 Review comment:
   can we fix this typo as well?
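
   For context, a minimal sketch of the direction the added SLF4J imports point
in: validate the carrier up front and log the offending value once, instead of
letting the exception surface upstream as a WARN with a full stack trace. The
regex, log level, and class name are assumptions, not the actual patch:

{code}
import io.jaegertracing.internal.exceptions.MalformedTracerStateStringException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Sketch only, not the real StringCodec change. */
final class TracerStateValidator {
  private static final Logger LOG =
      LoggerFactory.getLogger(TracerStateValidator.class);
  // Jaeger text format: traceId:spanId:parentSpanId:flags, all hex fields.
  private static final String FORMAT =
      "^[0-9a-fA-F]{1,32}:[0-9a-fA-F]{1,16}:[0-9a-fA-F]{1,16}:[0-9a-fA-F]{1,2}$";

  static String validate(String carrier) {
    if (carrier == null || !carrier.matches(FORMAT)) {
      LOG.debug("Unable to parse tracer state string: {}", carrier);
      throw new MalformedTracerStateStringException(String.valueOf(carrier));
    }
    return carrier;
  }
}
{code}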
 



Issue Time Tracking
---

Worklog Id: (was: 218895)
Time Spent: 0.5h  (was: 20m)

> Fix MalformedTracerStateStringException on DN logs
> --
>
> Key: HDDS-1318
> URL: https://issues.apache.org/jira/browse/HDDS-1318
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Have seen many warnings on DN logs. This ticket is opened to track the 
> investigation and fix for this.
> {code}
> 2019-03-20 19:01:33 WARN 
> PropagationRegistry$ExceptionCatchingExtractorDecorator:60 - Error when 
> extracting SpanContext from carrier. Handling gracefully.
> io.jaegertracing.internal.exceptions.MalformedTracerStateStringException: 
> String does not match tracer state format: 
> 2c919331-9a51-4bc4-acee-df57a8dcecf0
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:42)
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:32)
>  at 
> io.jaegertracing.internal.PropagationRegistry$ExceptionCatchingExtractorDecorator.extract(PropagationRegistry.java:57)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:208)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:61)
>  at io.opentracing.util.GlobalTracer.extract(GlobalTracer.java:143)
>  at 
> org.apache.hadoop.hdds.tracing.TracingUtil.importAndCreateScope(TracingUtil.java:96)
>  at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:73)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:61)
>  at 
> org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:248)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.ratis.thirdparty.io.grpc.Contexts$ContextualizedServerCallListener.onMessage(Contexts.java:76)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.hadoop.hdds.tracing.GrpcServerInterceptor$1.onMessage(GrpcServerInterceptor.java:46)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:263)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:686)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> {code}




[jira] [Work logged] (HDDS-1285) Implement actions need to be taken after chill mode exit wait time

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1285?focusedWorklogId=218890&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218890
 ]

ASF GitHub Bot logged work on HDDS-1285:


Author: ASF GitHub Bot
Created on: 26/Mar/19 19:47
Start Date: 26/Mar/19 19:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #612: HDDS-1285. 
Implement actions need to be taken after chill mode exit w…
URL: https://github.com/apache/hadoop/pull/612#issuecomment-476818806
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 12 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 56 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1068 | trunk passed |
   | +1 | compile | 947 | trunk passed |
   | +1 | checkstyle | 213 | trunk passed |
   | +1 | mvnsite | 75 | trunk passed |
   | +1 | shadedclient | 1029 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 51 | trunk passed |
   | +1 | javadoc | 54 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 64 | the patch passed |
   | +1 | compile | 878 | the patch passed |
   | +1 | javac | 878 | the patch passed |
   | +1 | checkstyle | 207 | the patch passed |
   | +1 | mvnsite | 76 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 718 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 52 | the patch passed |
   | +1 | javadoc | 53 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 97 | server-scm in the patch passed. |
   | +1 | unit | 604 | integration-test in the patch passed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 6296 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-612/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/612 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 4bbc9fb63d7d 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 82d4772 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-612/4/testReport/ |
   | Max. process+thread count | 4709 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-612/4/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 218890)
Time Spent: 50m  (was: 40m)

> Implement actions need to be taken after chill mode exit wait time
> --
>
> Key: HDDS-1285
> URL: https://issues.apache.org/jira/browse/HDDS-1285
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> # Destroy and close the pipelines.
>  # Close all the containers on the pipeline.
>  # Trigger pipeline creation.
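
A minimal sketch of how these three steps could be wired to run once the
configured wait time elapses; all names below are illustrative, not the real
SCM API:

{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Sketch only: schedule the post-exit actions after the wait time. */
final class ChillModeExitActions {
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  void onChillModeExit(long waitTimeMs, Runnable destroyPipelines,
      Runnable closeContainers, Runnable createPipelines) {
    scheduler.schedule(() -> {
      destroyPipelines.run();  // 1. destroy and close the pipelines
      closeContainers.run();   // 2. close all the containers on the pipeline
      createPipelines.run();   // 3. trigger pipeline creation
    }, waitTimeMs, TimeUnit.MILLISECONDS);
  }
}
{code}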




[jira] [Commented] (HDFS-14205) Backport HDFS-6440 to branch-2

2019-03-26 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802098#comment-16802098
 ] 

Chao Sun commented on HDFS-14205:
-

Thanks [~vagarychen]! I'll backport the follow-up JIRAs soon.

> Backport HDFS-6440 to branch-2
> --
>
> Key: HDFS-14205
> URL: https://issues.apache.org/jira/browse/HDFS-14205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Fix For: 2.10.0
>
> Attachments: HDFS-14205-branch-2.001.patch, 
> HDFS-14205-branch-2.002.patch, HDFS-14205-branch-2.003.patch, 
> HDFS-14205-branch-2.004.patch, HDFS-14205-branch-2.005.patch, 
> HDFS-14205-branch-2.006.patch, HDFS-14205-branch-2.007.patch, 
> HDFS-14205-branch-2.008.patch, HDFS-14205-branch-2.009.patch
>
>
> Currently support for more than 2 NameNodes (HDFS-6440) is only in branch-3. 
> This JIRA aims to backport it to branch-2, as this is required by HDFS-12943 
> (consistent read from standby) backport to branch-2.






[jira] [Assigned] (HDDS-117) Wrapper for set/get Standalone, Ratis and Rest Ports in DatanodeDetails.

2019-03-26 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta reassigned HDDS-117:
---

Assignee: Shweta

> Wrapper for set/get Standalone, Ratis and Rest Ports in DatanodeDetails.
> 
>
> Key: HDDS-117
> URL: https://issues.apache.org/jira/browse/HDDS-117
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Nanda kumar
>Assignee: Shweta
>Priority: Major
>  Labels: newbie
>
> It will be very helpful to have a wrapper for set/get Standalone, Ratis and 
> Rest Ports in DatanodeDetails.
> Search and replace direct usages of DatanodeDetails#newPort in the current code.
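
A minimal sketch of the kind of wrapper the description asks for; all names
below are illustrative, not the eventual DatanodeDetails API:

{code}
import java.util.EnumMap;
import java.util.Map;

/** Sketch only: one typed accessor per known port, so callers stop
 *  building ports by hand through DatanodeDetails#newPort. */
final class DatanodePorts {
  enum Name { STANDALONE, RATIS, REST }

  private final Map<Name, Integer> ports = new EnumMap<>(Name.class);

  void setPort(Name name, int port) {
    ports.put(name, port);
  }

  int getPort(Name name) {
    Integer p = ports.get(name);
    if (p == null) {
      throw new IllegalStateException(name + " port has not been set");
    }
    return p;
  }

  // The convenience wrappers the issue asks for.
  int getStandalonePort() { return getPort(Name.STANDALONE); }
  int getRatisPort() { return getPort(Name.RATIS); }
  int getRestPort() { return getPort(Name.REST); }
}
{code}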






[jira] [Assigned] (HDDS-1189) Recon Aggregate DB schema and ORM

2019-03-26 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle reassigned HDDS-1189:
-

Assignee: Siddharth Wagle  (was: Aravindan Vijayan)

> Recon Aggregate DB schema and ORM
> -
>
> Key: HDDS-1189
> URL: https://issues.apache.org/jira/browse/HDDS-1189
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
>
> _Objectives_
> - Define V1 of the DB schema for the Recon service.
> - The current proposal is to use jOOQ as the ORM for SQL interaction, for two 
> main reasons: a) a powerful query DSL that abstracts out SQL dialects, and 
> b) seamless code-to-schema and schema-to-code transitions, critical for 
> creating DDL through code and for unit testing across versions of the 
> application.
> - Add an e2e unit test suite for Recon entities, based on the design doc.
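
A minimal sketch of the dialect-abstracting jOOQ usage the proposal refers to;
the table and columns are illustrative (the real schema is what this JIRA
defines), and an H2 JDBC driver is assumed on the classpath:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;

/** Sketch only: the same DSL renders valid SQL for other dialects too. */
public final class ReconJooqExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:recon")) {
      DSLContext ctx = DSL.using(conn, SQLDialect.H2);
      ctx.execute("create table container_history"
          + " (container_id int, state varchar(16))");
      ctx.execute("insert into container_history values (1, 'OPEN')");
      System.out.println(ctx.fetch("select * from container_history"));
    }
  }
}
{code}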






[jira] [Commented] (HDFS-12345) Scale testing HDFS NameNode with real metadata and workloads (Dynamometer)

2019-03-26 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802095#comment-16802095
 ] 

Siyao Meng commented on HDFS-12345:
---

[~daryn] IMHO one solution would be to not compile and package Dynamometer by 
default, which can easily be done in Maven (set the profile's activeByDefault 
to false). That way developers won't need to worry about updating the tool 
every time they change an API, and we can fix it as needed.
On the usage of private APIs though, would you like to add anything [~xkrogen]?

> Scale testing HDFS NameNode with real metadata and workloads (Dynamometer)
> --
>
> Key: HDFS-12345
> URL: https://issues.apache.org/jira/browse/HDFS-12345
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode, test
>Reporter: Zhe Zhang
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-12345.000.patch, HDFS-12345.001.patch, 
> HDFS-12345.002.patch, HDFS-12345.003.patch, HDFS-12345.004.patch, 
> HDFS-12345.005.patch
>
>
> Dynamometer has now been open sourced on our [GitHub 
> page|https://github.com/linkedin/dynamometer]. Read more at our [recent blog 
> post|https://engineering.linkedin.com/blog/2018/02/dynamometer--scale-testing-hdfs-on-minimal-hardware-with-maximum].
> To encourage getting the tool into the open for others to use as quickly as 
> possible, we went through our standard open sourcing process of releasing on 
> GitHub. However we are interested in the possibility of donating this to 
> Apache as part of Hadoop itself and would appreciate feedback on whether or 
> not this is something that would be supported by the community.
> Also of note, previous [discussions on the dev mail 
> lists|http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201707.mbox/%3c98fceffa-faff-4cf1-a14d-4faab6567...@gmail.com%3e]






[jira] [Commented] (HDFS-14390) Provide kerberos support for AliasMap service used by Provided storage

2019-03-26 Thread Ashvin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802090#comment-16802090
 ] 

Ashvin commented on HDFS-14390:
---

In a secure HDFS cluster, the DN and NN fail to connect to the 
{{AliasMap}} service. The following error messages can be seen in the logs.

{code}
2019-03-26 10:56:15,460 [Block report processor] WARN ipc.Client 
(Client.java:run(760)) - Exception encountered while connecting to the server : 
org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
via:[KERBEROS]
 2019-03-26 10:56:15,461 [Block report processor] ERROR 
impl.InMemoryLevelDBAliasMapClient 
(InMemoryLevelDBAliasMapClient.java:getAliasMap(171)) - Exception in retrieving 
block pool id {}
 java.io.IOException: DestHost:destPort localhost:32445 , LocalHost:localPort 
XXX. Failed on local exception: java.io.IOException: 
org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
via:[KERBEROS]
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 …
 at com.sun.proxy.$Proxy13.getBlockPoolId(Unknown Source)
 at 
org.apache.hadoop.hdfs.protocolPB.InMemoryAliasMapProtocolClientSideTranslatorPB.getBlockPoolId(InMemoryAliasMapProtocolClientSideTranslatorPB.java:219)
 at 
org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.InMemoryLevelDBAliasMapClient.getAliasMap(InMemoryLevelDBAliasMapClient.java:165)
 at 
org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.InMemoryLevelDBAliasMapClient.getReader(InMemoryLevelDBAliasMapClient.java:181)
 at 
org.apache.hadoop.hdfs.server.blockmanagement.ProvidedStorageMap.processProvidedStorageReport(ProvidedStorageMap.java:156)
 at 
org.apache.hadoop.hdfs.server.blockmanagement.ProvidedStorageMap.getStorage(ProvidedStorageMap.java:139)
 at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2536)
 …
 Caused by: java.io.IOException: 
org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
via:[KERBEROS]
 at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:765)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
 at 
org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:728)
 at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:822)

…
{code}
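
For reference, a minimal sketch of the kind of keytab login the {{AliasMap}}
service would need before serving RPCs; the two configuration keys are
assumptions for illustration, not the names an eventual patch introduces:

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.SecurityUtil;
import org.apache.hadoop.security.UserGroupInformation;

/** Sketch only: log the service in from its keytab at startup. */
final class AliasMapKerberosLogin {
  static void login(Configuration conf) throws IOException {
    UserGroupInformation.setConfiguration(conf);
    // SecurityUtil.login resolves _HOST in the principal pattern.
    SecurityUtil.login(conf,
        "dfs.provided.aliasmap.kerberos.keytab.file",  // assumed key
        "dfs.provided.aliasmap.kerberos.principal");   // assumed key
  }
}
{code}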

> Provide kerberos support for AliasMap service used by Provided storage
> --
>
> Key: HDFS-14390
> URL: https://issues.apache.org/jira/browse/HDFS-14390
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ashvin
>Priority: Major
>
> With {{PROVIDED}} storage (HDFS-9806), HDFS can address data stored in 
> external storage systems. This feature is not supported in a secure HDFS 
> cluster. The {{AliasMap}} service does not support kerberos, and as a result 
> the cluster nodes will fail to communicate with it. This JIRA is to enable 
> kerberos support for the {{AliasMap}} service.






[jira] [Created] (HDFS-14390) Provide kerberos support for AliasMap service used by Provided storage

2019-03-26 Thread Ashvin (JIRA)
Ashvin created HDFS-14390:
-

 Summary: Provide kerberos support for AliasMap service used by 
Provided storage
 Key: HDFS-14390
 URL: https://issues.apache.org/jira/browse/HDFS-14390
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ashvin


With {{PROVIDED}} storage (HDFS-9806), HDFS can address data stored in 
external storage systems. This feature is not supported in a secure HDFS 
cluster. The {{AliasMap}} service does not support kerberos, and as a result 
the cluster nodes will fail to communicate with it. This JIRA is to enable 
kerberos support for the {{AliasMap}} service.






[jira] [Updated] (HDFS-14205) Backport HDFS-6440 to branch-2

2019-03-26 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14205:
--
   Resolution: Fixed
Fix Version/s: 2.10.0
   Status: Resolved  (was: Patch Available)

> Backport HDFS-6440 to branch-2
> --
>
> Key: HDFS-14205
> URL: https://issues.apache.org/jira/browse/HDFS-14205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Fix For: 2.10.0
>
> Attachments: HDFS-14205-branch-2.001.patch, 
> HDFS-14205-branch-2.002.patch, HDFS-14205-branch-2.003.patch, 
> HDFS-14205-branch-2.004.patch, HDFS-14205-branch-2.005.patch, 
> HDFS-14205-branch-2.006.patch, HDFS-14205-branch-2.007.patch, 
> HDFS-14205-branch-2.008.patch, HDFS-14205-branch-2.009.patch
>
>
> Currently support for more than 2 NameNodes (HDFS-6440) is only in branch-3. 
> This JIRA aims to backport it to branch-2, as this is required by HDFS-12943 
> (consistent read from standby) backport to branch-2.






[jira] [Commented] (HDFS-14205) Backport HDFS-6440 to branch-2

2019-03-26 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802083#comment-16802083
 ] 

Chen Liang commented on HDFS-14205:
---

I have backported the v009 patch to branch-2. Thanks [~csun] for the effort!

> Backport HDFS-6440 to branch-2
> --
>
> Key: HDFS-14205
> URL: https://issues.apache.org/jira/browse/HDFS-14205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14205-branch-2.001.patch, 
> HDFS-14205-branch-2.002.patch, HDFS-14205-branch-2.003.patch, 
> HDFS-14205-branch-2.004.patch, HDFS-14205-branch-2.005.patch, 
> HDFS-14205-branch-2.006.patch, HDFS-14205-branch-2.007.patch, 
> HDFS-14205-branch-2.008.patch, HDFS-14205-branch-2.009.patch
>
>
> Currently support for more than 2 NameNodes (HDFS-6440) is only in branch-3. 
> This JIRA aims to backport it to branch-2, as this is required by HDFS-12943 
> (consistent read from standby) backport to branch-2.






[jira] [Commented] (HDDS-1146) Adding container related metrics in SCM

2019-03-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802066#comment-16802066
 ] 

Hadoop QA commented on HDDS-1146:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
5s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-hdds: The patch generated 7 new + 0 
unchanged - 0 fixed = 7 total (was 0) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-ozone: The patch generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m  7s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 21s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.common.TestDatanodeStateMachine |
|   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2583/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1146 |
| JIRA Patch URL | 
https:/

[jira] [Updated] (HDDS-1304) Ozone ha breaks service discovery

2019-03-26 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1304:

Target Version/s: 0.5.0  (was: 0.4.0)
Priority: Critical  (was: Blocker)

My apologies for the edit spam. There was some confusion about whether this 
causes HDDS-1298. Thanks to [~msingh] for confirming that the issues are 
unrelated.

I am marking it as a non-blocker again.

> Ozone ha breaks service discovery
> -
>
> Key: HDDS-1304
> URL: https://issues.apache.org/jira/browse/HDDS-1304
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Nanda kumar
>Priority: Critical
>
> We need to redefine the semantics of what service discovery means with HA 
> enabled.






[jira] [Commented] (HDFS-14037) Fix SSLFactory truststore reloader thread leak in URLConnectionFactory

2019-03-26 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802058#comment-16802058
 ] 

Hudson commented on HDFS-14037:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16286 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16286/])
HDFS-14037. Fix SSLFactory truststore reloader thread leak in (tasanuma: rev 
55fb3c32fb48ca26a629d4d5f3f07e2858d09594)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/QuorumJournalManager.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/SSLConnectionConfigurator.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/web/TestURLConnectionFactory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterWebHdfsMethods.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/URLConnectionFactory.java


> Fix SSLFactory truststore reloader thread leak in URLConnectionFactory
> --
>
> Key: HDFS-14037
> URL: https://issues.apache.org/jira/browse/HDFS-14037
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, webhdfs
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14037.1.patch, HDFS-14037.2.patch
>
>
> This is reported by [~yoshiata]. It is a similar issue as HADOOP-11368 and 
> YARN-5309 in URLConnectionFactory.
> {quote}An SSLFactory is created in newSslConnConfigurator and subsequently 
> creates the ReloadingX509TrustManager instance, which in turn starts a trust 
> store reloader thread.
> However, the SSLFactory is never destroyed and hence the trust store reloader 
> threads are not killed.
> {quote}
> We observed many leaked threads when we used swebhdfs via NiFi cluster.
> {noformat}
> "Truststore reloader thread" Id=221 TIMED_WAITING  on null
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.security.ssl.ReloadingX509TrustManager.run(ReloadingX509TrustManager.java:189)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
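
A minimal sketch of the lifecycle problem described above: every
{{SSLFactory}} that is initialized starts a truststore reloader thread, so
whoever creates it must also destroy it. The surrounding usage is
illustrative:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.ssl.SSLFactory;

/** Sketch only: pair every init() with a destroy(). */
final class SslFactoryLifecycle {
  static void useAndDispose(Configuration conf) throws Exception {
    SSLFactory factory = new SSLFactory(SSLFactory.Mode.CLIENT, conf);
    factory.init();      // starts the ReloadingX509TrustManager thread
    try {
      // ... hand the factory to a URL connection configurator here ...
    } finally {
      factory.destroy(); // stops the reloader thread; skipping this leaks it
    }
  }
}
{code}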






[jira] [Commented] (HDDS-1300) Optimize non-recursive ozone filesystem apis

2019-03-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802055#comment-16802055
 ] 

Hadoop QA commented on HDDS-1300:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
53s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
34s{color} | {color:red} hadoop-hdds generated 1 new + 0 unchanged - 0 fixed = 
1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
59s{color} | {color:red} hadoop-ozone generated 1 new + 0 unchanged - 0 fixed = 
1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
58s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 35s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds |
|  |  org.apache.hadoop.ozone.om.helpers.OzoneFileStatus is Serializable; 
consider declaring a serialVersionUID  At OzoneFileStatus.java:a 
serialVersionUID  At OzoneFileStatus.java:[lines 40-111] |
| FindBugs | module:hadoop-ozone |
|  |  org.apache.hadoop.ozone.o
