[jira] [Updated] (HDDS-697) update and validate the BCSID for PutSmallFile/GetSmallFile command

2018-10-22 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-697:
--
Status: Open  (was: Patch Available)

> update and validate the BCSID for PutSmallFile/GetSmallFile command
> ---
>
> Key: HDDS-697
> URL: https://issues.apache.org/jira/browse/HDDS-697
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-697.000.patch
>
>
> Similar to putBlock/GetBlock, the putSmallFile transaction in Ratis needs to 
> update the BCSID in the container db on the datanode. getSmallFile should 
> validate the bcsId while reading the block, just as getBlock does.
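
A hypothetical in-memory sketch of that update-and-validate flow (invented types; the real code writes block metadata to the container's RocksDB on the datanode):

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch only: putSmallFile persists the block's BCSID on
// commit, and getSmallFile validates the supplied BCSID before returning
// the data, mirroring the putBlock/getBlock behavior.
final class SmallFileStore {
  private final Map<Long, byte[]> data = new HashMap<>();
  private final Map<Long, Long> bcsIds = new HashMap<>();
  private long containerBcsId; // highest committed BCSID in this container

  void putSmallFile(long blockId, byte[] bytes, long bcsId) {
    data.put(blockId, bytes);
    bcsIds.put(blockId, bcsId);
    containerBcsId = Math.max(containerBcsId, bcsId);
  }

  byte[] getSmallFile(long blockId, long expectedBcsId) {
    Long stored = bcsIds.get(blockId);
    if (stored == null || stored != expectedBcsId) {
      throw new IllegalStateException("BCSID mismatch for block " + blockId);
    }
    return data.get(blockId);
  }
}
{code}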




[jira] [Updated] (HDDS-697) update and validate the BCSID for PutSmallFile/GetSmallFile command

2018-10-22 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-697:
--
Status: Patch Available  (was: Open)

> update and validate the BCSID for PutSmallFile/GetSmallFile command
> ---
>
> Key: HDDS-697
> URL: https://issues.apache.org/jira/browse/HDDS-697
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-697.000.patch
>
>
> Similar to putBlock/GetBlock, the putSmallFile transaction in Ratis needs to 
> update the BCSID in the container db on the datanode. getSmallFile should 
> validate the bcsId while reading the block, just as getBlock does.




[jira] [Commented] (HDDS-708) Validate BCSID while reading blocks from containers in datanodes

2018-10-22 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16660083#comment-16660083
 ] 

Mukul Kumar Singh commented on HDDS-708:


Thanks for working on this [~shashikant].

1) BlockManagerImpl.java:165, I feel that the condition should be (id < bcsId). 
This can happen when the flush succeeded on only one of the datanodes; that 
replica might have all the required data plus some extra (see the sketch below). 
2) Also, can we remove the default values from the proto files? I feel the 
client should always set the bcsID when requesting a read.
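
To make the suggested condition concrete, a tiny self-contained sketch (hypothetical names, not the actual BlockManagerImpl code):

{code:java}
// Hypothetical sketch of the check discussed above. The replica's container
// BCSID only has to be >= the requested block's BCSID, so (id < bcsId) is
// the rejecting condition; a replica that flushed further ahead is valid.
final class BcsIdCheck {
  static void validate(long containerBcsId, long blockBcsId) {
    if (containerBcsId < blockBcsId) {
      throw new IllegalStateException("Unable to find block with bcsID "
          + blockBcsId + "; container is at bcsID " + containerBcsId);
    }
  }

  public static void main(String[] args) {
    validate(7, 5); // replica flushed ahead of the block: still readable
    try {
      validate(5, 7); // replica behind the requested block: rejected
    } catch (IllegalStateException e) {
      System.out.println(e.getMessage());
    }
  }
}
{code}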

> Validate BCSID while reading blocks from containers in datanodes
> 
>
> Key: HDDS-708
> URL: https://issues.apache.org/jira/browse/HDDS-708
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-708.000.patch
>
>
> While making a getBlock call to read data, the Ozone client should read 
> the bcsId for the block from OzoneManager, and the same needs to be validated 
> on the Datanode.




[jira] [Commented] (HDFS-13369) FSCK Report broken with RequestHedgingProxyProvider

2018-10-22 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16660069#comment-16660069
 ] 

Ranith Sardar commented on HDFS-13369:
--

Yes, I will upload the new patch very soon.

> FSCK Report broken with RequestHedgingProxyProvider 
> 
>
> Key: HDFS-13369
> URL: https://issues.apache.org/jira/browse/HDFS-13369
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.3, 3.0.0, 3.1.0
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-13369.001.patch, HDFS-13369.002.patch, 
> HDFS-13369.003.patch, HDFS-13369.004.patch, HDFS-13369.005.patch
>
>
> Scenario:
> 1. Configure the RequestHedgingProxy.
> 2. Write some files to the file system.
> 3. Take an FSCK report for the above files.
>  
> {noformat}
> bin> hdfs fsck /file1 -locations -files -blocks
> Exception in thread "main" java.lang.ClassCastException: 
> org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider$RequestHedgingInvocationHandler
>  cannot be cast to org.apache.hadoop.ipc.RpcInvocationHandler
> at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:626)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.getConnectionId(RetryInvocationHandler.java:438)
> at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:628)
> at org.apache.hadoop.ipc.RPC.getServerAddress(RPC.java:611)
> at org.apache.hadoop.hdfs.HAUtil.getAddressOfActive(HAUtil.java:263)
> at 
> org.apache.hadoop.hdfs.tools.DFSck.getCurrentNamenodeAddress(DFSck.java:257)
> at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:319)
> at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:72)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:156)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:153)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
> at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:152)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:385){noformat}
>  




[jira] [Commented] (HDDS-499) Ozone UI: Configuration page does not show description.

2018-10-22 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16660065#comment-16660065
 ] 

Dinesh Chitlangia commented on HDDS-499:


[~anu] Because, as per the current design, we are composing a map(tag, propsList) 
where propsList is a collection of (propName=propValue).

We are not even accounting for the description.

For this, we need to send, say, map(tag, List<PropPOJO>), where a PropPOJO holds 
(name, value, description); a sketch is given below.

Once this is done, we get the required values in the JSON.

However, the UI needs to be adjusted to be able to access and print these 
values correctly.
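
A minimal sketch of that shape, with hypothetical names (the actual servlet types may differ):

{code:java}
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the proposed payload: each tag maps to a list of
// properties that carry the description alongside name and value, instead
// of the current "propName=propValue" strings.
final class PropPOJO {
  final String name;
  final String value;
  final String description;

  PropPOJO(String name, String value, String description) {
    this.name = name;
    this.value = value;
    this.description = description;
  }
}

// The servlet would then serialize a Map<String, List<PropPOJO>> to JSON.
interface TaggedConfigSource {
  Map<String, List<PropPOJO>> propertiesByTag();
}
{code}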

> Ozone UI: Configuration page does not show description.
> ---
>
> Key: HDDS-499
> URL: https://issues.apache.org/jira/browse/HDDS-499
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.2.1
>Reporter: Anu Engineer
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: alpha2, newbie
>
> The Config page on OzoneManager UI does not show the Description. 




[jira] [Commented] (HDDS-499) Ozone UI: Configuration page does not show description.

2018-10-22 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16660062#comment-16660062
 ] 

Anu Engineer commented on HDDS-499:
---

[~dineshchitlangia] What is the core issue? Why is it not working?

 

> Ozone UI: Configuration page does not show description.
> ---
>
> Key: HDDS-499
> URL: https://issues.apache.org/jira/browse/HDDS-499
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.2.1
>Reporter: Anu Engineer
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: alpha2, newbie
>
> The Config page on OzoneManager UI does not show the Description. 




[jira] [Comment Edited] (HDDS-499) Ozone UI: Configuration page does not show description.

2018-10-22 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16660057#comment-16660057
 ] 

Dinesh Chitlangia edited comment on HDDS-499 at 10/23/18 4:12 AM:
--

[~anu] Based on my analysis, here is my proposal for this issue:
 # Open a HADOOP Jira similar to HADOOP-15007 where we can address the 
'description' metadata.
 # Revamp the UI so that, with the new JSON returned as part of #1, the page can 
look and function as in the current design (sorting, searching and selecting 
multiple tags, etc.).

I have done a draft change for #1 and tested it. If you & [~ajayydv] agree with 
the proposal, I will file a HADOOP Jira and post the changes.

I will then release this Jira back to the pool so someone with AngularJS 
expertise can do the UI changes.


was (Author: dineshchitlangia):
[~anu] Based on my analysis, here is my proposal for this issue:
 # Open a HADOOP Jira similar to HADOOP-15007 where we address the 
'description' metadata.
 # Revamp the UI so that, with the new JSON returned as part of #1, the page can 
look and function as in the current design (sorting, searching and selecting 
multiple tags, etc.).

I have done a draft change for #1 and tested it. If you & [~ajayydv] agree with 
the proposal, I will file a HADOOP Jira and post the changes.

I will then release this Jira back to the pool so someone with AngularJS 
expertise can do the UI changes.

> Ozone UI: Configuration page does not show description.
> ---
>
> Key: HDDS-499
> URL: https://issues.apache.org/jira/browse/HDDS-499
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.2.1
>Reporter: Anu Engineer
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: alpha2, newbie
>
> The Config page on OzoneManager UI does not show the Description. 




[jira] [Commented] (HDDS-499) Ozone UI: Configuration page does not show description.

2018-10-22 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16660057#comment-16660057
 ] 

Dinesh Chitlangia commented on HDDS-499:


[~anu] Based on my analysis, here is my proposal for this issue:
 # Open a HADOOP Jira similar to HADOOP-15007 where we address the 
'description' metadata.
 # Revamp the UI so that, with the new JSON returned as part of #1, the page can 
look and function as in the current design (sorting, searching and selecting 
multiple tags, etc.).

I have done a draft change for #1 and tested it. If you & [~ajayydv] agree with 
the proposal, I will file a HADOOP Jira and post the changes.

I will then release this Jira back to the pool so someone with AngularJS 
expertise can do the UI changes.

> Ozone UI: Configuration page does not show description.
> ---
>
> Key: HDDS-499
> URL: https://issues.apache.org/jira/browse/HDDS-499
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.2.1
>Reporter: Anu Engineer
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: alpha2, newbie
>
> The Config page on OzoneManager UI does not show the Description. 




[jira] [Commented] (HDDS-103) SCM CA: Add new security protocol for SCM to expose security related functions

2018-10-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16660043#comment-16660043
 ] 

Hadoop QA commented on HDDS-103:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
45s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
54s{color} | {color:red} hadoop-hdds/container-service in HDDS-4 has 1 extant 
Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-hdds/server-scm in HDDS-4 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
34s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-103 |

[jira] [Commented] (HDDS-695) Introduce a new SCM Command to teardown a Pipeline

2018-10-22 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16660020#comment-16660020
 ] 

Yiqun Lin commented on HDDS-695:


Thanks for updating the patch, [~nandakumar131]. For the new changes in the 
code, I have two minor comments on the {{listPipelines}} command.

* When returning the pipeline list in PipelineSelector, can we use 
{{Collections.unmodifiableList(..)}} to return an unmodifiable list (see the 
sketch below)?
* Can we add a corresponding unit test for the listPipelines API in 
{{TestPipelineSelector}} as well?
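
A small self-contained illustration of the unmodifiable-list suggestion, using an invented registry class rather than the actual PipelineSelector:

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch: wrapping the internal list means callers cannot
// mutate the selector's state through the returned reference.
final class PipelineRegistry {
  private final List<String> pipelines = new ArrayList<>();

  List<String> listPipelines() {
    return Collections.unmodifiableList(pipelines);
  }

  public static void main(String[] args) {
    PipelineRegistry registry = new PipelineRegistry();
    registry.pipelines.add("pipeline-1");
    try {
      registry.listPipelines().add("pipeline-2"); // rejected
    } catch (UnsupportedOperationException e) {
      System.out.println("returned list is read-only");
    }
  }
}
{code}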

> Introduce a new SCM Command to teardown a Pipeline
> --
>
> Key: HDDS-695
> URL: https://issues.apache.org/jira/browse/HDDS-695
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Blocker
> Attachments: HDDS-695-ozone-0.3.000.patch, 
> HDDS-695-ozone-0.3.001.patch, HDDS-695-ozone-0.3.002.patch
>
>
> We need to have a tear-down pipeline command in SCM so that an administrator 
> can close/destroy a pipeline in the cluster.




[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication

2018-10-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1666#comment-1666
 ] 

Hadoop QA commented on HDFS-12284:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13532 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
57s{color} | {color:green} HDFS-13532 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} HDFS-13532 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} HDFS-13532 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} HDFS-13532 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-13532 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} HDFS-13532 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
33s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-12284 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945136/HDFS-12284-HDFS-13532.011.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 8e9a8a45844f 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13532 / 96ae4ac |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25340/testReport/ |
| Max. process+thread count | 1391 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25340/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDDS-580) Bootstrap OM/SCM with private/public key pair

2018-10-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659994#comment-16659994
 ] 

Hadoop QA commented on HDDS-580:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m 
56s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
29s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
39s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 5s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
51s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
39s{color} | {color:red} hadoop-hdds/server-scm in HDDS-4 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
43s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 55s{color} | {color:orange} root: The patch generated 4 new + 19 unchanged - 
0 fixed = 23 total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
18s{color} | {color:red} hadoop-hdds/common generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
12s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
29s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
54s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 36s{color} 
| {color:red} integration-test in the patch failed. {color} |

[jira] [Commented] (HDDS-103) SCM CA: Add new security protocol for SCM to expose security related functions

2018-10-22 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659977#comment-16659977
 ] 

Ajay Kumar commented on HDDS-103:
-

[~xyao] thanks for rebasing the patch and for the comments. Patch v3 attached 
to address both comments.

> SCM CA: Add new security protocol for SCM to expose security related functions
> --
>
> Key: HDDS-103
> URL: https://issues.apache.org/jira/browse/HDDS-103
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-103-HDDS-4.00.patch, HDDS-103-HDDS-4.01.patch, 
> HDDS-103-HDDS-4.02.patch, HDDS-103-HDDS-4.03.patch
>
>





[jira] [Updated] (HDDS-103) SCM CA: Add new security protocol for SCM to expose security related functions

2018-10-22 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-103:

Attachment: HDDS-103-HDDS-4.03.patch

> SCM CA: Add new security protocol for SCM to expose security related functions
> --
>
> Key: HDDS-103
> URL: https://issues.apache.org/jira/browse/HDDS-103
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-103-HDDS-4.00.patch, HDDS-103-HDDS-4.01.patch, 
> HDDS-103-HDDS-4.02.patch, HDDS-103-HDDS-4.03.patch
>
>





[jira] [Updated] (HDDS-676) Enable Read from open Containers via Standalone Protocol

2018-10-22 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-676:
-
Attachment: HDDS-676-ozone-0.3.000.patch

> Enable Read from open Containers via Standalone Protocol
> 
>
> Key: HDDS-676
> URL: https://issues.apache.org/jira/browse/HDDS-676
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-676-ozone-0.3.000.patch, HDDS-676.001.patch, 
> HDDS-676.002.patch, HDDS-676.003.patch, HDDS-676.004.patch, 
> HDDS-676.005.patch, HDDS-676.006.patch, HDDS-676.007.patch, HDDS-676.008.patch
>
>
> With the BlockCommitSequenceId getting updated per block commit on open 
> containers in the OM as well as the datanode, Ozone Client reads can go 
> through the Standalone protocol, not necessarily requiring Ratis. The client 
> should verify the BCSID of the container which has the data block: the 
> container BCSID should always be greater than or equal to the BCSID of the 
> block to be read, and the existing block BCSID should exactly match that of 
> the block to be read. As a part of this, the client can try to read from a 
> replica with a supplied BCSID and fail over to the next one in case the 
> block does not exist on one replica.
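
A rough, hypothetical sketch of the failover read described above (names invented; the real client goes through XceiverClient and the HDDS container protos):

{code:java}
import java.io.IOException;
import java.util.List;

// Hypothetical sketch: try each replica in turn; when the block with the
// expected BCSID is not present on one replica, fail over to the next.
final class FailoverRead {
  interface Replica {
    // Assumed to validate the supplied BCSID against the replica's
    // container/block BCSID and throw on mismatch or missing block.
    byte[] getBlock(long blockId, long expectedBcsId) throws IOException;
  }

  static byte[] readBlock(List<Replica> replicas, long blockId, long bcsId)
      throws IOException {
    IOException last = new IOException("no replicas supplied");
    for (Replica replica : replicas) {
      try {
        return replica.getBlock(blockId, bcsId);
      } catch (IOException e) {
        last = e; // block missing or BCSID mismatch: try the next replica
      }
    }
    throw last;
  }
}
{code}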




[jira] [Commented] (HDFS-14011) RBF: Add more information to HdfsFileStatus for a mount point

2018-10-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659974#comment-16659974
 ] 

Hadoop QA commented on HDFS-14011:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
56s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-14011 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945131/HDFS-14011.03.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0fc5db374dce 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e3cca12 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25339/testReport/ |
| Max. process+thread count | 956 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25339/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Add more information to HdfsFileStatus for a mount point
> -
>
> Key: HDFS-14011
> 

[jira] [Commented] (HDFS-13996) Make HttpFS' ACLs RegEx configurable

2018-10-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659954#comment-16659954
 ] 

Hadoop QA commented on HDFS-13996:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  6s{color} | {color:orange} hadoop-hdfs-project: The patch generated 5 new + 
460 unchanged - 0 fixed = 465 total (was 460) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 39s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
13s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}153m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier |
|   | hadoop.fs.TestHdfsNativeCodeLoader |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13996 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945120/HDFS-13996.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 71392a0b4424 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |

[jira] [Commented] (HDDS-643) Parse Authorization header in a separate filter

2018-10-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659945#comment-16659945
 ] 

Hadoop QA commented on HDDS-643:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 10s{color} | {color:orange} hadoop-ozone/s3gateway: The patch generated 6 
new + 1 unchanged - 1 fixed = 7 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} s3gateway in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-643 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945132/HDDS-643.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 76e294961646 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e3cca12 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1486/artifact/out/diff-checkstyle-hadoop-ozone_s3gateway.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1486/testReport/ |
| Max. process+thread count | 412 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/s3gateway U: hadoop-ozone/s3gateway |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1486/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> 

[jira] [Updated] (HDFS-12284) RBF: Support for Kerberos authentication

2018-10-22 Thread Íñigo Goiri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12284:
---
Attachment: HDFS-12284-HDFS-13532.011.patch

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284-HDFS-13532.004.patch, 
> HDFS-12284-HDFS-13532.005.patch, HDFS-12284-HDFS-13532.006.patch, 
> HDFS-12284-HDFS-13532.007.patch, HDFS-12284-HDFS-13532.008.patch, 
> HDFS-12284-HDFS-13532.009.patch, HDFS-12284-HDFS-13532.010.patch, 
> HDFS-12284-HDFS-13532.011.patch, HDFS-12284.000.patch, HDFS-12284.001.patch, 
> HDFS-12284.002.patch, HDFS-12284.003.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.




[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication

2018-10-22 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659908#comment-16659908
 ] 

Íñigo Goiri commented on HDFS-12284:


We found another issue: when doing the login, we use a way to resolve the 
address.
This might give undesired resolutions; we should do something similar to what 
the DNs do and provide an option to set it as a key (i.e., 
DFS_DATANODE_HOST_NAME_KEY).
We should do the same for the Namenode.
I'll include an updated patch with this, but we may want to do the NN part in a 
separate JIRA in trunk.
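
A rough sketch of the pattern being proposed, with a hypothetical router host-name key analogous to DFS_DATANODE_HOST_NAME_KEY (the actual key name would be defined by the patch; assumes hadoop-common on the classpath):

{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;
import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch: prefer an explicitly configured hostname for the
// login, and only fall back to local resolution when the (invented) key
// is unset, mirroring what the DataNode does with its host-name key.
final class RouterHostName {
  static final String ROUTER_HOST_NAME_KEY = "dfs.federation.router.hostname";

  static String getHostName(Configuration conf) throws UnknownHostException {
    String name = conf.getTrimmed(ROUTER_HOST_NAME_KEY);
    if (name == null || name.isEmpty()) {
      name = InetAddress.getLocalHost().getCanonicalHostName();
    }
    return name;
  }
}
{code}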


> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284-HDFS-13532.004.patch, 
> HDFS-12284-HDFS-13532.005.patch, HDFS-12284-HDFS-13532.006.patch, 
> HDFS-12284-HDFS-13532.007.patch, HDFS-12284-HDFS-13532.008.patch, 
> HDFS-12284-HDFS-13532.009.patch, HDFS-12284-HDFS-13532.010.patch, 
> HDFS-12284.000.patch, HDFS-12284.001.patch, HDFS-12284.002.patch, 
> HDFS-12284.003.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.




[jira] [Updated] (HDDS-103) SCM CA: Add new security protocol for SCM to expose security related functions

2018-10-22 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-103:

Attachment: (was: HDDS-103-HDDS-4.03.patch)

> SCM CA: Add new security protocol for SCM to expose security related functions
> --
>
> Key: HDDS-103
> URL: https://issues.apache.org/jira/browse/HDDS-103
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-103-HDDS-4.00.patch, HDDS-103-HDDS-4.01.patch, 
> HDDS-103-HDDS-4.02.patch
>
>





[jira] [Updated] (HDDS-103) SCM CA: Add new security protocol for SCM to expose security related functions

2018-10-22 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-103:

Attachment: HDDS-103-HDDS-4.03.patch

> SCM CA: Add new security protocol for SCM to expose security related functions
> --
>
> Key: HDDS-103
> URL: https://issues.apache.org/jira/browse/HDDS-103
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-103-HDDS-4.00.patch, HDDS-103-HDDS-4.01.patch, 
> HDDS-103-HDDS-4.02.patch, HDDS-103-HDDS-4.03.patch
>
>





[jira] [Commented] (HDDS-643) Parse Authorization header in a separate filter

2018-10-22 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659897#comment-16659897
 ] 

Bharat Viswanadham commented on HDDS-643:
-

Fixed the Jenkins-reported issues.

> Parse Authorization header in a separate filter
> ---
>
> Key: HDDS-643
> URL: https://issues.apache.org/jira/browse/HDDS-643
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-643.00.patch, HDDS-643.01.patch
>
>
> This Jira is created from a HDDS-522 comment from [~elek]:
>  # I think the authorization headers could be parsed in a separate filter, 
> similar to the request ids. But it could be implemented later. This is more 
> like a prototype.
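
A minimal sketch of that idea, assuming the javax.ws.rs (JAX-RS 2.x) API on the classpath; this is illustrative, not the actual s3gateway filter:

{code:java}
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.PreMatching;
import javax.ws.rs.core.HttpHeaders;
import javax.ws.rs.ext.Provider;

// Hypothetical sketch: a dedicated filter pulls the Authorization header
// off the request and stashes it as a request property, so the endpoint
// handlers no longer parse it themselves.
@Provider
@PreMatching
public class AuthorizationHeaderFilter implements ContainerRequestFilter {
  @Override
  public void filter(ContainerRequestContext context) {
    String auth = context.getHeaderString(HttpHeaders.AUTHORIZATION);
    if (auth != null) {
      // Parsing of the AWS-style header would happen here; we just store it.
      context.setProperty("parsedAuthorizationHeader", auth);
    }
  }
}
{code}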




[jira] [Updated] (HDDS-643) Parse Authorization header in a separate filter

2018-10-22 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-643:

Attachment: HDDS-643.01.patch

> Parse Authorization header in a separate filter
> ---
>
> Key: HDDS-643
> URL: https://issues.apache.org/jira/browse/HDDS-643
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-643.00.patch, HDDS-643.01.patch
>
>
> This Jira is created from a HDDS-522 comment from [~elek]:
>  # I think the authorization headers could be parsed in a separate filter, 
> similar to the request ids. But it could be implemented later. This is more 
> like a prototype.




[jira] [Updated] (HDFS-14011) RBF: Add more information to HdfsFileStatus for a mount point

2018-10-22 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-14011:
-
Attachment: HDFS-14011.03.patch

> RBF: Add more information to HdfsFileStatus for a mount point
> -
>
> Key: HDFS-14011
> URL: https://issues.apache.org/jira/browse/HDFS-14011
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-14011.01.patch, HDFS-14011.02.patch, 
> HDFS-14011.03.patch
>
>
> RouterClientProtocol#getMountPointStatus does not use the information of the 
> mount point; therefore, 'hdfs dfs -ls' on a directory that includes a mount 
> point returns incorrect information.




[jira] [Commented] (HDFS-14011) RBF: Add more information to HdfsFileStatus for a mount point

2018-10-22 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659896#comment-16659896
 ] 

Akira Ajisaka commented on HDFS-14011:
--

Thanks [~elgoiri] for the review. Updated the patch to fix the checkstyle 
warning.

> RBF: Add more information to HdfsFileStatus for a mount point
> -
>
> Key: HDFS-14011
> URL: https://issues.apache.org/jira/browse/HDFS-14011
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-14011.01.patch, HDFS-14011.02.patch, 
> HDFS-14011.03.patch
>
>
> RouterClientProtocol#getMountPointStatus does not use the information of the 
> mount point; therefore, 'hdfs dfs -ls' on a directory that includes a mount 
> point returns incorrect information.




[jira] [Commented] (HDDS-580) Bootstrap OM/SCM with private/public key pair

2018-10-22 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659873#comment-16659873
 ] 

Ajay Kumar commented on HDDS-580:
-

[~xyao] thanks for the review. 

{quote}SecurityUtil.java
Line 68: when bootstrapping SCM, should we do special handling if keys exist? 
In the normal non-bootstrap case, it is fine to read keys if they exist. I 
think we need to define what to expect wrt bootstrap. If keys exist, I think 
we can throw an exception for now and assume there is no existing key. We 
should move existing keys to another dir to support key rotation, which will 
be needed later.{quote}
Good suggestion. Repurposed the bootstrap method to create the keypair only if 
it doesn't exist. Added two more util functions to read and rotate keys.
{quote}SecurityConfig.java
Line 150: should we check and enforce component != null to avoid keys being 
overwritten and to simplify the logic?{quote}
Added the check to SecurityConfig constructor.

{quote}Line 72/108/189/190/196/211: NIT: incorrect global replace from x509->x, 
please check other places. Let's just skip the unnecessary variable name change 
x509SignatureAlgo.{quote}
Done

{quote}StorageContainerManager.java
Line 152/210: keyPair should be assigned in the non-bootstrap case when 
ozone_security is enabled. In other words, we should detect if scm security 
bootstrap has not been done and return a proper message.{quote}
Done
{quote}Line 493: we should run init_security when 'scm --init' is invoked and 
security is enabled.{quote}
Added a call to bootstrap security if security is enabled. A separate call to 
INIT_SECURITY is still required in case security is enabled later.
{quote}Line 509: should we terminate after the init_security like init.{quote}
Not sure what benefit we get with this terminate call, but added this change 
in the latest patch. With the terminate call we can't test this initialization, 
so removed the corresponding test case in {{TestStorageContainerManager}}.
{quote}OMConfigKeys.java
Line 87: unnecessary change that will break the secure docker-compose. To be 
consistent, let's just keep the .file for now in the key name.{quote}
Actually, in ozone-default.xml and ozonesecure/docker-config it is 
{{ozone.om.http.kerberos.keytab}}. Removing 'file' resolves the bug; let me 
know if you think this should be done in a separate jira.

{quote}HDDSKeyPEMWriter.java
Line 66: Filename should be HDDSKeyPEMHandler.java after changing the class 
name.{quote}
Already done in the last patch.
{quote}Line 89: let's remove the cstor without component so that each component 
is guaranteed to have their separate key location.{quote}
Done
{quote}Line 309: suggest using DFSUtil.bytes2String for byte->String 
conversion.{quote}
I think you meant DFSUtil.string2Bytes. Using it throws an InvalidKeyException 
because the encoding is different. 
{quote}Line 320: we need to call KeyFactor.getInstance with "BC" as provider. 
Otherwise, the default one from JDK will be used. (Similarly apply to 
readPublicKeyFromFile){quote}
Doing so for readPublicKeyFromFile results in {{InvalidKeySpecException: 
encoded key spec not recognized}}. Changed it for readPrivateKeyFromFile. (The 
test case in {{TestHDDSKeyPEMHandler}} also uses the same approach for the 
public key.)

{quote}TestHDDSKeyPEMWriter.java
File needs to be renamed.{quote}
Handled in the last patch.
{quote}TestHDDSKeyGenerator.java
Line 46: please use "test" as the component for test keys.
TestRootCertificate.java
Line 63: please use "test" as the component for test keys.{quote}
Done
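
To illustrate the provider point above, a self-contained sketch of reading a 
PKCS#8 PEM private key with the BouncyCastle provider passed explicitly; this 
is a minimal stand-in under those assumptions, not the actual 
HDDSKeyPEMHandler code:

{code:java}
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyFactory;
import java.security.PrivateKey;
import java.security.Security;
import java.security.spec.PKCS8EncodedKeySpec;
import java.util.Base64;
import org.bouncycastle.jce.provider.BouncyCastleProvider;

public final class PemKeyReaderSketch {

  public static PrivateKey readPrivateKey(String path) throws Exception {
    // Register BouncyCastle so getInstance(..., "BC") can resolve it.
    Security.addProvider(new BouncyCastleProvider());

    // Strip the PEM armour and decode the Base64 body to DER bytes.
    String pem = new String(Files.readAllBytes(Paths.get(path)));
    String base64 = pem
        .replace("-----BEGIN PRIVATE KEY-----", "")
        .replace("-----END PRIVATE KEY-----", "")
        .replaceAll("\\s", "");
    byte[] der = Base64.getDecoder().decode(base64);

    // Pass "BC" explicitly; otherwise the default JDK provider is used.
    KeyFactory factory = KeyFactory.getInstance("RSA", "BC");
    return factory.generatePrivate(new PKCS8EncodedKeySpec(der));
  }
}
{code}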

> Bootstrap OM/SCM with private/public key pair
> -
>
> Key: HDDS-580
> URL: https://issues.apache.org/jira/browse/HDDS-580
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-4-HDDS-580.00.patch, HDDS-580-HDDS-4.00.patch, 
> HDDS-580-HDDS-4.01.patch, HDDS-580-HDDS-4.02.patch, HDDS-580-HDDS-4.03.patch
>
>
> We will need to add API that leverage the key generator from HDDS-100 to 
> generate public/private key pair for OM/SCM, this will be called by the 
> scm/om admin cli with "-init" cmd.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-580) Bootstrap OM/SCM with private/public key pair

2018-10-22 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-580:

Attachment: HDDS-580-HDDS-4.03.patch

> Bootstrap OM/SCM with private/public key pair
> -
>
> Key: HDDS-580
> URL: https://issues.apache.org/jira/browse/HDDS-580
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-4-HDDS-580.00.patch, HDDS-580-HDDS-4.00.patch, 
> HDDS-580-HDDS-4.01.patch, HDDS-580-HDDS-4.02.patch, HDDS-580-HDDS-4.03.patch
>
>
> We will need to add API that leverage the key generator from HDDS-100 to 
> generate public/private key pair for OM/SCM, this will be called by the 
> scm/om admin cli with "-init" cmd.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-708) Validate BCSID while reading blocks from containers in datanodes

2018-10-22 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659864#comment-16659864
 ] 

Anu Engineer commented on HDDS-708:
---

+1, Thanks for the contribution, please feel free to commit when you get a 
chance.

> Validate BCSID while reading blocks from containers in datanodes
> 
>
> Key: HDDS-708
> URL: https://issues.apache.org/jira/browse/HDDS-708
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-708.000.patch
>
>
> Ozone client, while making a getBlock call during reading data, should read 
> the bcsId from OzoneManager for the block, and the same needs to be validated 
> on the Datanode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDDS-676) Enable Read from open Containers via Standalone Protocol

2018-10-22 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reopened HDDS-676:
---

There is a conflict when porting to the 0.3 branch; can you please take this 
to that branch if needed?

> Enable Read from open Containers via Standalone Protocol
> 
>
> Key: HDDS-676
> URL: https://issues.apache.org/jira/browse/HDDS-676
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-676.001.patch, HDDS-676.002.patch, 
> HDDS-676.003.patch, HDDS-676.004.patch, HDDS-676.005.patch, 
> HDDS-676.006.patch, HDDS-676.007.patch, HDDS-676.008.patch
>
>
> With BlockCommitSequenceId getting updated per block commit on open 
> containers in OM as well as the datanode, Ozone Client reads can go through 
> the Standalone protocol, not necessarily requiring Ratis. The Client should 
> verify the BCSID of the container which has the data block, which should 
> always be greater than or equal to the BCSID of the block to be read, and 
> the existing block BCSID should exactly match that of the block to be read. 
> As a part of this, the Client can try to read from a replica with a supplied 
> BCSID and fail over to the next one in case the block does not exist on one 
> replica.
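
A minimal sketch of the two BCSID checks described above, with illustrative 
names; the actual datanode-side validation lives elsewhere:

{code:java}
// Illustrative only: encodes the two rules from the description above.
final class BcsidCheckSketch {

  /**
   * @param containerBcsid highest BCSID committed on this container replica
   * @param blockBcsid     BCSID stored for the block on this replica
   * @param requestedBcsid BCSID the client obtained for the block
   */
  static boolean canServeRead(long containerBcsid, long blockBcsid,
      long requestedBcsid) {
    // Rule 1: the container must have caught up to the requested commit.
    boolean containerCaughtUp = containerBcsid >= requestedBcsid;
    // Rule 2: the stored block must match the committed version exactly.
    boolean blockMatches = blockBcsid == requestedBcsid;
    return containerCaughtUp && blockMatches;
  }
}
{code}

If the check fails on one replica, the client fails over to the next one, as 
the description notes.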



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-676) Enable Read from open Containers via Standalone Protocol

2018-10-22 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-676:
--
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

[~jnp] Thanks for the reviews; [~shashikant] Thanks for the contribution. I 
have committed this change to trunk and ozone-0.3.

> Enable Read from open Containers via Standalone Protocol
> 
>
> Key: HDDS-676
> URL: https://issues.apache.org/jira/browse/HDDS-676
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-676.001.patch, HDDS-676.002.patch, 
> HDDS-676.003.patch, HDDS-676.004.patch, HDDS-676.005.patch, 
> HDDS-676.006.patch, HDDS-676.007.patch, HDDS-676.008.patch
>
>
> With BlockCommitSequenceId getting updated per block commit on open 
> containers in OM as well as the datanode, Ozone Client reads can go through 
> the Standalone protocol, not necessarily requiring Ratis. The Client should 
> verify the BCSID of the container which has the data block, which should 
> always be greater than or equal to the BCSID of the block to be read, and 
> the existing block BCSID should exactly match that of the block to be read. 
> As a part of this, the Client can try to read from a replica with a supplied 
> BCSID and fail over to the next one in case the block does not exist on one 
> replica.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-10-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659855#comment-16659855
 ] 

Hadoop QA commented on HDFS-14015:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
34m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m  3s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed CTEST tests | test_test_libhdfs_threaded_hdfs_static |
|   | test_test_libhdfs_zerocopy_hdfs_static |
|   | test_libhdfs_threaded_hdfspp_test_shim_static |
|   | test_hdfspp_mini_dfs_smoke_hdfspp_test_shim_static |
|   | libhdfs_mini_stress_valgrind_hdfspp_test_static |
|   | memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static |
|   | test_libhdfs_mini_stress_hdfspp_test_shim_static |
|   | test_hdfs_ext_hdfspp_test_shim_static |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-14015 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945116/HDFS-14015.004.patch |
| Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux b2d772fa9e52 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e3cca12 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| CTEST | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25337/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25337/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25337/testReport/ |
| Max. process+thread count | 305 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25337/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically 

[jira] [Commented] (HDFS-13941) make storageId in BlockPoolTokenSecretManager.checkAccess optional

2018-10-22 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659853#comment-16659853
 ] 

Ajay Kumar commented on HDFS-13941:
---

[~arpitagarwal] thanks for the review and commit.

> make storageId in BlockPoolTokenSecretManager.checkAccess optional
> --
>
> Key: HDFS-13941
> URL: https://issues.apache.org/jira/browse/HDFS-13941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2, 3.3.0
>
> Attachments: HDFS-13941.00.patch, HDFS-13941.01.patch, 
> HDFS-13941.02.patch
>
>
> The change in {{BlockPoolTokenSecretManager.checkAccess}} by 
> [HDFS-9807|https://issues.apache.org/jira/browse/HDFS-9807] breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interface, we 
> can restore the original version of checkAccess as an overload.
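
A sketch of the overload idea, with simplified parameter types rather than the 
real {{BlockPoolTokenSecretManager}} signatures:

{code:java}
// Simplified stand-in types; the real method takes a block token, a block,
// access modes, and (since HDFS-9807) a storageId.
class CheckAccessSketch {

  /** New storageId-aware signature. */
  void checkAccess(String user, String blockPoolId, String storageId) {
    if (storageId != null) {
      // ... verify the token actually covers this storageId ...
    }
    // ... common access checks ...
  }

  /** Restored old signature: delegates with storageId treated as optional. */
  void checkAccess(String user, String blockPoolId) {
    checkAccess(user, blockPoolId, /* storageId= */ null);
  }
}
{code}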



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-712) Use x-amz-storage-class to specify replication type and replication factor

2018-10-22 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-712:

Description: 
 

This has been a comment on the Jira HDDS-693 from [~anu]:

@DefaultValue("STAND_ALONE") @QueryParam("replicationType")

Just an opportunistic comment, not part of this patch: this query param will 
not be sent by S3, hence this will always default to STAND_ALONE. At some 
point we need to move to RATIS; perhaps we have to read this via 
x-amz-storage-class.

*I propose the below solution for this:*

Currently, in code we take the query params replicationType and 
replicationFactor and default them to STAND_ALONE and 1. But these query 
params cannot be passed from the aws cli.

We want to use the x-amz-storage-class header to pass the values. In S3, if 
you don't specify this header it defaults to STANDARD. So in Ozone over S3 we 
also want to default to RATIS with a replication factor of three.

We can use the mapping STANDARD to RATIS and REDUCED_REDUNDANCY to STAND_ALONE.

There are two more values, STANDARD_IA and ONEZONE_IA; how we want to use them 
needs to be considered later. Initially we are considering using STANDARD and 
REDUCED_REDUNDANCY.

  was:
 

This has been a comment on the Jira HDDS-693 from 

@DefaultValue("STAND_ALONE") @QueryParam("replicationType")

Just an opportunistic comment, not part of this patch: this query param will 
not be sent by S3, hence this will always default to STAND_ALONE. At some 
point we need to move to RATIS; perhaps we have to read this via 
x-amz-storage-class.

*I propose the below solution for this:*

Currently, in code we take the query params replicationType and 
replicationFactor and default them to STAND_ALONE and 1. But these query 
params cannot be passed from the aws cli.

We want to use the x-amz-storage-class header to pass the values. In S3, if 
you don't specify this header it defaults to STANDARD. So in Ozone over S3 we 
also want to default to RATIS with a replication factor of three.

We can use the mapping STANDARD to RATIS and REDUCED_REDUNDANCY to STAND_ALONE.

There are two more values, STANDARD_IA and ONEZONE_IA; how we want to use them 
needs to be considered later. Initially we are considering using STANDARD and 
REDUCED_REDUNDANCY.


> Use x-amz-storage-class to specify replication type and replication factor
> --
>
> Key: HDDS-712
> URL: https://issues.apache.org/jira/browse/HDDS-712
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
>  
> This has been a comment on the Jira HDDS-693 from [~anu]:
> @DefaultValue("STAND_ALONE") @QueryParam("replicationType")
> Just an opportunistic comment, not part of this patch: this query param will 
> not be sent by S3, hence this will always default to STAND_ALONE. At some 
> point we need to move to RATIS; perhaps we have to read this via 
> x-amz-storage-class.
> *I propose the below solution for this:*
> Currently, in code we take the query params replicationType and 
> replicationFactor and default them to STAND_ALONE and 1. But these query 
> params cannot be passed from the aws cli.
> We want to use the x-amz-storage-class header to pass the values. In S3, if 
> you don't specify this header it defaults to STANDARD. So in Ozone over S3 
> we also want to default to RATIS with a replication factor of three.
> We can use the mapping STANDARD to RATIS and REDUCED_REDUNDANCY to 
> STAND_ALONE.
>  
> There are two more values, STANDARD_IA and ONEZONE_IA; how we want to use 
> them needs to be considered later. Initially we are considering using 
> STANDARD and REDUCED_REDUNDANCY.
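
A minimal sketch of the proposed mapping; the enum and the defaulting below 
are illustrative assumptions for the sketch, not the committed s3gateway code:

{code:java}
public final class StorageClassMappingSketch {

  // Stand-in for Ozone's replication type; illustrative only.
  enum ReplicationType { RATIS, STAND_ALONE }

  /** Maps an S3 storage class to a replication type, defaulting to RATIS. */
  static ReplicationType fromStorageClass(String storageClass) {
    // S3 treats a missing x-amz-storage-class header as STANDARD.
    if (storageClass == null || "STANDARD".equals(storageClass)) {
      return ReplicationType.RATIS;        // replication factor three
    }
    if ("REDUCED_REDUNDANCY".equals(storageClass)) {
      return ReplicationType.STAND_ALONE;  // replication factor one
    }
    // STANDARD_IA and ONEZONE_IA are deliberately left unmapped for now.
    throw new IllegalArgumentException(
        "Unsupported storage class: " + storageClass);
  }
}
{code}

Mapping in the opposite direction (for GET/HEAD responses) would simply invert 
this table.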



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-676) Enable Read from open Containers via Standalone Protocol

2018-10-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659837#comment-16659837
 ] 

Hadoop QA commented on HDDS-676:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  7m  
0s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  7m  0s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
2s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m  7s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.ozShell.TestOzoneShell |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-676 |
| JIRA Patch URL | 

[jira] [Updated] (HDDS-712) Use x-amz-storage-class to specify replication type and replication factor

2018-10-22 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-712:

Issue Type: Sub-task  (was: Task)
Parent: HDDS-434

> Use x-amz-storage-class to specify replication type and replication factor
> --
>
> Key: HDDS-712
> URL: https://issues.apache.org/jira/browse/HDDS-712
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
>  
> This has been a comment on the Jira HDDS-693 from 
> @DefaultValue("STAND_ALONE") @QueryParam("replicationType")
> Just an opportunistic comment, not part of this patch: this query param will 
> not be sent by S3, hence this will always default to STAND_ALONE. At some 
> point we need to move to RATIS; perhaps we have to read this via 
> x-amz-storage-class.
> *I propose the below solution for this:*
> Currently, in code we take the query params replicationType and 
> replicationFactor and default them to STAND_ALONE and 1. But these query 
> params cannot be passed from the aws cli.
> We want to use the x-amz-storage-class header to pass the values. In S3, if 
> you don't specify this header it defaults to STANDARD. So in Ozone over S3 
> we also want to default to RATIS with a replication factor of three.
> We can use the mapping STANDARD to RATIS and REDUCED_REDUNDANCY to 
> STAND_ALONE.
>  
> There are two more values, STANDARD_IA and ONEZONE_IA; how we want to use 
> them needs to be considered later. Initially we are considering using 
> STANDARD and REDUCED_REDUNDANCY.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-712) Use x-amz-storage-class to specify replication type and replication factor

2018-10-22 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-712:
---

 Summary: Use x-amz-storage-class to specify replication type and 
replication factor
 Key: HDDS-712
 URL: https://issues.apache.org/jira/browse/HDDS-712
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


 

This has been a comment on the Jira HDDS-693 from 

@DefaultValue("STAND_ALONE") @QueryParam("replicationType")

Just an opportunistic comment, not part of this patch: this query param will 
not be sent by S3, hence this will always default to STAND_ALONE. At some 
point we need to move to RATIS; perhaps we have to read this via 
x-amz-storage-class.

*I propose the below solution for this:*

Currently, in code we take the query params replicationType and 
replicationFactor and default them to STAND_ALONE and 1. But these query 
params cannot be passed from the aws cli.

We want to use the x-amz-storage-class header to pass the values. In S3, if 
you don't specify this header it defaults to STANDARD. So in Ozone over S3 we 
also want to default to RATIS with a replication factor of three.

We can use the mapping STANDARD to RATIS and REDUCED_REDUNDANCY to STAND_ALONE.

There are two more values, STANDARD_IA and ONEZONE_IA; how we want to use them 
needs to be considered later. Initially we are considering using STANDARD and 
REDUCED_REDUNDANCY.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13924) Handle BlockMissingException when reading from observer

2018-10-22 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659827#comment-16659827
 ] 

Chao Sun commented on HDFS-13924:
-

[~xkrogen]: could you take another look? The latest Jenkins run looks OK; the 
findbugs warning is not related.

> Handle BlockMissingException when reading from observer
> ---
>
> Key: HDFS-13924
> URL: https://issues.apache.org/jira/browse/HDFS-13924
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13924-HDFS-12943.000.patch, 
> HDFS-13924-HDFS-12943.001.patch, HDFS-13924-HDFS-12943.002.patch, 
> HDFS-13924-HDFS-12943.003.patch
>
>
> Internally we found that reading from the ObserverNode may result in a 
> {{BlockMissingException}}. This may happen when the observer sees a smaller 
> number of DNs than the active (maybe due to a communication issue with those 
> DNs), or (we guess) late block reports from some DNs to the observer. This 
> error happens in 
> [DFSInputStream#chooseDataNode|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L846],
>  when no valid DN can be found for the {{LocatedBlock}} obtained from the NN 
> side.
> One potential solution (although a little hacky) is to ask the 
> {{DFSInputStream}} to retry the active when this happens. The retry logic is 
> already present in the code; we just have to dynamically set a flag to ask 
> the {{ObserverReadProxyProvider}} to try the active in this case.
> cc [~shv], [~xkrogen], [~vagarychen], [~zero45] for discussion.
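
A schematic sketch of that fallback idea, with made-up types standing in for 
the DFSInputStream/ObserverReadProxyProvider machinery:

{code:java}
// Schematic only: in the real design the flag lives in the proxy provider.
final class ObserverThenActiveReadSketch {

  interface ReadTarget {
    byte[] read(long blockId) throws Exception;
  }

  private final ReadTarget observer;
  private final ReadTarget active;

  ObserverThenActiveReadSketch(ReadTarget observer, ReadTarget active) {
    this.observer = observer;
    this.active = active;
  }

  byte[] read(long blockId) throws Exception {
    try {
      return observer.read(blockId);
    } catch (Exception blockMissing) { // stands in for BlockMissingException
      // The observer may lag on block reports; retry against the active NN.
      return active.read(blockId);
    }
  }
}
{code}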



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-422) ContainerStateMachine.readStateMachineData throws OverlappingFileLockException

2018-10-22 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659822#comment-16659822
 ] 

Arpit Agarwal commented on HDDS-422:


Thanks [~ljain], feel free to resolve it. I've moved it out to 0.4.0 for now.

> ContainerStateMachine.readStateMachineData throws OverlappingFileLockException
> --
>
> Key: HDDS-422
> URL: https://issues.apache.org/jira/browse/HDDS-422
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: Arches-Deferral-Candidate
>
>  
> {code:java}
> 2018-09-06 23:11:41,386 ERROR org.apache.ratis.server.impl.LogAppender: 
> GRpcLogAppender(d95c60fd-0e23-4237-8135-e05a326b952d_9858 -> 
> 954e7a3b-b20e-43a5-8f82-4381872aa7bb_9858) hit IOException while loadin
> g raft log
> org.apache.ratis.server.storage.RaftLogIOException: 
> d95c60fd-0e23-4237-8135-e05a326b952d_9858: Failed readStateMachineData for 
> (t:39, i:667)SMLOGENTRY, client-CD988394E416, cid=90
> at 
> org.apache.ratis.server.storage.RaftLog$EntryWithData.getEntry(RaftLog.java:360)
> at 
> org.apache.ratis.server.impl.LogAppender$LogEntryBuffer.getAppendRequest(LogAppender.java:165)
> at 
> org.apache.ratis.server.impl.LogAppender.createRequest(LogAppender.java:214)
> at 
> org.apache.ratis.grpc.server.GRpcLogAppender.appendLog(GRpcLogAppender.java:148)
> at 
> org.apache.ratis.grpc.server.GRpcLogAppender.runAppenderImpl(GRpcLogAppender.java:92)
> at org.apache.ratis.server.impl.LogAppender.runAppender(LogAppender.java:101)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.nio.channels.OverlappingFileLockException
> at sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:255)
> at sun.nio.ch.SharedFileLockTable.add(FileLockTable.java:152)
> at 
> sun.nio.ch.AsynchronousFileChannelImpl.addToFileLockTable(AsynchronousFileChannelImpl.java:178)
> at 
> sun.nio.ch.SimpleAsynchronousFileChannelImpl.implLock(SimpleAsynchronousFileChannelImpl.java:185)
> at 
> sun.nio.ch.AsynchronousFileChannelImpl.lock(AsynchronousFileChannelImpl.java:118)
> at 
> org.apache.hadoop.ozone.container.keyvalue.helpers.ChunkUtils.readData(ChunkUtils.java:176)
> at 
> org.apache.hadoop.ozone.container.keyvalue.impl.ChunkManagerImpl.readChunk(ChunkManagerImpl.java:161)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleReadChunk(KeyValueHandler.java:598)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:201)
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:142)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:217)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.readStateMachineData(ContainerStateMachine.java:289)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$readStateMachineData$3(ContainerStateMachine.java:359)
> at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> ... 1 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-643) Parse Authorization header in a separate filter

2018-10-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659823#comment-16659823
 ] 

Hadoop QA commented on HDDS-643:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-ozone/s3gateway: The patch generated 0 new + 
1 unchanged - 1 fixed = 1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 29s{color} 
| {color:red} s3gateway in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
27s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.s3.endpoint.TestRootList |
|   | hadoop.ozone.s3.TestVirtualHostStyleFilter |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-643 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945114/HDDS-643.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d3a8327b2311 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c58811c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1484/artifact/out/patch-unit-hadoop-ozone_s3gateway.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1484/testReport/ |
| asflicense | 

[jira] [Commented] (HDDS-676) Enable Read from open Containers via Standalone Protocol

2018-10-22 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659818#comment-16659818
 ] 

Hudson commented on HDDS-676:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15289 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15289/])
HDDS-676. Enable Read from open Containers via Standalone Protocol. (aengineer: 
rev e3cca1204874d37b48095c8ff9a44c1f16dc15ed)
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientRatis.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupInputStream.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/Pipeline.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestMiniOzoneCluster.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClient.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestXceiverClientManager.java


> Enable Read from open Containers via Standalone Protocol
> 
>
> Key: HDDS-676
> URL: https://issues.apache.org/jira/browse/HDDS-676
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-676.001.patch, HDDS-676.002.patch, 
> HDDS-676.003.patch, HDDS-676.004.patch, HDDS-676.005.patch, 
> HDDS-676.006.patch, HDDS-676.007.patch, HDDS-676.008.patch
>
>
> With BlockCommitSequenceId getting updated per block commit on open 
> containers in OM as well as the datanode, Ozone Client reads can go through 
> the Standalone protocol, not necessarily requiring Ratis. The Client should 
> verify the BCSID of the container which has the data block, which should 
> always be greater than or equal to the BCSID of the block to be read, and 
> the existing block BCSID should exactly match that of the block to be read. 
> As a part of this, the Client can try to read from a replica with a supplied 
> BCSID and fail over to the next one in case the block does not exist on one 
> replica.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-711) Add robot tests for scm cli commands

2018-10-22 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-711:

Labels: newbie  (was: )

> Add robot tests for scm cli commands
> 
>
> Key: HDDS-711
> URL: https://issues.apache.org/jira/browse/HDDS-711
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
>
> Integration tests for scmcli are removed as part of HDDS-379.
> This Jira is opened to add robot tests for scmcli commands.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13996) Make HttpFS' ACLs RegEx configurable

2018-10-22 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13996:
--
Attachment: HDFS-13996.001.patch
Status: Patch Available  (was: In Progress)

Submitted patch rev 001. Added a test case in TestHttpFSServer,
but NOT in BaseTestHttpFSWith, for these reasons:
1. I must restart the MiniDFSCluster after setting the configuration or it 
won't apply for WebHDFS. That is not easily doable in BaseTestHttpFSWith.
2. This patch does not introduce a new API.

> Make HttpFS' ACLs RegEx configurable
> 
>
> Key: HDFS-13996
> URL: https://issues.apache.org/jira/browse/HDFS-13996
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.7.7, 3.0.3, 2.6.5
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13996.001.patch
>
>
> Previously, in HDFS-11421, WebHDFS' ACLs RegEx was made configurable, but 
> it's not configurable yet in HttpFS. For now, in HttpFS the ACL permission 
> pattern is fixed to DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT.
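
A sketch of the configurable lookup, assuming Hadoop's {{Configuration}}; the 
key name and fallback pattern here are placeholders, not the constants the 
patch introduces:

{code:java}
import java.util.regex.Pattern;
import org.apache.hadoop.conf.Configuration;

public final class AclPatternConfigSketch {

  // Placeholder key and pattern; the patch defines the real ones.
  static final String PATTERN_KEY = "httpfs.acl.permission.pattern";
  static final String PATTERN_DEFAULT =
      "^(default:)?(user|group|mask|other):[\\w.@-]*:([rwx-]{3})?"
      + "(,(default:)?(user|group|mask|other):[\\w.@-]*:([rwx-]{3})?)*$";

  /** Compiles the ACL spec pattern from configuration, with a fallback. */
  static Pattern aclPermissionPattern(Configuration conf) {
    return Pattern.compile(conf.get(PATTERN_KEY, PATTERN_DEFAULT));
  }
}
{code}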



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13996) Make HttpFS' ACLs RegEx configurable

2018-10-22 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659802#comment-16659802
 ] 

Siyao Meng commented on HDFS-13996:
---

Just noticed that there are a bunch of dead imports in TestHttpFSServer. Will 
fix that after the Jenkins results come out.

> Make HttpFS' ACLs RegEx configurable
> 
>
> Key: HDFS-13996
> URL: https://issues.apache.org/jira/browse/HDFS-13996
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.6.5, 3.0.3, 2.7.7
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13996.001.patch
>
>
> Previously, in HDFS-11421, WebHDFS' ACLs RegEx was made configurable, but 
> it's not configurable yet in HttpFS. For now, in HttpFS the ACL permission 
> pattern is fixed to DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-528) add cli command to checkChill mode status and exit chill mode

2018-10-22 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659798#comment-16659798
 ] 

Bharat Viswanadham commented on HDDS-528:
-

Created HDDS-711 to add robot tests for all scm cli commands.

> add cli command to checkChill mode status and exit chill mode
> -
>
> Key: HDDS-528
> URL: https://issues.apache.org/jira/browse/HDDS-528
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: chencan
>Priority: Major
> Attachments: HDDS-528.001.patch
>
>
> [HDDS-370] introduces the two APIs below:
> * isScmInChillMode
> * forceScmExitChillMode
> This jira is to call them via the relevant cli commands.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-10-22 Thread Daniel Templeton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659800#comment-16659800
 ] 

Daniel Templeton commented on HDFS-14015:
-

Isn't that the build for the 1st patch?  I'm surprised it compiled.  It didn't 
for me locally.

> Improve error handling in hdfsThreadDestructor in native thread local storage
> -
>
> Key: HDFS-14015
> URL: https://issues.apache.org/jira/browse/HDFS-14015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Major
> Attachments: HDFS-14015.001.patch, HDFS-14015.002.patch, 
> HDFS-14015.003.patch, HDFS-14015.004.patch
>
>
> In the hdfsThreadDestructor() function, we ignore the return value from the 
> DetachCurrentThread() call.  We are seeing cases where a native thread dies 
> while holding a JVM monitor, and it doesn't release the monitor.  We're 
> hoping that logging this error instead of ignoring it will shed some light on 
> the issue.  In any case, it's good programming practice.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-711) Add robot tests for scm cli commands

2018-10-22 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-711:
---

 Summary: Add robot tests for scm cli commands
 Key: HDDS-711
 URL: https://issues.apache.org/jira/browse/HDDS-711
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham


Integration tests for scmcli are removed as part of HDDS-379.

This Jira is opened to add robot tests for scmcli commands.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14017) ObserverReadProxyProviderWithIPFailover does not quite work

2018-10-22 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14017:
--
Attachment: (was: HDFS-14017-HDFS-12943.001.patch)

> ObserverReadProxyProviderWithIPFailover does not quite work
> ---
>
> Key: HDFS-14017
> URL: https://issues.apache.org/jira/browse/HDFS-14017
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
>
> Currently {{ObserverReadProxyProviderWithIPFailover}} extends 
> {{ObserverReadProxyProvider}}, and the only difference is changing the proxy 
> factory to use {{IPFailoverProxyProvider}}. However this is not enough 
> because when calling constructor of {{ObserverReadProxyProvider}} in 
> super(...), the follow line:
> {code}
> nameNodeProxies = getProxyAddresses(uri,
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY);
> {code}
> will try to resolve all the configured NN addresses to do configured 
> failover. But in the case of IPFailover, this does not really apply. 
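
A schematic contrast of the two failover models; purely illustrative, not the 
actual proxy-provider code:

{code:java}
import java.net.InetSocketAddress;
import java.net.URI;
import java.util.Arrays;
import java.util.List;

final class FailoverAddressSketch {

  // Configured failover: every NN address from the config is resolved
  // up front, and the client switches between them itself.
  static List<InetSocketAddress> configuredFailover() {
    return Arrays.asList(
        new InetSocketAddress("nn1.example.com", 8020),
        new InetSocketAddress("nn2.example.com", 8020));
  }

  // IP failover: a single virtual address always points at the active NN,
  // so "resolve all configured NN addresses" has nothing to resolve.
  static InetSocketAddress ipFailover(URI logicalUri) {
    return new InetSocketAddress(logicalUri.getHost(), logicalUri.getPort());
  }
}
{code}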



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-14017) ObserverReadProxyProviderWithIPFailover does not quite work

2018-10-22 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14017:
--
Comment: was deleted

(was: Post initial v001 patch for early review. Still missing a unit test. The 
basic idea is not to extend ObserverReadProxyProvider, because it extends 
ConfiguredProxyProvider, which is by itself different from the IP failover we 
are trying to support here.  The patch here does not have the ProxyProvider 
doing any actual failover. Additionally, I'm thinking of making a clearer 
separation by also renaming ObserverReadProxyProvider to 
ObserverReadConfiguredProxyProvider.)

> ObserverReadProxyProviderWithIPFailover does not quite work
> ---
>
> Key: HDFS-14017
> URL: https://issues.apache.org/jira/browse/HDFS-14017
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
>
> Currently {{ObserverReadProxyProviderWithIPFailover}} extends 
> {{ObserverReadProxyProvider}}, and the only difference is changing the proxy 
> factory to use {{IPFailoverProxyProvider}}. However, this is not enough 
> because, when calling the constructor of {{ObserverReadProxyProvider}} via 
> super(...), the following line:
> {code}
> nameNodeProxies = getProxyAddresses(uri,
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY);
> {code}
> will try to resolve all the configured NN addresses to do configured 
> failover. But in the case of IPFailover, this does not really apply. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-528) add cli command to checkChill mode status and exit chill mode

2018-10-22 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659791#comment-16659791
 ] 

Bharat Viswanadham commented on HDDS-528:
-

Overall, the patch LGTM.

I think the integration tests were removed as part of HDDS-379, in favor of 
robot tests for these commands.

I will open a new Jira to add robot tests for the scm cli commands.

Few comments:
 # Can we rename the classes to add ChillMode before their names?
 # Right now, with the patch provided, the command to check whether SCM is in 
chill mode is {{ozone scmcli check}}; can we change it to {{ozone scmcli 
chillmode check}} and {{ozone scmcli chillmode exit}}? So chillmode will be a 
subcommand of scmcli, and check/exit will be subcommands of chillmode (see the 
sketch after this list).
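
For illustration, a minimal picocli sketch of that subcommand layout (picocli 
is what the Ozone CLI uses); the class names and wiring are assumptions for 
the sketch, not the patch itself:

{code:java}
import java.util.concurrent.Callable;
import picocli.CommandLine;
import picocli.CommandLine.Command;

@Command(name = "chillmode",
    subcommands = {ChillModeCheckSketch.class, ChillModeExitSketch.class},
    description = "Chill mode related operations")
class ChillModeSketch implements Callable<Void> {
  @Override
  public Void call() {
    CommandLine.usage(this, System.out); // no subcommand given: print help
    return null;
  }
}

@Command(name = "check", description = "Check if SCM is in chill mode")
class ChillModeCheckSketch implements Callable<Void> {
  @Override
  public Void call() {
    // would call the isScmInChillMode API from HDDS-370 here
    System.out.println("SCM in chill mode: <status>");
    return null;
  }
}

@Command(name = "exit", description = "Force SCM to exit chill mode")
class ChillModeExitSketch implements Callable<Void> {
  @Override
  public Void call() {
    // would call the forceScmExitChillMode API from HDDS-370 here
    System.out.println("Requested chill mode exit.");
    return null;
  }
}
{code}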

 

> add cli command to checkChill mode status and exit chill mode
> -
>
> Key: HDDS-528
> URL: https://issues.apache.org/jira/browse/HDDS-528
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: chencan
>Priority: Major
> Attachments: HDDS-528.001.patch
>
>
> [HDDS-370] introduces the two APIs below:
> * isScmInChillMode
> * forceScmExitChillMode
> This jira is to call them via the relevant cli commands.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-10-22 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14015:
---
Comment: was deleted

(was: +1 pending Jenkins)

> Improve error handling in hdfsThreadDestructor in native thread local storage
> -
>
> Key: HDFS-14015
> URL: https://issues.apache.org/jira/browse/HDFS-14015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Major
> Attachments: HDFS-14015.001.patch, HDFS-14015.002.patch, 
> HDFS-14015.003.patch, HDFS-14015.004.patch
>
>
> In the hdfsThreadDestructor() function, we ignore the return value from the 
> DetachCurrentThread() call.  We are seeing cases where a native thread dies 
> while holding a JVM monitor, and it doesn't release the monitor.  We're 
> hoping that logging this error instead of ignoring it will shed some light on 
> the issue.  In any case, it's good programming practice.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-10-22 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659790#comment-16659790
 ] 

Wei-Chiu Chuang commented on HDFS-14015:


Hmm I've not dealt with the native code test errors. Looking at the test output 
I don't have a clue: 
https://builds.apache.org/job/PreCommit-HDFS-Build/25335/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt

> Improve error handling in hdfsThreadDestructor in native thread local storage
> -
>
> Key: HDFS-14015
> URL: https://issues.apache.org/jira/browse/HDFS-14015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Major
> Attachments: HDFS-14015.001.patch, HDFS-14015.002.patch, 
> HDFS-14015.003.patch, HDFS-14015.004.patch
>
>
> In the hdfsThreadDestructor() function, we ignore the return value from the 
> DetachCurrentThread() call.  We are seeing cases where a native thread dies 
> while holding a JVM monitor, and it doesn't release the monitor.  We're 
> hoping that logging this error instead of ignoring it will shed some light on 
> the issue.  In any case, it's good programming practice.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-10-22 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659787#comment-16659787
 ] 

Wei-Chiu Chuang commented on HDFS-14015:


+1 pending Jenkins

> Improve error handling in hdfsThreadDestructor in native thread local storage
> -
>
> Key: HDFS-14015
> URL: https://issues.apache.org/jira/browse/HDFS-14015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Major
> Attachments: HDFS-14015.001.patch, HDFS-14015.002.patch, 
> HDFS-14015.003.patch, HDFS-14015.004.patch
>
>
> In the hdfsThreadDestructor() function, we ignore the return value from the 
> DetachCurrentThread() call.  We are seeing cases where a native thread dies 
> while holding a JVM monitor, and it doesn't release the monitor.  We're 
> hoping that logging this error instead of ignoring it will shed some light on 
> the issue.  In any case, it's good programming practice.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13996) Make HttpFS' ACLs RegEx configurable

2018-10-22 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13996:
--
Component/s: (was: hdfs)
 httpfs

> Make HttpFS' ACLs RegEx configurable
> 
>
> Key: HDFS-13996
> URL: https://issues.apache.org/jira/browse/HDFS-13996
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.6.5, 3.0.3, 2.7.7
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Previously in HDFS-11421, WebHDFS' ACLs RegEx was made configurable, but it is 
> not yet configurable in HttpFS. For now in HttpFS, the ACL permission pattern 
> is fixed to DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT.
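> 
> For illustration, the change could take roughly this shape (a sketch, not the 
> HttpFS code: the config key is a hypothetical name modeled on HDFS-11421's 
> WebHDFS key, and the constant named above is assumed to be in scope):
> {code:java}
> // Read the pattern from configuration instead of hard-coding the default.
> private Pattern aclPermissionPattern;
>
> public void init(Configuration conf) {
>   aclPermissionPattern = Pattern.compile(conf.get(
>       "httpfs.acl.provider.permission.pattern",   // hypothetical key
>       DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT));
> }
> {code}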



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13941) make storageId in BlockPoolTokenSecretManager.checkAccess optional

2018-10-22 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659785#comment-16659785
 ] 

Hudson commented on HDFS-13941:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15288 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15288/])
HDFS-13941. make storageId in BlockPoolTokenSecretManager.checkAccess (arp: rev 
c58811c77d5c0442c404a5b2876e09eaf6d16073)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/token/block/TestBlockToken.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockPoolTokenSecretManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenSecretManager.java


> make storageId in BlockPoolTokenSecretManager.checkAccess optional
> --
>
> Key: HDFS-13941
> URL: https://issues.apache.org/jira/browse/HDFS-13941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2, 3.3.0
>
> Attachments: HDFS-13941.00.patch, HDFS-13941.01.patch, 
> HDFS-13941.02.patch
>
>
> Change in {{BlockPoolTokenSecretManager.checkAccess}} by 
> [HDFS-9807|https://issues.apache.org/jira/browse/HDFS-9807] breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interface, we 
> can restore the original version of checkAccess as an overload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-710) Make Block Commit Sequence (BCS) opaque to clients

2018-10-22 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-710:
---
Priority: Critical  (was: Major)

> Make Block Commit Sequence (BCS) opaque to clients
> --
>
> Key: HDDS-710
> URL: https://issues.apache.org/jira/browse/HDDS-710
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Critical
>
> An immutable block is identified by the following:
> - Container ID
> - Local Block ID
> - BCS (Block Commit Sequence ID)
> All of these values are currently exposed to the client. Instead we can have 
> a composite block ID that hides these details from the client. A first 
> thought is a naive implementation that generates a 192-bit (3x64-bit) block 
> ID.
> Proposed by [~anu] in HDDS-676.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-710) Make Block Commit Sequence (BCS) opaque to clients

2018-10-22 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659781#comment-16659781
 ] 

Arpit Agarwal commented on HDDS-710:


Another idea proposed by [~anu] was to simplify the block identifier further by 
completely eliminating the local Block ID.

This buys significant design simplicity and feels like a very clean approach, 
making it explicit that blocks cannot be modified, only replaced. However, it 
potentially reduces diagnosability: e.g., we cannot easily relate earlier 
(overwritten) versions of the block.
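
For illustration, a naive sketch of the opaque composite ID from the 
description below (the class name and byte layout are assumptions, not a 
committed design):
{code:java}
import java.nio.ByteBuffer;

// Hypothetical opaque handle packing the three 64-bit values named in the
// description; clients carry the encoded bytes without interpreting them.
public final class CompositeBlockId {
  private final long containerId;
  private final long localId;  // could be dropped entirely, per the comment above
  private final long bcsId;

  public CompositeBlockId(long containerId, long localId, long bcsId) {
    this.containerId = containerId;
    this.localId = localId;
    this.bcsId = bcsId;
  }

  /** Encode as a single opaque 192-bit (3 x 64-bit) token. */
  public byte[] encode() {
    return ByteBuffer.allocate(3 * Long.BYTES)
        .putLong(containerId).putLong(localId).putLong(bcsId).array();
  }

  public static CompositeBlockId decode(byte[] token) {
    ByteBuffer buf = ByteBuffer.wrap(token);
    return new CompositeBlockId(buf.getLong(), buf.getLong(), buf.getLong());
  }
}
{code}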

> Make Block Commit Sequence (BCS) opaque to clients
> --
>
> Key: HDDS-710
> URL: https://issues.apache.org/jira/browse/HDDS-710
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>
> An immutable block is identified by the following:
> - Container ID
> - Local Block ID
> - BCS (Block Commit Sequence ID)
> All of these values are currently exposed to the client. Instead we can have 
> a composite block ID that hides these details from the client. A first 
> thought is a naive implementation that generates a 192-bit (3x64-bit) block 
> ID.
> Proposed by [~anu] in HDDS-676.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-710) Make Block Commit Sequence (BCS) opaque to clients

2018-10-22 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-710:
--

 Summary: Make Block Commit Sequence (BCS) opaque to clients
 Key: HDDS-710
 URL: https://issues.apache.org/jira/browse/HDDS-710
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Client
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


An immutable block is identified by the following:
- Container ID
- Local Block ID
- BCS (Block Commit Sequence ID)

All of these values are currently exposed to the client. Instead we can have a 
composite block ID that hides these details from the client. A first thought is 
a naive implementation that generates a 192-bit (3x64-bit) block ID.

Proposed by [~anu] in HDDS-676.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-676) Enable Read from open Containers via Standalone Protocol

2018-10-22 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659767#comment-16659767
 ] 

Anu Engineer commented on HDDS-676:
---

+1, I will commit this shortly.

> Enable Read from open Containers via Standalone Protocol
> 
>
> Key: HDDS-676
> URL: https://issues.apache.org/jira/browse/HDDS-676
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-676.001.patch, HDDS-676.002.patch, 
> HDDS-676.003.patch, HDDS-676.004.patch, HDDS-676.005.patch, 
> HDDS-676.006.patch, HDDS-676.007.patch, HDDS-676.008.patch
>
>
> With BlockCommitSequenceId getting updated per block commit on open 
> containers in OM as well as the datanode, Ozone Client reads can go through 
> the Standalone protocol, not necessarily requiring Ratis. The client should 
> verify the BCSID of the container which has the data block, which should 
> always be greater than or equal to the BCSID of the block to be read, and the 
> existing block BCSID should exactly match that of the block to be read. As a 
> part of this, the client can try to read from a replica with a supplied BCSID 
> and fail over to the next one in case the block does not exist on one replica.
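> 
> As a rough sketch, the datanode-side check described here could look like the 
> following (method name and messages are illustrative, not the patch):
> {code:java}
> // containerBcsId: highest BCSID committed on this container replica.
> // blockBcsId: BCSID stored with the block entry in the container db.
> // requestedBcsId: BCSID the client obtained from OM for this block.
> void validateBcsId(long containerBcsId, long blockBcsId, long requestedBcsId)
>     throws java.io.IOException {
>   if (containerBcsId < requestedBcsId) {
>     // This replica has not committed the block yet; the caller fails over.
>     throw new java.io.IOException("Container BCSID " + containerBcsId
>         + " lags requested BCSID " + requestedBcsId);
>   }
>   if (blockBcsId != requestedBcsId) {
>     throw new java.io.IOException("Block BCSID " + blockBcsId
>         + " does not match requested BCSID " + requestedBcsId);
>   }
> }
> {code}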



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-10-22 Thread Daniel Templeton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659763#comment-16659763
 ] 

Daniel Templeton commented on HDFS-14015:
-

Darn it.  Caught a mistake in my own review.  Updated patch 4 posted.

> Improve error handling in hdfsThreadDestructor in native thread local storage
> -
>
> Key: HDFS-14015
> URL: https://issues.apache.org/jira/browse/HDFS-14015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Major
> Attachments: HDFS-14015.001.patch, HDFS-14015.002.patch, 
> HDFS-14015.003.patch, HDFS-14015.004.patch
>
>
> In the hdfsThreadDestructor() function, we ignore the return value from the 
> DetachCurrentThread() call.  We are seeing cases where a native thread dies 
> while holding a JVM monitor, and it doesn't release the monitor.  We're 
> hoping that logging this error instead of ignoring it will shed some light on 
> the issue.  In any case, it's good programming practice.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-10-22 Thread Daniel Templeton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-14015:

Attachment: HDFS-14015.004.patch

> Improve error handling in hdfsThreadDestructor in native thread local storage
> -
>
> Key: HDFS-14015
> URL: https://issues.apache.org/jira/browse/HDFS-14015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Major
> Attachments: HDFS-14015.001.patch, HDFS-14015.002.patch, 
> HDFS-14015.003.patch, HDFS-14015.004.patch
>
>
> In the hdfsThreadDestructor() function, we ignore the return value from the 
> DetachCurrentThread() call.  We are seeing cases where a native thread dies 
> while holding a JVM monitor, and it doesn't release the monitor.  We're 
> hoping that logging this error instead of ignoring it will shed some light on 
> the issue.  In any case, it's good programming practice.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-10-22 Thread Daniel Templeton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659760#comment-16659760
 ] 

Daniel Templeton commented on HDFS-14015:
-

Good catch, [~jojochuang]!  I also lack a Windows box on which to test, but I 
have to assume the same laws of physics apply there as well.  Posted a new 
patch that applies the same changes to the Windows side.

> Improve error handling in hdfsThreadDestructor in native thread local storage
> -
>
> Key: HDFS-14015
> URL: https://issues.apache.org/jira/browse/HDFS-14015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Major
> Attachments: HDFS-14015.001.patch, HDFS-14015.002.patch, 
> HDFS-14015.003.patch
>
>
> In the hdfsThreadDestructor() function, we ignore the return value from the 
> DetachCurrentThread() call.  We are seeing cases where a native thread dies 
> while holding a JVM monitor, and it doesn't release the monitor.  We're 
> hoping that logging this error instead of ignoring it will shed some light on 
> the issue.  In any case, it's good programming practice.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-10-22 Thread Daniel Templeton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-14015:

Attachment: HDFS-14015.003.patch

> Improve error handling in hdfsThreadDestructor in native thread local storage
> -
>
> Key: HDFS-14015
> URL: https://issues.apache.org/jira/browse/HDFS-14015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Major
> Attachments: HDFS-14015.001.patch, HDFS-14015.002.patch, 
> HDFS-14015.003.patch
>
>
> In the hdfsThreadDestructor() function, we ignore the return value from the 
> DetachCurrentThread() call.  We are seeing cases where a native thread dies 
> while holding a JVM monitor, and it doesn't release the monitor.  We're 
> hoping that logging this error instead of ignoring it will shed some light on 
> the issue.  In any case, it's good programming practice.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-643) Parse Authorization header in a separate filter

2018-10-22 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-643:

Target Version/s: 0.3.0

> Parse Authorization header in a separate filter
> ---
>
> Key: HDDS-643
> URL: https://issues.apache.org/jira/browse/HDDS-643
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-643.00.patch
>
>
> This Jira is created from a HDDS-522 comment from [~elek]:
>  # I think the authorization headers could be parsed in a separate filter, 
> similar to the request ids. But it could be implemented later. This is more 
> like a prototype.
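> 
> A minimal sketch of that idea, assuming a JAX-RS style filter like the 
> request-id one (the class, property key, and parse() below are illustrative, 
> not the attached patch):
> {code:java}
> import java.io.IOException;
> import javax.ws.rs.container.ContainerRequestContext;
> import javax.ws.rs.container.ContainerRequestFilter;
> import javax.ws.rs.core.HttpHeaders;
> import javax.ws.rs.ext.Provider;
>
> @Provider
> public class AuthorizationHeaderFilter implements ContainerRequestFilter {
>   @Override
>   public void filter(ContainerRequestContext context) throws IOException {
>     String auth = context.getHeaderString(HttpHeaders.AUTHORIZATION);
>     // Parse once here and stash the result for downstream endpoints,
>     // instead of re-parsing it in every handler.
>     context.setProperty("ozone.parsed.authorization", parse(auth));
>   }
>
>   private String parse(String auth) {
>     return auth == null ? "" : auth.trim();  // placeholder parsing
>   }
> }
> {code}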



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-643) Parse Authorization header in a separate filter

2018-10-22 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-643:

Attachment: HDDS-643.00.patch

> Parse Authorization header in a separate filter
> ---
>
> Key: HDDS-643
> URL: https://issues.apache.org/jira/browse/HDDS-643
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-643.00.patch
>
>
> This Jira is created from a HDDS-522 comment from [~elek]:
>  # I think the authorization headers could be parsed in a separate filter, 
> similar to the request ids. But it could be implemented later. This is more 
> like a prototype.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-643) Parse Authorization header in a separate filter

2018-10-22 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-643:

Status: Patch Available  (was: In Progress)

> Parse Authorization header in a separate filter
> ---
>
> Key: HDDS-643
> URL: https://issues.apache.org/jira/browse/HDDS-643
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-643.00.patch
>
>
> This Jira is created from a HDDS-522 comment from [~elek]:
>  # I think the authorization headers could be parsed in a separate filter, 
> similar to the request ids. But it could be implemented later. This is more 
> like a prototype.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13941) make storageId in BlockPoolTokenSecretManager.checkAccess optional

2018-10-22 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-13941:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   3.1.2
   3.0.4
   3.2.0
   Status: Resolved  (was: Patch Available)

I've committed this.

> make storageId in BlockPoolTokenSecretManager.checkAccess optional
> --
>
> Key: HDFS-13941
> URL: https://issues.apache.org/jira/browse/HDFS-13941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2, 3.3.0
>
> Attachments: HDFS-13941.00.patch, HDFS-13941.01.patch, 
> HDFS-13941.02.patch
>
>
> Change in {{BlockPoolTokenSecretManager.checkAccess}} by 
> [HDFS-9807|https://issues.apache.org/jira/browse/HDFS-9807] breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interface, we 
> can restore the original version of checkAccess as an overload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-695) Introduce a new SCM Command to teardown a Pipeline

2018-10-22 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659751#comment-16659751
 ] 

Anu Engineer commented on HDDS-695:
---

[~nandakumar131] There is a conflict on trunk, please rebase and feel free to 
commit. Thanks.

> Introduce a new SCM Command to teardown a Pipeline
> --
>
> Key: HDDS-695
> URL: https://issues.apache.org/jira/browse/HDDS-695
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Blocker
> Attachments: HDDS-695-ozone-0.3.000.patch, 
> HDDS-695-ozone-0.3.001.patch, HDDS-695-ozone-0.3.002.patch
>
>
> We need to have a tear-down pipeline command in SCM so that an administrator 
> can close/destroy a pipeline in the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-530) Fix Ozone fs valid uri pattern

2018-10-22 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659743#comment-16659743
 ] 

Anu Engineer edited comment on HDDS-530 at 10/22/18 10:01 PM:
--

yes, until that regex is fixed, that is. I will fix it as soon as 0.3 is 
released. Since we did all the testing with this format, I don't want to rock 
the boat this close to the release.

 


was (Author: anu):
yes, until that regex is fixed, that is.

 

> Fix Ozone fs valid uri pattern
> --
>
> Key: HDDS-530
> URL: https://issues.apache.org/jira/browse/HDDS-530
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Hanisha Koneru
>Assignee: Anu Engineer
>Priority: Blocker
>  Labels: alpha2
>
> The valid uri pattern for an Ozone fs uri should be 
> {{o3fs://<host>:<port>/<volume>/<bucket>}}.
> But OzoneFileSystem accepts uri's of the form {{o3fs://<bucket>.<volume>}} 
> only.
> {code:java}
> // In OzoneFileSystem.java
> private static final Pattern URL_SCHEMA_PATTERN =
> Pattern.compile("(.+)\\.([^\\.]+)");
> if (!matcher.matches()) {
>   throw new IllegalArgumentException("Ozone file system url should be "
>   + "in the form o3fs://bucket.volume");
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-695) Introduce a new SCM Command to teardown a Pipeline

2018-10-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659742#comment-16659742
 ] 

Hadoop QA commented on HDDS-695:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} ozone-0.3 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
34s{color} | {color:green} ozone-0.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} ozone-0.3 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} ozone-0.3 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} ozone-0.3 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-hdds/server-scm in ozone-0.3 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} ozone-0.3 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
59s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
33s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-695 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945089/HDDS-695-ozone-0.3.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux c2acd43a73ea 4.4.0-133-generic #159-Ubuntu SMP 

[jira] [Commented] (HDDS-530) Fix Ozone fs valid uri pattern

2018-10-22 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659743#comment-16659743
 ] 

Anu Engineer commented on HDDS-530:
---

yes, until that regex is fixed, that is.

 

> Fix Ozone fs valid uri pattern
> --
>
> Key: HDDS-530
> URL: https://issues.apache.org/jira/browse/HDDS-530
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Hanisha Koneru
>Assignee: Anu Engineer
>Priority: Blocker
>  Labels: alpha2
>
> The valid uri pattern for an Ozone fs uri should be 
> {{o3fs://<host>:<port>/<volume>/<bucket>}}.
> But OzoneFileSystem accepts uri's of the form {{o3fs://<bucket>.<volume>}} 
> only.
> {code:java}
> // In OzoneFileSystem.java
> private static final Pattern URL_SCHEMA_PATTERN =
> Pattern.compile("(.+)\\.([^\\.]+)");
> if (!matcher.matches()) {
>   throw new IllegalArgumentException("Ozone file system url should be "
>   + "in the form o3fs://bucket.volume");
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14017) ObserverReadProxyProviderWithIPFailover does not quite work

2018-10-22 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659718#comment-16659718
 ] 

Chen Liang commented on HDFS-14017:
---

Posted initial v001 patch for early review. Still missing a unit test. The 
basic idea is not to extend ObserverReadProxyProvider, because it extends 
ConfiguredFailoverProxyProvider, which is itself different from the IP failover 
we are trying to support here. The patch here does not have the ProxyProvider 
doing any actual failover. Additionally, I'm thinking of making a clearer 
separation by also renaming ObserverReadProxyProvider to 
ObserverReadConfiguredProxyProvider.
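
As a conceptual sketch, the IP failover case has only a single virtual address 
to wrap, so there is nothing to enumerate (the class and the VIP hostname below 
are made up for illustration, not taken from the patch):
{code:java}
import java.net.InetSocketAddress;
import java.net.URI;
import org.apache.hadoop.net.NetUtils;

// Hypothetical setup step for an IP-failover-aware provider: treat the URI
// authority as one virtual address instead of resolving every configured NN
// address the way getProxyAddresses(...) does.
public class IpFailoverAddressSketch {
  static InetSocketAddress virtualAddressOf(URI uri) {
    return NetUtils.createSocketAddr(uri.getAuthority(), 8020);
  }

  public static void main(String[] args) {
    System.out.println(
        virtualAddressOf(URI.create("hdfs://nn-vip.example.com")));
  }
}
{code}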

> ObserverReadProxyProviderWithIPFailover does not quite work
> ---
>
> Key: HDFS-14017
> URL: https://issues.apache.org/jira/browse/HDFS-14017
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14017-HDFS-12943.001.patch
>
>
> Currently {{ObserverReadProxyProviderWithIPFailover}} extends 
> {{ObserverReadProxyProvider}}, and the only difference is changing the proxy 
> factory to use {{IPFailoverProxyProvider}}. However this is not enough 
> because when calling the constructor of {{ObserverReadProxyProvider}} in 
> super(...), the following line:
> {code}
> nameNodeProxies = getProxyAddresses(uri,
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY);
> {code}
> will try to resolve all the configured NN addresses to do configured 
> failover. But in the case of IPFailover, this does not really apply. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-530) Fix Ozone fs valid uri pattern

2018-10-22 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659717#comment-16659717
 ] 

Arpit Agarwal commented on HDDS-530:


I got that. :) It was a related question. So it sounds like there is no way to 
specify a hostname with an o3fs url today.
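
For illustration only, a widened pattern along these lines could admit an 
optional host and port (hypothetical; not the committed fix):
{code:java}
// Accepts o3fs://bucket.volume as well as o3fs://bucket.volume.host:port;
// groups: 1 = bucket, 2 = volume, 3 = optional host, 4 = optional port.
private static final Pattern URL_SCHEMA_PATTERN =
    Pattern.compile("([^.]+)\\.([^.:]+)(?:\\.([^:]+))?(?::([0-9]+))?");
{code}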

> Fix Ozone fs valid uri pattern
> --
>
> Key: HDDS-530
> URL: https://issues.apache.org/jira/browse/HDDS-530
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Hanisha Koneru
>Assignee: Anu Engineer
>Priority: Blocker
>  Labels: alpha2
>
> The valid uri pattern for an Ozone fs uri should be 
> {{o3fs://<host>:<port>/<volume>/<bucket>}}.
> But OzoneFileSystem accepts uri's of the form {{o3fs://<bucket>.<volume>}} 
> only.
> {code:java}
> // In OzoneFileSystem.java
> private static final Pattern URL_SCHEMA_PATTERN =
> Pattern.compile("(.+)\\.([^\\.]+)");
> if (!matcher.matches()) {
>   throw new IllegalArgumentException("Ozone file system url should be "
>   + "in the form o3fs://bucket.volume");
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13941) make storageId in BlockPoolTokenSecretManager.checkAccess optional

2018-10-22 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659716#comment-16659716
 ] 

Arpit Agarwal commented on HDFS-13941:
--

Thanks [~ajayydv]. +1

> make storageId in BlockPoolTokenSecretManager.checkAccess optional
> --
>
> Key: HDFS-13941
> URL: https://issues.apache.org/jira/browse/HDFS-13941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13941.00.patch, HDFS-13941.01.patch, 
> HDFS-13941.02.patch
>
>
> Change in {{BlockPoolTokenSecretManager.checkAccess}} by 
> [HDFS-9807|https://issues.apache.org/jira/browse/HDFS-9807] breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interface, we 
> can restore the original version of checkAccess as an overload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13941) make storageId in BlockPoolTokenSecretManager.checkAccess optional

2018-10-22 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-13941:
-
Description: 
Change in {{BlockPoolTokenSecretManager.checkAccess}} by 
[HDFS-9807|https://issues.apache.org/jira/browse/HDFS-9807] breaks backward 
compatibility for applications using the private API (we've run into such apps).

Although there is no compatibility guarantee for the private interface, we can 
restore the original version of checkAccess as an overload.

  was:
Change in {{BlockPoolTokenSecretManager.checkAccess}} by 
[HDFS-9807|https://issues.apache.org/jira/browse/HDFS-9807] breaks backward 
compatibility for applications using the private API (we've run into such apps).

Although there is no compatibility guarantee for the private interests, we can 
restore the original version of checkAccess as an overload.


> make storageId in BlockPoolTokenSecretManager.checkAccess optional
> --
>
> Key: HDFS-13941
> URL: https://issues.apache.org/jira/browse/HDFS-13941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13941.00.patch, HDFS-13941.01.patch, 
> HDFS-13941.02.patch
>
>
> Change in {{BlockPoolTokenSecretManager.checkAccess}} by 
> [HDFS-9807|https://issues.apache.org/jira/browse/HDFS-9807] breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interface, we 
> can restore the original version of checkAccess as an overload.
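> 
> For illustration, the overload could take roughly this shape (signature 
> simplified from BlockTokenSecretManager, with the usual HDFS token/block 
> imports assumed; this is a sketch, not the committed patch):
> {code:java}
> // Old-style entry point kept for callers compiled against the
> // pre-HDFS-9807 API; delegates to the newer variant with empty storage info.
> public void checkAccess(Token<BlockTokenIdentifier> token, String userId,
>     ExtendedBlock block, BlockTokenIdentifier.AccessMode mode)
>     throws InvalidToken {
>   checkAccess(token, userId, block, mode,
>       new StorageType[]{}, new String[]{});
> }
> {code}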



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-10-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659713#comment-16659713
 ] 

Hadoop QA commented on HDFS-14015:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
29m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 29s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed CTEST tests | test_test_libhdfs_threaded_hdfs_static |
|   | test_test_libhdfs_zerocopy_hdfs_static |
|   | test_libhdfs_threaded_hdfspp_test_shim_static |
|   | test_hdfspp_mini_dfs_smoke_hdfspp_test_shim_static |
|   | libhdfs_mini_stress_valgrind_hdfspp_test_static |
|   | memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static |
|   | test_libhdfs_mini_stress_hdfspp_test_shim_static |
|   | test_hdfs_ext_hdfspp_test_shim_static |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-14015 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945101/HDFS-14015.002.patch |
| Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 7f7faca1ca2e 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 292c9e0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| CTEST | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25335/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25335/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25335/testReport/ |
| Max. process+thread count | 412 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25335/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Updated] (HDFS-14017) ObserverReadProxyProviderWithIPFailover does not quite work

2018-10-22 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14017:
--
Attachment: HDFS-14017-HDFS-12943.001.patch

> ObserverReadProxyProviderWithIPFailover does not quite work
> ---
>
> Key: HDFS-14017
> URL: https://issues.apache.org/jira/browse/HDFS-14017
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14017-HDFS-12943.001.patch
>
>
> Currently {{ObserverReadProxyProviderWithIPFailover}} extends 
> {{ObserverReadProxyProvider}}, and the only difference is changing the proxy 
> factory to use {{IPFailoverProxyProvider}}. However this is not enough 
> because when calling the constructor of {{ObserverReadProxyProvider}} in 
> super(...), the following line:
> {code}
> nameNodeProxies = getProxyAddresses(uri,
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY);
> {code}
> will try to resolve all the configured NN addresses to do configured 
> failover. But in the case of IPFailover, this does not really apply. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13941) make storageId in BlockPoolTokenSecretManager.checkAccess optional

2018-10-22 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-13941:
-
Description: 
Change in {{BlockPoolTokenSecretManager.checkAccess}} by 
[HDFS-9807|https://issues.apache.org/jira/browse/HDFS-9807] breaks backward 
compatibility for applications using the private API (we've run into such apps).

Although there is no compatibility guarantee for the private interests, we can 
restore the original version of checkAccess as an overload.

  was: change in {{BlockPoolTokenSecretManager.checkAccess}} by 
[HDFS-9807|https://issues.apache.org/jira/browse/HDFS-9807] breaks backward 
compatibility for Hadoop 2 clients. Since SecretManager is marked for public 
audience, we should add an overloaded function to allow Hadoop 2 clients to work 
with it.


> make storageId in BlockPoolTokenSecretManager.checkAccess optional
> --
>
> Key: HDFS-13941
> URL: https://issues.apache.org/jira/browse/HDFS-13941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13941.00.patch, HDFS-13941.01.patch, 
> HDFS-13941.02.patch
>
>
> Change in {{BlockPoolTokenSecretManager.checkAccess}} by 
> [HDFS-9807|https://issues.apache.org/jira/browse/HDFS-9807] breaks backward 
> compatibility for applications using the private API (we've run into such 
> apps).
> Although there is no compatibility guarantee for the private interests, we 
> can restore the original version of checkAccess as an overload.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-530) Fix Ozone fs valid uri pattern

2018-10-22 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659695#comment-16659695
 ] 

Anu Engineer commented on HDDS-530:
---

I was planning to fix the {{ozoneFS.md}} , not the real bug :)

> Fix Ozone fs valid uri pattern
> --
>
> Key: HDDS-530
> URL: https://issues.apache.org/jira/browse/HDDS-530
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Hanisha Koneru
>Assignee: Anu Engineer
>Priority: Blocker
>  Labels: alpha2
>
> The valid uri pattern for an Ozone fs uri should be 
> {{o3fs://<host>:<port>/<volume>/<bucket>}}.
> But OzoneFileSystem accepts uri's of the form {{o3fs://<bucket>.<volume>}} 
> only.
> {code:java}
> // In OzoneFileSystem.java
> private static final Pattern URL_SCHEMA_PATTERN =
> Pattern.compile("(.+)\\.([^\\.]+)");
> if (!matcher.matches()) {
>   throw new IllegalArgumentException("Ozone file system url should be "
>   + "in the form o3fs://bucket.volume");
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13566) Add configurable additional RPC listener to NameNode

2018-10-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659692#comment-16659692
 ] 

Hadoop QA commented on HDFS-13566:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 34s{color} | {color:orange} root: The patch generated 3 new + 1124 unchanged 
- 1 fixed = 1127 total (was 1125) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 51s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
18s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
37s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}225m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13566 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945065/HDFS-13566.011.patch |
| Optional Tests |  dupname  

[jira] [Commented] (HDFS-13566) Add configurable additional RPC listener to NameNode

2018-10-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659683#comment-16659683
 ] 

Hadoop QA commented on HDFS-13566:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m  
7s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  3s{color} | {color:orange} root: The patch generated 3 new + 1124 unchanged 
- 1 fixed = 1127 total (was 1125) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
10s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
41s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 34s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}216m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13566 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945065/HDFS-13566.011.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient 

[jira] [Commented] (HDDS-676) Enable Read from open Containers via Standalone Protocol

2018-10-22 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659673#comment-16659673
 ] 

Shashikant Banerjee commented on HDDS-676:
--

Patch v8 addresses the review comments in *testPutKeyAndGetKey*.

> Enable Read from open Containers via Standalone Protocol
> 
>
> Key: HDDS-676
> URL: https://issues.apache.org/jira/browse/HDDS-676
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-676.001.patch, HDDS-676.002.patch, 
> HDDS-676.003.patch, HDDS-676.004.patch, HDDS-676.005.patch, 
> HDDS-676.006.patch, HDDS-676.007.patch, HDDS-676.008.patch
>
>
> With BlockCommitSequenceId getting updated per block commit on open 
> containers in OM as well as on the datanode, Ozone Client reads can go 
> through the Standalone protocol, not necessarily requiring Ratis. The Client 
> should verify the BCSID of the container holding the data block, which should 
> always be greater than or equal to the BCSID of the block to be read, and the 
> existing block BCSID should exactly match that of the block to be read. As a 
> part of this, the Client can try to read from a replica with a supplied BCSID 
> and fail over to the next one in case the block does not exist on one replica.
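
To make the read-path contract concrete, here is a minimal sketch of the 
validation and failover described above; all class and method names 
(readFromDatanode, getContainerBcsId, ReadResponse) are hypothetical, not the 
actual Ozone client API.
{code:java}
// Hypothetical sketch only: illustrates the BCSID checks and replica
// failover described in this issue, not the committed implementation.
public ReadResponse readBlockWithBcsIdCheck(BlockID blockId, long blockBcsId,
    List<DatanodeDetails> replicas) throws IOException {
  IOException lastFailure = null;
  for (DatanodeDetails dn : replicas) {
    try {
      // The container BCSID must have advanced at least to the block's BCSID.
      long containerBcsId = getContainerBcsId(dn, blockId.getContainerID());
      if (containerBcsId < blockBcsId) {
        throw new IOException("Container BCSID " + containerBcsId
            + " < block BCSID " + blockBcsId + " on " + dn);
      }
      ReadResponse response = readFromDatanode(dn, blockId);
      // The replica's stored block BCSID must match the requested one exactly.
      if (response.getBlockBcsId() != blockBcsId) {
        throw new IOException("Block BCSID mismatch on " + dn);
      }
      return response;
    } catch (IOException e) {
      lastFailure = e; // fail over to the next replica
    }
  }
  throw lastFailure;
}
{code}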



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-676) Enable Read from open Containers via Standalone Protocol

2018-10-22 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-676:
-
Attachment: HDDS-676.008.patch

> Enable Read from open Containers via Standalone Protocol
> 
>
> Key: HDDS-676
> URL: https://issues.apache.org/jira/browse/HDDS-676
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-676.001.patch, HDDS-676.002.patch, 
> HDDS-676.003.patch, HDDS-676.004.patch, HDDS-676.005.patch, 
> HDDS-676.006.patch, HDDS-676.007.patch, HDDS-676.008.patch
>
>
> With BlockCommitSequenceId getting updated per block commit on open 
> containers in OM as well as on the datanode, Ozone Client reads can go 
> through the Standalone protocol, not necessarily requiring Ratis. The Client 
> should verify the BCSID of the container holding the data block, which should 
> always be greater than or equal to the BCSID of the block to be read, and the 
> existing block BCSID should exactly match that of the block to be read. As a 
> part of this, the Client can try to read from a replica with a supplied BCSID 
> and fail over to the next one in case the block does not exist on one replica.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-676) Enable Read from open Containers via Standalone Protocol

2018-10-22 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659639#comment-16659639
 ] 

Shashikant Banerjee edited comment on HDDS-676 at 10/22/18 8:58 PM:


Thanks [~anu], for the review comments.
{code:java}
I agree with this premise; that is, we only talk to the next data node if we
get a failure on the first data node.

If that is the case, do we need all these Async framework changes, hash
tables, etc.?
{code}
If we get a failure or connection issue with one of the datanodes, we fail 
over to the next datanode. We maintain the state of the active channels for 
communication in the hash map so that, when we close the client, we close all 
the connections. If we did not maintain that state, we would need to close the 
connections in the active read path as a part of handling the exception, and 
connection errors can be transient.  Also, multiple Ozone clients can use the 
same XceiverClient instance as we maintain a client cache, so immediately 
closing the connection when one client op fails may not be good.

The HashMap will also be helpful if we get the leader info cached, so that we 
will use that specific channel to execute first.

Regarding the async framework change, there is functionally no change in the 
code. It has just been split into two functions so that, while executing a 
command, we execute it on a specific channel.


was (Author: shashikant):
Thanks [~anu], for the review comments.
{code:java}
I agree with this premise; that is, we only talk to the next data node if we
get a failure on the first data node.

If that is the case, do we need all these Async framework changes, hash
tables, etc.?
{code}
If we get a failure or connection issue with one of the datanodes, we fail 
over to the next datanode. If we don't maintain the state of the active 
channels for communication in the hash map, we cannot close all the 
connections when we close the client; without that state, we would need to 
close the connections in the active read path as a part of handling the 
exception, and connection errors can be transient.  Also, multiple Ozone 
clients can use the same XceiverClient instance as we maintain a client cache, 
so immediately closing the connection when one client op fails may not be good.

The HashMap will also be helpful if we get the leader info cached, so that we 
will use that specific channel to execute first.

Regarding the async framework change, there is functionally no change in the 
code. It has just been split into two functions so that, while executing a 
command, we execute it on a specific channel.

> Enable Read from open Containers via Standalone Protocol
> 
>
> Key: HDDS-676
> URL: https://issues.apache.org/jira/browse/HDDS-676
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-676.001.patch, HDDS-676.002.patch, 
> HDDS-676.003.patch, HDDS-676.004.patch, HDDS-676.005.patch, 
> HDDS-676.006.patch, HDDS-676.007.patch
>
>
> With BlockCommitSequenceId getting updated per block commit on open 
> containers in OM as well as on the datanode, Ozone Client reads can go 
> through the Standalone protocol, not necessarily requiring Ratis. The Client 
> should verify the BCSID of the container holding the data block, which should 
> always be greater than or equal to the BCSID of the block to be read, and the 
> existing block BCSID should exactly match that of the block to be read. As a 
> part of this, the Client can try to read from a replica with a supplied BCSID 
> and fail over to the next one in case the block does not exist on one replica.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9098) Erasure coding: emulate race conditions among striped streamers in write pipeline

2018-10-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-9098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659664#comment-16659664
 ] 

Hadoop QA commented on HDFS-9098:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-9098 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-9098 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12768447/HDFS-9098.00.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25336/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Erasure coding: emulate race conditions among striped streamers in write 
> pipeline
> -
>
> Key: HDFS-9098
> URL: https://issues.apache.org/jira/browse/HDFS-9098
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Major
> Attachments: HDFS-9098.00.patch, HDFS-9098.wip.patch
>
>
> Apparently the interleaving of events among {{StripedDataStreamer}}'s is very 
> tricky to handle. [~walter.k.su] and [~jingzhao] have discussed several race 
> conditions under HDFS-9040.
> Let's use a FaultInjector to emulate different combinations of interleaved 
> events (a sketch of the hook pattern follows this list).
> In particular, we should consider inject delays in the following places:
> # {{Streamer#endBlock}}
> # {{Streamer#locateFollowingBlock}}
> # {{Streamer#updateBlockForPipeline}}
> # {{Streamer#updatePipeline}}
> # {{OutputStream#writeChunk}}
> # {{OutputStream#close}}
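
For illustration, the hook pattern usually looks like the sketch below; the 
class name and hook methods are hypothetical, not the actual HDFS-9098 code. 
Production code calls the no-op hooks, and a test swaps in a subclass that 
sleeps at the chosen point.
{code:java}
// Hypothetical sketch of the FaultInjector pattern for striped streamers.
public class StripedStreamerFaultInjector {
  private static StripedStreamerFaultInjector instance =
      new StripedStreamerFaultInjector();

  public static StripedStreamerFaultInjector get() { return instance; }

  // Tests install a subclass here to inject delays at specific points.
  public static void set(StripedStreamerFaultInjector injector) {
    instance = injector;
  }

  // No-op hooks in production; a test override sleeps to force a schedule.
  public void delayEndBlock() {}
  public void delayLocateFollowingBlock() {}
  public void delayUpdatePipeline() {}
}

// In Streamer#endBlock, the production code would then just call:
//   StripedStreamerFaultInjector.get().delayEndBlock();
{code}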



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-10-22 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659662#comment-16659662
 ] 

Wei-Chiu Chuang commented on HDFS-14015:


+1 for the 002 patch. Thanks, [~templedf].

Also note there's a similar usage in the Windows counterpart of this method in 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/os/windows/thread_local_storage.c, 
but since I don't have a Windows environment, I can't verify it.

> Improve error handling in hdfsThreadDestructor in native thread local storage
> -
>
> Key: HDFS-14015
> URL: https://issues.apache.org/jira/browse/HDFS-14015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Major
> Attachments: HDFS-14015.001.patch, HDFS-14015.002.patch
>
>
> In the hdfsThreadDestructor() function, we ignore the return value from the 
> DetachCurrentThread() call.  We are seeing cases where a native thread dies 
> while holding a JVM monitor, and it doesn't release the monitor.  We're 
> hoping that logging this error instead of ignoring it will shed some light on 
> the issue.  In any case, it's good programming practice.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-10-22 Thread Daniel Templeton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659651#comment-16659651
 ] 

Daniel Templeton commented on HDFS-14015:
-

Whoops.  Posted the wrong patch before.

> Improve error handling in hdfsThreadDestructor in native thread local storage
> -
>
> Key: HDFS-14015
> URL: https://issues.apache.org/jira/browse/HDFS-14015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Major
> Attachments: HDFS-14015.001.patch, HDFS-14015.002.patch
>
>
> In the hdfsThreadDestructor() function, we ignore the return value from the 
> DetachCurrentThread() call.  We are seeing cases where a native thread dies 
> while holding a JVM monitor, and it doesn't release the monitor.  We're 
> hoping that logging this error instead of ignoring it will shed some light on 
> the issue.  In any case, it's good programming practice.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-10-22 Thread Daniel Templeton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-14015:

Attachment: HDFS-14015.002.patch

> Improve error handling in hdfsThreadDestructor in native thread local storage
> -
>
> Key: HDFS-14015
> URL: https://issues.apache.org/jira/browse/HDFS-14015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Major
> Attachments: HDFS-14015.001.patch, HDFS-14015.002.patch
>
>
> In the hdfsThreadDestructor() function, we ignore the return value from the 
> DetachCurrentThread() call.  We are seeing cases where a native thread dies 
> while holding a JVM monitor, and it doesn't release the monitor.  We're 
> hoping that logging this error instead of ignoring it will shed some light on 
> the issue.  In any case, it's good programming practice.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-676) Enable Read from open Containers via Standalone Protocol

2018-10-22 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659639#comment-16659639
 ] 

Shashikant Banerjee edited comment on HDDS-676 at 10/22/18 8:27 PM:


Thanks [~anu], for the review comments.
{code:java}
I agree with this premise; that is, we only talk to the next data node if we
get a failure on the first data node.

If that is the case, do we need all these Async framework changes, hash
tables, etc.?
{code}
If we get a failure or connection issue with one of the datanodes, we fail 
over to the next datanode. If we don't maintain the state of the active 
channels for communication in the hash map, we cannot close all the 
connections when we close the client; without that state, we would need to 
close the connections in the active read path as a part of handling the 
exception, and connection errors can be transient.  Also, multiple Ozone 
clients can use the same XceiverClient instance as we maintain a client cache, 
so immediately closing the connection when one client op fails may not be good.

The HashMap will also be helpful if we get the leader info cached, so that we 
will use that specific channel to execute first.

Regarding the async framework change, there is functionally no change in the 
code. It has just been split into two functions so that, while executing a 
command, we execute it on a specific channel.


was (Author: shashikant):
Thanks [~anu], for the review comments.
{code:java}
I agree with this premise; that is, we only talk to the next data node if we
get a failure on the first data node.

If that is the case, do we need all these Async framework changes, hash
tables, etc.?
{code}
If we get a failure or connection issue with one of the datanodes, we fail 
over to the next datanode. If we don't maintain the state of the active 
channels for communication in the hash map, we cannot close all the 
connections when we close the client; without that state, we need to close the 
connections in the active read path as a part of handling the exception, and 
connection errors can be transient.  Also, multiple Ozone clients can use the 
same XceiverClient instance as we maintain a client cache, so we should not 
immediately close the connection when one client op fails.

The HashMap will also be helpful if we get the leader info cached, so that we 
will use that specific channel to execute first.

Regarding the async framework change, there is functionally no change in the 
code. It has just been split into two functions so that, while executing a 
command, we execute it on a specific channel.

> Enable Read from open Containers via Standalone Protocol
> 
>
> Key: HDDS-676
> URL: https://issues.apache.org/jira/browse/HDDS-676
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-676.001.patch, HDDS-676.002.patch, 
> HDDS-676.003.patch, HDDS-676.004.patch, HDDS-676.005.patch, 
> HDDS-676.006.patch, HDDS-676.007.patch
>
>
> With BlockCommitSequenceId getting updated per block commit on open 
> containers in OM as well as on the datanode, Ozone Client reads can go 
> through the Standalone protocol, not necessarily requiring Ratis. The Client 
> should verify the BCSID of the container holding the data block, which should 
> always be greater than or equal to the BCSID of the block to be read, and the 
> existing block BCSID should exactly match that of the block to be read. As a 
> part of this, the Client can try to read from a replica with a supplied BCSID 
> and fail over to the next one in case the block does not exist on one replica.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-676) Enable Read from open Containers via Standalone Protocol

2018-10-22 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659639#comment-16659639
 ] 

Shashikant Banerjee commented on HDDS-676:
--

Thanks [~anu], for the review comments.
{code:java}
I agree with this premise; that is, we only talk to the next data node if we
get a failure on the first data node.

If that is the case, do we need all these Async framework changes, hash
tables, etc.?
{code}
If we get a failure or connection issue with one of the datanodes, we fail 
over to the next datanode. If we don't maintain the state of the active 
channels for communication in the hash map, we cannot close all the 
connections when we close the client; without that state, we need to close the 
connections in the active read path as a part of handling the exception, and 
connection errors can be transient.  Also, multiple Ozone clients can use the 
same XceiverClient instance as we maintain a client cache, so we should not 
immediately close the connection when one client op fails.

The HashMap will also be helpful if we get the leader info cached, so that we 
will use that specific channel to execute first.

Regarding the async framework change, there is functionally no change in the 
code. It has just been split into two functions so that, while executing a 
command, we execute it on a specific channel.
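
As a rough sketch of the bookkeeping being described (field and method names 
are illustrative, not the exact XceiverClientGrpc internals): the client keeps 
one channel per datanode in a map, retries a command on the next datanode on 
failure, and only tears all channels down when the client itself is closed.
{code:java}
// Illustrative only; not the actual XceiverClientGrpc code.
private final Map<UUID, ManagedChannel> channels = new ConcurrentHashMap<>();

private ContainerCommandResponseProto sendCommandWithRetry(
    ContainerCommandRequestProto request) throws IOException {
  IOException lastFailure = null;
  for (DatanodeDetails dn : pipeline.getMachines()) {
    try {
      // Reuses (or lazily opens) the channel kept for this datanode.
      return sendCommandAsync(request, dn).get();
    } catch (Exception e) {
      // Keep the channel open: the error may be transient, and the cached
      // client can be shared by other Ozone clients.
      lastFailure = new IOException("Command failed on " + dn, e);
    }
  }
  throw lastFailure;
}

@Override
public void close() {
  // Closing the client is the single place where every channel is shut down.
  channels.values().forEach(ManagedChannel::shutdown);
}
{code}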

> Enable Read from open Containers via Standalone Protocol
> 
>
> Key: HDDS-676
> URL: https://issues.apache.org/jira/browse/HDDS-676
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-676.001.patch, HDDS-676.002.patch, 
> HDDS-676.003.patch, HDDS-676.004.patch, HDDS-676.005.patch, 
> HDDS-676.006.patch, HDDS-676.007.patch
>
>
> With BlockCommitSequenceId getting updated per block commit on open 
> containers in OM as well as on the datanode, Ozone Client reads can go 
> through the Standalone protocol, not necessarily requiring Ratis. The Client 
> should verify the BCSID of the container holding the data block, which should 
> always be greater than or equal to the BCSID of the block to be read, and the 
> existing block BCSID should exactly match that of the block to be read. As a 
> part of this, the Client can try to read from a replica with a supplied BCSID 
> and fail over to the next one in case the block does not exist on one replica.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-530) Fix Ozone fs valid uri pattern

2018-10-22 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659632#comment-16659632
 ] 

Arpit Agarwal edited comment on HDDS-530 at 10/22/18 8:20 PM:
--

[~anu], do we have any way to provide the hostname with an o3fs path?


was (Author: arpitagarwal):
[~anu], will we have any way to provide the hostname with an o3fs path?

> Fix Ozone fs valid uri pattern
> --
>
> Key: HDDS-530
> URL: https://issues.apache.org/jira/browse/HDDS-530
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Hanisha Koneru
>Assignee: Anu Engineer
>Priority: Blocker
>  Labels: alpha2
>
> The valid uri pattern for an Ozone fs uri should be 
> {{o3fs://<host>:<port>/<volume>/<bucket>}}.
> But OzoneFileSystem accepts URIs of the form {{o3fs://<bucket>.<volume>}} 
> only.
> {code:java}
> // In OzoneFileSystem.java
> private static final Pattern URL_SCHEMA_PATTERN =
> Pattern.compile("(.+)\\.([^\\.]+)");
> if (!matcher.matches()) {
>   throw new IllegalArgumentException("Ozone file system url should be "
>   + "in the form o3fs://bucket.volume");
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-530) Fix Ozone fs valid uri pattern

2018-10-22 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659632#comment-16659632
 ] 

Arpit Agarwal commented on HDDS-530:


[~anu], will we have any way to provide the hostname with an o3fs path?

> Fix Ozone fs valid uri pattern
> --
>
> Key: HDDS-530
> URL: https://issues.apache.org/jira/browse/HDDS-530
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Hanisha Koneru
>Assignee: Anu Engineer
>Priority: Blocker
>  Labels: alpha2
>
> The valid uri pattern for an Ozone fs uri should be 
> {{o3fs://<host>:<port>/<volume>/<bucket>}}.
> But OzoneFileSystem accepts URIs of the form {{o3fs://<bucket>.<volume>}} 
> only.
> {code:java}
> // In OzoneFileSystem.java
> private static final Pattern URL_SCHEMA_PATTERN =
> Pattern.compile("(.+)\\.([^\\.]+)");
> if (!matcher.matches()) {
>   throw new IllegalArgumentException("Ozone file system url should be "
>   + "in the form o3fs://bucket.volume");
> }{code}
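
Purely as an illustration of [~arpitagarwal]'s question, a pattern like the 
one below could admit an optional host (and port) segment; this is a 
hypothetical sketch, not the committed fix for HDDS-530.
{code:java}
// Hypothetical: also accepts o3fs://bucket.volume.om-host:9862 style URIs
// while still matching the plain o3fs://bucket.volume form.
private static final Pattern URL_SCHEMA_PATTERN =
    Pattern.compile("([^.]+)\\.([^.:/]+)(?:\\.([^:/]+)(?::(\\d+))?)?");
{code}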



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-530) Fix Ozone fs valid uri pattern

2018-10-22 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-530:
---
Description: 
The valid uri pattern for an Ozone fs uri should be 
{{o3fs://<host>:<port>/<volume>/<bucket>}}.

But OzoneFileSystem accepts URIs of the form {{o3fs://<bucket>.<volume>}} only.
{code:java}
// In OzoneFileSystem.java
private static final Pattern URL_SCHEMA_PATTERN =
Pattern.compile("(.+)\\.([^\\.]+)");
if (!matcher.matches()) {
  throw new IllegalArgumentException("Ozone file system url should be "
  + "in the form o3fs://bucket.volume");
}{code}


  was:
The valid uri pattern for an Ozone fs uri should be 
{{o3://<host>:<port>/<volume>/<bucket>}}.

But OzoneFileSystem accepts URIs of the form {{o3://<bucket>.<volume>}} only.
{code:java}
// In OzoneFileSystem.java
private static final Pattern URL_SCHEMA_PATTERN =
Pattern.compile("(.+)\\.([^\\.]+)");
if (!matcher.matches()) {
  throw new IllegalArgumentException("Ozone file system url should be "
  + "in the form o3://bucket.volume");
}{code}



> Fix Ozone fs valid uri pattern
> --
>
> Key: HDDS-530
> URL: https://issues.apache.org/jira/browse/HDDS-530
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Hanisha Koneru
>Assignee: Anu Engineer
>Priority: Blocker
>  Labels: alpha2
>
> The valid uri pattern for an Ozone fs uri should be 
> {{o3fs://<host>:<port>/<volume>/<bucket>}}.
> But OzoneFileSystem accepts URIs of the form {{o3fs://<bucket>.<volume>}} 
> only.
> {code:java}
> // In OzoneFileSystem.java
> private static final Pattern URL_SCHEMA_PATTERN =
> Pattern.compile("(.+)\\.([^\\.]+)");
> if (!matcher.matches()) {
>   throw new IllegalArgumentException("Ozone file system url should be "
>   + "in the form o3fs://bucket.volume");
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-676) Enable Read from open Containers via Standalone Protocol

2018-10-22 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659617#comment-16659617
 ] 

Anu Engineer commented on HDDS-676:
---

[~shashikant] The changes look much better, thanks for separating this into 2 
different patches.

Some very minor comments below:

*testPutKeyAndGetKey*
     1. {{long currentTime = Time.now();}} // Unused variable.
     2. {{OzoneKeyDetails keyDetails = (OzoneKeyDetails) 
bucket.getKey(keyName);}} //Casting is not needed 
 3. It might be a good idea to verify the exception string.
  {{catch (Exception e)}}

*XceiverClientGrpc.java*
{{responseProto = sendCommandAsync(request, dn).get();}}

I agree with this premise; that is, we only talk to the next data node if we 
get a failure on the first data node.

If that is the case, do we need all these Async framework changes, hash 
tables, etc.?
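
On point 3, something like the snippet below would pin the failure down; the 
expected message text here is made up for illustration and should be checked 
against the real error string.
{code:java}
// Verify the exception message instead of swallowing any exception.
try {
  bucket.getKey(keyName);
  Assert.fail("Expected read with a mismatching BCSID to fail");
} catch (IOException e) {
  GenericTestUtils.assertExceptionContains("Unable to find the block", e);
}
{code}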

> Enable Read from open Containers via Standalone Protocol
> 
>
> Key: HDDS-676
> URL: https://issues.apache.org/jira/browse/HDDS-676
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-676.001.patch, HDDS-676.002.patch, 
> HDDS-676.003.patch, HDDS-676.004.patch, HDDS-676.005.patch, 
> HDDS-676.006.patch, HDDS-676.007.patch
>
>
> With BlockCommitSequenceId getting updated per block commit on open 
> containers in OM as well as on the datanode, Ozone Client reads can go 
> through the Standalone protocol, not necessarily requiring Ratis. The Client 
> should verify the BCSID of the container holding the data block, which should 
> always be greater than or equal to the BCSID of the block to be read, and the 
> existing block BCSID should exactly match that of the block to be read. As a 
> part of this, the Client can try to read from a replica with a supplied BCSID 
> and fail over to the next one in case the block does not exist on one replica.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14017) ObserverReadProxyProviderWithIPFailover does not quite work

2018-10-22 Thread Chen Liang (JIRA)
Chen Liang created HDFS-14017:
-

 Summary: ObserverReadProxyProviderWithIPFailover does not quite 
work
 Key: HDFS-14017
 URL: https://issues.apache.org/jira/browse/HDFS-14017
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Liang
Assignee: Chen Liang


Currently {{ObserverReadProxyProviderWithIPFailover}} extends 
{{ObserverReadProxyProvider}}, and the only difference is changing the proxy 
factory to use {{IPFailoverProxyProvider}}. However, this is not enough, 
because when the constructor of {{ObserverReadProxyProvider}} is invoked via 
super(...), the following line:
{code}
nameNodeProxies = getProxyAddresses(uri,
    HdfsClientConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY);
{code}
will try to resolve all the configured NN addresses to do configured failover. 
But in the case of IP failover, this does not really apply. 
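
As a hypothetical direction (not necessarily the eventual fix), the subclass 
could bypass the multi-NN address resolution, since with IP failover the URI 
already names the single virtual address:
{code:java}
// Hypothetical sketch: with IP failover, failover happens at the network
// layer, so there is nothing to enumerate from dfs.namenode.rpc-address.*.
@Override
protected List<NNProxyInfo<T>> getProxyAddresses(URI uri, String addressKey) {
  return Collections.singletonList(
      new NNProxyInfo<>(DFSUtilClient.getNNAddress(uri)));
}
{code}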



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14016) ObserverReadProxyProvider can never enable observer read except in tests

2018-10-22 Thread Chen Liang (JIRA)
Chen Liang created HDFS-14016:
-

 Summary: ObserverReadProxyProvider can never enable observer read 
except in tests
 Key: HDFS-14016
 URL: https://issues.apache.org/jira/browse/HDFS-14016
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Liang
Assignee: Chen Liang


Currently in {{ObserverReadProxyProvider#invoke}}, the code checks whether to 
talk to the Observer only when {{observerReadEnabled && isRead(method)}} is 
true; otherwise it always talks to the active. The issue is that 
{{observerReadEnabled}} can currently only be set through 
{{setObserverReadEnabled}}, which is used by tests only. So observer read is 
always disabled in deployments, with no way to enable it. We may want to 
either expose a configuration key, hard-code it to true so it can only be 
changed for testing purposes, or simply remove this variable. This is closely 
related to HDFS-13923.
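
A sketch of the configuration-key option mentioned above; the key name is 
invented for illustration and is not an actual HDFS property.
{code:java}
// Hypothetical key name, shown only to illustrate the proposal.
public static final String OBSERVER_READ_ENABLED_KEY =
    "dfs.client.failover.observer.reads.enabled";

// In the ObserverReadProxyProvider constructor:
this.observerReadEnabled =
    conf.getBoolean(OBSERVER_READ_ENABLED_KEY, false);
{code}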



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-692) Use the ProgressBar class in the RandomKeyGenerator freon test

2018-10-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659596#comment-16659596
 ] 

Hadoop QA commented on HDDS-692:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
5s{color} | {color:green} tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-692 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12945082/HDDS-692.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8bcf1a16a70a 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f6498af |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1481/testReport/ |
| Max. process+thread count | 2223 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/tools U: hadoop-ozone/tools |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1481/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Use the ProgressBar class in the RandomKeyGenerator freon test
> --
>
> Key: HDDS-692
> URL: 

[jira] [Commented] (HDFS-13997) Secondary NN Web UI displays nothing, and the console log shows moment is not defined.

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659513#comment-16659513
 ] 

ASF GitHub Bot commented on HDFS-13997:
---

GitHub user pixelart7 opened a pull request:

https://github.com/apache/hadoop/pull/431

HDFS-13997. Add moment.js to the secondarynamenode web UI

This pull request fixes a problem in the web UI of the secondarynamenode: 
the UI won't render when dfs-dust.js tries to call moment() and moment.js is 
not included in the page.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pixelart7/hadoop pixelart7-HDFS-13997

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/431.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #431






> Secondary NN Web UI displays nothing, and the console log shows moment is not 
> defined.
> --
>
> Key: HDFS-13997
> URL: https://issues.apache.org/jira/browse/HDFS-13997
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.1
>Reporter: Rui Chen
>Priority: Major
> Attachments: HDFS-13997-01.patch, Selection_030.png, Selection_031.png
>
>
> It seems that the Secondary Namenode Web UI makes use of dfs-dust.js, which 
> depends on moment.js. Nevertheless, the page cannot find moment.js and then 
> shows the errors as attached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-530) Fix Ozone fs valid uri pattern

2018-10-22 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-530:
--
Priority: Blocker  (was: Major)

> Fix Ozone fs valid uri pattern
> --
>
> Key: HDDS-530
> URL: https://issues.apache.org/jira/browse/HDDS-530
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Hanisha Koneru
>Assignee: Anu Engineer
>Priority: Blocker
>  Labels: alpha2
>
> The valid uri pattern for an Ozone fs uri should be 
> {{o3://<host>:<port>/<volume>/<bucket>}}.
> But OzoneFileSystem accepts URIs of the form {{o3://<bucket>.<volume>}} only.
> {code:java}
> // In OzoneFileSystem.java
> private static final Pattern URL_SCHEMA_PATTERN =
> Pattern.compile("(.+)\\.([^\\.]+)");
> if (!matcher.matches()) {
>   throw new IllegalArgumentException("Ozone file system url should be "
>   + "in the form o3://bucket.volume");
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-530) Fix Ozone fs valid uri pattern

2018-10-22 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659508#comment-16659508
 ] 

Anu Engineer commented on HDDS-530:
---

Doc bug; fixing it will make someone's life better. I will fix this in 0.3. Thanks

> Fix Ozone fs valid uri pattern
> --
>
> Key: HDDS-530
> URL: https://issues.apache.org/jira/browse/HDDS-530
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Hanisha Koneru
>Assignee: Anu Engineer
>Priority: Major
>  Labels: alpha2
>
> The valid uri pattern for an Ozone fs uri should be 
> {{o3://<host>:<port>/<volume>/<bucket>}}.
> But OzoneFileSystem accepts URIs of the form {{o3://<bucket>.<volume>}} only.
> {code:java}
> // In OzoneFileSystem.java
> private static final Pattern URL_SCHEMA_PATTERN =
> Pattern.compile("(.+)\\.([^\\.]+)");
> if (!matcher.matches()) {
>   throw new IllegalArgumentException("Ozone file system url should be "
>   + "in the form o3://bucket.volume");
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


