[jira] [Commented] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-12-17 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723781#comment-16723781
 ] 

Yiqun Lin commented on HDFS-13443:
--

LGTM +1. Thanks [~arshad.mohammad]!

{quote}
I feel default value for this property should be true and description should 
say, "for all the routers"
{quote}
As [~arshad.mohammad] mentioned, we can track this in another JIRA. In my 
opinion, I would prefer to give users a choice of sync-up behavior and to 
disable the cache update by default.

I will hold off the commit until tomorrow in case there are any other comments. :)
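For readers following along, the sync-up switch under discussion is an hdfs-site.xml toggle along these lines (the property name and description here are illustrative of the patch, not authoritative; verify against the committed change):
{code}
<!-- Illustrative hdfs-site.xml excerpt; check the exact key in the
     committed patch. Disabled by default, per the discussion above. -->
<property>
  <name>dfs.federation.router.mount-table.cache.update</name>
  <value>false</value>
  <description>
    Set to true to refresh the mount table cache on all routers
    immediately after an add/update/remove; otherwise routers rely on
    the periodic cache update (every minute by default).
  </description>
</property>
{code}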

> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-012.patch, HDFS-13443-013.patch, 
> HDFS-13443-014.patch, HDFS-13443-015.patch, HDFS-13443-016.patch, 
> HDFS-13443-017.patch, HDFS-13443-HDFS-13891-001.patch, 
> HDFS-13443-HDFS-13891-002.patch, HDFS-13443-branch-2.001.patch, 
> HDFS-13443-branch-2.002.patch, HDFS-13443.001.patch, HDFS-13443.002.patch, 
> HDFS-13443.003.patch, HDFS-13443.004.patch, HDFS-13443.005.patch, 
> HDFS-13443.006.patch, HDFS-13443.007.patch, HDFS-13443.008.patch, 
> HDFS-13443.009.patch, HDFS-13443.010.patch, HDFS-13443.011.patch
>
>
> Currently the mount table cache is updated periodically; by default it is 
> refreshed every minute. After a change in the mount table, user operations 
> may still use the old mount table, which is incorrect.
> To update the mount table cache immediately, maybe we can do the following:
>  * *Add a refresh API in MountTableManager which will update the mount table cache.*
>  * *When there is a change in the mount table entries, the router admin server 
> can update its own cache and ask the other routers to update their caches*. For 
> example, if there are three routers R1, R2, R3 in a cluster, then the add mount 
> table entry API, on the admin server side, will perform the following sequence 
> of actions (a toy model follows this description):
>  ## User submits an add mount table entry request on R1.
>  ## R1 adds the mount table entry to the state store.
>  ## R1 calls the refresh API on R2.
>  ## R1 calls the refresh API on R3.
>  ## R1 directly refreshes its own cache.
>  ## The add mount table entry response is sent back to the user.
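To make steps 1-6 concrete, here is a self-contained toy model of the flow (illustrative only; the class and method names are invented, not taken from the patch):
{code}
// Toy model of the add-entry flow above; not the actual RBF code.
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class Router {
    private final String name;
    private final Map<String, String> mountTableCache = new ConcurrentHashMap<>();
    Router(String name) { this.name = name; }

    // Steps 3-5: reload this router's cache from the shared state store.
    void refreshMountTableCache(Map<String, String> stateStore) {
        mountTableCache.clear();
        mountTableCache.putAll(stateStore);
        System.out.println(name + " cache refreshed: " + mountTableCache);
    }
}

public class MountTableRefreshDemo {
    public static void main(String[] args) {
        Map<String, String> stateStore = new ConcurrentHashMap<>();
        Router r1 = new Router("R1"), r2 = new Router("R2"), r3 = new Router("R3");

        // Steps 1-2: user submits the add request on R1, which persists it.
        stateStore.put("/data", "hdfs://ns1/data");

        // Steps 3-5: R1 triggers a refresh on R2, R3, and finally itself.
        for (Router r : List.of(r2, r3, r1)) {
            r.refreshMountTableCache(stateStore);
        }

        // Step 6: the response is sent back to the user.
        System.out.println("add mount table entry: success");
    }
}
{code}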






[jira] [Commented] (HDDS-393) Audit Parser tool for processing ozone audit logs

2018-12-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723770#comment-16723770
 ] 

Hadoop QA commented on HDDS-393:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 32s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 22s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 25m 52s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 50s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 14s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.om.TestOzoneManager |
|   | hadoop.ozone.container.TestContainerReplication |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-393 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12952139/HDDS-393.003.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  shellcheck  findbugs  checkstyle  |
| uname | Linux 3e45e47d403e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 94b368f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| shellcheck | v0.4.6 |
| unit | https://builds.apache.org/job/PreCommit-HDDS-Build/1951/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/1951/testReport/ |
| Max. process+thread count | 1097 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/docs hadoop-ozone hadoop-ozone/common hadoop-ozone/tools U: . |
| Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/1951/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Audit Parser tool for processing ozone audit logs
> -
>
> Key: HDDS-393
> URL: https://issues.apache.org/jira/browse/HDDS-393
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: 

[jira] [Updated] (HDDS-932) Add blockade Tests for Network partition

2018-12-17 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-932:
---
Target Version/s: 0.4.0
  Status: Patch Available  (was: Open)

> Add blockade Tests for Network partition
> 
>
> Key: HDDS-932
> URL: https://issues.apache.org/jira/browse/HDDS-932
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-932.001.patch
>
>
> Blockade tests pertaining to network partition need to be added.






[jira] [Commented] (HDDS-933) Add documentation for genconf tool under Tools section

2018-12-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723763#comment-16723763
 ] 

Hadoop QA commented on HDDS-933:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}  1m  5s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-933 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12952141/HDDS-933.001.patch |
| Optional Tests |  asflicense  |
| uname | Linux 41023d055baf 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 94b368f |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 42 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/docs U: hadoop-hdds/docs |
| Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/1952/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add documentation for genconf tool under Tools section
> --
>
> Key: HDDS-933
> URL: https://issues.apache.org/jira/browse/HDDS-933
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: documentation
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-933.001.patch
>
>
> On the Ozone website, the Tools section is missing a link for the Genconf tool.
> This Jira aims to add the missing documentation.






[jira] [Updated] (HDDS-933) Add documentation for genconf tool under Tools section

2018-12-17 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-933:
---
Attachment: HDDS-933.001.patch
Status: Patch Available  (was: Open)

> Add documentation for genconf tool under Tools section
> --
>
> Key: HDDS-933
> URL: https://issues.apache.org/jira/browse/HDDS-933
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: documentation
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-933.001.patch
>
>
> On the Ozone website, the Tools section is missing a link for the Genconf tool.
> This Jira aims to add the missing documentation.






[jira] [Created] (HDDS-933) Add documentation for genconf tool under Tools section

2018-12-17 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HDDS-933:
--

 Summary: Add documentation for genconf tool under Tools section
 Key: HDDS-933
 URL: https://issues.apache.org/jira/browse/HDDS-933
 Project: Hadoop Distributed Data Store
  Issue Type: New Feature
  Components: documentation
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


On the Ozone website, the Tools section is missing a link for the Genconf tool.

This Jira aims to add the missing documentation.






[jira] [Assigned] (HDFS-14157) RBF : refreshServiceAcl command fail with router

2018-12-17 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar reassigned HDFS-14157:


Assignee: Ranith Sardar

> RBF : refreshServiceAcl command fail with router
> 
>
> Key: HDFS-14157
> URL: https://issues.apache.org/jira/browse/HDFS-14157
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
>
> {noformat}
> namenode> ./bin/hdfs dfsadmin -refreshServiceAcl
> Refresh service acl failed for host:
> Refresh service acl failed for host:
> refreshServiceAcl: 2 exceptions 
> [org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RpcNoSuchProtocolException):
>  Unknown protocol: 
> org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.getProtocolImpl(ProtobufRpcEngine.java:444)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:502)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> , java.net.ConnectException: Call From host1 to host2: failed on 
> connection exception: java.net.ConnectException: Connection refused; For more 
> details see:  http://wiki.apache.org/hadoop/ConnectionRefused]
> namenode>
> {noformat}






[jira] [Updated] (HDFS-14157) RBF : refreshServiceAcl command fail with router

2018-12-17 Thread Harshakiran Reddy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harshakiran Reddy updated HDFS-14157:
-
Labels: RBF  (was: )

> RBF : refreshServiceAcl command fail with router
> 
>
> Key: HDFS-14157
> URL: https://issues.apache.org/jira/browse/HDFS-14157
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Priority: Major
>  Labels: RBF
>
> {noformat}
> namenode> ./bin/hdfs dfsadmin -refreshServiceAcl
> Refresh service acl failed for host:
> Refresh service acl failed for host:
> refreshServiceAcl: 2 exceptions 
> [org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RpcNoSuchProtocolException):
>  Unknown protocol: 
> org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.getProtocolImpl(ProtobufRpcEngine.java:444)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:502)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> , java.net.ConnectException: Call From host1 to host2: failed on 
> connection exception: java.net.ConnectException: Connection refused; For more 
> details see:  http://wiki.apache.org/hadoop/ConnectionRefused]
> namenode>
> {noformat}






[jira] [Created] (HDFS-14157) RBF : refreshServiceAcl command fail with router

2018-12-17 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14157:


 Summary: RBF : refreshServiceAcl command fail with router
 Key: HDFS-14157
 URL: https://issues.apache.org/jira/browse/HDFS-14157
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.1.1
Reporter: Harshakiran Reddy


{noformat}
namenode> ./bin/hdfs dfsadmin -refreshServiceAcl
Refresh service acl failed for host:
Refresh service acl failed for host:
refreshServiceAcl: 2 exceptions 
[org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RpcNoSuchProtocolException):
 Unknown protocol: 
org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.getProtocolImpl(ProtobufRpcEngine.java:444)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:502)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
, java.net.ConnectException: Call From host1 to host2: failed on connection 
exception: java.net.ConnectException: Connection refused; For more details see: 
 http://wiki.apache.org/hadoop/ConnectionRefused]
namenode>
{noformat}







[jira] [Commented] (HDFS-14116) Fix class cast error in NNThroughputBenchmark with ObserverReadProxyProvider.

2018-12-17 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723752#comment-16723752
 ] 

Chao Sun commented on HDFS-14116:
-

OK. I was just concerned that in some edge cases people may be caught by 
surprise. For instance, they _may_ get a ClassCastException if running 
{{HAServiceProtocol}} or {{RouterProtocol}} on a host where it is set to use 
ORPP.

> Fix class cast error in NNThroughputBenchmark with ObserverReadProxyProvider.
> -
>
> Key: HDFS-14116
> URL: https://issues.apache.org/jira/browse/HDFS-14116
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-14116-HDFS-12943.000.patch, 
> HDFS-14116-HDFS-12943.001.patch, HDFS-14116-HDFS-12943.002.patch, 
> HDFS-14116-HDFS-12943.003.patch, HDFS-14116-HDFS-12943.004.patch, 
> HDFS-14116-HDFS-12943.005.patch
>
>
> Currently, in the {{ObserverReadProxyProvider}} constructor there is this line:
> {code}
> ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
> {code}
> This could potentially cause a failure, because it is possible that the 
> factory cannot be cast here. Specifically, 
> {{NameNodeProxiesClient.createFailoverProxyProvider}} is where the 
> constructor will be called, and there are two paths that could call into this:
> (1) {{NameNodeProxies.createProxy}}
> (2) {{NameNodeProxiesClient.createFailoverProxyProvider}}
> (2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses 
> {{NameNodeHAProxyFactory}}, which cannot be cast to {{ClientHAProxyFactory}}; 
> this happens when, for example, running NNThroughputBenchmark. To fix this we 
> can at least:
> 1. introduce setAlignmentContext in HAProxyFactory, which is the parent of 
> both ClientHAProxyFactory and NameNodeHAProxyFactory, OR
> 2. only call setAlignmentContext when the factory is a ClientHAProxyFactory 
> by, say, having an if check with reflection (an instanceof sketch follows 
> this description).
> Which option to take depends on whether it makes sense to have an alignment 
> context for the code paths in case (1).
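A minimal sketch of option 2 as a plain instanceof guard inside the constructor (hedged: the committed patch may use reflection or option 1 instead):
{code}
// Hypothetical guard for option 2; not the committed fix.
if (factory instanceof ClientHAProxyFactory) {
  // Only the client-side factory carries an alignment context.
  ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
}
// Otherwise (e.g. NameNodeHAProxyFactory reached via
// NameNodeProxies.createProxy, as in NNThroughputBenchmark) skip the
// call instead of throwing ClassCastException.
{code}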






[jira] [Updated] (HDDS-393) Audit Parser tool for processing ozone audit logs

2018-12-17 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-393:
---
Attachment: HDDS-393.003.patch

> Audit Parser tool for processing ozone audit logs
> -
>
> Key: HDDS-393
> URL: https://issues.apache.org/jira/browse/HDDS-393
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-393.001.patch, HDDS-393.002.patch, 
> HDDS-393.003.patch
>
>
> Jira to create an audit parser tool to process Ozone audit logs.






[jira] [Commented] (HDFS-12943) Consistent Reads from Standby Node

2018-12-17 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723757#comment-16723757
 ] 

Chao Sun commented on HDFS-12943:
-

[~brahmareddy] [~xkrogen]: unfortunately I can't provide enough data points on 
this. In our production we deployed a slightly different version than upstream: 
the observer hosts are fixed in the config, so no {{getHAServiceState}} call is 
issued (the downside is that the observer cannot participate in failover). I do 
intend to run some benchmarks with the latest upstream code, though, and will 
perhaps update later.

> Consistent Reads from Standby Node
> --
>
> Key: HDFS-12943
> URL: https://issues.apache.org/jira/browse/HDFS-12943
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Konstantin Shvachko
>Priority: Major
> Attachments: ConsistentReadsFromStandbyNode.pdf, 
> ConsistentReadsFromStandbyNode.pdf, HDFS-12943-001.patch, 
> HDFS-12943-002.patch, TestPlan-ConsistentReadsFromStandbyNode.pdf
>
>
> StandbyNode in HDFS is a replica of the active NameNode. The states of the 
> NameNodes are coordinated via the journal. It is natural to consider 
> StandbyNode as a read-only replica. As with any replicated distributed system 
> the problem of stale reads should be resolved. Our main goal is to provide 
> reads from standby in a consistent way in order to enable a wide range of 
> existing applications running on top of HDFS.






[jira] [Updated] (HDDS-912) Update ozone to latest ratis snapshot build (0.4.0-3b0be02-SNAPSHOT)

2018-12-17 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-912:
---
Target Version/s: 0.4.0

> Update ozone to latest ratis snapshot build (0.4.0-3b0be02-SNAPSHOT)
> 
>
> Key: HDDS-912
> URL: https://issues.apache.org/jira/browse/HDDS-912
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-912.001.patch, HDDS-912.002.patch
>
>
> We can update the Ratis snapshot build in Ozone to 0.4.0-3b0be02-SNAPSHOT.
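For illustration, such an upgrade is normally a one-line version bump in the project pom (a sketch; the actual property name in the Ozone build may differ):
{code}
<!-- Hypothetical pom.xml excerpt; check the real property name in the
     hadoop-hdds/hadoop-ozone parent pom. -->
<properties>
  <ratis.version>0.4.0-3b0be02-SNAPSHOT</ratis.version>
</properties>
{code}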






[jira] [Updated] (HDDS-893) pipeline status is ALLOCATED in scmcli listPipelines command

2018-12-17 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-893:
---
Target Version/s: 0.4.0

> pipeline status is ALLOCATED in scmcli listPipelines command
> 
>
> Key: HDDS-893
> URL: https://issues.apache.org/jira/browse/HDDS-893
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Nilotpal Nandi
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-893.001.patch, all-node-ozone-logs-1543837271.tar.gz
>
>
> The pipeline status should not be ALLOCATED; it should be either OPEN or CLOSING.
> {noformat}
> [root@ctr-e139-1542663976389-11261-01-05 test_files]# ozone scmcli 
> listPipelines
> Pipeline[ Id: 202f7208-6977-4f65-b070-c1e7e57cb2ed, Nodes: 
> 06e074f7-67b4-4dde-8f20-a437ca60b7a1{ip: 172.27.20.97, host: 
> ctr-e139-1542663976389-11261-01-07.hwx.site}c5bf9a9f-d471-4cef-aae4-61cb387ea9e3{ip:
>  172.27.79.145, host: 
> ctr-e139-1542663976389-11261-01-06.hwx.site}96c18fe3-5520-4941-844b-ff7186a146a6{ip:
>  172.27.55.132, host: ctr-e139-1542663976389-11261-01-03.hwx.site}, 
> Type:RATIS, Factor:THREE, State:ALLOCATED]{noformat}






[jira] [Commented] (HDFS-14156) RBF : RollEdit command fail with router

2018-12-17 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723739#comment-16723739
 ] 

Surendra Singh Lilhore commented on HDFS-14156:
---

[~shubham.dewan], added you to the contributor list.

> RBF : RollEdit command fail with router
> ---
>
> Key: HDFS-14156
> URL: https://issues.apache.org/jira/browse/HDFS-14156
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Assignee: Shubham Dewan
>Priority: Major
>  Labels: RBF
>
> {noformat}
> bin> ./hdfs dfsadmin -rollEdits
> rollEdits: Cannot cast java.lang.Long to long
> bin>
> {noformat}
> Trace :-
> {noformat}
> org.apache.hadoop.ipc.RemoteException(java.lang.ClassCastException): Cannot 
> cast java.lang.Long to long
> at java.lang.Class.cast(Class.java:3369)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1085)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:982)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.rollEdits(RouterClientProtocol.java:900)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.rollEdits(RouterRpcServer.java:862)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rollEdits(ClientNamenodeProtocolServerSideTranslatorPB.java:899)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
> at org.apache.hadoop.ipc.Client.call(Client.java:1466)
> at org.apache.hadoop.ipc.Client.call(Client.java:1376)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy11.rollEdits(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.rollEdits(ClientNamenodeProtocolTranslatorPB.java:804)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy12.rollEdits(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.rollEdits(DFSClient.java:2350)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.rollEdits(DistributedFileSystem.java:1550)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.rollEdits(DFSAdmin.java:850)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2353)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2568)
> {noformat}






[jira] [Commented] (HDDS-805) Block token: Client api changes for block token

2018-12-17 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723715#comment-16723715
 ] 

Xiaoyu Yao commented on HDDS-805:
-

Thanks [~ajayydv] for the update. Patch v6 looks good to me. A few more 
comments:

ContainerProtocolCalls.java

Line 112: datanodeBlockID is in the protobuf format, so its toString() may not 
match blockID#toString() in the other places that call getEncodedBlockToken().

Line 392/424/454/482: please update the javadoc after adding the parameter.

Line 564: please add documentation on the expected format of the service that 
is passed in here.

Line 567: should we have a static OzoneBlockTokenSelector to avoid creating an 
object on each call?

KeyManagerImpl.java

Line 216: if the server does not provide remote user info, are we supposed to 
throw here? I don't think we should use the OM's current user as the remote 
user for the block token.

OzoneManager.java

Line 335: NIT: use Objects.notNull to replace Preconditions.checkNotNull?

Line 368: switch the order of the if condition to check 
isGrpcBlockTokenEnabled first?

Line 373/385: NIT: Inability -> Unable

Line 375: the error message can be refined to "block token secret manager"

Line 387: "delegation token secret manager"

ChunkOutputStreamEntry.java

Line 70/78: redundant token initializations.

TestSecureOzoneRpcClient.java

Line 135-139: can be wrapped with try-with-resources (same with lines 174-180)

Line 142-150: the input stream needs to be closed, ideally with 
try-with-resources (see the sketch below)
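A minimal sketch of the suggested try-with-resources pattern (variable names are hypothetical; the test's actual code differs):
{code}
// Illustrative only, for TestSecureOzoneRpcClient lines 142-150:
// the key input stream is closed automatically, even if read() throws.
byte[] fileContent = new byte[value.getBytes(UTF_8).length];
try (OzoneInputStream is = bucket.readKey(keyName)) {
  is.read(fileContent);
}
{code}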

> Block token: Client api changes for block token
> ---
>
> Key: HDDS-805
> URL: https://issues.apache.org/jira/browse/HDDS-805
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-805-HDDS-4.00.patch, HDDS-805-HDDS-4.01.patch, 
> HDDS-805-HDDS-4.02.patch, HDDS-805-HDDS-4.03.patch, HDDS-805-HDDS-4.04.patch, 
> HDDS-805-HDDS-4.05.patch, HDDS-805-HDDS-4.06.patch
>
>







[jira] [Assigned] (HDFS-14156) RBF : RollEdit command fail with router

2018-12-17 Thread Shubham Dewan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shubham Dewan reassigned HDFS-14156:


Assignee: Shubham Dewan

> RBF : RollEdit command fail with router
> ---
>
> Key: HDFS-14156
> URL: https://issues.apache.org/jira/browse/HDFS-14156
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Assignee: Shubham Dewan
>Priority: Major
>  Labels: RBF
>
> {noformat}
> bin> ./hdfs dfsadmin -rollEdits
> rollEdits: Cannot cast java.lang.Long to long
> bin>
> {noformat}
> Trace :-
> {noformat}
> org.apache.hadoop.ipc.RemoteException(java.lang.ClassCastException): Cannot 
> cast java.lang.Long to long
> at java.lang.Class.cast(Class.java:3369)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1085)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:982)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.rollEdits(RouterClientProtocol.java:900)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.rollEdits(RouterRpcServer.java:862)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rollEdits(ClientNamenodeProtocolServerSideTranslatorPB.java:899)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
> at org.apache.hadoop.ipc.Client.call(Client.java:1466)
> at org.apache.hadoop.ipc.Client.call(Client.java:1376)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy11.rollEdits(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.rollEdits(ClientNamenodeProtocolTranslatorPB.java:804)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy12.rollEdits(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.rollEdits(DFSClient.java:2350)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.rollEdits(DistributedFileSystem.java:1550)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.rollEdits(DFSAdmin.java:850)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2353)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2568)
> {noformat}






[jira] [Commented] (HDFS-14156) RBF : RollEdit command fail with router

2018-12-17 Thread Shubham Dewan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723722#comment-16723722
 ] 

Shubham Dewan commented on HDFS-14156:
--

Could someone please assign this to me?

> RBF : RollEdit command fail with router
> ---
>
> Key: HDFS-14156
> URL: https://issues.apache.org/jira/browse/HDFS-14156
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Priority: Major
>  Labels: RBF
>
> {noformat}
> bin> ./hdfs dfsadmin -rollEdits
> rollEdits: Cannot cast java.lang.Long to long
> bin>
> {noformat}
> Trace :-
> {noformat}
> org.apache.hadoop.ipc.RemoteException(java.lang.ClassCastException): Cannot 
> cast java.lang.Long to long
> at java.lang.Class.cast(Class.java:3369)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1085)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:982)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.rollEdits(RouterClientProtocol.java:900)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.rollEdits(RouterRpcServer.java:862)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rollEdits(ClientNamenodeProtocolServerSideTranslatorPB.java:899)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
> at org.apache.hadoop.ipc.Client.call(Client.java:1466)
> at org.apache.hadoop.ipc.Client.call(Client.java:1376)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy11.rollEdits(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.rollEdits(ClientNamenodeProtocolTranslatorPB.java:804)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy12.rollEdits(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.rollEdits(DFSClient.java:2350)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.rollEdits(DistributedFileSystem.java:1550)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.rollEdits(DFSAdmin.java:850)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2353)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2568)
> {noformat}






[jira] [Updated] (HDDS-932) Add blockade Tests for Network partition

2018-12-17 Thread Nilotpal Nandi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nilotpal Nandi updated HDDS-932:

Attachment: HDDS-932.001.patch

> Add blockade Tests for Network partition
> 
>
> Key: HDDS-932
> URL: https://issues.apache.org/jira/browse/HDDS-932
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-932.001.patch
>
>
> Blockade tests pertaining to network partition need to be added.






[jira] [Updated] (HDFS-14156) RBF : RollEdit command fail with router

2018-12-17 Thread Harshakiran Reddy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harshakiran Reddy updated HDFS-14156:
-
Labels: RBF  (was: )

> RBF : RollEdit command fail with router
> ---
>
> Key: HDFS-14156
> URL: https://issues.apache.org/jira/browse/HDFS-14156
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Priority: Major
>  Labels: RBF
>
> {noformat}
> bin> ./hdfs dfsadmin -rollEdits
> rollEdits: Cannot cast java.lang.Long to long
> bin>
> {noformat}
> Trace :-
> {noformat}
> org.apache.hadoop.ipc.RemoteException(java.lang.ClassCastException): Cannot 
> cast java.lang.Long to long
> at java.lang.Class.cast(Class.java:3369)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1085)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:982)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.rollEdits(RouterClientProtocol.java:900)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.rollEdits(RouterRpcServer.java:862)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rollEdits(ClientNamenodeProtocolServerSideTranslatorPB.java:899)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
> at org.apache.hadoop.ipc.Client.call(Client.java:1466)
> at org.apache.hadoop.ipc.Client.call(Client.java:1376)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy11.rollEdits(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.rollEdits(ClientNamenodeProtocolTranslatorPB.java:804)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy12.rollEdits(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.rollEdits(DFSClient.java:2350)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.rollEdits(DistributedFileSystem.java:1550)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.rollEdits(DFSAdmin.java:850)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2353)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2568)
> {noformat}






[jira] [Created] (HDFS-14156) RBF : RollEdit command fail with router

2018-12-17 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14156:


 Summary: RBF : RollEdit command fail with router
 Key: HDFS-14156
 URL: https://issues.apache.org/jira/browse/HDFS-14156
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.1.1
Reporter: Harshakiran Reddy


{noformat}
bin> ./hdfs dfsadmin -rollEdits
rollEdits: Cannot cast java.lang.Long to long
bin>
{noformat}

Trace :-
{noformat}
org.apache.hadoop.ipc.RemoteException(java.lang.ClassCastException): Cannot 
cast java.lang.Long to long
at java.lang.Class.cast(Class.java:3369)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1085)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:982)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.rollEdits(RouterClientProtocol.java:900)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.rollEdits(RouterRpcServer.java:862)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rollEdits(ClientNamenodeProtocolServerSideTranslatorPB.java:899)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)

at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
at org.apache.hadoop.ipc.Client.call(Client.java:1466)
at org.apache.hadoop.ipc.Client.call(Client.java:1376)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy11.rollEdits(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.rollEdits(ClientNamenodeProtocolTranslatorPB.java:804)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy12.rollEdits(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.rollEdits(DFSClient.java:2350)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.rollEdits(DistributedFileSystem.java:1550)
at org.apache.hadoop.hdfs.tools.DFSAdmin.rollEdits(DFSAdmin.java:850)
at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2353)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2568)
{noformat}
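For context, the message comes straight from the JDK: {{Class.cast}} with a primitive class token always throws, because no object is an instance of a primitive type. A self-contained demonstration of just that JDK behavior (not the router fix):
{code}
// Standalone demo of the failure mode in the trace above.
public class PrimitiveCastDemo {
    public static void main(String[] args) {
        Object boxed = Long.valueOf(42L);
        // long.class.isInstance(boxed) is always false for a primitive
        // class token, so Class.cast throws:
        //   java.lang.ClassCastException: Cannot cast java.lang.Long to long
        long value = long.class.cast(boxed);
        System.out.println(value);
    }
}
{code}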






[jira] [Commented] (HDFS-14153) [SPS] : Add Support for Storage Policy Satisfier in WEBHDFS

2018-12-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723691#comment-16723691
 ] 

Hadoop QA commented on HDFS-14153:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m  4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 56s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  1m  2s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 240 unchanged - 0 fixed = 242 total (was 240) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 26s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 40s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 22s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 38s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 40s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}166m 50s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestBpServiceActorScheduler |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14153 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12952107/HDFS-14153-01.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3f4f7e4281fb 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Buil

[jira] [Commented] (HDDS-393) Audit Parser tool for processing ozone audit logs

2018-12-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723695#comment-16723695
 ] 

Hadoop QA commented on HDDS-393:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m 
16s{color} | {color:red} The patch generated 4 new + 0 unchanged - 0 fixed = 4 
total (was 0) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 6 line(s) with tabs. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 26m  5s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
57s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
20s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.ozone.om.TestOzoneManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-393 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952122/HDDS-393.002.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  shellcheck  findbugs  
checkstyle  |
| uname | Linux e2d9815209b0 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 94b368f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| shellcheck | v0.4.6 |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1949/artifact/out/diff-patch-shellcheck.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1949/artifact/out/whitespace-tabs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1949/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1949/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1949/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 1082 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/docs hadoop-ozone hadoop-ozone/common 
hadoop-ozone/tools U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1949/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Audit Parser tool for processing ozone audit logs

[jira] [Commented] (HDDS-924) MultipartUpload: S3 API for complete Multipart Upload

2018-12-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723692#comment-16723692
 ] 

Hadoop QA commented on HDDS-924:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 32s{color} | {color:orange} root: The patch generated 8 new + 2 unchanged - 
0 fixed = 10 total (was 2) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 32 line(s) with tabs. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 34s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
45s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.s3.endpoint.TestObjectPut |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-924 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952125/HDDS-924.02.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 2d7475b11bb5 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 94b368f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1950/artifact/out/diff-checkstyle-root.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1950/artifact/out/whitespace-tabs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1950/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1950/testReport/ |
| Max. process+thread count | 198 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common 
hadoop-ozone/dist hadoop-ozone/integration-test hadoop-ozone/ozone-manager 
hadoop-ozone/s3gateway U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1950/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> MultipartUpload: S3 API for complete Multipart Upload
> -
>
> Key: HDDS-924
> URL: https://issues.apache.org/jira/browse/HDDS-924
> Project: Hadoop Distributed Data Store

[jira] [Commented] (HDFS-14132) Add BlockLocation.isStriped() to determine if block is replicated or Striped

2018-12-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723687#comment-16723687
 ] 

Hadoop QA commented on HDFS-14132:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 56s{color} | {color:orange} root: The patch generated 5 new + 32 unchanged - 
0 fixed = 37 total (was 32) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
59s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
51s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}190m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14132 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952102/HDFS-14132.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0a4280706692 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |

[jira] [Commented] (HDFS-8738) Limit Exceptions thrown by DataNode when a client makes socket connection and sends an empty message

2018-12-17 Thread Xiang Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723678#comment-16723678
 ] 

Xiang Li commented on HDFS-8738:


Agree with Arpit. Added the link as a dup of HDFS-9572.

> Limit Exceptions thrown by DataNode when a client makes socket connection and 
> sends an empty message
> 
>
> Key: HDFS-8738
> URL: https://issues.apache.org/jira/browse/HDFS-8738
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Rajesh Kartha
>Assignee: Rajesh Kartha
>Priority: Minor
> Attachments: HDFS-8738.001.patch
>
>
> When a client creates a socket connection to the Datanode and sends an empty 
> message, the datanode logs have exceptions like these:
> 2015-07-08 20:00:55,427 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
> bidev17.rtp.ibm.com:50010:DataXceiver error processing unknown operation  
> src: /127.0.0.1:41508 dst: /127.0.0.1:50010
> java.io.EOFException
> at java.io.DataInputStream.readShort(DataInputStream.java:315)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227)
> at java.lang.Thread.run(Thread.java:745)
> 2015-07-08 20:00:56,671 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
> bidev17.rtp.ibm.com:50010:DataXceiver error processing unknown operation  
> src: /127.0.0.1:41509 dst: /127.0.0.1:50010
> java.io.EOFException
> at java.io.DataInputStream.readShort(DataInputStream.java:315)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227)
> at java.lang.Thread.run(Thread.java:745)
> These can fill up the logs; this was recently noticed with an Ambari 2.1-based 
> install, which checks whether the datanode is up.
> This can be easily reproduced with a simple Java client creating a Socket 
> connection:
> import java.io.DataOutputStream;
> import java.io.IOException;
> import java.net.Socket;
>
> public class EmptyMessageClient {
>     public static void main(String[] args) {
>         // Connect to the DataNode transfer port and send an empty message;
>         // the DataNode then hits the EOFException shown above on readOp().
>         try (Socket dnClient = new Socket("127.0.0.1", 50010);
>              DataOutputStream os =
>                  new DataOutputStream(dnClient.getOutputStream())) {
>             os.writeBytes("");
>         } catch (IOException e) { // UnknownHostException is an IOException
>             e.printStackTrace();
>         }
>     }
> }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-924) MultipartUpload: S3 API for complete Multipart Upload

2018-12-17 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-924:

Target Version/s: 0.4.0

> MultipartUpload: S3 API for complete Multipart Upload
> -
>
> Key: HDDS-924
> URL: https://issues.apache.org/jira/browse/HDDS-924
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-924.01.patch, HDDS-924.02.patch
>
>
> This Jira is to implement Complete Multipart Upload S3 API.
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadComplete.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-924) MultipartUpload: S3 API for complete Multipart Upload

2018-12-17 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723677#comment-16723677
 ] 

Bharat Viswanadham commented on HDDS-924:
-

HDDS-924.02 contains HDDS-916, HDDS-901, and HDDS-902.

HDDS-924.01 is the actual patch for this jira.
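
For context, this call is what an S3 client issues to finish a multipart upload. 
Below is a minimal sketch using the AWS SDK for Java against a local s3gateway; 
the endpoint/port, credentials, bucket, key, uploadId, and part ETags are all 
illustrative assumptions, not values from this patch:

{code:java}
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.CompleteMultipartUploadResult;
import com.amazonaws.services.s3.model.PartETag;
import java.util.Arrays;

public class CompleteMpuSketch {
  public static void main(String[] args) {
    // Assumed endpoint of a local Ozone s3gateway; adjust to your setup.
    AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(
            new BasicAWSCredentials("accessKey", "secretKey")))
        .withEndpointConfiguration(
            new EndpointConfiguration("http://localhost:9878", "us-east-1"))
        .withPathStyleAccessEnabled(true)
        .build();
    // uploadId and part ETags would come from the initiate/upload-part calls.
    CompleteMultipartUploadResult result = s3.completeMultipartUpload(
        new CompleteMultipartUploadRequest("bucket1", "key1", "uploadId1",
            Arrays.asList(new PartETag(1, "etag-1"), new PartETag(2, "etag-2"))));
    System.out.println("Completed, final ETag: " + result.getETag());
  }
}
{code}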

> MultipartUpload: S3 API for complete Multipart Upload
> -
>
> Key: HDDS-924
> URL: https://issues.apache.org/jira/browse/HDDS-924
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-924.01.patch, HDDS-924.02.patch
>
>
> This Jira is to implement Complete Multipart Upload S3 API.
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadComplete.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-924) MultipartUpload: S3 API for complete Multipart Upload

2018-12-17 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-924:

Attachment: HDDS-924.02.patch
HDDS-924.01.patch

> MultipartUpload: S3 API for complete Multipart Upload
> -
>
> Key: HDDS-924
> URL: https://issues.apache.org/jira/browse/HDDS-924
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-924.01.patch, HDDS-924.02.patch
>
>
> This Jira is to implement Complete Multipart Upload S3 API.
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadComplete.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-924) MultipartUpload: S3 API for complete Multipart Upload

2018-12-17 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-924:

Status: Patch Available  (was: In Progress)

> MultipartUpload: S3 API for complete Multipart Upload
> -
>
> Key: HDDS-924
> URL: https://issues.apache.org/jira/browse/HDDS-924
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-924.01.patch, HDDS-924.02.patch
>
>
> This Jira is to implement Complete Multipart Upload S3 API.
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadComplete.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-99) Adding SCM Audit log

2018-12-17 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-99?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723645#comment-16723645
 ] 

Dinesh Chitlangia commented on HDDS-99:
---

Thanks [~xyao] for commit and all for review.

> Adding SCM Audit log
> 
>
> Key: HDDS-99
> URL: https://issues.apache.org/jira/browse/HDDS-99
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: SCM
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Fix For: 0.4.0
>
> Attachments: HDDS-99.001.patch, HDDS-99.002.patch
>
>
> This ticket is opened to add SCM audit log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-914) Add Grafana support to ozoneperf docker container

2018-12-17 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723647#comment-16723647
 ] 

Dinesh Chitlangia commented on HDDS-914:


[~bharatviswa] thanks for the review and commit.

> Add Grafana support to ozoneperf docker container
> -
>
> Key: HDDS-914
> URL: https://issues.apache.org/jira/browse/HDDS-914
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: docker
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-914.001.patch, HDDS-914.002.patch, 
> HDDS-914.003.patch, HDDS-914.004.patch
>
>
> Since we are using Prometheus as a datasource to capture metrics from OM & 
> SCM, it will be useful to have basic Grafana Dashboards made available once 
> we start the docker cluster.
> This will be useful for investigation/analysis.
>  
> This Jira proposes to add grafana + prometheus support for the ozoneperf 
> docker container.
>  
> This Jira will also add 2 simple ozone dashboards: Basic Object Creation 
> stats and RPC Metrics.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-393) Audit Parser tool for processing ozone audit logs

2018-12-17 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723644#comment-16723644
 ] 

Dinesh Chitlangia commented on HDDS-393:


Attached patch 002 to fix the license and whitespace issues.

> Audit Parser tool for processing ozone audit logs
> -
>
> Key: HDDS-393
> URL: https://issues.apache.org/jira/browse/HDDS-393
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-393.001.patch, HDDS-393.002.patch
>
>
> Jira to create audit parser tool to process ozone audit logs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-393) Audit Parser tool for processing ozone audit logs

2018-12-17 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-393:
---
Attachment: HDDS-393.002.patch
Status: Patch Available  (was: Open)

> Audit Parser tool for processing ozone audit logs
> -
>
> Key: HDDS-393
> URL: https://issues.apache.org/jira/browse/HDDS-393
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-393.001.patch, HDDS-393.002.patch
>
>
> Jira to create audit parser tool to process ozone audit logs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-393) Audit Parser tool for processing ozone audit logs

2018-12-17 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-393:
---
Status: Open  (was: Patch Available)

> Audit Parser tool for processing ozone audit logs
> -
>
> Key: HDDS-393
> URL: https://issues.apache.org/jira/browse/HDDS-393
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-393.001.patch
>
>
> Jira to create audit parser tool to process ozone audit logs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-393) Audit Parser tool for processing ozone audit logs

2018-12-17 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-393:
---
Attachment: (was: HDDS-393.002.patch)

> Audit Parser tool for processing ozone audit logs
> -
>
> Key: HDDS-393
> URL: https://issues.apache.org/jira/browse/HDDS-393
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-393.001.patch
>
>
> Jira to create audit parser tool to process ozone audit logs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-393) Audit Parser tool for processing ozone audit logs

2018-12-17 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-393:
---
Attachment: HDDS-393.002.patch

> Audit Parser tool for processing ozone audit logs
> -
>
> Key: HDDS-393
> URL: https://issues.apache.org/jira/browse/HDDS-393
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-393.001.patch, HDDS-393.002.patch
>
>
> Jira to create audit parser tool to process ozone audit logs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12943) Consistent Reads from Standby Node

2018-12-17 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723631#comment-16723631
 ] 

Brahma Reddy Battula commented on HDFS-12943:
-

Hi [~vagarychen]
{quote}I have never seen the second-call issue. Here is an output from our 
cluster (log outpu part omitted), and I think you are right about lowering 
dfs.ha.tail-edits.period, we had similar numbers here:
{quote}
You can see this issue if "dfs.ha.tail-edits.period" is left at its default value.
{quote}Curious, how many NN you had in the testing? and was there any error 
from NN logs?
{quote}
1 ANN, 1 SNN, 1 Observer. No error logs from the NNs.

Hi [~csun]
{quote}I think we should document {{dfs.ha.tail-edits.period}} in the user 
guide - the default value is just too large for observer reads. Filed 
HDFS-14154.
{quote}
Yes, thanks for reporting the same.

Hi [~xkrogen]
{quote}Your concern about heavier load in the JournalNode would have previously 
been valid, but with the completion of HDFS-13150 and 
{{dfs.ha.tail-edits.in-progress}} enabled, the Standby/Observer no longer 
creates a new stream to tail edits, instead polling for edits via RPC (and thus 
making use of connection keepalive). This greatly reduces the overheads 
involved with each iteration of edit tailing, enabling it to be done much more 
frequently.
{quote}
Yes, this is one of my concerns. I went through the *fast path* (HDFS-13150); 
thanks, it should help.
{quote}I'm not aware of any good benchmark numbers produced after finishing the 
feature, maybe [~csun] can provide them?
{quote}
[~csun], can you provide them? I am sure this feature is going to be a great 
advantage for the RPC workload on the ANN; I just want to see the write 
benchmarks as well (as getHAserviceState() and fast edit tailing are 
introduced). Sorry for pitching in very late.
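
To make the knobs concrete, here is a minimal sketch (not the benchmark setup) 
of the two settings discussed above, via the standard Configuration API; the 
values are illustrative assumptions, not tuning advice:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class ObserverTailingConf {
  public static void main(String[] args) {
    Configuration conf = new HdfsConfiguration();
    // Lower the tailing period from its default (one minute) so the Observer
    // sees new edits quickly; "0" here means tail as often as possible.
    conf.set("dfs.ha.tail-edits.period", "0");
    // Enable fast tailing of in-progress edit segments over RPC (HDFS-13150).
    conf.setBoolean("dfs.ha.tail-edits.in-progress", true);
    System.out.println("period = " + conf.get("dfs.ha.tail-edits.period"));
  }
}
{code}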

> Consistent Reads from Standby Node
> --
>
> Key: HDFS-12943
> URL: https://issues.apache.org/jira/browse/HDFS-12943
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Konstantin Shvachko
>Priority: Major
> Attachments: ConsistentReadsFromStandbyNode.pdf, 
> ConsistentReadsFromStandbyNode.pdf, HDFS-12943-001.patch, 
> HDFS-12943-002.patch, TestPlan-ConsistentReadsFromStandbyNode.pdf
>
>
> StandbyNode in HDFS is a replica of the active NameNode. The states of the 
> NameNodes are coordinated via the journal. It is natural to consider 
> StandbyNode as a read-only replica. As with any replicated distributed system 
> the problem of stale reads should be resolved. Our main goal is to provide 
> reads from standby in a consistent way in order to enable a wide range of 
> existing applications running on top of HDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14151) RBF: Make the read-only column of Mount Table clearly understandable

2018-12-17 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723620#comment-16723620
 ] 

Ayush Saxena commented on HDFS-14151:
-

Thanx [~tasanuma0829] for the patch and screenshot.

The new layout looks quite good. :)

> RBF: Make the read-only column of Mount Table clearly understandable
> 
>
> Key: HDFS-14151
> URL: https://issues.apache.org/jira/browse/HDFS-14151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14151.1.patch, HDFS-14151.2.patch, 
> HDFS-14151.3.patch, mount_table_3rd_patch.png, mount_table_before.png, 
> read_only_a.png, read_only_b.png
>
>
> The read-only column of Mount Table is a little confusing now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14059) Test reads from standby on a secure cluster with Configured failover

2018-12-17 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723616#comment-16723616
 ] 

Brahma Reddy Battula commented on HDFS-14059:
-

[~shv] I have checked HDFS-14058, which covers only reads; hence I asked 
here. The tests were performed with teragen/terasort/DFSIO with all the options.

bq.{{dfs.ha.tail-edits.period}} is not very much relevant any more with fast 
edits tailing, see HDFS-13150.

I don't think so; *fast edits tailing* also has to be triggered by the edit 
log tailer thread, which depends on this config.

> Test reads from standby on a secure cluster with Configured failover
> 
>
> Key: HDFS-14059
> URL: https://issues.apache.org/jira/browse/HDFS-14059
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
>Priority: Major
>
> Run standard HDFS tests to verify reading from ObserverNode on a secure HA 
> cluster with {{ConfiguredFailoverProxyProvider}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14151) RBF: Make the read-only column of Mount Table clearly understandable

2018-12-17 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723611#comment-16723611
 ] 

Íñigo Goiri commented on HDFS-14151:


+1 on  [^HDFS-14151.3.patch].

> RBF: Make the read-only column of Mount Table clearly understandable
> 
>
> Key: HDFS-14151
> URL: https://issues.apache.org/jira/browse/HDFS-14151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14151.1.patch, HDFS-14151.2.patch, 
> HDFS-14151.3.patch, mount_table_3rd_patch.png, mount_table_before.png, 
> read_only_a.png, read_only_b.png
>
>
> The read-only column of Mount Table is a little confusing now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14151) RBF: Make the read-only column of Mount Table clearly understandable

2018-12-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723607#comment-16723607
 ] 

Hadoop QA commented on HDFS-14151:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
29m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14151 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952109/HDFS-14151.3.patch |
| Optional Tests |  dupname  asflicense  shadedclient  |
| uname | Linux c7596a3d846e 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 94b368f |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 445 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25824/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Make the read-only column of Mount Table clearly understandable
> 
>
> Key: HDFS-14151
> URL: https://issues.apache.org/jira/browse/HDFS-14151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14151.1.patch, HDFS-14151.2.patch, 
> HDFS-14151.3.patch, mount_table_3rd_patch.png, mount_table_before.png, 
> read_only_a.png, read_only_b.png
>
>
> The read-only column of Mount Table is a little confusing now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14151) RBF: Make the read-only column of Mount Table clearly understandable

2018-12-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723606#comment-16723606
 ] 

Hadoop QA commented on HDFS-14151:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-14151 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14151 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25826/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Make the read-only column of Mount Table clearly understandable
> 
>
> Key: HDFS-14151
> URL: https://issues.apache.org/jira/browse/HDFS-14151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14151.1.patch, HDFS-14151.2.patch, 
> HDFS-14151.3.patch, mount_table_3rd_patch.png, mount_table_before.png, 
> read_only_a.png, read_only_b.png
>
>
> The read-only column of Mount Table is a little confusing now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14151) RBF: Make the read-only column of Mount Table clearly understandable

2018-12-17 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723605#comment-16723605
 ] 

Akira Ajisaka commented on HDFS-14151:
--

The 3rd patch LGTM.

> RBF: Make the read-only column of Mount Table clearly understandable
> 
>
> Key: HDFS-14151
> URL: https://issues.apache.org/jira/browse/HDFS-14151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14151.1.patch, HDFS-14151.2.patch, 
> HDFS-14151.3.patch, mount_table_3rd_patch.png, mount_table_before.png, 
> read_only_a.png, read_only_b.png
>
>
> The read-only column of Mount Table is a little confusing now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14151) RBF: Make the read-only column of Mount Table clearly understandable

2018-12-17 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723604#comment-16723604
 ] 

Takanobu Asanuma commented on HDFS-14151:
-

Sure, attached the image of the 3rd patch.

!mount_table_3rd_patch.png!

> RBF: Make the read-only column of Mount Table clearly understandable
> 
>
> Key: HDFS-14151
> URL: https://issues.apache.org/jira/browse/HDFS-14151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14151.1.patch, HDFS-14151.2.patch, 
> HDFS-14151.3.patch, mount_table_3rd_patch.png, mount_table_before.png, 
> read_only_a.png, read_only_b.png
>
>
> The read-only column of Mount Table is a little confusing now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14151) RBF: Make the read-only column of Mount Table clearly understandable

2018-12-17 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14151:

Attachment: mount_table_3rd_patch.png

> RBF: Make the read-only column of Mount Table clearly understandable
> 
>
> Key: HDFS-14151
> URL: https://issues.apache.org/jira/browse/HDFS-14151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14151.1.patch, HDFS-14151.2.patch, 
> HDFS-14151.3.patch, mount_table_3rd_patch.png, mount_table_before.png, 
> read_only_a.png, read_only_b.png
>
>
> The read-only column of Mount Table is a little confusing now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14151) RBF: Make the read-only column of Mount Table clearly understandable

2018-12-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723591#comment-16723591
 ] 

Hadoop QA commented on HDFS-14151:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
30m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14151 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952104/HDFS-14151.2.patch |
| Optional Tests |  dupname  asflicense  shadedclient  |
| uname | Linux b6eac7092cd1 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 94b368f |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 438 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25823/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Make the read-only column of Mount Table clearly understandable
> 
>
> Key: HDFS-14151
> URL: https://issues.apache.org/jira/browse/HDFS-14151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14151.1.patch, HDFS-14151.2.patch, 
> HDFS-14151.3.patch, mount_table_before.png, read_only_a.png, read_only_b.png
>
>
> The read-only column of Mount Table is a little confusing now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14151) RBF: Make the read-only column of Mount Table clearly understandable

2018-12-17 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723596#comment-16723596
 ] 

Íñigo Goiri commented on HDFS-14151:


 [^HDFS-14151.3.patch] LGTM.
[~tasanuma0829] can you attach a screenshot (with the text if possible)?

> RBF: Make the read-only column of Mount Table clearly understandable
> 
>
> Key: HDFS-14151
> URL: https://issues.apache.org/jira/browse/HDFS-14151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14151.1.patch, HDFS-14151.2.patch, 
> HDFS-14151.3.patch, mount_table_before.png, read_only_a.png, read_only_b.png
>
>
> The read-only column of Mount Table is a little confusing now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14151) RBF: Make the read-only column of Mount Table clearly understandable

2018-12-17 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14151:

Attachment: HDFS-14151.3.patch

> RBF: Make the read-only column of Mount Table clearly understandable
> 
>
> Key: HDFS-14151
> URL: https://issues.apache.org/jira/browse/HDFS-14151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14151.1.patch, HDFS-14151.2.patch, 
> HDFS-14151.3.patch, mount_table_before.png, read_only_a.png, read_only_b.png
>
>
> The read-only column of Mount Table is a little confusing now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14151) RBF: Make the read-only column of Mount Table clearly understandable

2018-12-17 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14151:

Attachment: (was: HDFS-14151.2.patch)

> RBF: Make the read-only column of Mount Table clearly understandable
> 
>
> Key: HDFS-14151
> URL: https://issues.apache.org/jira/browse/HDFS-14151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14151.1.patch, HDFS-14151.2.patch, 
> mount_table_before.png, read_only_a.png, read_only_b.png
>
>
> The read-only column of Mount Table is a little confusing now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14151) RBF: Make the read-only column of Mount Table clearly understandable

2018-12-17 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723582#comment-16723582
 ] 

Takanobu Asanuma commented on HDFS-14151:
-

Uploaded the 3rd patch addressing [~elgoiri]'s last comment.

> RBF: Make the read-only column of Mount Table clearly understandable
> 
>
> Key: HDFS-14151
> URL: https://issues.apache.org/jira/browse/HDFS-14151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14151.1.patch, HDFS-14151.2.patch, 
> HDFS-14151.3.patch, mount_table_before.png, read_only_a.png, read_only_b.png
>
>
> The read-only column of Mount Table is a little confusing now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14151) RBF: Make the read-only column of Mount Table clearly understandable

2018-12-17 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14151:

Attachment: HDFS-14151.2.patch

> RBF: Make the read-only column of Mount Table clearly understandable
> 
>
> Key: HDFS-14151
> URL: https://issues.apache.org/jira/browse/HDFS-14151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14151.1.patch, HDFS-14151.2.patch, 
> mount_table_before.png, read_only_a.png, read_only_b.png
>
>
> The read-only column of Mount Table is a little confusing now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14153) [SPS] : Add Support for Storage Policy Satisfier in WEBHDFS

2018-12-17 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14153:

Status: Patch Available  (was: Open)

> [SPS] : Add Support for Storage Policy Satisfier in WEBHDFS
> ---
>
> Key: HDFS-14153
> URL: https://issues.apache.org/jira/browse/HDFS-14153
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14153-01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14153) [SPS] : Add Support for Storage Policy Satisfier in WEBHDFS

2018-12-17 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14153:

Attachment: HDFS-14153-01.patch

> [SPS] : Add Support for Storage Policy Satisfier in WEBHDFS
> ---
>
> Key: HDFS-14153
> URL: https://issues.apache.org/jira/browse/HDFS-14153
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14153-01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-805) Block token: Client api changes for block token

2018-12-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723574#comment-16723574
 ] 

Hadoop QA commented on HDDS-805:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 13 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
29s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} HDDS-4 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} root: The patch generated 2 new + 22 unchanged - 
2 fixed = 24 total (was 24) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 33m 19s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m  
5s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
|   | hadoop.ozone.container.ozoneimpl.TestOzoneContainerWithTLS |
|   | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.ozShell.TestOzoneShell |
|   | hadoop.ozone.om.TestOmMetrics |
|   | hadoop.ozone.container.metrics.TestContainerMetrics |
|   | hadoop.ozone.container.TestContainerReplication |
|   | hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-805 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952100/HDDS-805-HDDS-4.06.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux e91bf8816cdc 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | HDDS-4 / 614bcda |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1948/artifact/out/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1948/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1948/testReport/ |
| Max. process+thread count | 1323 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/client hadoop-hdds/common hadoop-ozone/client 
hadoop-ozone/common hadoop-ozone/integration-test 
hadoop-ozone/objectstore-service hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1948/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDFS-14151) RBF: Make the read-only column of Mount Table clearly understandable

2018-12-17 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723569#comment-16723569
 ] 

Íñigo Goiri commented on HDFS-14151:


Instead of Enabled/Disabled, maybe empty and "Read only"?

> RBF: Make the read-only column of Mount Table clearly understandable
> 
>
> Key: HDFS-14151
> URL: https://issues.apache.org/jira/browse/HDFS-14151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14151.1.patch, HDFS-14151.2.patch, 
> mount_table_before.png, read_only_a.png, read_only_b.png
>
>
> The read-only column of Mount Table is a little confusing now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14151) RBF: Make the read-only column of Mount Table clearly understandable

2018-12-17 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14151:

Attachment: HDFS-14151.2.patch

> RBF: Make the read-only column of Mount Table clearly understandable
> 
>
> Key: HDFS-14151
> URL: https://issues.apache.org/jira/browse/HDFS-14151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14151.1.patch, HDFS-14151.2.patch, 
> mount_table_before.png, read_only_a.png, read_only_b.png
>
>
> The read-only column of Mount Table is a little confusing now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14151) RBF: Make the read-only column of Mount Table clearly understandable

2018-12-17 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723562#comment-16723562
 ] 

Takanobu Asanuma commented on HDFS-14151:
-

Thanks for the review, [~elgoiri]. Uploaded the 2nd patch.

It seems alt text is only for images. We can use a title attribute here.

> RBF: Make the read-only column of Mount Table clearly understandable
> 
>
> Key: HDFS-14151
> URL: https://issues.apache.org/jira/browse/HDFS-14151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14151.1.patch, HDFS-14151.2.patch, 
> mount_table_before.png, read_only_a.png, read_only_b.png
>
>
> The read-only column of Mount Table is a little confusing now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14132) Add BlockLocation.isStriped() to determine if block is replicated or Striped

2018-12-17 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-14132:
--
Attachment: (was: HDFS-14132.002.patch)

> Add BlockLocation.isStriped() to determine if block is replicated or Striped
> 
>
> Key: HDFS-14132
> URL: https://issues.apache.org/jira/browse/HDFS-14132
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-14132.001.patch, HDFS-14132.002.patch
>
>
> Impala uses FileSystem#getBlockLocation to get block locations. We can add 
> an isStriped() method to make it easier to determine whether a block belongs 
> to a replicated file or a striped file.
> In HDFS, this isStriped information is already available in 
> HdfsBlockLocation#LocatedBlock#isStriped(), so adding this method to 
> BlockLocation does not introduce space overhead.
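(For illustration, a minimal sketch of the shape such an accessor could take;
the field and setter below are illustrative wiring, not necessarily the
committed patch:)

{code:java}
public class BlockLocation {
  // Carried over from LocatedBlock#isStriped() when HdfsBlockLocation
  // wraps a located block; defaults to false for replicated files.
  private boolean striped;

  public void setStriped(boolean striped) {  // illustrative wiring point
    this.striped = striped;
  }

  /** @return true if the block belongs to an erasure-coded (striped) file. */
  public boolean isStriped() {
    return striped;
  }
}
{code}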



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14132) Add BlockLocation.isStriped() to determine if block is replicated or Striped

2018-12-17 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-14132:
--
Attachment: HDFS-14132.002.patch

> Add BlockLocation.isStriped() to determine if block is replicated or Striped
> 
>
> Key: HDFS-14132
> URL: https://issues.apache.org/jira/browse/HDFS-14132
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-14132.001.patch, HDFS-14132.002.patch
>
>
> Impala uses FileSystem#getBlockLocation to get block locations. We can add 
> an isStriped() method to make it easier to determine whether a block belongs 
> to a replicated file or a striped file.
> In HDFS, this isStriped information is already available in 
> HdfsBlockLocation#LocatedBlock#isStriped(), so adding this method to 
> BlockLocation does not introduce space overhead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14132) Add BlockLocation.isStriped() to determine if block is replicated or Striped

2018-12-17 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723555#comment-16723555
 ] 

Shweta commented on HDFS-14132:
---

Thanks [~knanasi] and [~templedf] for the review and suggestions.
1. I guess this happened due to the formatting in my IDE; I should have looked 
at the upstream code of BlockLocation.java for the correct format. I have 
updated that in the patch.
2. Thanks for pointing that out; I have added assert messages for when tests 
fail.
3. Most of the tests in TestAddStripedBlocks.java used Assert.assertEquals and 
other assert methods, so I kept that consistent in the first patch. I guess the 
point is about accurately differentiating TestCase.assertTrue from 
Assert.assertTrue, but as Assert was already imported this shouldn't be an 
issue, and I have updated the patch to drop the explicit Assert prefix.

I have uploaded an updated patch; please review.
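
(For point 2, the difference is roughly the following — a hedged illustration
using org.junit.Assert; the variable names are made up:)

{code:java}
// A bare assert only reports expected/actual on failure:
assertTrue(stripedLocation.isStriped());

// A message says what was being checked:
assertTrue("Block of an erasure-coded file should report isStriped()",
    stripedLocation.isStriped());
assertFalse("Block of a replicated file should not report isStriped()",
    replicatedLocation.isStriped());
{code}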

> Add BlockLocation.isStriped() to determine if block is replicated or Striped
> 
>
> Key: HDFS-14132
> URL: https://issues.apache.org/jira/browse/HDFS-14132
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-14132.001.patch, HDFS-14132.002.patch
>
>
> Impala uses FileSystem#getBlockLocation to get block locations. We can add 
> an isStriped() method to make it easier to determine whether a block belongs 
> to a replicated file or a striped file.
> In HDFS, this isStriped information is already available in 
> HdfsBlockLocation#LocatedBlock#isStriped(), so adding this method to 
> BlockLocation does not introduce space overhead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14132) Add BlockLocation.isStriped() to determine if block is replicated or Striped

2018-12-17 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-14132:
--
Attachment: HDFS-14132.002.patch

> Add BlockLocation.isStriped() to determine if block is replicated or Striped
> 
>
> Key: HDFS-14132
> URL: https://issues.apache.org/jira/browse/HDFS-14132
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-14132.001.patch, HDFS-14132.002.patch
>
>
> Impala uses FileSystem#getBlockLocation to get block locations. We can add 
> an isStriped() method to make it easier to determine whether a block belongs 
> to a replicated file or a striped file.
> In HDFS, this isStriped information is already available in 
> HdfsBlockLocation#LocatedBlock#isStriped(), so adding this method to 
> BlockLocation does not introduce space overhead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14116) Fix class cast error in NNThroughputBenchmark with ObserverReadProxyProvider.

2018-12-17 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723546#comment-16723546
 ] 

Konstantin Shvachko commented on HDFS-14116:


Hey [~csun], I don't know what users could do with the knowledge that ORPP can 
be used with different protocols. For developers the type clearly states it 
extends {{ClientProtocol}}, but for a user guide it's probably TMID (too much 
implementation detail).

> Fix class cast error in NNThroughputBenchmark with ObserverReadProxyProvider.
> -
>
> Key: HDFS-14116
> URL: https://issues.apache.org/jira/browse/HDFS-14116
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-14116-HDFS-12943.000.patch, 
> HDFS-14116-HDFS-12943.001.patch, HDFS-14116-HDFS-12943.002.patch, 
> HDFS-14116-HDFS-12943.003.patch, HDFS-14116-HDFS-12943.004.patch, 
> HDFS-14116-HDFS-12943.005.patch
>
>
> Currently in the {{ObserverReadProxyProvider}} constructor there is this line:
> {code}
> ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
> {code}
> This can fail, because it is possible that the factory cannot be cast here.
> Specifically, {{NameNodeProxiesClient.createFailoverProxyProvider}} is where
> the constructor is called, and there are two paths that can reach it:
> (1) {{NameNodeProxies.createProxy}}
> (2) {{NameNodeProxiesClient.createFailoverProxyProvider}}
> (2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses
> {{NameNodeHAProxyFactory}}, which cannot be cast to {{ClientHAProxyFactory}};
> this happens when, for example, running NNThroughputBenchmark. To fix this we
> can at least:
> 1. introduce setAlignmentContext in HAProxyFactory, which is the parent of
> both ClientHAProxyFactory and NameNodeHAProxyFactory, OR
> 2. only call setAlignmentContext when the factory is a ClientHAProxyFactory,
> by, say, having an if check with reflection.
> Which option to pick depends on whether it makes sense to have an alignment
> context for the code paths in case (1).
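
(As a rough sketch of option 1 from the description — assuming a Java 8
default method is acceptable here; this is an illustration, not necessarily
the committed change:)

{code:java}
public interface HAProxyFactory<T> {
  // ... existing createProxy(...) methods ...

  // A default no-op lets ObserverReadProxyProvider call this on any
  // factory without the unsafe cast; only ClientHAProxyFactory would
  // override it to propagate the context.
  default void setAlignmentContext(AlignmentContext alignmentContext) {
    // No-op for factories that do not track client state, such as
    // NameNodeHAProxyFactory used by NNThroughputBenchmark.
  }
}
{code}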



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14149) Adjust annotations on new interfaces/classes for SBN reads.

2018-12-17 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-14149:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-12943
   Status: Resolved  (was: Patch Available)

I just committed this to the HDFS-12943 branch, with a minor rebase. Thank you 
[~csun].

> Adjust annotations on new interfaces/classes for SBN reads.
> ---
>
> Key: HDFS-14149
> URL: https://issues.apache.org/jira/browse/HDFS-14149
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-14149-HDFS-12943.000.patch, 
> HDFS-14149-HDFS-12943.001.patch, HDFS-14149-HDFS-12943.002.patch
>
>
> Let's make sure that all new classes and interfaces
> # have annotations, as some of them, like {{ObserverReadProxyProvider}}, 
> currently don't
> # are annotated as {{Private}} and {{Evolving}}, to allow room for 
> changes
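
(For reference, the requested pattern is the standard Hadoop
audience/stability annotations; the class signature below is abbreviated:)

{code:java}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

@InterfaceAudience.Private
@InterfaceStability.Evolving
public class ObserverReadProxyProvider<T>
    extends AbstractNNFailoverProxyProvider<T> {
  // ...
}
{code}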



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13762) Support non-volatile storage class memory(SCM) in HDFS cache directives

2018-12-17 Thread Wei Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723539#comment-16723539
 ] 

Wei Zhou commented on HDFS-13762:
-

The test failures seem unrelated to this patch, thanks!

> Support non-volatile storage class memory(SCM) in HDFS cache directives
> ---
>
> Key: HDFS-13762
> URL: https://issues.apache.org/jira/browse/HDFS-13762
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching, datanode
>Reporter: Sammi Chen
>Assignee: Wei Zhou
>Priority: Major
> Attachments: HDFS-13762.000.patch, HDFS-13762.001.patch, 
> HDFS-13762.002.patch, HDFS-13762.003.patch, HDFS-13762.004.patch, 
> HDFS-13762.005.patch, HDFS-13762.006.patch, HDFS-13762.007.patch, 
> SCMCacheDesign-2018-11-08.pdf, SCMCacheTestPlan.pdf
>
>
> Non-volatile storage class memory is a type of memory that keeps its data 
> content across power failures and power cycles. A non-volatile storage 
> class memory device usually offers access speed close to a memory DIMM at a 
> lower cost than memory. So today it is usually used as a supplement to 
> memory to hold long-term persistent data, such as data in a cache. 
> Currently in HDFS, we have an OS-page-cache-backed read-only cache and a 
> RAMDISK-based lazy-write cache. Non-volatile memory suits both of these 
> functions. This Jira aims to enable storage class memory first in the read 
> cache. Although storage class memory has non-volatile characteristics, to 
> keep the same behavior as the current read-only cache, we don't use its 
> persistent characteristics for now.
>  
>  
>  
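
(For context, the read cache in question is driven by HDFS cache directives;
a minimal usage sketch follows — the pool and path names are made up:)

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;
import org.apache.hadoop.hdfs.protocol.CachePoolInfo;

public class CacheDirectiveExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(conf);
    // Create a pool, then cache a hot path into it; with this Jira the
    // cached replicas could live on SCM instead of DRAM.
    dfs.addCachePool(new CachePoolInfo("hot-pool"));
    long directiveId = dfs.addCacheDirective(
        new CacheDirectiveInfo.Builder()
            .setPath(new Path("/data/hot-table"))
            .setPool("hot-pool")
            .build());
    System.out.println("Added cache directive " + directiveId);
  }
}
{code}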



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14149) Adjust annotations on new interfaces/classes for SBN reads.

2018-12-17 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723534#comment-16723534
 ] 

Konstantin Shvachko commented on HDFS-14149:


+1. The unit test failures are clearly unrelated.

> Adjust annotations on new interfaces/classes for SBN reads.
> ---
>
> Key: HDFS-14149
> URL: https://issues.apache.org/jira/browse/HDFS-14149
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14149-HDFS-12943.000.patch, 
> HDFS-14149-HDFS-12943.001.patch, HDFS-14149-HDFS-12943.002.patch
>
>
> Let's make sure that all new classes and interfaces
> # have annotations, as some of them, like {{ObserverReadProxyProvider}}, 
> currently don't
> # are annotated as {{Private}} and {{Evolving}}, to allow room for 
> changes



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14116) Fix class cast error in NNThroughputBenchmark with ObserverReadProxyProvider.

2018-12-17 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723526#comment-16723526
 ] 

Chao Sun commented on HDFS-14116:
-

Thanks [~shv]. I'm fine with forbidding the use of ORPP with any protocol other 
{{ClientProtocol}}, as long as we document this in the user guide. I'll add 
that in HDFS-14154.

> Fix class cast error in NNThroughputBenchmark with ObserverReadProxyProvider.
> -
>
> Key: HDFS-14116
> URL: https://issues.apache.org/jira/browse/HDFS-14116
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-14116-HDFS-12943.000.patch, 
> HDFS-14116-HDFS-12943.001.patch, HDFS-14116-HDFS-12943.002.patch, 
> HDFS-14116-HDFS-12943.003.patch, HDFS-14116-HDFS-12943.004.patch, 
> HDFS-14116-HDFS-12943.005.patch
>
>
> Currently in the {{ObserverReadProxyProvider}} constructor there is this line:
> {code}
> ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
> {code}
> This can fail, because it is possible that the factory cannot be cast here.
> Specifically, {{NameNodeProxiesClient.createFailoverProxyProvider}} is where
> the constructor is called, and there are two paths that can reach it:
> (1) {{NameNodeProxies.createProxy}}
> (2) {{NameNodeProxiesClient.createFailoverProxyProvider}}
> (2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses
> {{NameNodeHAProxyFactory}}, which cannot be cast to {{ClientHAProxyFactory}};
> this happens when, for example, running NNThroughputBenchmark. To fix this we
> can at least:
> 1. introduce setAlignmentContext in HAProxyFactory, which is the parent of
> both ClientHAProxyFactory and NameNodeHAProxyFactory, OR
> 2. only call setAlignmentContext when the factory is a ClientHAProxyFactory,
> by, say, having an if check with reflection.
> Which option to pick depends on whether it makes sense to have an alignment
> context for the code paths in case (1).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-805) Block token: Client api changes for block token

2018-12-17 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-805:

Attachment: HDDS-805-HDDS-4.06.patch

> Block token: Client api changes for block token
> ---
>
> Key: HDDS-805
> URL: https://issues.apache.org/jira/browse/HDDS-805
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-805-HDDS-4.00.patch, HDDS-805-HDDS-4.01.patch, 
> HDDS-805-HDDS-4.02.patch, HDDS-805-HDDS-4.03.patch, HDDS-805-HDDS-4.04.patch, 
> HDDS-805-HDDS-4.05.patch, HDDS-805-HDDS-4.06.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14116) ObserverReadProxyProvider should work with protocols other than ClientProtocol

2018-12-17 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko resolved HDFS-14116.

  Resolution: Fixed
Hadoop Flags: Reviewed

Pushed to HDFS-12943 branch. Thanks everybody.

> ObserverReadProxyProvider should work with protocols other than ClientProtocol
> --
>
> Key: HDFS-14116
> URL: https://issues.apache.org/jira/browse/HDFS-14116
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-14116-HDFS-12943.000.patch, 
> HDFS-14116-HDFS-12943.001.patch, HDFS-14116-HDFS-12943.002.patch, 
> HDFS-14116-HDFS-12943.003.patch, HDFS-14116-HDFS-12943.004.patch, 
> HDFS-14116-HDFS-12943.005.patch
>
>
> Currently in the {{ObserverReadProxyProvider}} constructor there is this line:
> {code}
> ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
> {code}
> This can fail, because it is possible that the factory cannot be cast here.
> Specifically, {{NameNodeProxiesClient.createFailoverProxyProvider}} is where
> the constructor is called, and there are two paths that can reach it:
> (1) {{NameNodeProxies.createProxy}}
> (2) {{NameNodeProxiesClient.createFailoverProxyProvider}}
> (2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses
> {{NameNodeHAProxyFactory}}, which cannot be cast to {{ClientHAProxyFactory}};
> this happens when, for example, running NNThroughputBenchmark. To fix this we
> can at least:
> 1. introduce setAlignmentContext in HAProxyFactory, which is the parent of
> both ClientHAProxyFactory and NameNodeHAProxyFactory, OR
> 2. only call setAlignmentContext when the factory is a ClientHAProxyFactory,
> by, say, having an if check with reflection.
> Which option to pick depends on whether it makes sense to have an alignment
> context for the code paths in case (1).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14116) Fix class cast error in NNThroughputBenchmark with ObserverReadProxyProvider.

2018-12-17 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-14116:
---
Summary: Fix class cast error in NNThroughputBenchmark with 
ObserverReadProxyProvider.  (was: ObserverReadProxyProvider should work with 
protocols other than ClientProtocol)

> Fix class cast error in NNThroughputBenchmark with ObserverReadProxyProvider.
> -
>
> Key: HDFS-14116
> URL: https://issues.apache.org/jira/browse/HDFS-14116
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-14116-HDFS-12943.000.patch, 
> HDFS-14116-HDFS-12943.001.patch, HDFS-14116-HDFS-12943.002.patch, 
> HDFS-14116-HDFS-12943.003.patch, HDFS-14116-HDFS-12943.004.patch, 
> HDFS-14116-HDFS-12943.005.patch
>
>
> Currently in the {{ObserverReadProxyProvider}} constructor there is this line:
> {code}
> ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
> {code}
> This can fail, because it is possible that the factory cannot be cast here.
> Specifically, {{NameNodeProxiesClient.createFailoverProxyProvider}} is where
> the constructor is called, and there are two paths that can reach it:
> (1) {{NameNodeProxies.createProxy}}
> (2) {{NameNodeProxiesClient.createFailoverProxyProvider}}
> (2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses
> {{NameNodeHAProxyFactory}}, which cannot be cast to {{ClientHAProxyFactory}};
> this happens when, for example, running NNThroughputBenchmark. To fix this we
> can at least:
> 1. introduce setAlignmentContext in HAProxyFactory, which is the parent of
> both ClientHAProxyFactory and NameNodeHAProxyFactory, OR
> 2. only call setAlignmentContext when the factory is a ClientHAProxyFactory,
> by, say, having an if check with reflection.
> Which option to pick depends on whether it makes sense to have an alignment
> context for the code paths in case (1).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14130) Make ZKFC ObserverNode aware

2018-12-17 Thread xiangheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiangheng reassigned HDFS-14130:


Assignee: xiangheng

> Make ZKFC ObserverNode aware
> 
>
> Key: HDFS-14130
> URL: https://issues.apache.org/jira/browse/HDFS-14130
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Assignee: xiangheng
>Priority: Major
>
> Need to fix automatic failover with ZKFC. Currently it does not know about 
> ObserverNodes and tries to convert them to SBNs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14116) ObserverReadProxyProvider should work with protocols other than ClientProtocol

2018-12-17 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723519#comment-16723519
 ] 

Konstantin Shvachko commented on HDFS-14116:


v005 is a minimized version of Chao's patch. Tested it locally.

> ObserverReadProxyProvider should work with protocols other than ClientProtocol
> --
>
> Key: HDFS-14116
> URL: https://issues.apache.org/jira/browse/HDFS-14116
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-14116-HDFS-12943.000.patch, 
> HDFS-14116-HDFS-12943.001.patch, HDFS-14116-HDFS-12943.002.patch, 
> HDFS-14116-HDFS-12943.003.patch, HDFS-14116-HDFS-12943.004.patch, 
> HDFS-14116-HDFS-12943.005.patch
>
>
> Currently in the {{ObserverReadProxyProvider}} constructor there is this line:
> {code}
> ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
> {code}
> This can fail, because it is possible that the factory cannot be cast here.
> Specifically, {{NameNodeProxiesClient.createFailoverProxyProvider}} is where
> the constructor is called, and there are two paths that can reach it:
> (1) {{NameNodeProxies.createProxy}}
> (2) {{NameNodeProxiesClient.createFailoverProxyProvider}}
> (2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses
> {{NameNodeHAProxyFactory}}, which cannot be cast to {{ClientHAProxyFactory}};
> this happens when, for example, running NNThroughputBenchmark. To fix this we
> can at least:
> 1. introduce setAlignmentContext in HAProxyFactory, which is the parent of
> both ClientHAProxyFactory and NameNodeHAProxyFactory, OR
> 2. only call setAlignmentContext when the factory is a ClientHAProxyFactory,
> by, say, having an if check with reflection.
> Which option to pick depends on whether it makes sense to have an alignment
> context for the code paths in case (1).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14116) ObserverReadProxyProvider should work with protocols other than ClientProtocol

2018-12-17 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723521#comment-16723521
 ] 

Chen Liang commented on HDFS-14116:
---

Thanks [~shv] for the patch! +1 from me on the v005 patch.

> ObserverReadProxyProvider should work with protocols other than ClientProtocol
> --
>
> Key: HDFS-14116
> URL: https://issues.apache.org/jira/browse/HDFS-14116
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-14116-HDFS-12943.000.patch, 
> HDFS-14116-HDFS-12943.001.patch, HDFS-14116-HDFS-12943.002.patch, 
> HDFS-14116-HDFS-12943.003.patch, HDFS-14116-HDFS-12943.004.patch, 
> HDFS-14116-HDFS-12943.005.patch
>
>
> Currently in the {{ObserverReadProxyProvider}} constructor there is this line:
> {code}
> ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
> {code}
> This can fail, because it is possible that the factory cannot be cast here.
> Specifically, {{NameNodeProxiesClient.createFailoverProxyProvider}} is where
> the constructor is called, and there are two paths that can reach it:
> (1) {{NameNodeProxies.createProxy}}
> (2) {{NameNodeProxiesClient.createFailoverProxyProvider}}
> (2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses
> {{NameNodeHAProxyFactory}}, which cannot be cast to {{ClientHAProxyFactory}};
> this happens when, for example, running NNThroughputBenchmark. To fix this we
> can at least:
> 1. introduce setAlignmentContext in HAProxyFactory, which is the parent of
> both ClientHAProxyFactory and NameNodeHAProxyFactory, OR
> 2. only call setAlignmentContext when the factory is a ClientHAProxyFactory,
> by, say, having an if check with reflection.
> Which option to pick depends on whether it makes sense to have an alignment
> context for the code paths in case (1).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-393) Audit Parser tool for processing ozone audit logs

2018-12-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723515#comment-16723515
 ] 

Hadoop QA commented on HDDS-393:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
23s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 11 line(s) with tabs. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 30m 49s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m  
0s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
18s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.TestOzoneManager |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-393 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952093/HDDS-393.001.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  shellcheck  findbugs  
checkstyle  |
| uname | Linux be5d69c995e5 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 5426653 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| shellcheck | v0.4.6 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1947/artifact/out/whitespace-tabs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1947/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1947/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1947/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 1087 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/docs hadoop-ozone/common hadoop-ozone/tools U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1947/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Audit Parser tool for processing ozone audit logs
> -
>
> Key: HDDS-393

[jira] [Updated] (HDFS-14116) ObserverReadProxyProvider should work with protocols other than ClientProtocol

2018-12-17 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-14116:
---
Attachment: HDFS-14116-HDFS-12943.005.patch

> ObserverReadProxyProvider should work with protocols other than ClientProtocol
> --
>
> Key: HDFS-14116
> URL: https://issues.apache.org/jira/browse/HDFS-14116
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-14116-HDFS-12943.000.patch, 
> HDFS-14116-HDFS-12943.001.patch, HDFS-14116-HDFS-12943.002.patch, 
> HDFS-14116-HDFS-12943.003.patch, HDFS-14116-HDFS-12943.004.patch, 
> HDFS-14116-HDFS-12943.005.patch
>
>
> Currently in the {{ObserverReadProxyProvider}} constructor there is this line:
> {code}
> ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
> {code}
> This can fail, because it is possible that the factory cannot be cast here.
> Specifically, {{NameNodeProxiesClient.createFailoverProxyProvider}} is where
> the constructor is called, and there are two paths that can reach it:
> (1) {{NameNodeProxies.createProxy}}
> (2) {{NameNodeProxiesClient.createFailoverProxyProvider}}
> (2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses
> {{NameNodeHAProxyFactory}}, which cannot be cast to {{ClientHAProxyFactory}};
> this happens when, for example, running NNThroughputBenchmark. To fix this we
> can at least:
> 1. introduce setAlignmentContext in HAProxyFactory, which is the parent of
> both ClientHAProxyFactory and NameNodeHAProxyFactory, OR
> 2. only call setAlignmentContext when the factory is a ClientHAProxyFactory,
> by, say, having an if check with reflection.
> Which option to pick depends on whether it makes sense to have an alignment
> context for the code paths in case (1).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-539) ozone datanode ignores the invalid options

2018-12-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723513#comment-16723513
 ] 

Hadoop QA commented on HDDS-539:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} root: The patch generated 1 new + 1 unchanged - 
0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 25m  9s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
41s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.TestOzoneManager |
|   | hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-539 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952096/HDDS-539.010.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 4d99e5507e53 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 5426653 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1946/artifact/out/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1946/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1946/testReport/ |
| Max. process+thread count | 1084 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test U: . 
|
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1946/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ozone datanode ignores the invalid options
> --
>
> Key: HDDS-539
> URL: https://issues.apache.org/jira/browse/HDDS-539
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Vinicius Higa Murakami
>Priority: Major
>  

[jira] [Updated] (HDDS-393) Audit Parser tool for processing ozone audit logs

2018-12-17 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-393:
---
Attachment: HDDS-393.001.patch
Status: Patch Available  (was: Open)

[~anu], [~bharatviswa] - Thank you for the preview and early feedback. 
Attached patch 001.

> Audit Parser tool for processing ozone audit logs
> -
>
> Key: HDDS-393
> URL: https://issues.apache.org/jira/browse/HDDS-393
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-393.001.patch
>
>
> Jira to create audit parser tool to process ozone audit logs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-929) Remove ozone.max.key.len property

2018-12-17 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723481#comment-16723481
 ] 

Xiaoyu Yao commented on HDDS-929:
-

I did some research and found that OZONE_MAX_KEY_LEN_DEFAULT = 1024 * 1024 is 
mostly there to restrict the protobuf packet size to less than 1 MB for 
performance. Should we reuse some other existing key, if one exists, instead of 
removing the checking logic completely?
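
(For illustration, the check under discussion is roughly the following sketch;
the names are illustrative, and whether to tie the limit to an existing config
key is the open question:)

{code:java}
// Cap the encoded key material so a protobuf message carrying it stays
// under ~1 MB.
private static final int MAX_KEY_LEN = 1024 * 1024;

private static void validateKeyLength(byte[] encodedKey) {
  if (encodedKey.length > MAX_KEY_LEN) {
    throw new IllegalArgumentException("Encoded key is " + encodedKey.length
        + " bytes, which exceeds the limit of " + MAX_KEY_LEN + " bytes");
  }
}
{code}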

> Remove ozone.max.key.len property 
> --
>
> Key: HDDS-929
> URL: https://issues.apache.org/jira/browse/HDDS-929
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-929-HDDS-4.00.patch
>
>
> Secret key generation in OzoneSecretManager is different from Hadoop's 
> SecretManager. Since key-pair generation is not the responsibility of 
> {{OzoneSecretManager}}/{{OzoneSecretKey}}, we don't need maxKeyLen-related 
> checks inside them. This Jira proposes to remove ozone.max.key.len.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-99) Adding SCM Audit log

2018-12-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-99?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723480#comment-16723480
 ] 

Hudson commented on HDDS-99:


FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #15624 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15624/])
HDDS-99. Adding SCM Audit log. Contributed by Dinesh Chitlangia. (xyao: rev 
94b368f29fb5286253f4e5cac2d30b61cb62a7e5)
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/SCMAction.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/BlockGroup.java
* (add) hadoop-ozone/dist/src/main/conf/scm-audit-log4j2.properties
* (edit) hadoop-ozone/dist/dev-support/bin/dist-layout-stitching
* (edit) hadoop-ozone/common/src/main/bin/ozone
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMBlockProtocolServer.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java


> Adding SCM Audit log
> 
>
> Key: HDDS-99
> URL: https://issues.apache.org/jira/browse/HDDS-99
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: SCM
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Fix For: 0.4.0
>
> Attachments: HDDS-99.001.patch, HDDS-99.002.patch
>
>
> This ticket is opened to add SCM audit log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-99) Adding SCM Audit log

2018-12-17 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-99?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-99:
---
Issue Type: New Feature  (was: Sub-task)
Parent: (was: HDDS-4)

> Adding SCM Audit log
> 
>
> Key: HDDS-99
> URL: https://issues.apache.org/jira/browse/HDDS-99
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: SCM
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Fix For: 0.4.0
>
> Attachments: HDDS-99.001.patch, HDDS-99.002.patch
>
>
> This ticket is opened to add SCM audit log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-99) Adding SCM Audit log

2018-12-17 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-99?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-99:
---
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

Thanks [~dineshchitlangia] for the contribution and all for the reviews. I've 
committed the patch to trunk.

> Adding SCM Audit log
> 
>
> Key: HDDS-99
> URL: https://issues.apache.org/jira/browse/HDDS-99
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Fix For: 0.4.0
>
> Attachments: HDDS-99.001.patch, HDDS-99.002.patch
>
>
> This ticket is opened to add SCM audit log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14116) ObserverReadProxyProvider should work with protocols other than ClientProtocol

2018-12-17 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723470#comment-16723470
 ] 

Konstantin Shvachko commented on HDFS-14116:


Also, it seems that NNThroughputBenchmark does not work with any HA 
configuration, as it assumes that {{nnUri}} is a physical address rather than a 
logical one as in an HA configuration. So let's just adopt the v000 patch.
If desired, we can open a new jira for trunk to make it work with an HA config.

> ObserverReadProxyProvider should work with protocols other than ClientProtocol
> --
>
> Key: HDFS-14116
> URL: https://issues.apache.org/jira/browse/HDFS-14116
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-14116-HDFS-12943.000.patch, 
> HDFS-14116-HDFS-12943.001.patch, HDFS-14116-HDFS-12943.002.patch, 
> HDFS-14116-HDFS-12943.003.patch, HDFS-14116-HDFS-12943.004.patch
>
>
> Currently in the {{ObserverReadProxyProvider}} constructor there is this line:
> {code}
> ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
> {code}
> This can fail, because it is possible that the factory cannot be cast here.
> Specifically, {{NameNodeProxiesClient.createFailoverProxyProvider}} is where
> the constructor is called, and there are two paths that can reach it:
> (1) {{NameNodeProxies.createProxy}}
> (2) {{NameNodeProxiesClient.createFailoverProxyProvider}}
> (2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses
> {{NameNodeHAProxyFactory}}, which cannot be cast to {{ClientHAProxyFactory}};
> this happens when, for example, running NNThroughputBenchmark. To fix this we
> can at least:
> 1. introduce setAlignmentContext in HAProxyFactory, which is the parent of
> both ClientHAProxyFactory and NameNodeHAProxyFactory, OR
> 2. only call setAlignmentContext when the factory is a ClientHAProxyFactory,
> by, say, having an if check with reflection.
> Which option to pick depends on whether it makes sense to have an alignment
> context for the code paths in case (1).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-102) SCM CA: SCM CA server signs certificate for approved CSR

2018-12-17 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723463#comment-16723463
 ] 

Ajay Kumar edited comment on HDDS-102 at 12/17/18 11:25 PM:


[~anu] thanks for the important patch. LGTM. Some additional comments beyond 
what [~xyao] has already mentioned:

KeyCodec
 L213/258: Specify the security provider as well? (i.e. BC)

L238 readPublicKey: Shall we read the public key from the file the first time 
and then cache it for later use?

DefaultApprover
 Method sign: Shall we add documentation to ensure users call approver#validate 
before it?

DefaultCAServer
 L139: Typo "configureable"
 L62 should be

 

package-info

L76: Possible typo "The last, and method which never"
 L78 "CSR is the base" perhaps "is" should be replaced with "if"?

 

TestDefaultCAServer

Unused imports

L168: Shall we validate the received certificate? (signature, etc.)

 

TestDefaultProfile

Add a TODO for unimplemented test cases?

 

 


was (Author: ajayydv):
[~anu] thanks for the important patch. LGTM. Some additional comments beyond 
what [~xyao] has already mentioned:

KeyCodec
L213/258: Specify the security provider as well? (i.e. BC)

L238 readPublicKey: Shall we read the public key from the file the first time 
and then cache it for later use?

DefaultApprover
Method sign: Shall we add documentation to ensure users call approver#validate 
before it, or we can refactor it to call validate internally to ensure we 
always sign a valid CSR.


DefaultCAServer
L139: Typo "configureable"
L62 should be

 

package-info

L76: Possible typo "The last, and method which never"
L78 "CSR is the base" perhaps "is" should be replaced with "if"?

 

TestDefaultCAServer

Unused imports

L168: Shall we validate the received certificate? (signature, etc.)

 

TestDefaultProfile

Add a TODO for unimplemented test cases?

 

 

> SCM CA: SCM CA server signs certificate for approved CSR
> 
>
> Key: HDDS-102
> URL: https://issues.apache.org/jira/browse/HDDS-102
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-102-HDDS-4.001.patch, HDDS-102-HDDS-4.001.patch, 
> HDDS-102-HDDS-4.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-102) SCM CA: SCM CA server signs certificate for approved CSR

2018-12-17 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723463#comment-16723463
 ] 

Ajay Kumar commented on HDDS-102:
-

[~anu] thanks for the important patch. LGTM. Some additional comments beyond 
what [~xyao] has already mentioned:

KeyCodec
L213/258: Specify the security provider as well? (i.e. BC)

L238 readPublicKey: Shall we read the public key from file the first time and 
then cache it for further use?

DefaultApprover
Method sign: Shall we add documentation to ensure users call 
approver#validate before it, or we can refactor it to call validate internally 
to ensure we always sign a valid CSR.


DefaultCAServer
L139: Typo "configureable"
L62 should be

 

package-info

L76: Possible typo "The last, and method which never"
L78 "CSR is the base" perhaps "is" should be replaced with "if"?

 

TestDefaultCAServer

Unused imports

L168 Shall we validate the received certificate? (signature etc)

 

TestDefaultProfile

Add a TODO for unimplemented test cases?

 

 

> SCM CA: SCM CA server signs certificate for approved CSR
> 
>
> Key: HDDS-102
> URL: https://issues.apache.org/jira/browse/HDDS-102
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-102-HDDS-4.001.patch, HDDS-102-HDDS-4.001.patch, 
> HDDS-102-HDDS-4.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-539) ozone datanode ignores the invalid options

2018-12-17 Thread Vinicius Higa Murakami (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723466#comment-16723466
 ] 

Vinicius Higa Murakami commented on HDDS-539:
-

Sure thing [~shashikant]! Thanks for the heads up :) 

> ozone datanode ignores the invalid options
> --
>
> Key: HDDS-539
> URL: https://issues.apache.org/jira/browse/HDDS-539
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Vinicius Higa Murakami
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-539.003.patch, HDDS-539.004.patch, 
> HDDS-539.005.patch, HDDS-539.006.patch, HDDS-539.007.patch, 
> HDDS-539.008.patch, HDDS-539.009.patch, HDDS-539.010.patch, HDDS-539.patch
>
>
> ozone datanode command starts datanode and ignores the invalid option, apart 
> from help
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone datanode -help
> Starts HDDS Datanode
> {code}
> For all the other invalid options, it just ignores and starts the DN like 
> below:
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone datanode -ABC
> 2018-09-22 00:59:34,462 [main] INFO - STARTUP_MSG:
> /
> STARTUP_MSG: Starting HddsDatanodeService
> STARTUP_MSG: host = 
> ctr-e138-1518143905142-481027-01-02.hwx.site/172.27.54.20
> STARTUP_MSG: args = [-ABC]
> STARTUP_MSG: version = 3.2.0-SNAPSHOT
> STARTUP_MSG: classpath = 
> /root/ozone-0.3.0-SNAPSHOT/etc/hadoop:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/commons-cli-1.2.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/guava-11.0.2.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/hadoop-auth-3.2.0-SNAPSHOT.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jsr305-3.0.0.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/commons-compress-1.4.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/commons-beanutils-1.9.3.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/commons-collections-3.2.2.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jsp-api-2.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/zookeeper-3.4.9.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/gson-2.2.4.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/token-provider-1.0.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/dnsjava-2.1.7.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/avro-1.7.7.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jersey-json-1.19.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/stax2-api-3.1.4.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/log4j-1.2.17.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/accessors-smart-1.2.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/commons-lang3-3.7.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jersey-server-1.19.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/netty-3.10.5.Final.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/snappy-java-1.0.5.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/kerby-config-1.0.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/kerby-util-1.0.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/httpclient-4.5.2.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jetty-security-9.3.19.v20170502.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/hadoop-annotations-3.2.0-SNAPSHOT.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/re2j-1.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jackson-databind-2.9.5.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/commons-math3-3.1.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/commons-logging-1.1.3.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jersey-core-1.19.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/kerb-client-1.0.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jsch-0.1.54.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jersey-servlet-1.19.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/asm-5.0.4.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jackson-core-2.9.5.jar:/root/ozone-0.3.0-SNAPSHOT

[jira] [Updated] (HDDS-539) ozone datanode ignores the invalid options

2018-12-17 Thread Vinicius Higa Murakami (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinicius Higa Murakami updated HDDS-539:

Attachment: HDDS-539.010.patch

> ozone datanode ignores the invalid options
> --
>
> Key: HDDS-539
> URL: https://issues.apache.org/jira/browse/HDDS-539
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Vinicius Higa Murakami
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-539.003.patch, HDDS-539.004.patch, 
> HDDS-539.005.patch, HDDS-539.006.patch, HDDS-539.007.patch, 
> HDDS-539.008.patch, HDDS-539.009.patch, HDDS-539.010.patch, HDDS-539.patch
>
>
> ozone datanode command starts datanode and ignores the invalid option, apart 
> from help
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone datanode -help
> Starts HDDS Datanode
> {code}
> For all the other invalid options, it just ignores and starts the DN like 
> below:
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone datanode -ABC
> 2018-09-22 00:59:34,462 [main] INFO - STARTUP_MSG:
> /
> STARTUP_MSG: Starting HddsDatanodeService
> STARTUP_MSG: host = 
> ctr-e138-1518143905142-481027-01-02.hwx.site/172.27.54.20
> STARTUP_MSG: args = [-ABC]
> STARTUP_MSG: version = 3.2.0-SNAPSHOT
> STARTUP_MSG: classpath = 
> /root/ozone-0.3.0-SNAPSHOT/etc/hadoop:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/commons-cli-1.2.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/guava-11.0.2.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/hadoop-auth-3.2.0-SNAPSHOT.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jsr305-3.0.0.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/commons-compress-1.4.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/commons-beanutils-1.9.3.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/commons-collections-3.2.2.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jsp-api-2.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/zookeeper-3.4.9.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/gson-2.2.4.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/token-provider-1.0.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/dnsjava-2.1.7.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/avro-1.7.7.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jersey-json-1.19.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/stax2-api-3.1.4.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/log4j-1.2.17.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/accessors-smart-1.2.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/commons-lang3-3.7.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jersey-server-1.19.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/netty-3.10.5.Final.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/snappy-java-1.0.5.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/kerby-config-1.0.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/kerby-util-1.0.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/httpclient-4.5.2.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jetty-security-9.3.19.v20170502.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/hadoop-annotations-3.2.0-SNAPSHOT.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/re2j-1.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jackson-databind-2.9.5.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/commons-math3-3.1.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/commons-logging-1.1.3.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jersey-core-1.19.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/kerb-client-1.0.1.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jsch-0.1.54.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jersey-servlet-1.19.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/asm-5.0.4.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jackson-core-2.9.5.jar:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/jetty-util-9.3.19.v20170502.jar:/root/ozone-0.3.0-SN

[jira] [Comment Edited] (HDDS-102) SCM CA: SCM CA server signs certificate for approved CSR

2018-12-17 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723463#comment-16723463
 ] 

Ajay Kumar edited comment on HDDS-102 at 12/17/18 11:26 PM:


[~anu] thanks for the important patch. LGTM. Some additional comments beyond 
what [~xyao] has already mentioned:

KeyCodec
 L213/258: Specify the security provider as well? (i.e. BC)

L238 readPublicKey: Shall we read the public key from file the first time and 
then cache it for further use?

DefaultApprover
 Method sign: Shall we add documentation to ensure users call 
approver#validate before it?

DefaultCAServer
 L139: Typo "configureable"

 

package-info

L76: Possible typo "The last, and method which never"
 L78 "CSR is the base" perhaps "is" should be replaced with "if"?

 

TestDefaultCAServer

Unused imports

L168 Shall we validate the received certificate? (signature etc)

 

TestDefaultProfile

Add a TODO for unimplemented test cases?

 

 


was (Author: ajayydv):
[~anu] thanks for the important patch. LGTM. Some additional comments beyond 
what [~xyao] has already mentioned:

KeyCodec
 L213/258: Specify the security provider as well? (i.e. BC)

L238 readPublicKey: Shall we read the public key from file the first time and 
then cache it for further use?

DefaultApprover
 Method sign: Shall we add documentation to ensure users call 
approver#validate before it?

DefaultCAServer
 L139: Typo "configureable"
 L62 should be

 

package-info

L76: Possible typo "The last, and method which never"
 L78 "CSR is the base" perhaps "is" should be replaced with "if"?

 

TestDefaultCAServer

Unused imports

L168 Shall we validate the received certificate? (signature etc)

 

TestDefaultProfile

Add a TODO for unimplemented test cases?

 

 

> SCM CA: SCM CA server signs certificate for approved CSR
> 
>
> Key: HDDS-102
> URL: https://issues.apache.org/jira/browse/HDDS-102
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-102-HDDS-4.001.patch, HDDS-102-HDDS-4.001.patch, 
> HDDS-102-HDDS-4.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12943) Consistent Reads from Standby Node

2018-12-17 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723272#comment-16723272
 ] 

Chen Liang edited comment on HDFS-12943 at 12/17/18 10:45 PM:
--

Hi [~brahmareddy],

Thanks for testing! The timeout issue seems interesting. To start with, it is 
expected to see some performance degradation *from the CLI*, because the CLI 
initiates a DFSClient for each command, and a fresh DFSClient has to get the 
status of the name nodes every time. But if the same DFSClient is being 
reused, this would not be an issue. I have never seen the second-call issue. 
Here is an output from our cluster (log output part omitted), and I think you 
are right about lowering dfs.ha.tail-edits.period; we had similar numbers 
here:
{code:java}
$time hdfs --loglevel debug dfs 
-Ddfs.client.failover.proxy.provider.***=org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider
 -mkdir /TestsORF1
real    0m2.254s
user    0m3.608s
sys     0m0.331s
$time hdfs --loglevel debug dfs 
-Ddfs.client.failover.proxy.provider.***=org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider
 -mkdir /TestsORF2
real    0m2.159s
user    0m3.855s
sys     0m0.330s
Curious, how many NNs did you have in the testing? And was there any error in 
the NN logs?
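
For illustration, a minimal sketch of the reuse point: one FileSystem handle 
(and hence one DFSClient underneath) serving several calls, so the NameNode 
status discovery cost is paid once rather than per shell invocation. The URI 
and paths are placeholders:
{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReusedClientDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(URI.create("hdfs://mycluster"), conf)) {
      fs.mkdirs(new Path("/TestsORF1"));  // first call pays the proxy setup cost
      fs.mkdirs(new Path("/TestsORF2"));  // later calls reuse the same DFSClient
    }
  }
}
{code}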


was (Author: vagarychen):
Hi [~brahmareddy],

Thanks for testing! The timeout issue seems interesting. To start with, it is 
expected to see some performance degradation *from the CLI*, because the CLI 
initiates a DFSClient for each command, and a fresh DFSClient has to get the 
status of the name nodes every time. But if the same DFSClient is being 
reused, this would not be an issue. I have never seen the second-call issue. 
Here is an output from our cluster (log output part omitted), and I think you 
are right about lowering dfs.ha.tail-edits.period; we had similar numbers 
here:
{code:java}
$time hdfs --loglevel debug dfs 
-Ddfs.client.failover.proxy.provider.ltx1-unonn01=org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider
 -mkdir /TestsORF1
real    0m2.254s
user    0m3.608s
sys     0m0.331s
$time hdfs --loglevel debug dfs 
-Ddfs.client.failover.proxy.provider.ltx1-unonn01=org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider
 -mkdir /TestsORF2
real    0m2.159s
user    0m3.855s
sys     0m0.330s
Curious, how many NNs did you have in the testing? And was there any error in 
the NN logs?

> Consistent Reads from Standby Node
> --
>
> Key: HDFS-12943
> URL: https://issues.apache.org/jira/browse/HDFS-12943
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Konstantin Shvachko
>Priority: Major
> Attachments: ConsistentReadsFromStandbyNode.pdf, 
> ConsistentReadsFromStandbyNode.pdf, HDFS-12943-001.patch, 
> HDFS-12943-002.patch, TestPlan-ConsistentReadsFromStandbyNode.pdf
>
>
> StandbyNode in HDFS is a replica of the active NameNode. The states of the 
> NameNodes are coordinated via the journal. It is natural to consider 
> StandbyNode as a read-only replica. As with any replicated distributed system 
> the problem of stale reads should be resolved. Our main goal is to provide 
> reads from standby in a consistent way in order to enable a wide range of 
> existing applications running on top of HDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14130) Make ZKFC ObserverNode aware

2018-12-17 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723435#comment-16723435
 ] 

Konstantin Shvachko commented on HDFS-14130:


There is no contradiction; it's just stages. In this jira we want to make 
ZKFC fail over to the SBN just as it does without Observers. With this fix, 
Observers will not be used for failover. They can become failover targets when 
an Observer is transitioned (say, manually) to Standby.
The goal of HDFS-13182 is to go further and make it possible to fail over 
directly to an Observer. It will still have to transition to Standby in the 
process, but that will happen automatically.
Hope this makes sense.

> Make ZKFC ObserverNode aware
> 
>
> Key: HDFS-14130
> URL: https://issues.apache.org/jira/browse/HDFS-14130
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Priority: Major
>
> Need to fix automatic failover with ZKFC. Currently it does not know about 
> ObserverNodes trying to convert them to SBNs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-14116) ObserverReadProxyProvider should work with protocols other than ClientProtocol

2018-12-17 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko reopened HDFS-14116:


Reopening. Yes, I was looking at this patch and it is as [~vagarychen] said.
The {{AlignmentContext}} is set in {{ClientHAProxyFactory}}. It is sort of the 
source of truth there, and ORPP needs it to function properly. 
NNThroughputBenchmark in the end uses {{NameNodeHAProxyFactory}} to create 
essentially a non-HA proxy for a non-ClientProtocol interface, so it should 
use {{createNonHAProxy()}} rather than building ORPP.
So I propose to revert the patch, and let's think about how we should fix it.
We should make {{NameNodeProxiesClient.createFailoverProxyProvider()}} return 
null if somebody tries to instantiate ORPP with a non-ClientProtocol 
interface. Then things should fall into place.
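
A self-contained sketch of that guard with stand-in types (the real 
{{createFailoverProxyProvider()}} in {{NameNodeProxiesClient}} has a different 
signature and return type; only the guard logic is the point here):
{code:java}
// Stand-ins for the real HDFS interfaces.
interface ClientProtocol {}
interface NamenodeProtocol {}

final class ProxyFactorySketch {
  /** Returns null so the caller falls back to createNonHAProxy(). */
  static <T> Object createFailoverProxyProvider(Class<T> xface,
                                                boolean observerReadsRequested) {
    if (observerReadsRequested && !ClientProtocol.class.isAssignableFrom(xface)) {
      return null;  // ORPP only makes sense for ClientProtocol
    }
    return new Object();  // placeholder for the real proxy provider
  }

  public static void main(String[] args) {
    // ClientProtocol gets a provider; a non-client protocol (as in the
    // NNThroughputBenchmark path) gets null and must go non-HA.
    System.out.println(createFailoverProxyProvider(ClientProtocol.class, true) != null);
    System.out.println(createFailoverProxyProvider(NamenodeProtocol.class, true) == null);
  }
}
{code}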

> ObserverReadProxyProvider should work with protocols other than ClientProtocol
> --
>
> Key: HDFS-14116
> URL: https://issues.apache.org/jira/browse/HDFS-14116
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-14116-HDFS-12943.000.patch, 
> HDFS-14116-HDFS-12943.001.patch, HDFS-14116-HDFS-12943.002.patch, 
> HDFS-14116-HDFS-12943.003.patch, HDFS-14116-HDFS-12943.004.patch
>
>
> Currently in {{ObserverReadProxyProvider}} constructor there is this line 
> {code}
> ((ClientHAProxyFactory) factory).setAlignmentContext(alignmentContext);
> {code}
> This could potentially cause failure, because it is possible that the 
> factory cannot be cast here. Specifically,
> {{NameNodeProxiesClient.createFailoverProxyProvider}} is where the 
> constructor will be called, and there are two paths that could call into this:
> (1).{{NameNodeProxies.createProxy}}
> (2).{{NameNodeProxiesClient.createFailoverProxyProvider}}
> (2) works fine because it always uses {{ClientHAProxyFactory}}, but (1) uses 
> {{NameNodeHAProxyFactory}}, which cannot be cast to 
> {{ClientHAProxyFactory}}; this happens when, for example, running 
> NNThroughputBenchmark. To fix this we can at least:
> 1. introduce setAlignmentContext to HAProxyFactory which is the parent of 
> both  ClientHAProxyFactory and NameNodeHAProxyFactory OR
> 2. only setAlignmentContext when it is ClientHAProxyFactory by, say, having a 
> if check with reflection. 
> Depending on whether it makes sense to have an alignment context for the 
> case (1) calling code paths.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-99) Adding SCM Audit log

2018-12-17 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-99?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723413#comment-16723413
 ] 

Xiaoyu Yao commented on HDDS-99:


+1. Let's fix the audit object creation issue in a follow-up. I will commit 
this one shortly.

> Adding SCM Audit log
> 
>
> Key: HDDS-99
> URL: https://issues.apache.org/jira/browse/HDDS-99
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-99.001.patch, HDDS-99.002.patch
>
>
> This ticket is opened to add SCM audit log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14059) Test reads from standby on a secure cluster with Configured failover

2018-12-17 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723411#comment-16723411
 ] 

Konstantin Shvachko commented on HDFS-14059:


Hi [~brahmareddy], some benchmark results were published by [~vagarychen] under 
HDFS-14058. Please take a look.
{{dfs.ha.tail-edits.period}} is not very relevant anymore with fast edits 
tailing; see HDFS-13150.

> Test reads from standby on a secure cluster with Configured failover
> 
>
> Key: HDFS-14059
> URL: https://issues.apache.org/jira/browse/HDFS-14059
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
>Priority: Major
>
> Run standard HDFS tests to verify reading from ObserverNode on a secure HA 
> cluster with {{ConfiguredFailoverProxyProvider}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-102) SCM CA: SCM CA server signs certificate for approved CSR

2018-12-17 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723393#comment-16723393
 ] 

Xiaoyu Yao commented on HDDS-102:
-

Thanks [~anu] for the patch. It looks good to me overall. Here are a few minor 
comments:

 

PKIProfile.java

 

Line 57-67: I think we could move getGeneralNames() and 
isSupportedGeneralName() into the implementation.

And validateGeneralName() is a good candidate for the interface method.

Also, getGeneralNameTypes() and isSupportedGeneralNameType() will be more 
accurate. 

Line 84: NIT: getExtensions() -> getSupportedExtensions()

Line 91: NIT: the name could be simplified into isSupported(Extension 
extension); it could also potentially be consolidated into an implementation 
detail, as shown in validateExtension(), to simplify the Profile interface.

Line 105: can validateExtendedKeyUsage be covered by validateExtension()?

Line 125/133: can you elaborate on the difference between isValidRDN and 
validateRDN? From the declarations, both accept an RDN name and return a 
boolean.
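
Putting these together, a hypothetical sketch of the slimmer interface the 
comments above point toward (signatures are illustrative, not the patch's):
{code:java}
import org.bouncycastle.asn1.x509.Extension;

interface PKIProfileSketch {
  // Validation stays on the interface; the supported-name/extension
  // bookkeeping becomes an implementation detail of each profile.
  boolean validateGeneralName(int type, String value);
  boolean validateExtension(Extension extension);  // can subsume per-type checks
  Extension[] getSupportedExtensions();            // renamed from getExtensions()
}
{code}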

 

BaseApprover.java

Line 93: need to check for null on the array returned from 
attribute.getAttributeValues() to avoid an NPE.
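
A minimal sketch of that guard (Attribute#getAttributeValues() is the 
BouncyCastle call in question; the surrounding helper is hypothetical):
{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.bouncycastle.asn1.ASN1Encodable;
import org.bouncycastle.asn1.pkcs.Attribute;

final class AttributeValues {
  static List<ASN1Encodable> valuesOf(Attribute attribute) {
    ASN1Encodable[] values = attribute.getAttributeValues();
    if (values == null) {
      return Collections.emptyList();  // guard instead of iterating a null array
    }
    List<ASN1Encodable> result = new ArrayList<>();
    Collections.addAll(result, values);
    return result;
  }
}
{code}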

Line 102: NIT: the comment is incomplete.

Line 107: NIT: maybe rename it to getExtensionList to be consistent with 
getExtensionsList.

Line 110: same as Line 93

 

CertificateServer.java

Line 107: can we move ApprovalType enum to CertificateApprover interface as 
CertificateApprover#type?

 

CertificateSignRequest.java

Line 111: this can be moved inside the try-with-resources, and line 113 can be 
removed.
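
Illustrating the suggestion generically (StringWriter stands in for whichever 
writer the patch uses there):
{code:java}
import java.io.IOException;
import java.io.StringWriter;

final class TryWithResourcesDemo {
  static String render() throws IOException {
    // Declaring the writer in the resource clause replaces the explicit
    // close() that the review says can be removed.
    try (StringWriter writer = new StringWriter()) {
      writer.write("-----BEGIN CERTIFICATE REQUEST-----");
      return writer.toString();
    }
  }
}
{code}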

 

DefaultCAServer.java

Line 206: do we need to reread the CA public/private key from file for each 
CSR? This may slow down the performance of the CA server.

Line 207/208: can we use DateTime#toDate() instead of java.sql.Date.valueOf?

 

SecurityConfig.java

Line 176: HDDS_X509_DEFAULT_DURATION is confusing with 
HDDS_X509_CERT_DURATION_DEFAULT.

Maybe we can just name them HDDS_X509_CERT_VALID_DURATION and 
HDDS_X509_CERT_VALID_DURATION_DEFAULT.

> SCM CA: SCM CA server signs certificate for approved CSR
> 
>
> Key: HDDS-102
> URL: https://issues.apache.org/jira/browse/HDDS-102
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-102-HDDS-4.001.patch, HDDS-102-HDDS-4.001.patch, 
> HDDS-102-HDDS-4.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-901) MultipartUpload: S3 API for Initiate multipart upload

2018-12-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723388#comment-16723388
 ] 

Hadoop QA commented on HDDS-901:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 24m 17s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
34s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.TestOzoneManager |
|   | hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-901 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952076/HDDS-901.03.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 790f5f8adbc3 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 5426653 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1945/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1945/testReport/ |
| Max. process+thread count | 1081 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/dist hadoop-ozone/s3gateway U: hadoop-ozone |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1945/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> MultipartUpload: S3 API for Initiate multipart upload
> -
>
> Key: HDDS-901
> URL: https://issues.apache.org/jira/browse/HDDS-901
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-901.00.patch, HDDS-901.01.patch, HDDS-901.02.patch, 
> HDDS-901.03.patch
>
>
> This Jira is to implement this API.
> [https://docs.aws.amazon.co

[jira] [Commented] (HDFS-14149) Adjust annotations on new interfaces/classes for SBN reads.

2018-12-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723378#comment-16723378
 ] 

Hadoop QA commented on HDFS-14149:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
32s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
31s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 6s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
16s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
25s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
13s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
55s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
46s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
34s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}182m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode |
|   | hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14149 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952066/HDFS-14149-HDFS-12943.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkst

[jira] [Commented] (HDDS-902) MultipartUpload: S3 API for uploading a part file

2018-12-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723353#comment-16723353
 ] 

Hadoop QA commented on HDDS-902:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HDDS-902 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-902 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952079/HDDS-902.02.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1944/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> MultipartUpload: S3 API for uploading a part file
> -
>
> Key: HDDS-902
> URL: https://issues.apache.org/jira/browse/HDDS-902
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-902.00.patch, HDDS-902.02.patch, HDDS-902.03.patch, 
> HDDS-902.04.patch
>
>
> This Jira is created to track the work required for Uploading a part.
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-908) NPE in TestOzoneRpcClient

2018-12-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723347#comment-16723347
 ] 

Hudson commented on HDDS-908:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #15622 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15622/])
HDDS-908: NPE in TestOzoneRpcClient. Contributed by Ajay Kumar. (aengineer: rev 
54266538192a558c6d80725c25912005090e14c4)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java


> NPE in TestOzoneRpcClient
> -
>
> Key: HDDS-908
> URL: https://issues.apache.org/jira/browse/HDDS-908
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-908.00.patch
>
>
> Fix NPE in TestOzoneRpcClient.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-902) MultipartUpload: S3 API for uploading a part file

2018-12-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723344#comment-16723344
 ] 

Hadoop QA commented on HDDS-902:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} root: The patch generated 1 new + 2 unchanged - 
0 fixed = 3 total (was 2) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 11 line(s) with tabs. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 33s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
38s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.s3.endpoint.TestObjectPut |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-902 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952075/HDDS-902.04.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux aea7c6ddf202 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 71e0b0d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1943/artifact/out/diff-checkstyle-root.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1943/artifact/out/whitespace-tabs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1943/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1943/testReport/ |
| Max. process+thread count | 197 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common 
hadoop-ozone/dist hadoop-ozone/integration-test hadoop-ozone/ozone-manager 
hadoop-ozone/s3gateway U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1943/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> MultipartUpload: S3 API for uploading a part file
> -
>
> Key: HDDS-902
> URL: https://issues.apache.org/jira/browse/HDDS-902
> Project: Hadoop Distri

[jira] [Updated] (HDDS-908) NPE in TestOzoneRpcClient

2018-12-17 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-908:
--
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

[~dineshchitlangia] Thanks for the review. [~ajayydv] Thanks for the 
contribution. Irrespective of whether this issue is causing an NPE, adding the 
null check is the right thing to do. I have committed this patch to the trunk.

> NPE in TestOzoneRpcClient
> -
>
> Key: HDDS-908
> URL: https://issues.apache.org/jira/browse/HDDS-908
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-908.00.patch
>
>
> Fix NPE in TestOzoneRpcClient.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-924) MultipartUpload: S3 API for complete Multipart Upload

2018-12-17 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-924 started by Bharat Viswanadham.
---
> MultipartUpload: S3 API for complete Multipart Upload
> -
>
> Key: HDDS-924
> URL: https://issues.apache.org/jira/browse/HDDS-924
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is to implement Complete Multipart Upload S3 API.
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadComplete.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


