[jira] [Commented] (HDFS-13601) Optimize ByteString conversions in PBHelper

2018-05-21 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483545#comment-16483545
 ] 

Xiao Chen commented on HDFS-13601:
--

Thanks Andrew for the work here.

Looks pretty good to me overall. Some minors:
 - {{DatanodeID}} constructor: it seems a {{DatanodeID(String datanodeUuid, 
DatanodeID from)}} overload can still be used, so no changes to 
TestComputeInvalidateWork / DatanodeRegistration are necessary (a sketch of 
such an overload is below)
 - Can we make the var names {{fixedBytestringCache}} and {{bytestringCache}} 
camel case (ByteString instead of Bytestring)? At one point I read one of them 
as 'by test ring' and got myself confused for a nanosecond. :)
 - Do you think it's helpful to add some comments in DatanodeID to explain the 
perf motivation of caching the ByteStrings?

As for the extra memory use: since this only applies to a few fields that 
should each have a small set of values, it should be OK (Linux user/group 
names have a max length of 32 chars; bpid/kind/service are smaller).
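
A minimal sketch of what the suggested overload could delegate to, assuming 
the existing seven-argument {{DatanodeID}} constructor and its standard 
getters; the helper name here is illustrative, not part of the patch:

{code:java}
import org.apache.hadoop.hdfs.protocol.DatanodeID;

final class DatanodeIDUtil {
  /** Copy every field from an existing DatanodeID, overriding only the UUID. */
  static DatanodeID withUuid(String datanodeUuid, DatanodeID from) {
    return new DatanodeID(from.getIpAddr(), from.getHostName(), datanodeUuid,
        from.getXferPort(), from.getInfoPort(), from.getInfoSecurePort(),
        from.getIpcPort());
  }
}
{code}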

> Optimize ByteString conversions in PBHelper
> ---
>
> Key: HDFS-13601
> URL: https://issues.apache.org/jira/browse/HDFS-13601
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Major
> Attachments: HDFS-13601.001.patch, HDFS-13601.002.patch
>
>
> While doing some profiling of the NN with JMC, I saw a lot of time being 
> spent on String->ByteString conversions. These are often the same strings 
> being converted over and over again, meaning there's room for optimization.






[jira] [Updated] (HDFS-13601) Optimize ByteString conversions in PBHelper

2018-05-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-13601:
---
Attachment: HDFS-13601.002.patch

> Optimize ByteString conversions in PBHelper
> ---
>
> Key: HDFS-13601
> URL: https://issues.apache.org/jira/browse/HDFS-13601
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Major
> Attachments: HDFS-13601.001.patch, HDFS-13601.002.patch
>
>
> While doing some profiling of the NN with JMC, I saw a lot of time being 
> spent on String->ByteString conversions. These are often the same strings 
> being converted over and over again, meaning there's room for optimization.






[jira] [Commented] (HDFS-13601) Optimize ByteString conversions in PBHelper

2018-05-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483423#comment-16483423
 ] 

genericqa commented on HDFS-13601:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 59s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 
51 unchanged - 0 fixed = 53 total (was 51) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m  
2s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
46s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 5 new 
+ 0 unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
4s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
28s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}174m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Found reliance on default encoding in new 
org.apache.hadoop.hdfs.protocol.DatanodeID(String, String, String, int, int, 
int, int):in new org.apache.hadoop.hdfs.protocol.DatanodeID(String, String, 
String, int, int, int, int): String.getBytes()  At DatanodeID.java:[line 99] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.hdfs.protocolPB.PBHelperClient.getFixedByteString(String):in 
org.apache.hadoop.hdfs.protocolPB.PBHelperClient.getFixedByteString(

[jira] [Updated] (HDDS-49) Standalone protocol should use grpc in place of netty.

2018-05-21 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-49?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-49:
--
Attachment: HDDS-49.007.patch

> Standalone protocol should use grpc in place of netty.
> --
>
> Key: HDDS-49
> URL: https://issues.apache.org/jira/browse/HDDS-49
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-49.001.patch, HDDS-49.002.patch, HDDS-49.003.patch, 
> HDDS-49.004.patch, HDDS-49.005.patch, HDDS-49.006.patch, HDDS-49.007.patch
>
>
> Currently an Ozone client in standalone mode communicates with the datanode 
> over netty. However, when using ratis, grpc is the default protocol. 
> In order to reduce the number of rpc protocols and handlers, this jira aims 
> to convert the standalone protocol to use grpc.






[jira] [Commented] (HDDS-49) Standalone protocol should use grpc in place of netty.

2018-05-21 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-49?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483496#comment-16483496
 ] 

Mukul Kumar Singh commented on HDDS-49:
---

Thanks for the review [~anu]. The v7 patch addresses the review comments.

1. DatanodeDetailsProto: Instead of adding another port to 
DatanodeDetailsProto, can we support a port message? For example,
message Port { required string name = 1; required uint32 port = 2; }
bq. this will be fixed by HDDS-88.

2. MiniOzoneClusterImpl.java:227. Not a change from you, but can you please 
rename conf to config? That will fix a checkstyle warning, and I think it is 
a very fair warning.
bq. done

3. With this we now have 3 RPC service ports on the Datanode. Don't you think 
that is excessive? We should start either Netty or gRPC and not both, 
especially since we decided not to introduce new pipelines.
bq. done



> Standalone protocol should use grpc in place of netty.
> --
>
> Key: HDDS-49
> URL: https://issues.apache.org/jira/browse/HDDS-49
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-49.001.patch, HDDS-49.002.patch, HDDS-49.003.patch, 
> HDDS-49.004.patch, HDDS-49.005.patch, HDDS-49.006.patch
>
>
> Currently an Ozone client in standalone mode communicates with the datanode 
> over netty. However, when using ratis, grpc is the default protocol. 
> In order to reduce the number of rpc protocols and handlers, this jira aims 
> to convert the standalone protocol to use grpc.
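
To make the direction concrete, here is a minimal, hypothetical sketch of 
standing up a gRPC server in place of the Netty transport for the standalone 
path; the class name and wiring are illustrative, not taken from the patch:

{code:java}
import java.io.IOException;
import io.grpc.BindableService;
import io.grpc.Server;
import io.grpc.ServerBuilder;

/** Illustrative only: a gRPC endpoint standing in for the Netty transport. */
class StandaloneGrpcServer {
  private Server server;

  void start(int port, BindableService dispatcher) throws IOException {
    // Expose the datanode dispatcher as a gRPC service on the container port.
    server = ServerBuilder.forPort(port).addService(dispatcher).build();
    server.start();
  }

  void stop() {
    if (server != null) {
      server.shutdown();
    }
  }
}
{code}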






[jira] [Commented] (HDDS-70) Fix config names for secure ksm and scm

2018-05-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-70?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483428#comment-16483428
 ] 

genericqa commented on HDDS-70:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
45s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 32m  
9s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
29s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
26s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/acceptance-test hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
18s{color} | {color:red} hadoop-hdds/common in HDDS-4 has 19 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
47s{color} | {color:red} hadoop-hdds/server-scm in HDDS-4 has 1 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
48s{color} | {color:red} hadoop-ozone/ozone-manager in HDDS-4 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
19s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 30m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
33s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test hadoop-ozone/acceptance-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
23s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:

[jira] [Commented] (HDFS-13540) DFSStripedInputStream should only allocate new buffers when reading

2018-05-21 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483434#comment-16483434
 ] 

Xiao Chen commented on HDFS-13540:
--

The failed tests are related, because the pool is a static var of the class. 
Mocking is pretty difficult as {{ElasticByteBufferPool}} is a final class, so 
I went with the approach in [^HDFS-13540.05.patch] to test this while changing 
the {{ElasticByteBufferPool}} class as little as possible.

> DFSStripedInputStream should only allocate new buffers when reading
> ---
>
> Key: HDFS-13540
> URL: https://issues.apache.org/jira/browse/HDFS-13540
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13540.01.patch, HDFS-13540.02.patch, 
> HDFS-13540.03.patch, HDFS-13540.04.patch, HDFS-13540.05.patch
>
>
> This was found in the same scenario in which HDFS-13539 was caught.
> There are 2 OOMs that look interesting:
> {noformat}
> FSDataInputStream#close error:
> OutOfMemoryError: Direct buffer memory
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:694)
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:672)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.close(DFSStripedInputStream.java:181)
> at java.io.FilterInputStream.close(FilterInputStream.java:181)
> {noformat}
> and 
> {noformat}
> org/apache/hadoop/fs/FSDataInputStream#unbuffer failed: error:
> OutOfMemoryError: Direct buffer memory
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:694)
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.unbuffer(DFSInputStream.java:1782)
> at 
> org.apache.hadoop.fs.StreamCapabilitiesPolicy.unbuffer(StreamCapabilitiesPolicy.java:48)
> at 
> org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:230)
> {noformat}
> As the stack traces show, {{resetCurStripeBuffer}} will get a buffer from the 
> buffer pool. We could save the cost of doing so if it's not for a read (e.g. 
> close, unbuffer, etc.)
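
A minimal, self-contained sketch of the lazy-allocation idea above; the class 
and parameter names ({{LazyStripeBuffer}}, {{forRead}}) are hypothetical and 
may differ from the patch:

{code:java}
import java.nio.ByteBuffer;
import org.apache.hadoop.io.ElasticByteBufferPool;

/** Illustrative only: take a buffer from the pool only when a read needs it. */
class LazyStripeBuffer {
  private final ElasticByteBufferPool pool = new ElasticByteBufferPool();
  private ByteBuffer curStripeBuf; // stays null until the first read

  void reset(boolean forRead, int bufLen) {
    if (curStripeBuf == null) {
      if (!forRead) {
        return; // close()/unbuffer() paths: skip the pool allocation entirely
      }
      curStripeBuf = pool.getBuffer(true, bufLen); // true = direct buffer
    }
    curStripeBuf.clear();
  }
}
{code}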






[jira] [Updated] (HDFS-13540) DFSStripedInputStream should only allocate new buffers when reading

2018-05-21 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13540:
-
Attachment: HDFS-13540.05.patch

> DFSStripedInputStream should only allocate new buffers when reading
> ---
>
> Key: HDFS-13540
> URL: https://issues.apache.org/jira/browse/HDFS-13540
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13540.01.patch, HDFS-13540.02.patch, 
> HDFS-13540.03.patch, HDFS-13540.04.patch, HDFS-13540.05.patch
>
>
> This was found in the same scenario in which HDFS-13539 was caught.
> There are 2 OOMs that look interesting:
> {noformat}
> FSDataInputStream#close error:
> OutOfMemoryError: Direct buffer memory
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:694)
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:672)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.close(DFSStripedInputStream.java:181)
> at java.io.FilterInputStream.close(FilterInputStream.java:181)
> {noformat}
> and 
> {noformat}
> org/apache/hadoop/fs/FSDataInputStream#unbuffer failed: error:
> OutOfMemoryError: Direct buffer memory
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:694)
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.unbuffer(DFSInputStream.java:1782)
> at 
> org.apache.hadoop.fs.StreamCapabilitiesPolicy.unbuffer(StreamCapabilitiesPolicy.java:48)
> at 
> org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:230)
> {noformat}
> As the stack traces show, {{resetCurStripeBuffer}} will get a buffer from the 
> buffer pool. We could save the cost of doing so if it's not for a read (e.g. 
> close, unbuffer, etc.)






[jira] [Commented] (HDFS-13339) Volume reference can't be released and leads to deadlock when DataXceiver does a check volume

2018-05-21 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483454#comment-16483454
 ] 

Xiao Chen commented on HDFS-13339:
--

Thanks [~liaoyuxiangqin] for reporting the issue, great analysis!
Also thanks Zsolt for providing a patch and Daryn for a quick review.

It doesn't seem like we have a good way to unit test this, so I think we can go 
without a test.
The failed TestDatasetVolumeChecker tests look related though.

> Volume reference can't be released and leads to deadlock when DataXceiver 
> does a check volume
> -
>
> Key: HDFS-13339
> URL: https://issues.apache.org/jira/browse/HDFS-13339
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
> Environment: os: Linux 2.6.32-358.el6.x86_64
> hadoop version: hadoop-3.2.0-SNAPSHOT
> unit: mvn test -Pnative 
> -Dtest=TestDataNodeVolumeFailureReporting#testVolFailureStatsPreservedOnNNRestart
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
>Priority: Critical
>  Labels: DataNode, volumes
> Attachments: HDFS-13339.001.patch, HDFS-13339.002.patch
>
>
> When I execute the unit test
>  TestDataNodeVolumeFailureReporting#testVolFailureStatsPreservedOnNNRestart, 
> the process blocks on waitReplication; detailed information follows:
> [INFO] ---
>  [INFO] T E S T S
>  [INFO] ---
>  [INFO] Running 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
>  [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 307.492 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
>  [ERROR] 
> testVolFailureStatsPreservedOnNNRestart(org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting)
>  Time elapsed: 307.206 s <<< ERROR!
>  java.util.concurrent.TimeoutException: Timed out waiting for /test1 to reach 
> 2 replicas
>  at org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:800)
>  at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting.testVolFailureStatsPreservedOnNNRestart(TestDataNodeVolumeFailureReporting.java:283)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)






[jira] [Commented] (HDDS-88) Create separate message structure to represent ports in DatanodeDetails

2018-05-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-88?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483404#comment-16483404
 ] 

genericqa commented on HDDS-88:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 22 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
2s{color} | {color:red} hadoop-hdds/common in trunk has 19 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-ozone/ozone-manager in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 28m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
6s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 29s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {

[jira] [Commented] (HDFS-13589) Add dfsAdmin command to query if "upgrade" is finalized

2018-05-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483383#comment-16483383
 ] 

genericqa commented on HDFS-13589:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  4s{color} | {color:orange} hadoop-hdfs-project: The patch generated 6 new + 
402 unchanged - 0 fixed = 408 total (was 402) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch 3 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m  1s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 49s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}215m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS 
|
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.server.namenode.ha.TestHASafeMode |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13589 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12

[jira] [Commented] (HDFS-13589) Add dfsAdmin command to query if "upgrade" is finalized

2018-05-21 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483233#comment-16483233
 ] 

Bharat Viswanadham commented on HDFS-13589:
---

[~hanishakoneru] Thanks for the patch. Overall the patch LGTM.

A few minor comments:
 # Can we add @return to upgradeStatus in DistributedFileSystem.java, similar 
to ClientProtocol? (A usage sketch of this API follows below.)
 # Javadoc is missing for DFSAdmin.java getUpgradeStatus and finalizeUpgrade.
 # Lines 687 and 712 in DFSAdmin contain tab characters.
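
A hedged usage sketch of the new API, assuming 
{{DistributedFileSystem#upgradeStatus()}} reports whether the (non-rolling) 
upgrade has been finalized; the boolean return type is an assumption here:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class UpgradeStatusCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumes fs.defaultFS points at the upgraded NameNode.
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
    // Assumed API from this patch: true once the upgrade is finalized.
    boolean finalized = dfs.upgradeStatus();
    System.out.println("Upgrade finalized: " + finalized);
  }
}
{code}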

> Add dfsAdmin command to query if "upgrade" is finalized
> ---
>
> Key: HDFS-13589
> URL: https://issues.apache.org/jira/browse/HDFS-13589
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13589.001.patch, HDFS-13589.002.patch, 
> HDFS-13589.003.patch, HDFS-13589.004.patch
>
>
> When we do an upgrade on a Namenode (non-rollingUpgrade), we should be able 
> to query whether the upgrade has been finalized or not.






[jira] [Commented] (HDDS-70) Fix config names for secure ksm and scm

2018-05-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-70?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483215#comment-16483215
 ] 

genericqa commented on HDDS-70:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
1s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
37s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m  
3s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
56s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m  
8s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test hadoop-ozone/acceptance-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
7s{color} | {color:red} hadoop-hdds/common in HDDS-4 has 19 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
45s{color} | {color:red} hadoop-hdds/server-scm in HDDS-4 has 1 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
39s{color} | {color:red} hadoop-ozone/ozone-manager in HDDS-4 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
39s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
33s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test hadoop-ozone/acceptance-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
21s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:

[jira] [Commented] (HDFS-13601) Optimize ByteString conversions in PBHelper

2018-05-21 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483210#comment-16483210
 ] 

Andrew Wang commented on HDFS-13601:


I've attached a patch to give a flavor of the approach and to get a precommit 
run. The basic idea is to cache strings that are likely to be fixed, or that 
come from a limited set.

I tested this on CDH 5, but the same findings should also apply to trunk. For 
a pure listing-with-locations workload, JMC shows a reduction of TLAB 
allocation from 499MB/s to 384MB/s after applying this patch. Previously, 13% 
of stacks showed up in StringEncoder.encode (converting from String to byte 
array for PB); now that's reduced to 5.6%. The hotspot is now creating the 
LocatedBlocks and adding all the StorageIDs, which is something to tackle 
separately.
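
A minimal sketch of the caching idea, using a simple concurrent map keyed by 
the string; the class shape below is illustrative, not the patch itself. Note 
that {{ByteString.copyFromUtf8}} also pins the charset, which sidesteps 
findbugs' default-encoding complaints like the ones in the precommit report 
above:

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import com.google.protobuf.ByteString;

public final class FixedByteStringCache {
  private static final ConcurrentHashMap<String, ByteString> CACHE =
      new ConcurrentHashMap<>();

  /**
   * Return a cached UTF-8 ByteString. Only use this for keys drawn from a
   * small, fixed set (e.g. user/group names, block pool IDs), since entries
   * are never evicted in this sketch.
   */
  public static ByteString getFixedByteString(String key) {
    // computeIfAbsent encodes each distinct string once instead of per-RPC.
    return CACHE.computeIfAbsent(key, ByteString::copyFromUtf8);
  }
}
{code}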

> Optimize ByteString conversions in PBHelper
> ---
>
> Key: HDFS-13601
> URL: https://issues.apache.org/jira/browse/HDFS-13601
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Major
> Attachments: HDFS-13601.001.patch
>
>
> While doing some profiling of the NN with JMC, I saw a lot of time being 
> spent on String->ByteString conversions. These are often the same strings 
> being converted over and over again, meaning there's room for optimization.






[jira] [Updated] (HDFS-13601) Optimize ByteString conversions in PBHelper

2018-05-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-13601:
---
Status: Patch Available  (was: Open)

> Optimize ByteString conversions in PBHelper
> ---
>
> Key: HDFS-13601
> URL: https://issues.apache.org/jira/browse/HDFS-13601
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.1, 3.1.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Major
> Attachments: HDFS-13601.001.patch
>
>
> While doing some profiling of the NN with JMC, I saw a lot of time being 
> spent on String->ByteString conversions. These are often the same strings 
> being converted over and over again, meaning there's room for optimization.






[jira] [Commented] (HDDS-82) Merge ContainerData and ContainerStatus classes

2018-05-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-82?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483208#comment-16483208
 ] 

genericqa commented on HDDS-82:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
3s{color} | {color:red} hadoop-hdds/common in trunk has 19 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 29m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-hdds_container-service generated 3 new + 1 
unchanged - 0 fixed = 4 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
6s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 28s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}151m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client

[jira] [Commented] (HDDS-70) Fix config names for secure ksm and scm

2018-05-21 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-70?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483205#comment-16483205
 ] 

Ajay Kumar commented on HDDS-70:


[~xyao] I removed "ozone.web.authentication.kerberos.principal" from 
ozone-default.xml as we are not using it anymore. TestOzoneConfigurationFields 
is still failing because {{ozone.scm.container.creation.lease.timeout}} and 
{{ozone.scm.container.close.threshold}} are not added to the xml file, but 
those are not related to our security changes and we can fix them in trunk.

> Fix config names for secure ksm and scm
> ---
>
> Key: HDDS-70
> URL: https://issues.apache.org/jira/browse/HDDS-70
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-70-HDDS-4.00.patch, HDDS-70-HDDS-4.01.patch, 
> HDDS-70-HDDS-4.02.patch, HDDS-70-HDDS-4.03.patch, HDDS-70-HDDS-4.04.patch
>
>
> There are some inconsistencies in ksm and scm config for kerberos. Jira 
> intends to correct them.






[jira] [Updated] (HDFS-13601) Optimize ByteString conversions in PBHelper

2018-05-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-13601:
---
Attachment: HDFS-13601.001.patch

> Optimize ByteString conversions in PBHelper
> ---
>
> Key: HDFS-13601
> URL: https://issues.apache.org/jira/browse/HDFS-13601
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Major
> Attachments: HDFS-13601.001.patch
>
>
> While doing some profiling of the NN with JMC, I saw a lot of time being 
> spent on String->ByteString conversions. These are often the same strings 
> being converted over and over again, meaning there's room for optimization.






[jira] [Commented] (HDDS-82) Merge ContainerData and ContainerStatus classes

2018-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-82?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483204#comment-16483204
 ] 

Hudson commented on HDDS-82:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14248 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14248/])
HDDS-82. Merge ContainerData and ContainerStatus classes. Contributed by 
Bharat Viswanadham. (xyao: rev 5e88126776e6d682a48f737d8ab1ad0e04d3e767)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ContainerData.java
* (delete) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerStatus.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/TopNOrderedContainerDeletionChoosingPolicy.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerManagerImpl.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/RandomContainerDeletionChoosingPolicy.java
* (edit) hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/ContainerDeletionChoosingPolicy.java


> Merge ContainerData and ContainerStatus classes
> ---
>
> Key: HDDS-82
> URL: https://issues.apache.org/jira/browse/HDDS-82
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: reviewed
> Fix For: 0.2.1
>
> Attachments: HDDS-82.001.patch, HDDS-82.002.patch, HDDS-82.003.patch, 
> HDDS-82.004.patch
>
>
> As part of the containerIO refactoring, ContainerData has the fields common 
> to the different container types, and each container type will extend 
> ContainerData to add its own fields. So this jira merges the ContainerStatus 
> fields into ContainerData.
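
A hedged sketch of the class shape this describes; the subclass name and 
fields below are purely illustrative:

{code:java}
// Illustrative only: common state lives in ContainerData after the merge.
abstract class ContainerData {
  enum ContainerLifeCycleState { OPEN, CLOSING, CLOSED }

  private long containerId;
  private ContainerLifeCycleState state; // formerly tracked by ContainerStatus

  long getContainerId() { return containerId; }
  ContainerLifeCycleState getState() { return state; }
  void setState(ContainerLifeCycleState state) { this.state = state; }
}

// Each container type extends ContainerData with its own fields.
class KeyValueContainerData extends ContainerData {
  private String dbPath; // hypothetical type-specific field
}
{code}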






[jira] [Updated] (HDDS-70) Fix config names for secure ksm and scm

2018-05-21 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-70?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-70:
---
Attachment: HDDS-70-HDDS-4.04.patch

> Fix config names for secure ksm and scm
> ---
>
> Key: HDDS-70
> URL: https://issues.apache.org/jira/browse/HDDS-70
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-70-HDDS-4.00.patch, HDDS-70-HDDS-4.01.patch, 
> HDDS-70-HDDS-4.02.patch, HDDS-70-HDDS-4.03.patch, HDDS-70-HDDS-4.04.patch
>
>
> There are some inconsistencies in ksm and scm config for kerberos. Jira 
> intends to correct them.






[jira] [Created] (HDFS-13601) Optimize ByteString conversions in PBHelper

2018-05-21 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-13601:
--

 Summary: Optimize ByteString conversions in PBHelper
 Key: HDFS-13601
 URL: https://issues.apache.org/jira/browse/HDFS-13601
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.9.1, 3.1.0
Reporter: Andrew Wang
Assignee: Andrew Wang


While doing some profiling of the NN with JMC, I saw a lot of time being spent 
on String->ByteString conversions. These are often the same strings being 
converted over and over again, meaning there's room for optimization.
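For illustration, the general shape of such an optimization (a sketch only, not 
the attached patch; the class name is hypothetical):
{code}
import java.util.concurrent.ConcurrentHashMap;
import com.google.protobuf.ByteString;

// Sketch: memoize String -> ByteString conversions so each distinct
// string is UTF-8 encoded at most once. Safe because ByteString is
// immutable; only reasonable when the key set is small and bounded,
// otherwise the cache would need an eviction bound.
class ByteStringCache {
  private static final ConcurrentHashMap<String, ByteString> CACHE =
      new ConcurrentHashMap<>();

  static ByteString toByteString(String s) {
    // computeIfAbsent converts each distinct string at most once
    return CACHE.computeIfAbsent(s, ByteString::copyFromUtf8);
  }
}
{code}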



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13589) Add dfsAdmin command to query if "upgrade" is finalized

2018-05-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483198#comment-16483198
 ] 

genericqa commented on HDFS-13589:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  4s{color} | {color:orange} hadoop-hdfs-project: The patch generated 6 new + 
402 unchanged - 0 fixed = 408 total (was 402) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 3 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 25s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
46s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}228m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13589 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924395/HDFS-13589.003.patch |
| Optional Tests |  asflicense  compile  javac  javado

[jira] [Updated] (HDDS-82) Merge ContainerData and ContainerStatus classes

2018-05-21 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-82?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-82:
---
Labels: reviewed  (was: )

> Merge ContainerData and ContainerStatus classes
> ---
>
> Key: HDDS-82
> URL: https://issues.apache.org/jira/browse/HDDS-82
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: reviewed
> Fix For: 0.2.1
>
> Attachments: HDDS-82.001.patch, HDDS-82.002.patch, HDDS-82.003.patch, 
> HDDS-82.004.patch
>
>
> According to the refactoring of containerIO, ContainerData has common fields 
> for the different kinds of containerTypes, and each Container will extend 
> ContainerData to add its own fields. As part of this, the ContainerStatus 
> fields are merged into ContainerData.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-82) Merge ContainerData and ContainerStatus classes

2018-05-21 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-82?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-82:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~bharatviswa] for the contribution. I've committed the patch to trunk.

> Merge ContainerData and ContainerStatus classes
> ---
>
> Key: HDDS-82
> URL: https://issues.apache.org/jira/browse/HDDS-82
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: reviewed
> Fix For: 0.2.1
>
> Attachments: HDDS-82.001.patch, HDDS-82.002.patch, HDDS-82.003.patch, 
> HDDS-82.004.patch
>
>
> According to the refactoring of containerIO, ContainerData has common fields 
> for the different kinds of containerTypes, and each Container will extend 
> ContainerData to add its own fields. As part of this, the ContainerStatus 
> fields are merged into ContainerData.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-10) docker changes to test secure ozone cluster

2018-05-21 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-10?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483179#comment-16483179
 ] 

Elek, Marton commented on HDDS-10:
--

The large binary could be removed from the patch either by downloading it from 
the release page (https://github.com/flokkr/issuer/releases) or by importing 
the source (one Go file + a few batch files) and compiling it with a Docker 
multi-stage build.

> docker changes to test secure ozone cluster
> ---
>
> Key: HDDS-10
> URL: https://issues.apache.org/jira/browse/HDDS-10
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-10-HDDS-4.00.patch, HDDS-10-HDDS-4.01.patch, 
> HDDS-10-HDDS-4.02.patch
>
>
> Update docker compose and settings to test secure ozone cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13540) DFSStripedInputStream should only allocate new buffers when reading

2018-05-21 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13540:
-
Attachment: (was: HDFS-13540.05.patch)

> DFSStripedInputStream should only allocate new buffers when reading
> ---
>
> Key: HDFS-13540
> URL: https://issues.apache.org/jira/browse/HDFS-13540
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13540.01.patch, HDFS-13540.02.patch, 
> HDFS-13540.03.patch, HDFS-13540.04.patch
>
>
> This was found in the same scenario where HDFS-13539 is caught.
> There are 2 OOMs that look interesting:
> {noformat}
> FSDataInputStream#close error:
> OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct 
> buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:694)
> at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:672)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.close(DFSStripedInputStream.java:181)
> at java.io.FilterInputStream.close(FilterInputStream.java:181)
> {noformat}
> and 
> {noformat}
> org/apache/hadoop/fs/FSDataInputStream#unbuffer failed: error:
> OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct 
> buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:694)
> at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.unbuffer(DFSInputStream.java:1782)
> at 
> org.apache.hadoop.fs.StreamCapabilitiesPolicy.unbuffer(StreamCapabilitiesPolicy.java:48)
> at 
> org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:230)
> {noformat}
> As the stack trace shows, {{resetCurStripeBuffer}} will get a buffer from the 
> buffer pool. We could save that cost when the call is not for a read (e.g. 
> close, unbuffer, etc.).
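> A minimal sketch of the idea (names and the flag are hypothetical, not 
> necessarily what the attached patch does):
> {code}
> // Only touch the buffer pool when the caller is actually reading.
> // close()/unbuffer() just need any held buffer released, not a fresh
> // allocation from the ElasticByteBufferPool.
> private void resetCurStripeBuffer(boolean shouldAllocate) {
>   if (curStripeBuf == null && shouldAllocate) {
>     curStripeBuf = BUFFER_POOL.getBuffer(useDirectBuffer(), bufSize);
>   }
>   if (curStripeBuf != null) {
>     curStripeBuf.clear();
>   }
> }
> {code}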



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13540) DFSStripedInputStream should only allocate new buffers when reading

2018-05-21 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13540:
-
Attachment: HDFS-13540.05.patch

> DFSStripedInputStream should only allocate new buffers when reading
> ---
>
> Key: HDFS-13540
> URL: https://issues.apache.org/jira/browse/HDFS-13540
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13540.01.patch, HDFS-13540.02.patch, 
> HDFS-13540.03.patch, HDFS-13540.04.patch, HDFS-13540.05.patch
>
>
> This was found in the same scenario where HDFS-13539 is caught.
> There are 2 OOMs that look interesting:
> {noformat}
> FSDataInputStream#close error:
> OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct 
> buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:694)
> at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:672)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.close(DFSStripedInputStream.java:181)
> at java.io.FilterInputStream.close(FilterInputStream.java:181)
> {noformat}
> and 
> {noformat}
> org/apache/hadoop/fs/FSDataInputStream#unbuffer failed: error:
> OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct 
> buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:694)
> at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.unbuffer(DFSInputStream.java:1782)
> at 
> org.apache.hadoop.fs.StreamCapabilitiesPolicy.unbuffer(StreamCapabilitiesPolicy.java:48)
> at 
> org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:230)
> {noformat}
> As the stack trace shows, {{resetCurStripeBuffer}} will get a buffer from the 
> buffer pool. We could save that cost when the call is not for a read (e.g. 
> close, unbuffer, etc.).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-82) Merge ContainerData and ContainerStatus classes

2018-05-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-82?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483142#comment-16483142
 ] 

genericqa commented on HDDS-82:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
1s{color} | {color:red} hadoop-hdds/common in trunk has 19 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 29m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-hdds_container-service generated 1 new + 1 
unchanged - 0 fixed = 2 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
17s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 20s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}158m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.TestStorageContainerManager |
|   | 

[jira] [Commented] (HDDS-90) Create ContainerData, Container classes

2018-05-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-90?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483141#comment-16483141
 ] 

genericqa commented on HDDS-90:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
5s{color} | {color:red} hadoop-hdds/common in trunk has 19 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
7s{color} | {color:red} hadoop-hdds/container-service generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-hdds_container-service generated 1 new + 1 
unchanged - 0 fixed = 2 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
57s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
23s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/container-service |
|  |  Unchecked/unconfirmed cast from 
org.apache.hadoop.ozone.container.common.impl.ContainerData to 
org.apache.hadoop.ozone.container.common.impl.KeyValueContainerData in new 
org.apache.hadoop.ozone.container.common.impl.KeyValueContainer(ContainerData)  
At 
KeyValueContainer.java:org.apache.hadoop.ozone.container.common.impl.KeyValueContainerData
 in new 
org.apache.hadoop.ozone.container.common.impl

[jira] [Updated] (HDDS-71) Send ContainerType to Datanode during container creation

2018-05-21 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-71?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-71:
---
Description: 
This Jira is to send the ContainerType during the container creation command 
from RpcClient.

While serializing containerData into a .container file, we also need to 
persist the ContainerType information in the .container file.

We should also add containerDBType information to the .container file, which 
helps identify the DB type used when the container was created.

  was:
This Jira is to send the ContainerType during the container creation command 
from RpcClient.

While serializing containerData into a .container file, we also need to 
persist the ContainerType information in the .container file.


> Send ContainerType to Datanode during container creation
> 
>
> Key: HDDS-71
> URL: https://issues.apache.org/jira/browse/HDDS-71
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-71.00.patch, HDDS-71.01.patch
>
>
> This Jira is to send the ContainerType during the container creation command 
> from RpcClient.
> While serializing containerData into a .container file, we also need to 
> persist the ContainerType information in the .container file.
> We should also add containerDBType information to the .container file, which 
> helps identify the DB type used when the container was created.
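> For illustration, one possible shape for persisting these fields (a sketch 
> using java.util.Properties; the actual .container format may differ):
> {code}
> import java.io.FileOutputStream;
> import java.io.IOException;
> import java.util.Properties;
> 
> // Sketch: persist the container type and DB type with the rest of the
> // metadata so the datanode can pick the right implementation on reload.
> class ContainerFileWriter {
>   static void write(String path, String containerType, String containerDBType)
>       throws IOException {
>     Properties props = new Properties();
>     props.setProperty("containerType", containerType);     // e.g. KeyValueContainer
>     props.setProperty("containerDBType", containerDBType); // e.g. RocksDB
>     try (FileOutputStream out = new FileOutputStream(path)) {
>       props.store(out, "container metadata");
>     }
>   }
> }
> {code}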



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-71) Send ContainerType to Datanode during container creation

2018-05-21 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-71?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-71:
---
Description: 
This Jira is to send ContainerType during container creation command from 
RpcClient.

And during serializing containerData in to a .container file, we need to 
persist ContainerType information also in the .container file.

> Send ContainerType to Datanode during container creation
> 
>
> Key: HDDS-71
> URL: https://issues.apache.org/jira/browse/HDDS-71
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-71.00.patch, HDDS-71.01.patch
>
>
> This Jira is to send the ContainerType during the container creation command 
> from RpcClient.
> While serializing containerData into a .container file, we also need to 
> persist the ContainerType information in the .container file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-13600) Add toString() for RemoteMethod

2018-05-21 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun resolved HDFS-13600.
-
Resolution: Duplicate

> Add toString() for RemoteMethod
> ---
>
> Key: HDFS-13600
> URL: https://issues.apache.org/jira/browse/HDFS-13600
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
>
> Saw messages like:
> {code}
> 2018-05-21 18:23:19,011 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient: Invocation 
> to "XXX" for 
> "org.apache.hadoop.hdfs.server.federation.router.RemoteMethod@390c38d2" timed 
> out
> {code}
> I think {{RemoteMethod}} needs a {{toString}} method.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13600) Add toString() for RemoteMethod

2018-05-21 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483104#comment-16483104
 ] 

Chao Sun commented on HDFS-13600:
-

Oops, found out this is already resolved via HDFS-13364.

> Add toString() for RemoteMethod
> ---
>
> Key: HDFS-13600
> URL: https://issues.apache.org/jira/browse/HDFS-13600
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
>
> Saw messages like:
> {code}
> 2018-05-21 18:23:19,011 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient: Invocation 
> to "XXX" for 
> "org.apache.hadoop.hdfs.server.federation.router.RemoteMethod@390c38d2" timed 
> out
> {code}
> I think {{RemoteMethod}} needs a {{toString}} method.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-92) Use containerDBType during parsing .container files

2018-05-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-92?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483096#comment-16483096
 ] 

genericqa commented on HDDS-92:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
54s{color} | {color:red} hadoop-hdds/common in trunk has 19 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
2s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-92 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924410/HDDS-92.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a89e2a5fbcdd 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0b4c44b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
ht

[jira] [Commented] (HDDS-82) Merge ContainerData and ContainerStatus classes

2018-05-21 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-82?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483085#comment-16483085
 ] 

Xiaoyu Yao commented on HDDS-82:


Thanks [~bharatviswa] for the update. +1 on the v4 patch, pending Jenkins.

> Merge ContainerData and ContainerStatus classes
> ---
>
> Key: HDDS-82
> URL: https://issues.apache.org/jira/browse/HDDS-82
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-82.001.patch, HDDS-82.002.patch, HDDS-82.003.patch, 
> HDDS-82.004.patch
>
>
> According to the refactoring of containerIO, ContainerData has common fields 
> for the different kinds of containerTypes, and each Container will extend 
> ContainerData to add its own fields. As part of this, the ContainerStatus 
> fields are merged into ContainerData.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-70) Fix config names for secure ksm and scm

2018-05-21 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-70?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483081#comment-16483081
 ] 

Xiaoyu Yao edited comment on HDDS-70 at 5/21/18 9:26 PM:
-

[~ajayydv], thanks for the clarification. Please fix the related unit test 
failure in TestOzoneConfigurationFields. +1, pending that.


was (Author: xyao):
[~ajayydv], thanks for the clarification. +1, I will commit it shortly.

> Fix config names for secure ksm and scm
> ---
>
> Key: HDDS-70
> URL: https://issues.apache.org/jira/browse/HDDS-70
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-70-HDDS-4.00.patch, HDDS-70-HDDS-4.01.patch, 
> HDDS-70-HDDS-4.02.patch, HDDS-70-HDDS-4.03.patch
>
>
> There are some inconsistencies in the ksm and scm configs for Kerberos. This 
> Jira intends to correct them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13589) Add dfsAdmin command to query if "upgrade" is finalized

2018-05-21 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483082#comment-16483082
 ] 

Hanisha Koneru commented on HDFS-13589:
---

Updated site docs ({{HDFSCommands.md}} and {{HDFSHighAvailabilityWithQJM.md}}) 
in patch v04.

> Add dfsAdmin command to query if "upgrade" is finalized
> ---
>
> Key: HDFS-13589
> URL: https://issues.apache.org/jira/browse/HDFS-13589
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13589.001.patch, HDFS-13589.002.patch, 
> HDFS-13589.003.patch, HDFS-13589.004.patch
>
>
> When we do an upgrade on a Namenode (a non-rolling upgrade), we should be 
> able to query whether the upgrade has been finalized or not.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13589) Add dfsAdmin command to query if "upgrade" is finalized

2018-05-21 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-13589:
--
Attachment: HDFS-13589.004.patch

> Add dfsAdmin command to query if "upgrade" is finalized
> ---
>
> Key: HDFS-13589
> URL: https://issues.apache.org/jira/browse/HDFS-13589
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13589.001.patch, HDFS-13589.002.patch, 
> HDFS-13589.003.patch, HDFS-13589.004.patch
>
>
> When we do an upgrade on a Namenode (a non-rolling upgrade), we should be 
> able to query whether the upgrade has been finalized or not.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-70) Fix config names for secure ksm and scm

2018-05-21 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-70?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483081#comment-16483081
 ] 

Xiaoyu Yao commented on HDDS-70:


[~ajayydv], thanks for the clarification. +1, I will commit it shortly.

> Fix config names for secure ksm and scm
> ---
>
> Key: HDDS-70
> URL: https://issues.apache.org/jira/browse/HDDS-70
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-70-HDDS-4.00.patch, HDDS-70-HDDS-4.01.patch, 
> HDDS-70-HDDS-4.02.patch, HDDS-70-HDDS-4.03.patch
>
>
> There are some inconsistencies in the ksm and scm configs for Kerberos. This 
> Jira intends to correct them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-88) Create separate message structure to represent ports in DatanodeDetails

2018-05-21 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-88?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-88:

Attachment: HDDS-88.001.patch

> Create separate message structure to represent ports in DatanodeDetails 
> 
>
> Key: HDDS-88
> URL: https://issues.apache.org/jira/browse/HDDS-88
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-88.000.patch, HDDS-88.001.patch
>
>
> The DataNode uses many ports which have to be set in DatanodeDetails and sent 
> to SCM. These port details can be extracted into a separate protobuf message.
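> For illustration, the kind of structure this enables, sketched in Java terms 
> (the real change would be a protobuf message; these names are assumptions):
> {code}
> // Sketch: one named port entry instead of a flat field per port.
> class Port {
>   final String name;  // e.g. "STANDALONE", "RATIS", "REST"
>   final int value;
>   Port(String name, int value) { this.name = name; this.value = value; }
> }
> {code}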



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-82) Merge ContainerData and ContainerStatus classes

2018-05-21 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-82?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483056#comment-16483056
 ] 

Bharat Viswanadham commented on HDDS-82:


Thank you [~xyao] for the review.

Addressed the review comments and also added a ContainerData constructor that 
takes ContainerLifeCycleState as an argument.
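For illustration, roughly the shape this converges on (a simplified sketch, not 
the exact patch):
{code}
// Sketch: the lifecycle state that used to live in ContainerStatus is now
// a field of ContainerData itself, settable through the new constructor.
class ContainerData {
  enum ContainerLifeCycleState { OPEN, CLOSING, CLOSED, INVALID }

  private final long containerId;
  private ContainerLifeCycleState state;

  ContainerData(long containerId, ContainerLifeCycleState state) {
    this.containerId = containerId;
    this.state = state;
  }

  ContainerLifeCycleState getState() { return state; }
  void setState(ContainerLifeCycleState state) { this.state = state; }
}
{code}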

> Merge ContainerData and ContainerStatus classes
> ---
>
> Key: HDDS-82
> URL: https://issues.apache.org/jira/browse/HDDS-82
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-82.001.patch, HDDS-82.002.patch, HDDS-82.003.patch, 
> HDDS-82.004.patch
>
>
> According to the refactoring of containerIO, ContainerData has common fields 
> for the different kinds of containerTypes, and each Container will extend 
> ContainerData to add its own fields. As part of this, the ContainerStatus 
> fields are merged into ContainerData.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-70) Fix config names for secure ksm and scm

2018-05-21 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-70?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483059#comment-16483059
 ] 

Ajay Kumar commented on HDDS-70:


[~xyao] no, it's all driven through docker-config.

> Fix config names for secure ksm and scm
> ---
>
> Key: HDDS-70
> URL: https://issues.apache.org/jira/browse/HDDS-70
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-70-HDDS-4.00.patch, HDDS-70-HDDS-4.01.patch, 
> HDDS-70-HDDS-4.02.patch, HDDS-70-HDDS-4.03.patch
>
>
> There are some inconsistencies in the ksm and scm configs for Kerberos. This 
> Jira intends to correct them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-82) Merge ContainerData and ContainerStatus classes

2018-05-21 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-82?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-82:
---
Attachment: HDDS-82.004.patch

> Merge ContainerData and ContainerStatus classes
> ---
>
> Key: HDDS-82
> URL: https://issues.apache.org/jira/browse/HDDS-82
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-82.001.patch, HDDS-82.002.patch, HDDS-82.003.patch, 
> HDDS-82.004.patch
>
>
> According to the refactoring of containerIO, ContainerData has common fields 
> for the different kinds of containerTypes, and each Container will extend 
> ContainerData to add its own fields. As part of this, the ContainerStatus 
> fields are merged into ContainerData.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-13566) Add configurable additional RPC listener to NameNode

2018-05-21 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13566 started by Chen Liang.
-
> Add configurable additional RPC listener to NameNode
> 
>
> Key: HDFS-13566
> URL: https://issues.apache.org/jira/browse/HDFS-13566
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13566.001.patch
>
>
> This Jira aims to add the capability for the NameNode to run additional 
> listener(s), so that the NameNode can be accessed from multiple ports. 
> Fundamentally, this Jira extends ipc.Server so it can be configured with 
> more listeners, binding to different ports but sharing the same call queue 
> and handlers. This is useful when different clients are only allowed to 
> access certain ports. Combined with HDFS-13547, this also allows different 
> ports to have different SASL security levels. 
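> A toy sketch of the shape being described, with two listeners feeding one 
> queue of calls served by a shared handler pool (plain sockets here for 
> brevity; the real change lives inside ipc.Server):
> {code}
> import java.io.IOException;
> import java.net.ServerSocket;
> import java.net.Socket;
> import java.util.concurrent.BlockingQueue;
> import java.util.concurrent.LinkedBlockingQueue;
> 
> // Toy model: every listener accepts on its own port, but all accepted
> // connections land in one shared call queue served by the same handlers.
> class MultiListenerServer {
>   private final BlockingQueue<Socket> callQueue = new LinkedBlockingQueue<>();
> 
>   void addListener(int port) {
>     new Thread(() -> {
>       try (ServerSocket ss = new ServerSocket(port)) {
>         while (true) {
>           callQueue.put(ss.accept());
>         }
>       } catch (IOException | InterruptedException e) {
>         Thread.currentThread().interrupt();
>       }
>     }, "listener-" + port).start();
>   }
> 
>   void startHandlers(int count) {
>     for (int i = 0; i < count; i++) {
>       new Thread(() -> {
>         while (!Thread.currentThread().isInterrupted()) {
>           try (Socket s = callQueue.take()) {
>             // handle the call; the arrival port does not matter here
>           } catch (InterruptedException e) {
>             Thread.currentThread().interrupt();
>           } catch (IOException ignored) {
>             // per-connection failure: keep serving
>           }
>         }
>       }, "handler-" + i).start();
>     }
>   }
> 
>   public static void main(String[] args) {
>     MultiListenerServer server = new MultiListenerServer();
>     server.addListener(8020); // default port
>     server.addListener(8021); // additional listener
>     server.startHandlers(4);
>   }
> }
> {code}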



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13600) Add toString() for RemoteMethod

2018-05-21 Thread Chao Sun (JIRA)
Chao Sun created HDFS-13600:
---

 Summary: Add toString() for RemoteMethod
 Key: HDFS-13600
 URL: https://issues.apache.org/jira/browse/HDFS-13600
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chao Sun
Assignee: Chao Sun


Saw messages like:
{code}
2018-05-21 18:23:19,011 ERROR 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient: Invocation to 
"XXX" for 
"org.apache.hadoop.hdfs.server.federation.router.RemoteMethod@390c38d2" timed 
out
{code}

I think {{RemoteMethod}} needs a {{toString}} method.
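For illustration, a minimal sketch (field names are hypothetical; the real list 
would mirror whatever {{RemoteMethod}} stores):
{code}
// Sketch: print the proto method name and its parameters so the log line
// above becomes actionable instead of showing an Object hash.
@Override
public String toString() {
  return "RemoteMethod{method=" + methodName
      + ", params=" + java.util.Arrays.toString(params) + "}";
}
{code}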

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-82) Merge ContainerData and ContainerStatus classes

2018-05-21 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-82?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483034#comment-16483034
 ] 

Xiaoyu Yao commented on HDDS-82:


Thanks [~bharatviswa] for the update. One more ask: can we wrap 
{{containerData.getState() == ContainerProtos.ContainerLifeCycleState.INVALID}} 
into {{ContainerData#isValid()}}?
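Something like this, as a sketch:
{code}
// Inside ContainerData: callers then write !containerData.isValid()
// instead of repeating the enum comparison.
public boolean isValid() {
  return getState() != ContainerProtos.ContainerLifeCycleState.INVALID;
}
{code}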

 

> Merge ContainerData and ContainerStatus classes
> ---
>
> Key: HDDS-82
> URL: https://issues.apache.org/jira/browse/HDDS-82
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-82.001.patch, HDDS-82.002.patch, HDDS-82.003.patch
>
>
> According to the refactoring of containerIO, ContainerData has common fields 
> for the different kinds of containerTypes, and each Container will extend 
> ContainerData to add its own fields. As part of this, the ContainerStatus 
> fields are merged into ContainerData.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-70) Fix config names for secure ksm and scm

2018-05-21 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-70?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483031#comment-16483031
 ] 

Xiaoyu Yao commented on HDDS-70:


[~ajayydv], do we need to update the docker build to produce keytab with the 
new names?

> Fix config names for secure ksm and scm
> ---
>
> Key: HDDS-70
> URL: https://issues.apache.org/jira/browse/HDDS-70
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-70-HDDS-4.00.patch, HDDS-70-HDDS-4.01.patch, 
> HDDS-70-HDDS-4.02.patch, HDDS-70-HDDS-4.03.patch
>
>
> There are some inconsistencies in the ksm and scm configs for Kerberos. This 
> Jira intends to correct them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-90) Create ContainerData, Container classes

2018-05-21 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-90?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-90:
---
Status: Patch Available  (was: Open)

> Create ContainerData, Container classes
> ---
>
> Key: HDDS-90
> URL: https://issues.apache.org/jira/browse/HDDS-90
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-90.00.patch
>
>
> This Jira is to create the following classes (a rough sketch follows below):
> ContainerData (to hold generic fields for different types of containers)
> KeyValueContainerData (to extend ContainerData with fields specific to 
> KeyValueContainer)
> Container (for Container meta operations)
> KeyValueContainer (to extend Container)
>  
> In this Jira the implementation of KeyValueContainer is not done, as it 
> requires the volume classes.
>  
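> A rough sketch of the intended hierarchy (signatures are illustrative only):
> {code}
> // Generic metadata shared by all container types.
> abstract class ContainerData {
>   protected long containerId;
> }
> 
> // KeyValue-specific metadata on top of the generic fields.
> class KeyValueContainerData extends ContainerData {
>   String dbPath;
> }
> 
> // Meta operations over some ContainerData.
> interface Container {
>   ContainerData getContainerData();
> }
> 
> // KeyValue flavour; the real implementation waits on the volume classes.
> class KeyValueContainer implements Container {
>   private final KeyValueContainerData data;
>   KeyValueContainer(KeyValueContainerData data) { this.data = data; }
>   public KeyValueContainerData getContainerData() { return data; }
> }
> {code}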



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-70) Fix config names for secure ksm and scm

2018-05-21 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-70?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483022#comment-16483022
 ] 

Ajay Kumar commented on HDDS-70:


[~xyao] thanks for the review. Patch v3 addresses the comments.

> Fix config names for secure ksm and scm
> ---
>
> Key: HDDS-70
> URL: https://issues.apache.org/jira/browse/HDDS-70
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-70-HDDS-4.00.patch, HDDS-70-HDDS-4.01.patch, 
> HDDS-70-HDDS-4.02.patch, HDDS-70-HDDS-4.03.patch
>
>
> There are some inconsistencies in the ksm and scm configs for Kerberos. This 
> Jira intends to correct them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-70) Fix config names for secure ksm and scm

2018-05-21 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-70?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-70:
---
Attachment: HDDS-70-HDDS-4.03.patch

> Fix config names for secure ksm and scm
> ---
>
> Key: HDDS-70
> URL: https://issues.apache.org/jira/browse/HDDS-70
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-70-HDDS-4.00.patch, HDDS-70-HDDS-4.01.patch, 
> HDDS-70-HDDS-4.02.patch, HDDS-70-HDDS-4.03.patch
>
>
> There are some inconsistencies in the ksm and scm configs for Kerberos. This 
> Jira intends to correct them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-71) Send ContainerType to Datanode during container creation

2018-05-21 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-71?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483009#comment-16483009
 ] 

Bharat Viswanadham commented on HDDS-71:


[~msingh] Created HDDS-92 to use containerDBType in DN.

 

> Send ContainerType to Datanode during container creation
> 
>
> Key: HDDS-71
> URL: https://issues.apache.org/jira/browse/HDDS-71
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-71.00.patch, HDDS-71.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-82) Merge ContainerData and ContainerStatus classes

2018-05-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-82?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483006#comment-16483006
 ] 

genericqa commented on HDDS-82:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 54s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 47s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 56s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 1s{color} | {color:red} hadoop-hdds/container-service generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 40s{color} | {color:red} hadoop-hdds_container-service generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 41s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 25s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 44s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 50s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/container-service |
|  |  Null passed for non-null parameter of java.util.concurrent.ConcurrentSkipListMap.put(Object, Object) in org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl.readContainerInfo(String)  At ContainerManagerImpl.java:of java.util.concurrent.ConcurrentSkipListMap.put(Object, Object) in org.apa

[jira] [Updated] (HDDS-92) Use containerDBType during parsing .container files

2018-05-21 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-92?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-92:
---
Status: Patch Available  (was: Open)

> Use containerDBType during parsing .container files
> ---
>
> Key: HDDS-92
> URL: https://issues.apache.org/jira/browse/HDDS-92
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-92.00.patch
>
>
> Now, with HDDS-71, when a container is created we store the containerDBType 
> information in the .container file.
> Use the containerDBType stored in the .container file when parsing 
> .container files.
> If we initially use the default ozone.metastore.impl during cluster setup 
> and later change ozone.metastore.impl, the current code cannot read those 
> containers.
> With this Jira, we can address this.
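
For illustration, a minimal self-contained sketch of the idea (the property 
key and store names below are assumptions, not the actual patch or .container 
format): pick the container DB implementation from the dbType recorded in the 
.container file rather than from the cluster-wide ozone.metastore.impl value.

{noformat}
// Hypothetical sketch only: reads the dbType recorded at container
// creation time (HDDS-71) and selects the matching store.
import java.util.Properties;

public class ContainerDbTypeSketch {
  static String openContainerDb(Properties containerFile) {
    String dbType = containerFile.getProperty("containerDBType", "LevelDB");
    switch (dbType) {
      case "RocksDB": return "open RocksDB-backed container DB";
      case "LevelDB": return "open LevelDB-backed container DB";
      default:
        throw new IllegalArgumentException("Unknown containerDBType: " + dbType);
    }
  }

  public static void main(String[] args) {
    Properties p = new Properties();
    p.setProperty("containerDBType", "RocksDB");
    System.out.println(openContainerDb(p));
  }
}
{noformat}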






[jira] [Updated] (HDDS-92) Use containerDBType during parsing .container files

2018-05-21 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-92?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-92:
---
Attachment: HDDS-92.00.patch

> Use containerDBType during parsing .container files
> ---
>
> Key: HDDS-92
> URL: https://issues.apache.org/jira/browse/HDDS-92
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-92.00.patch
>
>
> Now, with HDDS-71, when a container is created we store the containerDBType 
> information in the .container file.
> Use the containerDBType stored in the .container file when parsing 
> .container files.
> If we initially use the default ozone.metastore.impl during cluster setup 
> and later change ozone.metastore.impl, the current code cannot read those 
> containers.
> With this Jira, we can address this.






[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-05-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483001#comment-16483001
 ] 

Íñigo Goiri commented on HDFS-13388:


Thanks [~yzhangal] for taking care of this.

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.3
>
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, 
> HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, 
> HADOOP-13388.0006.patch, HADOOP-13388.0007.patch, HADOOP-13388.0008.patch, 
> HADOOP-13388.0009.patch, HADOOP-13388.0010.patch, HADOOP-13388.0011.patch, 
> HADOOP-13388.0012.patch, HADOOP-13388.0013.patch, HADOOP-13388.0014.patch
>
>
> In HDFS-7858 RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN ." But the current code calls multiple configured NNs every 
> time, even when we have already found the successful NN.
>  That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class 
> handles the invoked method by calling multiple configured NNs.
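
To make the fix concrete, a minimal sketch of the intended behavior (stand-in 
types, not the actual RetryInvocationHandler change): cache the proxy that won 
the hedge and reuse it until a failover resets it.

{noformat}
// Illustrative stand-in, not the actual patch: remember the winning
// proxy after a successful hedged call and route subsequent calls to
// it directly.
import java.util.List;

public class HedgingSketch<T> {
  private volatile T currentUsedProxy;   // winner of the last hedge, null after failover
  private final List<T> allProxies;

  HedgingSketch(List<T> proxies) { this.allProxies = proxies; }

  T pickProxyForNextCall() {
    // Hedge only when the active NN is unknown.
    return currentUsedProxy != null ? currentUsedProxy : hedgeAcrossAll();
  }

  private T hedgeAcrossAll() {
    // Placeholder for "invoke all NNs, keep the first success".
    T winner = allProxies.get(0);
    currentUsedProxy = winner;
    return winner;
  }

  public static void main(String[] args) {
    HedgingSketch<String> h = new HedgingSketch<>(List.of("nn0", "nn1"));
    System.out.println(h.pickProxyForNextCall());  // hedges once
    System.out.println(h.pickProxyForNextCall());  // reuses the cached winner
  }
}
{noformat}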






[jira] [Updated] (HDDS-92) Use containerDBType during parsing .container files

2018-05-21 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-92?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-92:
---
Description: 
Now, with HDDS-71, when a container is created we store the containerDBType 
information in the .container file.

Use the containerDBType stored in the .container file when parsing .container 
files.

If we initially use the default ozone.metastore.impl during cluster setup and 
later change ozone.metastore.impl, the current code cannot read those 
containers.

With this Jira, we can address this.

  was:
Now with HDDS-71, when container is created we store containerDBType 
information in .container file.

Use containerDBType which is stored in .container files during parsing of 
.container files.

If intially during clustersetup we use ozone.metastore.impl as default, and 
later changed the ozone.metastore.impl

default metaNow with this, we can support if containerDBType is changed later


> Use containerDBType during parsing .container files
> ---
>
> Key: HDDS-92
> URL: https://issues.apache.org/jira/browse/HDDS-92
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Now, with HDDS-71, when a container is created we store the containerDBType 
> information in the .container file.
> Use the containerDBType stored in the .container file when parsing 
> .container files.
> If we initially use the default ozone.metastore.impl during cluster setup 
> and later change ozone.metastore.impl, the current code cannot read those 
> containers.
> With this Jira, we can address this.






[jira] [Updated] (HDDS-92) Use containerDBType during parsing .container files

2018-05-21 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-92?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-92:
---
Description: 
Now with HDDS-71, when container is created we store containerDBType 
information in .container file.

Use containerDBType which is stored in .container files during parsing of 
.container files.

If intially during clustersetup we use ozone.metastore.impl as default, and 
later changed the ozone.metastore.impl

default metaNow with this, we can support if containerDBType is changed later

  was:
Now with HDDS-71, when container is created we store containerDBType 
information in .container file.

Use containerDBType which is stored in .container files during parsing of 
.container files.

If intially during clustersetup we use ozone default metaNow with this, we can 
support if containerDBType is changed later


> Use containerDBType during parsing .container files
> ---
>
> Key: HDDS-92
> URL: https://issues.apache.org/jira/browse/HDDS-92
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Now with HDDS-71, when container is created we store containerDBType 
> information in .container file.
> Use containerDBType which is stored in .container files during parsing of 
> .container files.
> If intially during clustersetup we use ozone.metastore.impl as default, and 
> later changed the ozone.metastore.impl
> default metaNow with this, we can support if containerDBType is changed later






[jira] [Updated] (HDDS-92) Use containerDBType during parsing .container files

2018-05-21 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-92?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-92:
---
Description: 
Now with HDDS-71, when container is created we store containerDBType 
information in .container file.

Use containerDBType which is stored in .container files during parsing of 
.container files.

If intially during clustersetup we use ozone default metaNow with this, we can 
support if containerDBType is changed later

  was:
Now with HDDS-71, when container is created we store containerDBType 
information in .container file.

Use containerDBType which is stored in .container files during parsing of 
.container files.


> Use containerDBType during parsing .container files
> ---
>
> Key: HDDS-92
> URL: https://issues.apache.org/jira/browse/HDDS-92
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Now with HDDS-71, when container is created we store containerDBType 
> information in .container file.
> Use containerDBType which is stored in .container files during parsing of 
> .container files.
> If intially during clustersetup we use ozone default metaNow with this, we 
> can support if containerDBType is changed later






[jira] [Updated] (HDDS-92) Use containerDBType during parsing .container files

2018-05-21 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-92?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-92:
---
Description: 
Now with HDDS-71, when container is created we store containerDBType 
information in .container file.

Use containerDBType which is stored in .container files during parsing of 
.container files.

> Use containerDBType during parsing .container files
> ---
>
> Key: HDDS-92
> URL: https://issues.apache.org/jira/browse/HDDS-92
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Now with HDDS-71, when container is created we store containerDBType 
> information in .container file.
> Use containerDBType which is stored in .container files during parsing of 
> .container files.






[jira] [Assigned] (HDDS-92) Use containerDBType during parsing .container files

2018-05-21 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-92?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-92:
--

Assignee: Bharat Viswanadham

> Use containerDBType during parsing .container files
> ---
>
> Key: HDDS-92
> URL: https://issues.apache.org/jira/browse/HDDS-92
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>







[jira] [Created] (HDDS-92) Use containerDBType during parsing .container files

2018-05-21 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-92:
--

 Summary: Use containerDBType during parsing .container files
 Key: HDDS-92
 URL: https://issues.apache.org/jira/browse/HDDS-92
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Datanode
Reporter: Bharat Viswanadham









[jira] [Commented] (HDFS-13281) Namenode#createFile should be /.reserved/raw/ aware.

2018-05-21 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482956#comment-16482956
 ] 

Rushabh S Shah commented on HDFS-13281:
---

Ran all of the failed tests, but only a couple of them failed on my local 
machine.
{noformat}
[INFO] Results:
[INFO] 
[ERROR] Failures: 
[ERROR]   
TestRetryCacheWithHA.testUpdatePipeline:1222->testClientRetryWithFailover:1324 
After waiting the operation updatePipeline still has not taken effect on NN yet
[ERROR] Errors: 
[ERROR]   TestDataNodeVolumeFailure.testUnderReplicationAfterVolFailure:424 » 
Timeout Ti...
[INFO] 
[ERROR] Tests run: 84, Failures: 1, Errors: 1, Skipped: 37
{noformat}
Both of these tests fail frequently.
Will commit patch v4 tomorrow if there are no objections.

> Namenode#createFile should be /.reserved/raw/ aware.
> 
>
> Key: HDFS-13281
> URL: https://issues.apache.org/jira/browse/HDFS-13281
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption
>Affects Versions: 2.8.3
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Critical
> Attachments: HDFS-13281.001.patch, HDFS-13281.002.branch-2.patch, 
> HDFS-13281.002.patch, HDFS-13281.003.patch, HDFS-13281.004.patch
>
>
> If I want to write to /.reserved/raw/ and that directory happens to 
> be in an EZ, then the namenode *should not* create an EDEK and should just 
> copy the raw bytes from the source.
>  Namenode#startFileInt should be /.reserved/raw/ aware.
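
A tiny sketch of the intended check (a hypothetical helper, not the real 
Namenode code): skip EDEK generation whenever the create path is under 
/.reserved/raw/.

{noformat}
// Hypothetical helper, not the actual Namenode logic.
public class RawPathSketch {
  static boolean shouldGenerateEdek(String path, boolean inEncryptionZone) {
    boolean isRawPath = path.startsWith("/.reserved/raw/");
    // Raw writes must carry already-encrypted bytes through unchanged,
    // so no new EDEK is created even inside an encryption zone.
    return inEncryptionZone && !isRawPath;
  }

  public static void main(String[] args) {
    System.out.println(shouldGenerateEdek("/.reserved/raw/zone/f", true)); // false
    System.out.println(shouldGenerateEdek("/zone/f", true));               // true
  }
}
{noformat}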






[jira] [Commented] (HDDS-70) Fix config names for secure ksm and scm

2018-05-21 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-70?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482926#comment-16482926
 ] 

Xiaoyu Yao commented on HDDS-70:


Thanks [~ajayydv] for the update. Just one minor comment, +1 after that.

 

docker-config

Line 28-29/33: NIT: should we rename ksm->om for the keytab file and principal 
names?

 

> Fix config names for secure ksm and scm
> ---
>
> Key: HDDS-70
> URL: https://issues.apache.org/jira/browse/HDDS-70
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-70-HDDS-4.00.patch, HDDS-70-HDDS-4.01.patch, 
> HDDS-70-HDDS-4.02.patch
>
>
> There are some inconsistencies in the ksm and scm configs for Kerberos. This 
> Jira intends to correct them.






[jira] [Commented] (HDDS-82) Merge ContainerData and ContainerStatus classes

2018-05-21 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-82?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482923#comment-16482923
 ] 

Bharat Viswanadham commented on HDDS-82:


Uploaded patch v03 to address review comments.

> Merge ContainerData and ContainerStatus classes
> ---
>
> Key: HDDS-82
> URL: https://issues.apache.org/jira/browse/HDDS-82
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-82.001.patch, HDDS-82.002.patch, HDDS-82.003.patch
>
>
> According to the refactoring of containerIO, ContainerData has common 
> fields for the different containerTypes, and each Container will extend 
> ContainerData to add its own fields. So, as part of this, we merge the 
> ContainerStatus fields into ContainerData.






[jira] [Updated] (HDDS-82) Merge ContainerData and ContainerStatus classes

2018-05-21 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-82?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-82:
---
Attachment: HDDS-82.003.patch

> Merge ContainerData and ContainerStatus classes
> ---
>
> Key: HDDS-82
> URL: https://issues.apache.org/jira/browse/HDDS-82
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-82.001.patch, HDDS-82.002.patch, HDDS-82.003.patch
>
>
> According to the refactoring of containerIO, ContainerData has common 
> fields for the different containerTypes, and each Container will extend 
> ContainerData to add its own fields. So, as part of this, we merge the 
> ContainerStatus fields into ContainerData.






[jira] [Updated] (HDFS-13589) Add dfsAdmin command to query if "upgrade" is finalized

2018-05-21 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-13589:
--
Status: Patch Available  (was: Open)

> Add dfsAdmin command to query if "upgrade" is finalized
> ---
>
> Key: HDFS-13589
> URL: https://issues.apache.org/jira/browse/HDFS-13589
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13589.001.patch, HDFS-13589.002.patch, 
> HDFS-13589.003.patch
>
>
> When we do an upgrade on a Namenode (non-rollingUpgrade), we should be able 
> to query whether the upgrade has been finalized.






[jira] [Updated] (HDFS-13589) Add dfsAdmin command to query if "upgrade" is finalized

2018-05-21 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-13589:
--
Attachment: HDFS-13589.003.patch

> Add dfsAdmin command to query if "upgrade" is finalized
> ---
>
> Key: HDFS-13589
> URL: https://issues.apache.org/jira/browse/HDFS-13589
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13589.001.patch, HDFS-13589.002.patch, 
> HDFS-13589.003.patch
>
>
> When we do an upgrade on a Namenode (non-rollingUpgrade), we should be able 
> to query whether the upgrade has been finalized.






[jira] [Commented] (HDFS-13589) Add dfsAdmin command to query if "upgrade" is finalized

2018-05-21 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482894#comment-16482894
 ] 

Hanisha Koneru commented on HDFS-13589:
---

Thanks for the review [~arpitagarwal]. Addressed the review comments in patch 
v03.

> Add dfsAdmin command to query if "upgrade" is finalized
> ---
>
> Key: HDFS-13589
> URL: https://issues.apache.org/jira/browse/HDFS-13589
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13589.001.patch, HDFS-13589.002.patch
>
>
> When we do an upgrade on a Namenode (non-rollingUpgrade), we should be able 
> to query whether the upgrade has been finalized.






[jira] [Comment Edited] (HDDS-82) Merge ContainerData and ContainerStatus classes

2018-05-21 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-82?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482872#comment-16482872
 ] 

Xiaoyu Yao edited comment on HDDS-82 at 5/21/18 6:33 PM:
-

Thanks [~bharatviswa] for working on this. The v2 patch looks good to me 
overall; I just have a few minor comments:

 

ContainerManagerImpl.java

Line 245/286: If we change this to allow putting null into containerMap, we 
will need to add null checks in many places where containerMap.get() is 
called (such as lines 835/930/941, etc.) to avoid NPEs. I would suggest we 
follow the existing pattern of putting a "new ContainerData(containerID, 
conf)" with an INVALID state;

 

Line 462: here we need to check that containerData.state == INVALID


was (Author: xyao):
Thanks [~bharatviswa] for working on this. The patch v2 looks good to me 
overall. I just have few minor comments:

 

ContainerManagerImpl.java

Line 245/286: If we change here to allow putting null into containerMap, we 
will need to add null check in many places when containerMap.get() is called 
such as Line 835/930/941,etc. to avoid NPE. I would suggest we follow the 
existing pattern by putting a "new ContainerData(containerID, conf)" with an 
INVALID state (HDDS-85 will add this to protobuf def);

 

Line 462: here we need to check the containerData.state == INVALID

> Merge ContainerData and ContainerStatus classes
> ---
>
> Key: HDDS-82
> URL: https://issues.apache.org/jira/browse/HDDS-82
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-82.001.patch, HDDS-82.002.patch
>
>
> According to the refactoring of containerIO, ContainerData has common 
> fields for the different containerTypes, and each Container will extend 
> ContainerData to add its own fields. So, as part of this, we merge the 
> ContainerStatus fields into ContainerData.






[jira] [Comment Edited] (HDDS-82) Merge ContainerData and ContainerStatus classes

2018-05-21 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-82?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482872#comment-16482872
 ] 

Xiaoyu Yao edited comment on HDDS-82 at 5/21/18 6:32 PM:
-

Thanks [~bharatviswa] for working on this. The v2 patch looks good to me 
overall; I just have a few minor comments:

 

ContainerManagerImpl.java

Line 245/286: If we change this to allow putting null into containerMap, we 
will need to add null checks in many places where containerMap.get() is 
called (such as lines 835/930/941, etc.) to avoid NPEs. I would suggest we 
follow the existing pattern of putting a "new ContainerData(containerID, 
conf)" with an INVALID state (HDDS-85 will add this to protobuf def);

 

Line 462: here we need to check that containerData.state == INVALID


was (Author: xyao):
Thanks [~bharatviswa] for working on this. The patch v2 looks good to me 
overall. I just have few minor comments:

 

ContainerManagerImpl.java

Line 245/286: If we change here to allow putting null into containerMap, we 
will need to add null check in many places when containerMap.get() is called 
such as Line 835/930/941,etc. to avoid NPE. I would suggest we follow the 
existing pattern by putting a "new ContainerData(containerID, conf)" with an 
INVALID state;

 

Line 462: here we need to check the containerData.state == INVALID

> Merge ContainerData and ContainerStatus classes
> ---
>
> Key: HDDS-82
> URL: https://issues.apache.org/jira/browse/HDDS-82
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-82.001.patch, HDDS-82.002.patch
>
>
> According to the refactoring of containerIO, ContainerData has common 
> fields for the different containerTypes, and each Container will extend 
> ContainerData to add its own fields. So, as part of this, we merge the 
> ContainerStatus fields into ContainerData.






[jira] [Commented] (HDDS-82) Merge ContainerData and ContainerStatus classes

2018-05-21 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-82?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482872#comment-16482872
 ] 

Xiaoyu Yao commented on HDDS-82:


Thanks [~bharatviswa] for working on this. The v2 patch looks good to me 
overall; I just have a few minor comments:

 

ContainerManagerImpl.java

Line 245/286: If we change this to allow putting null into containerMap, we 
will need to add null checks in many places where containerMap.get() is 
called (such as lines 835/930/941, etc.) to avoid NPEs. I would suggest we 
follow the existing pattern of putting a "new ContainerData(containerID, 
conf)" with an INVALID state;

 

Line 462: here we need to check that containerData.state == INVALID
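
As a concrete illustration of the suggested pattern (ContainerData and the 
state enum below are stand-ins, not the real classes): keep an INVALID 
placeholder in containerMap instead of null, so callers never need a null 
check and ConcurrentSkipListMap's no-null-values rule is respected.

{noformat}
// ContainerData and State are stand-ins, not the real classes.
import java.util.concurrent.ConcurrentSkipListMap;

public class InvalidPlaceholderSketch {
  enum State { OPEN, INVALID }

  static class ContainerData {
    final long containerID;
    State state;
    ContainerData(long containerID, State state) {
      this.containerID = containerID;
      this.state = state;
    }
  }

  public static void main(String[] args) {
    ConcurrentSkipListMap<Long, ContainerData> containerMap =
        new ConcurrentSkipListMap<>();
    long containerID = 42L;
    // On a failed read, register an INVALID placeholder instead of null;
    // ConcurrentSkipListMap rejects null values, so this also avoids the
    // NPE concern raised above.
    containerMap.put(containerID, new ContainerData(containerID, State.INVALID));
    // Callers check the state instead of checking for null.
    if (containerMap.get(containerID).state == State.INVALID) {
      System.out.println("container " + containerID + " marked INVALID");
    }
  }
}
{noformat}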

> Merge ContainerData and ContainerStatus classes
> ---
>
> Key: HDDS-82
> URL: https://issues.apache.org/jira/browse/HDDS-82
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-82.001.patch, HDDS-82.002.patch
>
>
> According to the refactoring of containerIO, ContainerData has common 
> fields for the different containerTypes, and each Container will extend 
> ContainerData to add its own fields. So, as part of this, we merge the 
> ContainerStatus fields into ContainerData.






[jira] [Assigned] (HDFS-12978) Fine-grained locking while consuming journal stream.

2018-05-21 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen reassigned HDFS-12978:
--

Assignee: Konstantin Shvachko  (was: Erik Krogen)

> Fine-grained locking while consuming journal stream.
> 
>
> Key: HDFS-12978
> URL: https://issues.apache.org/jira/browse/HDFS-12978
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
>
> In the current implementation the SBN consumes the entire segment of 
> transactions under a single namesystem lock, which blocks reads for a long 
> period of time until the segment is processed. We should break the lock 
> into fine-grained chunks. In the extreme case, each transaction should 
> release the lock once it is applied.
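
For illustration, a minimal sketch of the fine-grained locking idea (stand-in 
types, not the actual FSNamesystem lock): apply the segment in chunks and 
release the write lock between chunks so reads can interleave.

{noformat}
// Stand-in types; illustrates releasing the lock between chunks.
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ChunkedApplySketch {
  private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock();

  void applySegment(List<Runnable> transactions, int chunkSize) {
    for (int i = 0; i < transactions.size(); i += chunkSize) {
      int end = Math.min(i + chunkSize, transactions.size());
      fsLock.writeLock().lock();
      try {
        for (Runnable txn : transactions.subList(i, end)) {
          txn.run();  // apply one edit-log transaction
        }
      } finally {
        // Readers get a chance to run between chunks; with chunkSize == 1
        // this is the per-transaction extreme mentioned above.
        fsLock.writeLock().unlock();
      }
    }
  }

  public static void main(String[] args) {
    new ChunkedApplySketch().applySegment(
        List.of(() -> System.out.println("txn applied")), 1);
  }
}
{noformat}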






[jira] [Commented] (HDFS-13587) TestQuorumJournalManager fails on Windows

2018-05-21 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482852#comment-16482852
 ] 

Chris Douglas commented on HDFS-13587:
--

bq. TestQuorumJournalManager originally uses default getBaseDirectory from 
MiniDFSCluster, which sets the variable to true in MiniDFSCluster
Specifically, it causes {{MiniDFSCluster}} to be loaded, which triggers the 
[static 
block|https://github.com/apache/hadoop/blob/132a547/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java#L161]
 that sets this mode. Since the design relies on static initialization 
dependencies, we're unlikely to find a clean solution. Adding a similar static 
block to {{MiniJournalCluster}} looks correct, and it would avoid fixing the
same failure in each QJM test. Do you see a problem with this approach?
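
A hedged sketch of the suggested shape; the flag below is a stand-in for 
whatever the real MiniDFSCluster static block sets, not an actual Hadoop 
property.

{noformat}
// Hedged sketch; the flag is a stand-in, not the actual Hadoop property.
public class MiniJournalClusterSketch {
  static volatile boolean windowsSafeBaseDir;

  static {
    // Runs exactly once, when the class is first loaded, so any QJM test
    // that touches MiniJournalCluster gets the mode without also having
    // to load MiniDFSCluster first.
    windowsSafeBaseDir = true;
  }

  public static void main(String[] args) {
    System.out.println("mode set at class load: " + windowsSafeBaseDir);
  }
}
{noformat}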

> TestQuorumJournalManager fails on Windows
> -
>
> Key: HDFS-13587
> URL: https://issues.apache.org/jira/browse/HDFS-13587
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13587.000.patch, HDFS-13587.001.patch
>
>
> There are 12 test failures in TestQuorumJournalManager on Windows. Local run 
> shows:
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
> [ERROR] Tests run: 21, Failures: 0, Errors: 12, Skipped: 0, Time elapsed: 
> 106.81 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
> [ERROR] 
> testCrashBetweenSyncLogAndPersistPaxosData(org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager)
>   Time elapsed: 1.93 s  <<< ERROR!
> org.apache.hadoop.hdfs.qjournal.client.QuorumException:
> Could not format one or more JournalNodes. 2 successful responses:
> 127.0.0.1:27044: null [success]
> 127.0.0.1:27064: null [success]
> 1 exceptions thrown:
> 127.0.0.1:27054: Directory 
> E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\journalnode-1\test-journal
>  is in an inconsistent state: Can't format the storage directory because the 
> current directory is not empty.
> at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:498)
> at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:574)
> at 
> org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:185)
> at 
> org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:221)
> at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:157)
> at 
> org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:145)
> at 
> org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25419)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1889)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606)
> at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
> at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286)
> at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.format(QuorumJournalManager.java:212)
> at 
> org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager.setup(TestQuorumJournalManager.java:109)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java

[jira] [Commented] (HDDS-71) Send ContainerType to Datanode during container creation

2018-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-71?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482845#comment-16482845
 ] 

Hudson commented on HDDS-71:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14246 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14246/])
HDDS-71. Send ContainerType to Datanode during container creation. (msingh: rev 
132a547dea4081948c39c149c59d6453003fa277)
* (edit) hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ContainerData.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerManagerImpl.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/storage/ContainerProtocolCalls.java


> Send ContainerType to Datanode during container creation
> 
>
> Key: HDDS-71
> URL: https://issues.apache.org/jira/browse/HDDS-71
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-71.00.patch, HDDS-71.01.patch
>
>







[jira] [Updated] (HDDS-82) Merge ContainerData and ContainerStatus classes

2018-05-21 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-82?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-82:
---
Attachment: HDDS-82.002.patch

> Merge ContainerData and ContainerStatus classes
> ---
>
> Key: HDDS-82
> URL: https://issues.apache.org/jira/browse/HDDS-82
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-82.001.patch, HDDS-82.002.patch
>
>
> According to the refactoring of containerIO, ContainerData has common 
> fields for the different containerTypes, and each Container will extend 
> ContainerData to add its own fields. So, as part of this, we merge the 
> ContainerStatus fields into ContainerData.






[jira] [Commented] (HDDS-82) Merge ContainerData and ContainerStatus classes

2018-05-21 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-82?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482821#comment-16482821
 ] 

Bharat Viswanadham commented on HDDS-82:


Rebased patch as the older patch does not apply to trunk anymore.

 

> Merge ContainerData and ContainerStatus classes
> ---
>
> Key: HDDS-82
> URL: https://issues.apache.org/jira/browse/HDDS-82
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-82.001.patch, HDDS-82.002.patch
>
>
> According to the refactoring of containerIO, ContainerData has common 
> fields for the different containerTypes, and each Container will extend 
> ContainerData to add its own fields. So, as part of this, we merge the 
> ContainerStatus fields into ContainerData.






[jira] [Commented] (HDDS-91) Calculate under/over replicated containers from the container reports

2018-05-21 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-91?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482811#comment-16482811
 ] 

Anu Engineer commented on HDDS-91:
--

[~elek] Thanks for the patch. I think this patch will change a little bit as 
we work toward the event-based SCM.

 # The class {{DN2ContainerMap}} has a function called {{processReport}}; 
that function returns the mismatched containers, i.e. the missing containers 
and the new containers, in the result set (see the sketch after this list).
# We should take those lists and update the container state map, perhaps via 
the event queue or directly.
# Then we need to update the DN2ContainerMap.
# Then we can post messages to various other participants in SCM. Let us chat 
about this sometime to get more clarity on this.
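
A self-contained sketch of the processReport diff described in step 1 (types 
are stand-ins, not the real DN2ContainerMap):

{noformat}
// Stand-in types only; sketches the diff that processReport performs.
import java.util.HashSet;
import java.util.Set;

public class ProcessReportSketch {
  static class ReportResult {
    final Set<Long> missingContainers = new HashSet<>();
    final Set<Long> newContainers = new HashSet<>();
  }

  static ReportResult processReport(Set<Long> known, Set<Long> reported) {
    ReportResult result = new ReportResult();
    for (long id : known) {
      if (!reported.contains(id)) {
        result.missingContainers.add(id);   // candidate for re-replication
      }
    }
    for (long id : reported) {
      if (!known.contains(id)) {
        result.newContainers.add(id);       // newly reported by this DN
      }
    }
    return result;
  }

  public static void main(String[] args) {
    ReportResult r = processReport(Set.of(1L, 2L, 3L), Set.of(2L, 3L, 4L));
    System.out.println("missing=" + r.missingContainers + " new=" + r.newContainers);
  }
}
{noformat}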

> Calculate under/over replicated containers from the container reports
> -
>
> Key: HDDS-91
> URL: https://issues.apache.org/jira/browse/HDDS-91
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-91.001.patch
>
>
> In the current InProgressPool we calculate the existing replica numbers for 
> all the containers based on the container reports. But we don't do anything 
> in case of missing replicas.
> This patch is the initial step to process the reported data by comparing 
> the reported replica numbers with the state saved in the Mapping database.
> I prefer to do smaller patches instead of one big one, so this patch 
> doesn't solve the over/under-replication problem yet, it just detects it.
> 1. It integrates the EventQueue with the scm and makes it available to the 
> ContainerSupervisor (constructor + field changes)
> 2. In finalizeReconciliation it sends events to compare expected and 
> current replicas (expected replicas are from the ContainerMapping)
> 3. It will send a new event in case of under/over-replication.
> Further work is needed to react to the new events and send delete/copy 
> container commands to the datanode. It also requires more information about 
> the current in-progress replication: if we already asked a new datanode to 
> replicate the container, we need to save that in a map to make the call 
> idempotent: on the next container replication we should not request another 
> replication. I would prefer to put this additional information in the 
> ContainerMapping instead of a new map.
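
For illustration, a minimal sketch of the expected-vs-reported comparison 
described in points 2 and 3 (names are assumptions; the event wiring through 
the EventQueue is omitted):

{noformat}
// Names are assumptions; event wiring omitted.
public class ReplicaClassifySketch {
  static String classify(int expectedReplicas, int reportedReplicas) {
    if (reportedReplicas < expectedReplicas) {
      return "UNDER_REPLICATED";   // would trigger a copy-container event
    }
    if (reportedReplicas > expectedReplicas) {
      return "OVER_REPLICATED";    // would trigger a delete-container event
    }
    return "HEALTHY";
  }

  public static void main(String[] args) {
    System.out.println(classify(3, 2));  // UNDER_REPLICATED
  }
}
{noformat}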






[jira] [Commented] (HDDS-89) Create ozone specific inline documentation as part of the build

2018-05-21 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-89?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482791#comment-16482791
 ] 

Anu Engineer commented on HDDS-89:
--

The generate-site script is still failing for me.

 
{noformat}

~/d/h/d/d/bin> ./generate-site.sh
Started building sites ...
Error: Error building site: No source directory found, expecting to find it at 
/Users/aengineer/diskBalancer/hadoop-ozone/docs/content
/Users/aengineer/diskBalancer/hadoop-ozone/docs/dev-support/bin{noformat}
If I go to the _hadoop-ozone/docs_ directory and run *hugo serve*, I get a 
similar error.
{noformat}
hugo serve
Started building sites ...
Error: Error building site: No source directory found, expecting to find it at 
/Users/aengineer/diskBalancer/hadoop-ozone/docs/content
{noformat}

Do we need to tell config.toml that the source files are not in the content 
directory?
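
If it helps, a hedged sketch of the kind of setting involved, assuming the 
markdown sources live outside Hugo's default content/ directory (the path 
below is a placeholder, not the actual repo layout):

{noformat}
# config.toml -- hedged sketch; "docs-src" is a placeholder path, not
# the actual hadoop-ozone layout.
contentDir = "docs-src"
{noformat}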

> Create ozone specific inline documentation as part of the build
> --
>
> Key: HDDS-89
> URL: https://issues.apache.org/jira/browse/HDDS-89
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-89.002.patch
>
>
> As the ozone/hdds distribution is separated from the hadoop distribution, 
> we need a separate documentation package. The idea is to make the 
> documentation available from the scm/ksm web pages; later it should also be 
> uploaded to the hadoop site together with the release artifacts.
> This patch creates the HTML pages from the existing markdown files during 
> the build. It's an optional step, but if the documentation is available it 
> will be displayed from the scm/ksm web page.






[jira] [Updated] (HDDS-71) Send ContainerType to Datanode during container creation

2018-05-21 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-71?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-71:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the contribution [~bharatviswa] and thanks [~ajayydv] for the 
reviews. I have committed this to trunk.

> Send ContainerType to Datanode during container creation
> 
>
> Key: HDDS-71
> URL: https://issues.apache.org/jira/browse/HDDS-71
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-71.00.patch, HDDS-71.01.patch
>
>







[jira] [Commented] (HDDS-79) Remove ReportState from SCMHeartbeatRequestProto

2018-05-21 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-79?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482740#comment-16482740
 ] 

Xiaoyu Yao commented on HDDS-79:


[~nandakumar131], thanks for working on this. The patch looks good to me 
overall. Just a few minor issues:

 

The patch needs to be rebased on trunk.

 

ContainerReportManager.java and ContainerReportManagerImpl can be removed.

(The randomization logic should stay on the datanode side with the push model.)

 

ContainerManagerImpl.java

Line 129/203-204: this can be removed since it is only used by the pull model 

 

ContainerReportHandler.java

Line 66: I think we should have randomization logic around here so that DNs 
won't send container reports to SCM simultaneously (a sketch follows at the 
end of this comment).

 

TestNodeManager.java

Line 28: unused import
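
A minimal sketch of that randomization idea (names assumed, not the actual 
patch): jitter each datanode's next report time so reports spread out across 
the configured interval instead of arriving at SCM in lockstep.

{noformat}
// Names assumed; jitters the report time within [interval/2, interval).
import java.util.concurrent.ThreadLocalRandom;

public class ReportJitterSketch {
  static long nextReportDelayMs(long configuredIntervalMs) {
    long half = Math.max(1, configuredIntervalMs / 2);
    return half + ThreadLocalRandom.current().nextLong(half);
  }

  public static void main(String[] args) {
    // Each DN picks a different delay, spreading reports across the minute.
    System.out.println(nextReportDelayMs(60_000L));
  }
}
{noformat}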

> Remove ReportState from SCMHeartbeatRequestProto
> 
>
> Key: HDDS-79
> URL: https://issues.apache.org/jira/browse/HDDS-79
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode, SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-79.000.patch
>
>
> Since the datanode will send container reports at the configured interval, 
> there is no need to send {{ReportState}} in the heartbeat. {{ReportState}} 
> is only useful in a pull-model implementation of the container report; this 
> change can be reverted in the future if we also want to support the pull 
> model.






[jira] [Commented] (HDFS-12837) Intermittent failure TestReencryptionWithKMS#testReencryptionKMSDown

2018-05-21 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482691#comment-16482691
 ] 

Xiao Chen commented on HDFS-12837:
--

Will look into the test failure, which seems related.

> Intermittent failure TestReencryptionWithKMS#testReencryptionKMSDown
> 
>
> Key: HDFS-12837
> URL: https://issues.apache.org/jira/browse/HDFS-12837
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, test
>Affects Versions: 3.0.0-beta1
>Reporter: Surendra Singh Lilhore
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-12837.01.patch, HDFS-12837.02.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/22112/testReport/org.apache.hadoop.hdfs.server.namenode/TestReencryptionWithKMS/testReencryptionKMSDown/






[jira] [Commented] (HDDS-71) Send ContainerType to Datanode during container creation

2018-05-21 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-71?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482598#comment-16482598
 ] 

Mukul Kumar Singh commented on HDDS-71:
---

Thanks for the update [~bharatviswa].
+1, the patch looks good to me. I will commit this shortly.

> Send ContainerType to Datanode during container creation
> 
>
> Key: HDDS-71
> URL: https://issues.apache.org/jira/browse/HDDS-71
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-71.00.patch, HDDS-71.01.patch
>
>







[jira] [Commented] (HDDS-87) Fix test failures with uninitialized storageLocation field in storageReport

2018-05-21 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-87?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482596#comment-16482596
 ] 

Bharat Viswanadham commented on HDDS-87:


Hudson failure is related to libprotoc version. 

> Fix test failures with uninitialized storageLocation field in storageReport
> ---
>
> Key: HDDS-87
> URL: https://issues.apache.org/jira/browse/HDDS-87
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-87.00.patch, HDDS-87.01.patch
>
>







[jira] [Comment Edited] (HDDS-87) Fix test failures with uninitialized storageLocation field in storageReport

2018-05-21 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-87?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482596#comment-16482596
 ] 

Bharat Viswanadham edited comment on HDDS-87 at 5/21/18 3:06 PM:
-

Hudson failure is related to libprotoc version mismatch.


was (Author: bharatviswa):
Hudson failure is related to libprotoc version. 

> Fix test failures with uninitialized storageLocation field in storageReport
> ---
>
> Key: HDDS-87
> URL: https://issues.apache.org/jira/browse/HDDS-87
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-87.00.patch, HDDS-87.01.patch
>
>







[jira] [Commented] (HDDS-87) Fix test failures with uninitialized storageLocation field in storageReport

2018-05-21 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-87?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482590#comment-16482590
 ] 

Bharat Viswanadham commented on HDDS-87:


I have committed this to trunk.

> Fix test failures with uninitialized storageLocation field in storageReport
> ---
>
> Key: HDDS-87
> URL: https://issues.apache.org/jira/browse/HDDS-87
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-87.00.patch, HDDS-87.01.patch
>
>







[jira] [Commented] (HDDS-87) Fix test failures with uninitialized storageLocation field in storageReport

2018-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-87?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482591#comment-16482591
 ] 

Hudson commented on HDDS-87:


FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #14243 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14243/])
HDDS-87:Fix test failures with uninitialized storageLocation field in (bharat: 
rev 3d2d9dbcaa73fd72d614a8cf5a5be2806dd31537)
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/TestUtils.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeManager.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestSCMNodeStorageStatMap.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestContainerPlacement.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java


> Fix test failures with uninitialized storageLocation field in storageReport
> ---
>
> Key: HDDS-87
> URL: https://issues.apache.org/jira/browse/HDDS-87
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-87.00.patch, HDDS-87.01.patch
>
>







[jira] [Updated] (HDDS-87) Fix test failures with uninitialized storageLocation field in storageReport

2018-05-21 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-87?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-87:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Fix test failures with uninitialized storageLocation field in storageReport
> ---
>
> Key: HDDS-87
> URL: https://issues.apache.org/jira/browse/HDDS-87
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-87.00.patch, HDDS-87.01.patch
>
>







[jira] [Comment Edited] (HDDS-71) Send ContainerType to Datanode during container creation

2018-05-21 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-71?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482573#comment-16482573
 ] 

Bharat Viswanadham edited comment on HDDS-71 at 5/21/18 3:00 PM:
-

[~msingh] Yes, using the containerDBType set on the DN side still needs to be 
done; I will file a new Jira for this.

As with this patch we now need to support multiple containerDBTypes, I will 
do that as part of the new Jira.


was (Author: bharatviswa):
[~msingh] Yes containerDBType set in the DN side usage I will upload a new Jira 
for this.

As now, we need to support multiple containerDBTypes. Will do that as a part of 
new jira.

> Send ContainerType to Datanode during container creation
> 
>
> Key: HDDS-71
> URL: https://issues.apache.org/jira/browse/HDDS-71
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-71.00.patch, HDDS-71.01.patch
>
>







[jira] [Commented] (HDDS-87) Fix test failures with uninitialized storageLocation field in storageReport

2018-05-21 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-87?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482578#comment-16482578
 ] 

Bharat Viswanadham commented on HDDS-87:


Thanks [~shashikant] for addressing the review comments.

+1 on the v01 patch. Will commit it shortly.

> Fix test failures with uninitialized storageLocation field in storageReport
> ---
>
> Key: HDDS-87
> URL: https://issues.apache.org/jira/browse/HDDS-87
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-87.00.patch, HDDS-87.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-71) Send ContainerType to Datanode during container creation

2018-05-21 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-71?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482573#comment-16482573
 ] 

Bharat Viswanadham commented on HDDS-71:


[~msingh] Yes, containerDBType is set on the DN side; I will file a new Jira for 
that usage.

We now need to support multiple containerDBTypes. Will do that as part of a new 
jira.

> Send ContainerType to Datanode during container creation
> 
>
> Key: HDDS-71
> URL: https://issues.apache.org/jira/browse/HDDS-71
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-71.00.patch, HDDS-71.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-89) Create ozone specific inline documentation as part of the build

2018-05-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-89?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482572#comment-16482572
 ] 

genericqa commented on HDDS-89:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 32m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 21m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  1m  
7s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 27m 
25s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-dist in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
22s{color} | {color:red} docs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 35m 
16s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 35m 16s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  3m  
0s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m  
2s{color} | {color:red} The patch generated 3 new + 0 unchanged - 0 fixed = 3 
total (was 0) {color} |
| {color:orange}-0{color} | {color:orange} shelldocs {color} | {color:orange}  
0m 22s{color} | {color:orange} The patch generated 286 new + 114 unchanged - 0 
fixed = 400 total (was 114) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  6m 
51s{color} | {color:red} root in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}156m 59s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m 
19s{color} | {color:red} The patch generated 10 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}346m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Ser

[jira] [Commented] (HDFS-13344) adl.AdlFilesystem.close() doesn't release locks on open files

2018-05-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482558#comment-16482558
 ] 

genericqa commented on HDFS-13344:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-azure-datalake: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
46s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13344 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924344/HDFS-13344-001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 968dc02bf028 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c97df77 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24268/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure-datalake.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24268/testReport/ |
| Max. process+thread count | 333 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure-datalake U: 
hadoop-tools/hadoop-azure-datalake |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24268/console |
| Powered by | Apache Yetus 0.8

[jira] [Commented] (HDDS-57) TestContainerCloser#testRepeatedClose and TestContainerCloser#testCleanupThreadRuns fail consistently

2018-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-57?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482527#comment-16482527
 ] 

Hudson commented on HDDS-57:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14242 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14242/])
HDDS-57. TestContainerCloser#testRepeatedClose and (msingh: rev 
c97df7712ce35938c2f4ccbbdc60c6671a7a67b0)
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/closer/TestContainerCloser.java


> TestContainerCloser#testRepeatedClose and 
> TestContainerCloser#testCleanupThreadRuns fail consistently
> -
>
> Key: HDDS-57
> URL: https://issues.apache.org/jira/browse/HDDS-57
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-57.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-57) TestContainerCloser#testRepeatedClose and TestContainerCloser#testCleanupThreadRuns fail consistently

2018-05-21 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-57?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-57:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the contribution [~shashikant]. I have committed this to trunk.

> TestContainerCloser#testRepeatedClose and 
> TestContainerCloser#testCleanupThreadRuns fail consistently
> -
>
> Key: HDDS-57
> URL: https://issues.apache.org/jira/browse/HDDS-57
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-57.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13344) adl.AdlFilesystem.close() doesn't release locks on open files

2018-05-21 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HDFS-13344:
-
Status: Patch Available  (was: Open)

Maintain a list of open streams and close any unclosed streams in 
FileSystem.close(), similar to the DFS implementation.
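
For illustration, a minimal sketch of that pattern (hypothetical class and 
method names, not the actual patch): the file system remembers the streams it 
hands out, weakly so they can still be garbage collected, and closes any that 
are still open when the file system itself is closed.

{code:java}
import java.io.Closeable;
import java.io.IOException;
import java.util.Collections;
import java.util.Set;
import java.util.WeakHashMap;

// Sketch only: illustrative class, not the HDFS-13344 patch itself.
public class StreamTrackingFileSystem {
  // Weak keys, so tracked streams can still be garbage collected.
  private final Set<Closeable> openStreams =
      Collections.newSetFromMap(new WeakHashMap<Closeable, Boolean>());

  /** Remember a newly created or opened stream. */
  public synchronized <T extends Closeable> T track(T stream) {
    openStreams.add(stream);
    return stream;
  }

  /** Close any streams the caller forgot, releasing server-side leases. */
  public synchronized void close() throws IOException {
    for (Closeable stream : openStreams) {
      stream.close(); // best-effort; a robust version would collect errors
    }
    openStreams.clear();
  }
}
{code}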

> adl.AdlFilesystem.close() doesn't release locks on open files
> -
>
> Key: HDFS-13344
> URL: https://issues.apache.org/jira/browse/HDFS-13344
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.7.3
> Environment: HDInsight on MS Azure:
>  
> Hadoop 2.7.3.2.6.2.25-1
> Subversion g...@github.com:hortonworks/hadoop.git -r 
> 1ceeb58bb3bb5904df0cbb7983389bcaf2ffd0b6
> Compiled by jenkins on 2017-11-29T15:28Z
> Compiled with protoc 2.5.0
> From source with checksum 90b73c4c185645c1f47b61f942230
> This command was run using 
> /usr/hdp/2.6.2.25-1/hadoop/hadoop-common-2.7.3.2.6.2.25-1.jar
>Reporter: Jay Hankinson
>Assignee: Vishwajeet Dusane
>Priority: Major
> Attachments: HDFS-13344-001.patch
>
>
> If you write to a file on an Azure ADL filesystem and close the file system 
> but not the file before the process exits, the next time you try to open the 
> file for append it fails with:
> Exception in thread "main" java.io.IOException: APPEND failed with error 
> 0x83090a16 (Failed to perform the requested operation because the file is 
> currently open in write mode by another user or process.). 
> [a67c6b32-e78b-4852-9fac-142a3e2ba963][2018-03-22T20:54:08.3520940-07:00]
>  The following moves a local file to HDFS if it doesn't exist, or appends 
> its contents if it does:
>  
> {code:java}
> public void addFile(String source, String dest, Configuration conf)
>     throws IOException {
>   FileSystem fileSystem = FileSystem.get(conf);
>   // Get the filename out of the source path.
>   String filename = source.substring(source.lastIndexOf('/') + 1);
>   // Build the destination path, including the filename.
>   if (dest.charAt(dest.length() - 1) != '/') {
>     dest = dest + "/" + filename;
>   } else {
>     dest = dest + filename;
>   }
>   // Append if the destination already exists, otherwise create it.
>   Path path = new Path(dest);
>   FSDataOutputStream out;
>   if (fileSystem.exists(path)) {
>     System.out.println("File " + dest + " already exists, appending");
>     out = fileSystem.append(path);
>   } else {
>     out = fileSystem.create(path);
>   }
>   // Copy the local file's contents to the destination stream.
>   InputStream in = new BufferedInputStream(
>       new FileInputStream(new File(source)));
>   byte[] b = new byte[1024];
>   int numBytes;
>   while ((numBytes = in.read(b)) > 0) {
>     out.write(b, 0, numBytes);
>   }
>   // Close the file system but not the file.
>   in.close();
>   // out.close();
>   fileSystem.close();
> }
> {code}
>  If "dest" is an adl:// location, invoking the function a second time (after 
> the process has exited) raises the error. If it's a regular hdfs:// file 
> system, it doesn't, as all locks are released. The same exception is also 
> raised if a subsequent append is done using: hdfs dfs -appendToFile.
> As I can't see a way to force lease recovery in this situation, this seems 
> like a bug. org.apache.hadoop.fs.adl.AdlFileSystem inherits close() from 
> org.apache.hadoop.fs.FileSystem
> [https://hadoop.apache.org/docs/r3.0.0/api/org/apache/hadoop/fs/adl/AdlFileSystem.html]
> which states: "Close this FileSystem instance. Will release any held locks." 
> This does not seem to be the case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13344) adl.AdlFilesystem.close() doesn't release locks on open files

2018-05-21 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HDFS-13344:
-
Attachment: HDFS-13344-001.patch

> adl.AdlFilesystem.close() doesn't release locks on open files
> -
>
> Key: HDFS-13344
> URL: https://issues.apache.org/jira/browse/HDFS-13344
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.7.3
> Environment: HDInsight on MS Azure:
>  
> Hadoop 2.7.3.2.6.2.25-1
> Subversion g...@github.com:hortonworks/hadoop.git -r 
> 1ceeb58bb3bb5904df0cbb7983389bcaf2ffd0b6
> Compiled by jenkins on 2017-11-29T15:28Z
> Compiled with protoc 2.5.0
> From source with checksum 90b73c4c185645c1f47b61f942230
> This command was run using 
> /usr/hdp/2.6.2.25-1/hadoop/hadoop-common-2.7.3.2.6.2.25-1.jar
>Reporter: Jay Hankinson
>Assignee: Vishwajeet Dusane
>Priority: Major
> Attachments: HDFS-13344-001.patch
>
>
> If you write to a file on an Azure ADL filesystem and close the file system 
> but not the file before the process exits, the next time you try to open the 
> file for append it fails with:
> Exception in thread "main" java.io.IOException: APPEND failed with error 
> 0x83090a16 (Failed to perform the requested operation because the file is 
> currently open in write mode by another user or process.). 
> [a67c6b32-e78b-4852-9fac-142a3e2ba963][2018-03-22T20:54:08.3520940-07:00]
>  The following moves a local file to HDFS if it doesn't exist, or appends 
> its contents if it does:
>  
> {code:java}
> public void addFile(String source, String dest, Configuration conf)
>     throws IOException {
>   FileSystem fileSystem = FileSystem.get(conf);
>   // Get the filename out of the source path.
>   String filename = source.substring(source.lastIndexOf('/') + 1);
>   // Build the destination path, including the filename.
>   if (dest.charAt(dest.length() - 1) != '/') {
>     dest = dest + "/" + filename;
>   } else {
>     dest = dest + filename;
>   }
>   // Append if the destination already exists, otherwise create it.
>   Path path = new Path(dest);
>   FSDataOutputStream out;
>   if (fileSystem.exists(path)) {
>     System.out.println("File " + dest + " already exists, appending");
>     out = fileSystem.append(path);
>   } else {
>     out = fileSystem.create(path);
>   }
>   // Copy the local file's contents to the destination stream.
>   InputStream in = new BufferedInputStream(
>       new FileInputStream(new File(source)));
>   byte[] b = new byte[1024];
>   int numBytes;
>   while ((numBytes = in.read(b)) > 0) {
>     out.write(b, 0, numBytes);
>   }
>   // Close the file system but not the file.
>   in.close();
>   // out.close();
>   fileSystem.close();
> }
> {code}
>  If "dest" is an adl:// location, invoking the function a second time (after 
> the process has exited) raises the error. If it's a regular hdfs:// file 
> system, it doesn't, as all locks are released. The same exception is also 
> raised if a subsequent append is done using: hdfs dfs -appendToFile.
> As I can't see a way to force lease recovery in this situation, this seems 
> like a bug. org.apache.hadoop.fs.adl.AdlFileSystem inherits close() from 
> org.apache.hadoop.fs.FileSystem
> [https://hadoop.apache.org/docs/r3.0.0/api/org/apache/hadoop/fs/adl/AdlFileSystem.html]
> which states: "Close this FileSystem instance. Will release any held locks." 
> This does not seem to be the case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-57) TestContainerCloser#testRepeatedClose and TestContainerCloser#testCleanupThreadRuns fail consistently

2018-05-21 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-57?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482477#comment-16482477
 ] 

Mukul Kumar Singh commented on HDDS-57:
---

Thanks for working on this [~shashikant]. +1, the patch looks good to me.
I will commit this shortly.

> TestContainerCloser#testRepeatedClose and 
> TestContainerCloser#testCleanupThreadRuns fail consistently
> -
>
> Key: HDDS-57
> URL: https://issues.apache.org/jira/browse/HDDS-57
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-57.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-88) Create separate message structure to represent ports in DatanodeDetails

2018-05-21 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-88?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482447#comment-16482447
 ] 

Nanda kumar commented on HDDS-88:
-

Thanks [~ajayydv] & [~anu] for reviewing.

bq. For message Port in hdds.proto, bind name to enum type
Initially I was also thinking of having the enum in hdds.proto (and using the 
same in DatanodeDetails), but that would make the {{Port}} message in 
hdds.proto very specific to Datanodes. As it stands it has no predefined set 
of values and can be used by SCM, KSM, or anywhere else we need to represent a 
port in protobuf. To make the port names type-safe, the enum is introduced in 
the Java code (DatanodeDetails).
bq. Lets implement hashcode as well
Sure, will update the patch with a hashCode implementation.
bq. wrapper for setRatis port and SetContainerPort and SetRestPort
Good idea, it will make our lives easier. Will create a follow-up jira for 
that.
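
For illustration, roughly the shape being described (a sketch with assumed 
names, not the committed HDDS-88 code): the protobuf {{Port}} stays a generic 
name/value pair, while a Java-side class constrains the names with an enum and 
implements {{hashCode}} consistently with {{equals}}.

{code:java}
// Sketch only: illustrative class, not the actual DatanodeDetails.Port.
public final class Port {
  // Type-safe port names live on the Java side, not in the .proto file,
  // so the protobuf message stays reusable by SCM, KSM, etc.
  public enum Name { STANDALONE, RATIS, REST }

  private final Name name;
  private final Integer value;

  public Port(Name name, Integer value) {
    this.name = name;
    this.value = value;
  }

  public Name getName() { return name; }
  public Integer getValue() { return value; }

  @Override
  public boolean equals(Object other) {
    // A datanode exposes each named port once, so identity is the name.
    return other instanceof Port && ((Port) other).name == name;
  }

  @Override
  public int hashCode() {
    // Consistent with equals(): hash on the name only.
    return name.hashCode();
  }
}
{code}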


> Create separate message structure to represent ports in DatanodeDetails 
> 
>
> Key: HDDS-88
> URL: https://issues.apache.org/jira/browse/HDDS-88
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-88.000.patch
>
>
> DataNode uses many ports which have to be set in DatanodeDetails and sent to 
> the SCM. These port details can be extracted into a separate protobuf message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-57) TestContainerCloser#testRepeatedClose and TestContainerCloser#testCleanupThreadRuns fail consistently

2018-05-21 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-57?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-57:
--
Fix Version/s: 0.2.1

> TestContainerCloser#testRepeatedClose and 
> TestContainerCloser#testCleanupThreadRuns fail consistently
> -
>
> Key: HDDS-57
> URL: https://issues.apache.org/jira/browse/HDDS-57
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-57.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-57) TestContainerCloser#testRepeatedClose and TestContainerCloser#testCleanupThreadRuns fail consistently

2018-05-21 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-57?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482414#comment-16482414
 ] 

genericqa commented on HDDS-57:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
40s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 29s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.ozone.container.replication.TestContainerSupervisor |
|   | hadoop.ozone.container.common.TestEndPoint |
|   | hadoop.hdds.scm.node.TestContainerPlacement |
|   | hadoop.hdds.scm.node.TestNodeManager |
|   | hadoop.hdds.scm.node.TestSCMNodeStorageStatMap |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-57 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924330/HDDS-57.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7b854cb6f10e 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ba84284 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDDS-Build/150/artifact/out/branch-findbugs-hadoop-hdds_server-scm-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/150/artifact/out/patch-unit-hadoop-hdds_server-scm.txt
 |
|  Test Result
