[jira] [Updated] (HDDS-45) Removal of old OzoneRestClient

2018-05-18 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-45?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-45:
-
Status: Patch Available  (was: Open)

> Removal of old OzoneRestClient
> --
>
> Key: HDDS-45
> URL: https://issues.apache.org/jira/browse/HDDS-45
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nanda kumar
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-45.001.patch
>
>
> Once the new REST-based OzoneClient is ready, the old OzoneRestClient can be 
> removed. This Jira tracks that work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-70) Fix config names for secure ksm and scm

2018-05-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-70?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16481449#comment-16481449
 ] 

genericqa commented on HDDS-70:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
26s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m  
5s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
21s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
17s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test hadoop-ozone/acceptance-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
7s{color} | {color:red} hadoop-hdds/common in HDDS-4 has 19 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
47s{color} | {color:red} hadoop-hdds/server-scm in HDDS-4 has 1 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
47s{color} | {color:red} hadoop-ozone/ozone-manager in HDDS-4 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
15s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
33s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test hadoop-ozone/acceptance-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
13s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit 

[jira] [Commented] (HDFS-13591) TestDFSShell#testSetrepLow fails on Windows

2018-05-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16481399#comment-16481399
 ] 

Íñigo Goiri commented on HDFS-13591:


This is similar to what [~giovanni.fumarola] found in YARN-8327.

> TestDFSShell#testSetrepLow fails on Windows
> ---
>
> Key: HDFS-13591
> URL: https://issues.apache.org/jira/browse/HDFS-13591
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13591.000.patch
>
>
> https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs/TestDFSShell/testSetrepLow/
>  shows
> {code:java}
> Error message is not the expected error message 
> expected:<...testFileForSetrepLow[]
> > but was:<...testFileForSetrepLow[
> ]
> >
> {code}
>  






[jira] [Commented] (HDDS-70) Fix config names for secure ksm and scm

2018-05-18 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-70?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16481389#comment-16481389
 ] 

Ajay Kumar commented on HDDS-70:


Patch v3 updates the docker config file for the acceptance test.

> Fix config names for secure ksm and scm
> ---
>
> Key: HDDS-70
> URL: https://issues.apache.org/jira/browse/HDDS-70
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-70-HDDS-4.00.patch, HDDS-70-HDDS-4.01.patch, 
> HDDS-70-HDDS-4.02.patch
>
>
> There are some inconsistencies in the ksm and scm configs for Kerberos. This 
> Jira intends to correct them.






[jira] [Updated] (HDDS-70) Fix config names for secure ksm and scm

2018-05-18 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-70?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-70:
---
Attachment: HDDS-70-HDDS-4.02.patch

> Fix config names for secure ksm and scm
> ---
>
> Key: HDDS-70
> URL: https://issues.apache.org/jira/browse/HDDS-70
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-70-HDDS-4.00.patch, HDDS-70-HDDS-4.01.patch, 
> HDDS-70-HDDS-4.02.patch
>
>
> There are some inconsistencies in the ksm and scm configs for Kerberos. This 
> Jira intends to correct them.






[jira] [Commented] (HDDS-71) Send ContainerType to Datanode during container creation

2018-05-18 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-71?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16481386#comment-16481386
 ] 

Ajay Kumar commented on HDDS-71:


LGTM. [~msingh] I don't think we need to expose the DB type in protobuf, as it 
is an internal implementation detail of the DN. I'm not sure we have a valid 
case where the SCM would direct a DN to use a specific DB type.

> Send ContainerType to Datanode during container creation
> 
>
> Key: HDDS-71
> URL: https://issues.apache.org/jira/browse/HDDS-71
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-71.00.patch, HDDS-71.01.patch
>
>







[jira] [Commented] (HDFS-13591) TestDFSShell#testSetrepLow fails on Windows

2018-05-18 Thread Lukas Majercak (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16481359#comment-16481359
 ] 

Lukas Majercak commented on HDFS-13591:
---

Should we use .startsWith/.contains instead?
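As a hedged sketch of that suggestion (the helper and the sample strings are hypothetical, not the actual TestDFSShell code): normalize line separators before comparing, or match a stable prefix, so the CRLF endings produced on Windows do not break an exact equality check.

```java
// Sketch: compare an error message without tripping over CRLF vs LF.
public class SetrepMessageCheck {
    // Collapse Windows (\r\n) and bare-CR line endings to \n.
    static String normalize(String s) {
        return s.replace("\r\n", "\n").replace('\r', '\n');
    }

    public static void main(String[] args) {
        String expected = "Replication must be >= 1: testFileForSetrepLow\n";
        String onWindows = "Replication must be >= 1: testFileForSetrepLow\r\n";

        System.out.println(expected.equals(onWindows));           // exact match fails on Windows
        System.out.println(normalize(expected).equals(
                normalize(onWindows)));                           // normalized match succeeds
        System.out.println(onWindows.startsWith(
                "Replication must be >= 1"));                     // prefix match also works
    }
}
```

Either form keeps the assertion meaningful while staying line-ending agnostic.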

> TestDFSShell#testSetrepLow fails on Windows
> ---
>
> Key: HDFS-13591
> URL: https://issues.apache.org/jira/browse/HDFS-13591
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13591.000.patch
>
>
> https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs/TestDFSShell/testSetrepLow/
>  shows
> {code:java}
> Error message is not the expected error message 
> expected:<...testFileForSetrepLow[]
> > but was:<...testFileForSetrepLow[
> ]
> >
> {code}
>  






[jira] [Commented] (HDDS-87) Fix test failures with uninitialized storageLocation field in storageReport

2018-05-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-87?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16481227#comment-16481227
 ] 

genericqa commented on HDDS-87:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 15s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.ozone.container.replication.TestContainerSupervisor |
|   | hadoop.hdds.scm.container.closer.TestContainerCloser |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-87 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924189/HDDS-87.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2bdae20d2b2f 3.13.0-141-generic #190-Ubuntu SMP Fri Jan 19 
12:52:38 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 89f5911 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDDS-Build/141/artifact/out/branch-findbugs-hadoop-hdds_server-scm-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/141/artifact/out/patch-unit-hadoop-hdds_server-scm.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/141/testReport/ |
| Max. process+thread count | 304 (vs. ulimit of 1) |
| modules | C: 

[jira] [Created] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2018-05-18 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDFS-13596:
-

 Summary: NN restart fails after RollingUpgrade from 2.x to 3.x
 Key: HDFS-13596
 URL: https://issues.apache.org/jira/browse/HDFS-13596
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Reporter: Hanisha Koneru


After a rolling upgrade of the NN from 2.x to 3.x, if the NN is restarted, it 
fails while replaying edit logs.
 * After NN is started with rollingUpgrade, the layoutVersion written to 
editLogs (before finalizing the upgrade) is the pre-upgrade layout version (so 
as to support downgrade).
 * When writing transactions to the log, the NN writes them in the current 
layout version. In 3.x, erasureCoding bits are added to the editLog 
transactions.
 * So any edit log written after the upgrade, but before finalizing it, will 
have the old layout version but the new transaction format.
 * When NN is restarted and the edit logs are replayed, the NN reads the old 
layout version from the editLog file. When parsing the transactions, it assumes 
that the transactions are also from the previous layout and hence skips parsing 
the erasureCoding bits.
 * This cascades into reading the wrong set of bits for other fields and leads 
to NN shutting down.
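The mismatch the bullets describe can be modeled in a few lines. This is a toy sketch, not the real edit-log format: the record layout, version numbers, and names are invented for illustration. A writer emits a record in the new format, but a reader that trusts the file header's (old) layout version misparses it, because the extra erasure-coding byte bleeds into the next field.

```java
// Toy model: records gain a 1-byte erasure-coding policy id in "layout v3",
// but during a rolling upgrade the file header still advertises "layout v2".
public class LayoutMismatch {
    // Encode a record: optional ec-policy byte (v3 only), then a 4-byte inode id.
    static byte[] writeRecord(boolean v3Format, int inodeId) {
        byte[] rec = new byte[v3Format ? 5 : 4];
        int off = 0;
        if (v3Format) rec[off++] = 7;            // erasure-coding policy id, new in v3
        for (int i = 0; i < 4; i++)
            rec[off + i] = (byte) (inodeId >>> (24 - 8 * i));
        return rec;
    }

    // Decode the inode id, trusting the layout version from the file header.
    static int readInodeId(byte[] rec, int headerLayout) {
        int off = headerLayout >= 3 ? 1 : 0;     // skip the ec byte only if header says v3
        int id = 0;
        for (int i = 0; i < 4; i++)
            id = (id << 8) | (rec[off + i] & 0xff);
        return id;
    }

    public static void main(String[] args) {
        byte[] rec = writeRecord(true, 16389);   // written in the new (v3) format
        System.out.println(readInodeId(rec, 3)); // header says v3: decodes 16389 correctly
        System.out.println(readInodeId(rec, 2)); // header says v2: ec byte corrupts the id
    }
}
```

Once one field is misread, every field after it is shifted, which matches the cascading corruption seen in the error output below.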

Sample error output:
{code:java}
java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
length 16
 at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
 at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:74)
 at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:86)
 at 
org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.<init>(RetryCache.java:163)
 at 
org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
 at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
 at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
 at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
 at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
 at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
 at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
2018-05-17 19:10:06,522 WARN 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
loading fsimage
java.io.IOException: java.lang.IllegalStateException: Cannot skip to less than 
the current value (=16389), where newValue=16388
 at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.resetLastInodeId(FSDirectory.java:1945)
 at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:298)
 at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
 at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
 at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
 at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
Caused by: java.lang.IllegalStateException: Cannot skip to less than the 
current value (=16389), where newValue=16388
 at org.apache.hadoop.util.SequentialNumber.skipTo(SequentialNumber.java:58)
 at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.resetLastInodeId(FSDirectory.java:1943)
{code}





[jira] [Commented] (HDFS-13589) Add dfsAdmin command to query if "upgrade" is finalized

2018-05-18 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16481198#comment-16481198
 ] 

Arpit Agarwal commented on HDFS-13589:
--

The patch looks good. A few comments:
# Let's update the javadoc for {{ClientNameNodeProtocol#upgradeStatus}} to 
clarify when 'true' is returned.
# Minor coding style issues in {{DFSAdmin#getUpgradeStatus}}: add a space 
after the opening braces in try/catch, and a space after {{if}}.
# Minor: errors should use System.err.println
{code}
  System.out.println("Getting upgrade status failed for " +
  proxy.getAddress());
{code}
# We should add unit tests for the new command and for protobuf 
[de]serialization of the new message.
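For comment 3, the distinction is only the output stream; a minimal sketch (the method name and address are stand-ins, not the actual DFSAdmin code):

```java
// Errors go to stderr so stdout stays clean for scriptable command output.
public class UpgradeStatusError {
    static void reportFailure(String address) {
        System.err.println("Getting upgrade status failed for " + address);
    }

    public static void main(String[] args) {
        reportFailure("nn1.example.com:8020");  // written to stderr, not stdout
    }
}
```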


> Add dfsAdmin command to query if "upgrade" is finalized
> ---
>
> Key: HDFS-13589
> URL: https://issues.apache.org/jira/browse/HDFS-13589
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13589.001.patch, HDFS-13589.002.patch
>
>
> When we upgrade a Namenode (non-rolling upgrade), we should be able to query 
> whether the upgrade has been finalized.






[jira] [Assigned] (HDDS-9) Add OzoneManager Block Token Support

2018-05-18 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-9?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-9:
-

Assignee: Xiaoyu Yao

> Add OzoneManager Block Token Support
> 
>
> Key: HDDS-9
> URL: https://issues.apache.org/jira/browse/HDDS-9
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.3.0
>
>







[jira] [Assigned] (HDDS-8) Add OzoneManager Delegation Token support

2018-05-18 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-8?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-8:
-

Assignee: Ajay Kumar  (was: Xiaoyu Yao)

> Add OzoneManager Delegation Token support
> -
>
> Key: HDDS-8
> URL: https://issues.apache.org/jira/browse/HDDS-8
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
>







[jira] [Updated] (HDDS-70) Fix config names for secure ksm and scm

2018-05-18 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-70?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-70:
---
Issue Type: Sub-task  (was: Bug)
Parent: HDDS-4

> Fix config names for secure ksm and scm
> ---
>
> Key: HDDS-70
> URL: https://issues.apache.org/jira/browse/HDDS-70
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-70-HDDS-4.00.patch, HDDS-70-HDDS-4.01.patch
>
>
> There are some inconsistencies in the ksm and scm configs for Kerberos. This 
> Jira intends to correct them.






[jira] [Commented] (HDDS-87) Fix test failures with uninitialized storageLocation field in storageReport

2018-05-18 Thread Shashikant Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-87?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16481164#comment-16481164
 ] 

Shashikant Banerjee commented on HDDS-87:
-

Thanks [~bharatviswa] for the review. Patch v1 addresses your review comments.

> Fix test failures with uninitialized storageLocation field in storageReport
> ---
>
> Key: HDDS-87
> URL: https://issues.apache.org/jira/browse/HDDS-87
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-87.00.patch, HDDS-87.01.patch
>
>







[jira] [Updated] (HDDS-87) Fix test failures with uninitialized storageLocation field in storageReport

2018-05-18 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-87?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-87:

Attachment: HDDS-87.01.patch

> Fix test failures with uninitialized storageLocation field in storageReport
> ---
>
> Key: HDDS-87
> URL: https://issues.apache.org/jira/browse/HDDS-87
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-87.00.patch, HDDS-87.01.patch
>
>







[jira] [Updated] (HDDS-7) Enable kerberos auth for Ozone client in hadoop rpc

2018-05-18 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-7?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-7:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~ajayydv] for the contribution. I've committed the patch to the 
feature branch.

> Enable kerberos auth for Ozone client in hadoop rpc 
> 
>
> Key: HDDS-7
> URL: https://issues.apache.org/jira/browse/HDDS-7
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Client, SCM Client
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-4-HDDS-7-poc.patch, HDDS-7-HDDS-4.00.patch, 
> HDDS-7-HDDS-4.01.patch, HDDS-7-HDDS-4.02.patch
>
>
> Enable kerberos auth for Ozone client in hadoop rpc.






[jira] [Commented] (HDDS-90) Create ContainerData, Container classes

2018-05-18 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-90?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16481131#comment-16481131
 ] 

Bharat Viswanadham commented on HDDS-90:


This is dependent on HDDS-71.

> Create ContainerData, Container classes
> ---
>
> Key: HDDS-90
> URL: https://issues.apache.org/jira/browse/HDDS-90
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-90.00.patch
>
>
> This Jira is to create the following classes:
> * ContainerData (generic fields for different container types)
> * KeyValueContainerData (extends ContainerData, with fields specific to 
> KeyValueContainer)
> * Container (for container meta operations)
> * KeyValueContainer (extends Container)
> 
> The implementation of KeyValueContainer is not done in this Jira, as it 
> requires the volume classes.
>  
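The class layout the description enumerates might look roughly like this. The class names come from the Jira; all fields, constructors, and methods are illustrative assumptions, not the actual patch:

```java
// Generic metadata shared by every container type (fields are illustrative).
abstract class ContainerData {
    private final long containerId;
    protected ContainerData(long containerId) { this.containerId = containerId; }
    long getContainerId() { return containerId; }
}

// Metadata specific to key-value containers, e.g. the path of its metadata DB.
class KeyValueContainerData extends ContainerData {
    private final String dbPath;
    KeyValueContainerData(long containerId, String dbPath) {
        super(containerId);
        this.dbPath = dbPath;
    }
    String getDbPath() { return dbPath; }
}

// Container meta operations; concrete container types supply the behavior.
abstract class Container {
    abstract ContainerData getContainerData();
    abstract void create();
}

// Key-value flavor of Container; implementation deferred until volume classes exist.
class KeyValueContainer extends Container {
    private final KeyValueContainerData data;
    KeyValueContainer(KeyValueContainerData data) { this.data = data; }
    @Override ContainerData getContainerData() { return data; }
    @Override void create() { /* deferred: needs volume classes */ }
}

class ContainerDemo {
    public static void main(String[] args) {
        Container c = new KeyValueContainer(new KeyValueContainerData(1L, "/data/1/db"));
        System.out.println(c.getContainerData().getContainerId());
    }
}
```

Splitting metadata (ContainerData) from operations (Container) lets new container types add their own data fields without touching the operational interface.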






[jira] [Commented] (HDDS-7) Enable kerberos auth for Ozone client in hadoop rpc

2018-05-18 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-7?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16481126#comment-16481126
 ] 

Xiaoyu Yao commented on HDDS-7:
---

Thanks [~ajayydv] for the update. +1 for v2 patch. I will commit it shortly.

> Enable kerberos auth for Ozone client in hadoop rpc 
> 
>
> Key: HDDS-7
> URL: https://issues.apache.org/jira/browse/HDDS-7
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Client, SCM Client
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-4-HDDS-7-poc.patch, HDDS-7-HDDS-4.00.patch, 
> HDDS-7-HDDS-4.01.patch, HDDS-7-HDDS-4.02.patch
>
>
> Enable kerberos auth for Ozone client in hadoop rpc.






[jira] [Updated] (HDDS-90) Create ContainerData, Container classes

2018-05-18 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-90?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-90:
---
Attachment: HDDS-90.00.patch

> Create ContainerData, Container classes
> ---
>
> Key: HDDS-90
> URL: https://issues.apache.org/jira/browse/HDDS-90
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-90.00.patch
>
>
> This Jira is to create the following classes:
> * ContainerData (generic fields for different container types)
> * KeyValueContainerData (extends ContainerData, with fields specific to 
> KeyValueContainer)
> * Container (for container meta operations)
> * KeyValueContainer (extends Container)
> 
> The implementation of KeyValueContainer is not done in this Jira, as it 
> requires the volume classes.
>  






[jira] [Updated] (HDDS-90) Create ContainerData, Container classes

2018-05-18 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-90?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-90:
---
Summary: Create ContainerData, Container classes  (was: Create 
ContainerData, KeyValueContainerData)

> Create ContainerData, Container classes
> ---
>
> Key: HDDS-90
> URL: https://issues.apache.org/jira/browse/HDDS-90
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is to create the following classes:
> ContainerData (to have generic fields for different types of containers)
> KeyValueContainerData (To extend ContainerData and have fields specific to 
> KeyValueContainer)
> Container (For Container meta operations)
> KeyValueContainer(To extend Container)
>  
> The implementation of KeyValueContainer is not done in this Jira, as it 
> requires the volume classes.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-05-18 Thread BELUGA BEHR (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16481102#comment-16481102
 ] 

BELUGA BEHR commented on HDFS-13448:


 [~shahrs87] [~ajayydv] [~daryn] [~kihwal]  Please consider including this in 
the project.  Thanks.

> HDFS Block Placement - Ignore Locality for First Block Replica
> --
>
> Key: HDFS-13448
> URL: https://issues.apache.org/jira/browse/HDFS-13448
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: block placement, hdfs-client
>Affects Versions: 2.9.0, 3.0.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13448.1.patch, HDFS-13448.2.patch, 
> HDFS-13448.3.patch, HDFS-13448.4.patch, HDFS-13448.5.patch, 
> HDFS-13448.6.patch, HDFS-13448.7.patch, HDFS-13448.8.patch
>
>
> According to the HDFS Block Place Rules:
> {quote}
> /**
>  * The replica placement strategy is that if the writer is on a datanode,
>  * the 1st replica is placed on the local machine, 
>  * otherwise a random datanode. The 2nd replica is placed on a datanode
>  * that is on a different rack. The 3rd replica is placed on a datanode
>  * which is on a different node of the rack as the second replica.
>  */
> {quote}
> However, there is a hint for the hdfs-client that allows the block placement 
> request to not put a block replica on the local datanode _where 'local' means 
> the same host as the client is being run on._
> {quote}
>   /**
>* Advise that a block replica NOT be written to the local DataNode where
>* 'local' means the same host as the client is being run on.
>*
>* @see CreateFlag#NO_LOCAL_WRITE
>*/
> {quote}
> I propose that we add a new flag that allows the hdfs-client to request that 
> the first block replica be placed on a random DataNode in the cluster.  The 
> subsequent block replicas should follow the normal block placement rules.
> The issue is that when {{NO_LOCAL_WRITE}} is enabled, the first block 
> replica is not placed on the local node, but it is still placed on the local 
> rack.  Where this comes into play is when you have, for example, a Flume 
> agent that is loading data into HDFS.
> If the Flume agent is running on a DataNode, then by default, the DataNode 
> local to the Flume agent will always get the first block replica and this 
> leads to un-even block placements, with the local node always filling up 
> faster than any other node in the cluster.
> Modifying this example, if the DataNode is removed from the host where the 
> Flume agent is running, or {{NO_LOCAL_WRITE}} is enabled by Flume, then 
> the default block placement policy will still prefer the local rack.  This 
> remedies the situation only in so far as the first block replica will now 
> always be placed on a DataNode on the local rack.
> This new flag would allow a single Flume agent to distribute the blocks 
> randomly, evenly, over the entire cluster instead of hot-spotting the local 
> node or the local rack.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-90) Create ContainerData, KeyValueContainerData

2018-05-18 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-90?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-90:
--

Assignee: Bharat Viswanadham

> Create ContainerData, KeyValueContainerData
> ---
>
> Key: HDDS-90
> URL: https://issues.apache.org/jira/browse/HDDS-90
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is to create the following classes:
> ContainerData (to have generic fields for different types of containers)
> KeyValueContainerData (To extend ContainerData and have fields specific to 
> KeyValueContainer)
> Container (For Container meta operations)
> KeyValueContainer(To extend Container)
>  
> The implementation of KeyValueContainer is not done in this Jira, as it 
> requires the volume classes.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-90) Create ContainerData, KeyValueContainerData

2018-05-18 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-90:
--

 Summary: Create ContainerData, KeyValueContainerData
 Key: HDDS-90
 URL: https://issues.apache.org/jira/browse/HDDS-90
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Bharat Viswanadham


This Jira is to create the following classes:

ContainerData (to have generic fields for different types of containers)

KeyValueContainerData (To extend ContainerData and have fields specific to 
KeyValueContainer)

Container (For Container meta operations)

KeyValueContainer(To extend Container)

 

The implementation of KeyValueContainer is not done in this Jira, as it requires 
the volume classes.

 
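A minimal sketch of the class layout described above (fields and method names 
are illustrative only; the real HDDS classes carry many more fields and 
operations):

```java
// Illustrative sketch of the proposed hierarchy; not the actual HDDS code.

// Generic fields shared by all container types.
abstract class ContainerData {
    private final long containerId;

    ContainerData(long containerId) {
        this.containerId = containerId;
    }

    long getContainerId() {
        return containerId;
    }
}

// Fields specific to a key-value container.
class KeyValueContainerData extends ContainerData {
    private final String metadataPath; // hypothetical field for illustration

    KeyValueContainerData(long containerId, String metadataPath) {
        super(containerId);
        this.metadataPath = metadataPath;
    }

    String getMetadataPath() {
        return metadataPath;
    }
}

// Container meta operations.
interface Container {
    void create();
    void delete();
}

// Key-value implementation; left unimplemented in this Jira because it
// needs the volume classes.
class KeyValueContainer implements Container {
    private final KeyValueContainerData data;

    KeyValueContainer(KeyValueContainerData data) {
        this.data = data;
    }

    @Override
    public void create() {
        throw new UnsupportedOperationException("needs volume classes");
    }

    @Override
    public void delete() {
        throw new UnsupportedOperationException("needs volume classes");
    }

    KeyValueContainerData getContainerData() {
        return data;
    }
}
```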



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-89) Create ozone specific inline documentation as part of the build

2018-05-18 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-89?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16481083#comment-16481083
 ] 

Anu Engineer edited comment on HDDS-89 at 5/18/18 7:21 PM:
---

[~elek] Thanks for the patch. Here are some comments.
 # [ERROR] Failed to execute goal 
org.codehaus.mojo:exec-maven-plugin:1.6.0:exec (default) on project 
hadoop-ozone-docs: Command execution failed. Cannot run program 
"dev-support/bin/generate-site.sh" (in directory 
"/Users/aengineer/diskBalancer/hadoop-ozone/docs"): error=13, Permission denied 
-> [Help 1]
 # had to fix it with  ~/d/h/d/d/bin> chmod +x generate-site.sh
 # if I run
{noformat}
cd ./hadoop-dist/target/compose/ozone
docker-compose up
{noformat}

followed by [http://localhost:9874|http://localhost:9874/] I am not able to see 
the docs link.
What am I doing wrong?

 

Also, the patch can be renamed to HDDS-89.001.patch, thanks.

 

 


was (Author: anu):
[~elek] Thanks for the patch. Here are some comments.

#  [ERROR] Failed to execute goal 
org.codehaus.mojo:exec-maven-plugin:1.6.0:exec (default) on project 
hadoop-ozone-docs: Command execution failed. Cannot run program 
"dev-support/bin/generate-site.sh" (in directory 
"/Users/aengineer/diskBalancer/hadoop-ozone/docs"): error=13, Permission denied 
-> [Help 1]
#  had to fix it with  ~/d/h/d/d/bin> chmod +x generate-site.sh
# if I run
{noformat}
cd ./hadoop-dist/target/compose/ozone
docker-compose up
{noformat}

followed by [http://localhost:9874|http://localhost:9874/] I am not able to see 
the docs link.
What am I doing wrong?

 

 

 

> Create ozone specific inline documentation as part of the build
> --
>
> Key: HDDS-89
> URL: https://issues.apache.org/jira/browse/HDDS-89
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-89-HDDS-48.001.patch
>
>
> As the ozone/hdds distribution is separated from the hadoop distribution, we 
> need a separate documentation package. The idea is to make the documentation 
> available from the scm/ksm web pages, but later it should also be uploaded to 
> the hadoop site together with the release artifacts.
> This patch creates the HTML pages from the existing markdown files in the 
> build process. It's an optional step, but if the documentation is available 
> it will be displayed from the scm/ksm web page.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-89) Create ozone specific inline documentation as part of the build

2018-05-18 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-89?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16481083#comment-16481083
 ] 

Anu Engineer commented on HDDS-89:
--

[~elek] Thanks for the patch. Here are some comments.

#  [ERROR] Failed to execute goal 
org.codehaus.mojo:exec-maven-plugin:1.6.0:exec (default) on project 
hadoop-ozone-docs: Command execution failed. Cannot run program 
"dev-support/bin/generate-site.sh" (in directory 
"/Users/aengineer/diskBalancer/hadoop-ozone/docs"): error=13, Permission denied 
-> [Help 1]
#  had to fix it with  ~/d/h/d/d/bin> chmod +x generate-site.sh
# if I run
{noformat}
cd ./hadoop-dist/target/compose/ozone
docker-compose up
{noformat}

followed by [http://localhost:9874|http://localhost:9874/] I am not able to see 
the docs link.
What am I doing wrong?

 

 

 

> Create ozone specific inline documentation as part of the build
> --
>
> Key: HDDS-89
> URL: https://issues.apache.org/jira/browse/HDDS-89
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-89-HDDS-48.001.patch
>
>
> As the ozone/hdds distribution is separated from the hadoop distribution, we 
> need a separate documentation package. The idea is to make the documentation 
> available from the scm/ksm web pages, but later it should also be uploaded to 
> the hadoop site together with the release artifacts.
> This patch creates the HTML pages from the existing markdown files in the 
> build process. It's an optional step, but if the documentation is available 
> it will be displayed from the scm/ksm web page.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-71) Send ContainerType to Datanode during container creation

2018-05-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-71?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16481034#comment-16481034
 ] 

genericqa commented on HDDS-71:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
53s{color} | {color:red} hadoop-hdds/common in trunk has 19 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-71 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924152/HDDS-71.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 3f2e826184d5 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 

[jira] [Updated] (HDDS-85) Send Container State Info while sending the container report from Datanode to SCM

2018-05-18 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-85?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-85:
-
Fix Version/s: 0.2.1

> Send Container State Info while sending the container report from Datanode to 
> SCM
> -
>
> Key: HDDS-85
> URL: https://issues.apache.org/jira/browse/HDDS-85
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-85.00.patch, HDDS-85.01.patch
>
>
> While sending the container report, the container lifecycle state info is not 
> sent. This information will be required in the event of a datanode loss/disk 
> loss to figure out the open containers which need to be closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-87) Fix test failures with uninitialized storageLocation field in storageReport

2018-05-18 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-87?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-87:
-
Fix Version/s: 0.2.1

> Fix test failures with uninitialized storageLocation field in storageReport
> ---
>
> Key: HDDS-87
> URL: https://issues.apache.org/jira/browse/HDDS-87
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-87.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-88) Create separate message structure to represent ports in DatanodeDetails

2018-05-18 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-88?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16481003#comment-16481003
 ] 

Ajay Kumar commented on HDDS-88:


[~nandakumar131] thanks for working on this. 
* For the Port message in hdds.proto, bind the name to an enum type, i.e. by 
adding an enum to the message. For long-term compatibility I suggest adding a 
new entry "UNKNOWN" as the first entry of the enum (both in the proto class and 
in DatanodeDetails.Port.Name). 
* The findbugs warning for not implementing hashCode along with equals for 
DatanodeDetails.Port is valid. Let's implement hashCode as well.
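The two suggestions above can be sketched in plain Java (the enum entries and 
fields are illustrative; the real DatanodeDetails.Port differs):

```java
import java.util.Objects;

// Sketch of the review suggestions; not the actual DatanodeDetails.Port code.
class Port {
    // UNKNOWN is deliberately the first entry so that, long term, readers can
    // map unrecognized values onto it.
    enum Name { UNKNOWN, STANDALONE, RATIS, REST }

    private final Name name;
    private final int value;

    Port(Name name, int value) {
        this.name = name;
        this.value = value;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (!(o instanceof Port)) {
            return false;
        }
        Port other = (Port) o;
        return value == other.value && name == other.name;
    }

    // Implemented alongside equals(), as the findbugs warning asks;
    // the two methods must stay consistent.
    @Override
    public int hashCode() {
        return Objects.hash(name, value);
    }
}
```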

> Create separate message structure to represent ports in DatanodeDetails 
> 
>
> Key: HDDS-88
> URL: https://issues.apache.org/jira/browse/HDDS-88
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-88.000.patch
>
>
> DataNode uses many ports which have to be set in DatanodeDetails and sent to 
> SCM. These port details can be extracted into a separate protobuf message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13590) Backport HDFS-12378 to branch-2

2018-05-18 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13590:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: (was: 2.9.1)
   (was: 2.9.0)
   2.10.0
   Status: Resolved  (was: Patch Available)

Thanks [~lukmajercak] for the backport.
Committed to branch-2 and branch-2.9.

> Backport HDFS-12378 to branch-2
> ---
>
> Key: HDFS-13590
> URL: https://issues.apache.org/jira/browse/HDFS-13590
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, test
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.10.0, 2.9.2
>
> Attachments: HDFS-13590_branch-2.000.patch, 
> image-2018-05-18-11-01-05-739.png
>
>
> The unit tests are flaky in 2.9. We should backport this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13590) Backport HDFS-12378 to branch-2

2018-05-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480993#comment-16480993
 ] 

Íñigo Goiri commented on HDFS-13590:


Thanks [~lukmajercak], given this is already running in 3.X with no issues, 
this should be safe to add.
+1 on the backport.
Applying to branch-2 and branch-2.9.


> Backport HDFS-12378 to branch-2
> ---
>
> Key: HDFS-13590
> URL: https://issues.apache.org/jira/browse/HDFS-13590
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, test
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.9.0, 2.9.1, 2.9.2
>
> Attachments: HDFS-13590_branch-2.000.patch, 
> image-2018-05-18-11-01-05-739.png
>
>
> The unit tests are flaky in 2.9. We should backport this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13590) Backport HDFS-12378 to branch-2

2018-05-18 Thread Lukas Majercak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-13590:
--
Attachment: image-2018-05-18-11-01-05-739.png

> Backport HDFS-12378 to branch-2
> ---
>
> Key: HDFS-13590
> URL: https://issues.apache.org/jira/browse/HDFS-13590
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, test
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.9.0, 2.9.1, 2.9.2
>
> Attachments: HDFS-13590_branch-2.000.patch, 
> image-2018-05-18-11-01-05-739.png
>
>
> The unit tests are flaky in 2.9. We should backport this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13590) Backport HDFS-12378 to branch-2

2018-05-18 Thread Lukas Majercak (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480988#comment-16480988
 ] 

Lukas Majercak commented on HDFS-13590:
---

One of the tests affected by this is TestPipelinesFailover. I ran the suite 
20 times with the patch applied; it seems to run fine:

!image-2018-05-18-11-01-05-739.png!

> Backport HDFS-12378 to branch-2
> ---
>
> Key: HDFS-13590
> URL: https://issues.apache.org/jira/browse/HDFS-13590
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, test
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.9.0, 2.9.1, 2.9.2
>
> Attachments: HDFS-13590_branch-2.000.patch, 
> image-2018-05-18-11-01-05-739.png
>
>
> The unit tests are flaky in 2.9. We should backport this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-13595) Edit tailing period configuration should accept time units

2018-05-18 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen resolved HDFS-13595.

Resolution: Invalid

This is already done. I looked at the wrong branch, my mistake.

> Edit tailing period configuration should accept time units
> --
>
> Key: HDFS-13595
> URL: https://issues.apache.org/jira/browse/HDFS-13595
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
>
> The {{dfs.ha.tail-edits.period}} config should accept time units so that it 
> can be specified more easily across a wide range. In particular, for 
> HDFS-13150 it is useful to have a period shorter than 1 second, which is not 
> currently possible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-13595) Edit tailing period configuration should accept time units

2018-05-18 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13595 started by Erik Krogen.
--
> Edit tailing period configuration should accept time units
> --
>
> Key: HDFS-13595
> URL: https://issues.apache.org/jira/browse/HDFS-13595
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
>
> The {{dfs.ha.tail-edits.period}} config should accept time units so that it 
> can be specified more easily across a wide range. In particular, for 
> HDFS-13150 it is useful to have a period shorter than 1 second, which is not 
> currently possible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13595) Edit tailing period configuration should accept time units

2018-05-18 Thread Erik Krogen (JIRA)
Erik Krogen created HDFS-13595:
--

 Summary: Edit tailing period configuration should accept time units
 Key: HDFS-13595
 URL: https://issues.apache.org/jira/browse/HDFS-13595
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ha, namenode
Reporter: Erik Krogen
Assignee: Erik Krogen


The {{dfs.ha.tail-edits.period}} config should accept time units so that it can 
be specified more easily across a wide range. In particular, for HDFS-13150 it 
is useful to have a period shorter than 1 second, which is not currently 
possible.
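Suffix-aware parsing in the spirit of Hadoop's time-duration configs can be 
sketched as follows (simplified; the real Configuration#getTimeDuration handles 
more suffixes and takes a caller-supplied default unit):

```java
import java.util.concurrent.TimeUnit;

// Simplified sketch of suffix-aware duration parsing; not the Hadoop code.
class TimeDurationParser {
    // Parse values like "500ms", "5s", or "2m" into milliseconds.
    static long toMillis(String value) {
        String v = value.trim();
        // "ms" must be checked before "s", since "500ms" also ends with "s".
        if (v.endsWith("ms")) {
            return Long.parseLong(v.substring(0, v.length() - 2));
        } else if (v.endsWith("s")) {
            return TimeUnit.SECONDS.toMillis(
                Long.parseLong(v.substring(0, v.length() - 1)));
        } else if (v.endsWith("m")) {
            return TimeUnit.MINUTES.toMillis(
                Long.parseLong(v.substring(0, v.length() - 1)));
        }
        // Bare number: fall back to the config's legacy unit (seconds here).
        return TimeUnit.SECONDS.toMillis(Long.parseLong(v));
    }
}
```

With suffixes like these, a value such as {{100ms}} becomes expressible, which 
is the sub-second period that HDFS-13150 needs.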



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-87) Fix test failures with uninitialized storageLocation field in storageReport

2018-05-18 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-87?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480974#comment-16480974
 ] 

Bharat Viswanadham commented on HDDS-87:


I have one suggestion: instead of creating the nodeReport in each test case, can 
we use TestUtils createNodeReport()?

> Fix test failures with uninitialized storageLocation field in storageReport
> ---
>
> Key: HDDS-87
> URL: https://issues.apache.org/jira/browse/HDDS-87
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-87.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-87) Fix test failures with uninitialized storageLocation field in storageReport

2018-05-18 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-87?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480974#comment-16480974
 ] 

Bharat Viswanadham edited comment on HDDS-87 at 5/18/18 5:47 PM:
-

Thank you [~shashikant] for reporting and providing the fix.

I have one suggestion: instead of creating the nodeReport in each test case, can 
we use TestUtils createNodeReport()?


was (Author: bharatviswa):
I have one suggestion, instead of creating nodeReport in each test case, can we 
use TestUtils createNodeReport()?

> Fix test failures with uninitialized storageLocation field in storageReport
> ---
>
> Key: HDDS-87
> URL: https://issues.apache.org/jira/browse/HDDS-87
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-87.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-76) Modify SCMStorageReportProto to include the data dir paths as well as the StorageType info

2018-05-18 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-76?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480962#comment-16480962
 ] 

Bharat Viswanadham commented on HDDS-76:


Thank you [~shashikant] for the info.

Will look into HDDS-87.

> Modify SCMStorageReportProto to include the data dir paths as well as the 
> StorageType info
> --
>
> Key: HDDS-76
> URL: https://issues.apache.org/jira/browse/HDDS-76
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-76.00.patch, HDDS-76.01.patch
>
>
> Currently, SCMStorageReport contains the storageUUID, which is sent to SCM 
> for maintaining the storage report info. This Jira aims to also send to SCM 
> the data dir paths of the actual disks as well as the storage type info for 
> each volume on the datanode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-76) Modify SCMStorageReportProto to include the data dir paths as well as the StorageType info

2018-05-18 Thread Shashikant Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-76?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480949#comment-16480949
 ] 

Shashikant Banerjee edited comment on HDDS-76 at 5/18/18 5:30 PM:
--

The tests have been fixed with HDDS-87. The jenkins runs seem to run the tests 
only for the packages for which the source files have changed.


was (Author: shashikant):
The tests have been fixed with HDDS-87. The jenkins runs seem to run the tests 
only for the pacakages for which the files source files have changed.

> Modify SCMStorageReportProto to include the data dir paths as well as the 
> StorageType info
> --
>
> Key: HDDS-76
> URL: https://issues.apache.org/jira/browse/HDDS-76
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-76.00.patch, HDDS-76.01.patch
>
>
> Currently, SCMStorageReport contains the storageUUID which are sent across to 
> SCM for maintaining storage Report info. This Jira aims to include the data 
> dir paths for actual disks as well as the storage Type info for each volume 
> on datanode to be sent to SCM.






[jira] [Updated] (HDDS-71) Send ContainerType to Datanode during container creation

2018-05-18 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-71?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-71:
---
Attachment: HDDS-71.01.patch

> Send ContainerType to Datanode during container creation
> 
>
> Key: HDDS-71
> URL: https://issues.apache.org/jira/browse/HDDS-71
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-71.00.patch, HDDS-71.01.patch
>
>







[jira] [Comment Edited] (HDDS-76) Modify SCMStorageReportProto to include the data dir paths as well as the StorageType info

2018-05-18 Thread Shashikant Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-76?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480949#comment-16480949
 ] 

Shashikant Banerjee edited comment on HDDS-76 at 5/18/18 5:31 PM:
--

The tests have been fixed with HDDS-87. The jenkins runs seem to run the tests 
only for the packages for which the source files have changed.


was (Author: shashikant):
The tests have been fixed with HDDS-87. The jenkins runs seem to run the tests 
only for the packages for which the files source files have changed.

> Modify SCMStorageReportProto to include the data dir paths as well as the 
> StorageType info
> --
>
> Key: HDDS-76
> URL: https://issues.apache.org/jira/browse/HDDS-76
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-76.00.patch, HDDS-76.01.patch
>
>
> Currently, SCMStorageReport contains the storageUUID which are sent across to 
> SCM for maintaining storage Report info. This Jira aims to include the data 
> dir paths for actual disks as well as the storage Type info for each volume 
> on datanode to be sent to SCM.






[jira] [Comment Edited] (HDDS-76) Modify SCMStorageReportProto to include the data dir paths as well as the StorageType info

2018-05-18 Thread Shashikant Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-76?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480949#comment-16480949
 ] 

Shashikant Banerjee edited comment on HDDS-76 at 5/18/18 5:30 PM:
--

The tests have been fixed with HDDS-87. The jenkins runs seem to run the tests 
only for the packages for which the source files have changed.


was (Author: shashikant):
The tests have been fixed with HDDS-87.

> Modify SCMStorageReportProto to include the data dir paths as well as the 
> StorageType info
> --
>
> Key: HDDS-76
> URL: https://issues.apache.org/jira/browse/HDDS-76
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-76.00.patch, HDDS-76.01.patch
>
>
> Currently, SCMStorageReport contains the storageUUID which are sent across to 
> SCM for maintaining storage Report info. This Jira aims to include the data 
> dir paths for actual disks as well as the storage Type info for each volume 
> on datanode to be sent to SCM.






[jira] [Commented] (HDDS-76) Modify SCMStorageReportProto to include the data dir paths as well as the StorageType info

2018-05-18 Thread Shashikant Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-76?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480949#comment-16480949
 ] 

Shashikant Banerjee commented on HDDS-76:
-

The tests have been fixed with HDDS-87.

> Modify SCMStorageReportProto to include the data dir paths as well as the 
> StorageType info
> --
>
> Key: HDDS-76
> URL: https://issues.apache.org/jira/browse/HDDS-76
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-76.00.patch, HDDS-76.01.patch
>
>
> Currently, SCMStorageReport contains the storageUUID which are sent across to 
> SCM for maintaining storage Report info. This Jira aims to include the data 
> dir paths for actual disks as well as the storage Type info for each volume 
> on datanode to be sent to SCM.






[jira] [Commented] (HDDS-71) Send ContainerType to Datanode during container creation

2018-05-18 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-71?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480948#comment-16480948
 ] 

Bharat Viswanadham commented on HDDS-71:


Attached patch v01, as the older patch no longer applies to trunk.

> Send ContainerType to Datanode during container creation
> 
>
> Key: HDDS-71
> URL: https://issues.apache.org/jira/browse/HDDS-71
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-71.00.patch
>
>







[jira] [Comment Edited] (HDDS-76) Modify SCMStorageReportProto to include the data dir paths as well as the StorageType info

2018-05-18 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-76?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480945#comment-16480945
 ] 

Bharat Viswanadham edited comment on HDDS-76 at 5/18/18 5:28 PM:
-

Sorry I missed this earlier.

I think this has caused some test failures in TestEndPoint.java.

 


was (Author: bharatviswa):
I think this has caused some test failures in TestEndPoint.java

> Modify SCMStorageReportProto to include the data dir paths as well as the 
> StorageType info
> --
>
> Key: HDDS-76
> URL: https://issues.apache.org/jira/browse/HDDS-76
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-76.00.patch, HDDS-76.01.patch
>
>
> Currently, SCMStorageReport contains the storageUUID which are sent across to 
> SCM for maintaining storage Report info. This Jira aims to include the data 
> dir paths for actual disks as well as the storage Type info for each volume 
> on datanode to be sent to SCM.






[jira] [Commented] (HDDS-76) Modify SCMStorageReportProto to include the data dir paths as well as the StorageType info

2018-05-18 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-76?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480945#comment-16480945
 ] 

Bharat Viswanadham commented on HDDS-76:


I think this has caused some test failures in TestEndPoint.java

> Modify SCMStorageReportProto to include the data dir paths as well as the 
> StorageType info
> --
>
> Key: HDDS-76
> URL: https://issues.apache.org/jira/browse/HDDS-76
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-76.00.patch, HDDS-76.01.patch
>
>
> Currently, SCMStorageReport contains the storageUUID which are sent across to 
> SCM for maintaining storage Report info. This Jira aims to include the data 
> dir paths for actual disks as well as the storage Type info for each volume 
> on datanode to be sent to SCM.






[jira] [Commented] (HDFS-13593) TestBlockReaderLocalLegacy#testBlockReaderLocalLegacyWithAppend fails on Windows

2018-05-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480944#comment-16480944
 ] 

Hudson commented on HDFS-13593:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14238 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14238/])
HDFS-13593. (inigoiri: rev 9775ecb2355d7bed3514fcd54bf69e8351c4ab99)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocalLegacy.java


> TestBlockReaderLocalLegacy#testBlockReaderLocalLegacyWithAppend fails on 
> Windows
> 
>
> Key: HDFS-13593
> URL: https://issues.apache.org/jira/browse/HDFS-13593
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13593.000.patch
>
>
> https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.client.impl/TestBlockReaderLocalLegacy/testBlockReaderLocalLegacyWithAppend/
>  shows error message:
> {code:java}
> Cannot remove data directory: 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs\datapath
>  
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs\data':
>  
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs\data
>  permissions: drwx
> path 
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs':
>  
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs
>  permissions: drwx
> path 
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4':
>  
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4
>  permissions: drwx
> path 
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data':
>  
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data
>  permissions: drwx
> path 
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test': 
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target': 
>  absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs': 
>  absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project': 
>  absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win\s': 
>  absolute:F:\short\hadoop-trunk-win\s
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win': 
>  absolute:F:\short\hadoop-trunk-win
>  permissions: drwx
> path 'F:\short': 
>  absolute:F:\short
>  permissions: drwx
> path 'F:\': 
>  absolute:F:\
>  permissions: dr-x
> {code}






[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-05-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480934#comment-16480934
 ] 

Íñigo Goiri commented on HDFS-13388:


[~yzhangal], sorry I missed this.
I think it would be best to backport HDFS-12813 to branch-3 and then backport 
this one.

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, 
> HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, 
> HADOOP-13388.0006.patch, HADOOP-13388.0007.patch, HADOOP-13388.0008.patch, 
> HADOOP-13388.0009.patch, HADOOP-13388.0010.patch, HADOOP-13388.0011.patch, 
> HADOOP-13388.0012.patch, HADOOP-13388.0013.patch, HADOOP-13388.0014.patch
>
>
> In HDFS-7858 RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN ." But the current code call multiple configured NNs every time 
> even when we already got the successful NN. 
>  That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class 
> handles invoked method by calling multiple configured NNs.
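The intended behavior (hedge all configured namenodes only on the first call, then pin subsequent calls to the proxy that answered) can be sketched as below. This is a hypothetical illustration of the caching idea, not the real `RequestHedgingProxyProvider`/`RetryInvocationHandler` API; the class and method names are invented for the sketch.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Supplier;

public class HedgedInvocationSketch {
    private final List<Supplier<String>> proxies;
    // Winner of the first hedged call; null until one proxy succeeds.
    private volatile Supplier<String> currentUsedProxy;

    HedgedInvocationSketch(List<Supplier<String>> proxies) {
        this.proxies = proxies;
    }

    String invoke() {
        Supplier<String> cached = currentUsedProxy;
        if (cached != null) {
            return cached.get(); // no fan-out once the active NN is known
        }
        ExecutorService pool = Executors.newFixedThreadPool(proxies.size());
        try {
            CompletionService<Map.Entry<Supplier<String>, String>> cs =
                new ExecutorCompletionService<>(pool);
            for (Supplier<String> p : proxies) {
                cs.submit(() -> Map.entry(p, p.get()));
            }
            // Take the first proxy that answers and remember it for later calls.
            Map.Entry<Supplier<String>, String> winner = cs.take().get();
            currentUsedProxy = winner.getKey();
            return winner.getValue();
        } catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException("hedged call failed", e);
        } finally {
            pool.shutdownNow(); // cancel the slower, losing calls
        }
    }

    public static void main(String[] args) {
        Supplier<String> active = () -> "active";
        Supplier<String> standby = () -> {
            try { Thread.sleep(200); } catch (InterruptedException e) { }
            return "standby";
        };
        HedgedInvocationSketch h =
            new HedgedInvocationSketch(List.of(active, standby));
        System.out.println(h.invoke()); // first call hedges both proxies
        System.out.println(h.invoke()); // second call uses only the cached winner
    }
}
```

The bug report amounts to `currentUsedProxy` never being consulted on the fast path, so every call takes the fan-out branch.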






[jira] [Commented] (HDFS-13590) Backport HDFS-12378 to branch-2

2018-05-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480932#comment-16480932
 ] 

Íñigo Goiri commented on HDFS-13590:


Yetus hasn't been able to build branch-2 for a while.
[~lukmajercak] can you post your local results at least?

> Backport HDFS-12378 to branch-2
> ---
>
> Key: HDFS-13590
> URL: https://issues.apache.org/jira/browse/HDFS-13590
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, test
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.9.0, 2.9.1, 2.9.2
>
> Attachments: HDFS-13590_branch-2.000.patch
>
>
> The unit tests are flaky in 2.9. We should backport this.






[jira] [Updated] (HDFS-13593) TestBlockReaderLocalLegacy#testBlockReaderLocalLegacyWithAppend fails on Windows

2018-05-18 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13593:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.3
   2.9.2
   3.1.1
   3.2.0
   2.10.0
   Status: Resolved  (was: Patch Available)

 Thanks [~huanbang1993] for the fix and [~giovanni.fumarola] for the review.
Committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9.

> TestBlockReaderLocalLegacy#testBlockReaderLocalLegacyWithAppend fails on 
> Windows
> 
>
> Key: HDFS-13593
> URL: https://issues.apache.org/jira/browse/HDFS-13593
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13593.000.patch
>
>
> https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.client.impl/TestBlockReaderLocalLegacy/testBlockReaderLocalLegacyWithAppend/
>  shows error message:
> {code:java}
> Cannot remove data directory: 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs\datapath
>  
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs\data':
>  
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs\data
>  permissions: drwx
> path 
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs':
>  
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs
>  permissions: drwx
> path 
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4':
>  
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4
>  permissions: drwx
> path 
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data':
>  
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data
>  permissions: drwx
> path 
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test': 
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target': 
>  absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs': 
>  absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project': 
>  absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win\s': 
>  absolute:F:\short\hadoop-trunk-win\s
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win': 
>  absolute:F:\short\hadoop-trunk-win
>  permissions: drwx
> path 'F:\short': 
>  absolute:F:\short
>  permissions: drwx
> path 'F:\': 
>  absolute:F:\
>  permissions: dr-x
> {code}






[jira] [Commented] (HDFS-13592) TestNameNodePrunesMissingStorages#testNameNodePrunesUnreportedStorages does not shut down cluster properly

2018-05-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480927#comment-16480927
 ] 

Hudson commented on HDFS-13592:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14237 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14237/])
HDFS-13592. (inigoiri: rev 57b893de3d36d20f65ee81b5cc3cfef12594b75b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNameNodePrunesMissingStorages.java


> TestNameNodePrunesMissingStorages#testNameNodePrunesUnreportedStorages does 
> not shut down cluster properly
> --
>
> Key: HDFS-13592
> URL: https://issues.apache.org/jira/browse/HDFS-13592
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13592.000.patch
>
>
> Without cluster shutdown in 
> TestNameNodePrunesMissingStorages#testNameNodePrunesUnreportedStorages, the 
> below two tests fail (referring to 
> https://builds.apache.org/job/hadoop-trunk-win/469/testReport/)
> * 
> [TestNameNodePrunesMissingStorages#testUnusedStorageIsPruned|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestNameNodePrunesMissingStorages/testUnusedStorageIsPruned/]
> * 
> [TestNameNodePrunesMissingStorages#testRemovingStorageDoesNotProduceZombies|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestNameNodePrunesMissingStorages/testRemovingStorageDoesNotProduceZombies/]






[jira] [Commented] (HDFS-13590) Backport HDFS-12378 to branch-2

2018-05-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480919#comment-16480919
 ] 

genericqa commented on HDFS-13590:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-13590 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13590 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924042/HDFS-13590_branch-2.000.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24262/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Backport HDFS-12378 to branch-2
> ---
>
> Key: HDFS-13590
> URL: https://issues.apache.org/jira/browse/HDFS-13590
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, test
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.9.0, 2.9.1, 2.9.2
>
> Attachments: HDFS-13590_branch-2.000.patch
>
>
> The unit tests are flaky in 2.9. We should backport this.






[jira] [Updated] (HDFS-13590) Backport HDFS-12378 to branch-2

2018-05-18 Thread Lukas Majercak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-13590:
--
Status: Patch Available  (was: Open)

> Backport HDFS-12378 to branch-2
> ---
>
> Key: HDFS-13590
> URL: https://issues.apache.org/jira/browse/HDFS-13590
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, test
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.9.2, 2.9.1, 2.9.0
>
> Attachments: HDFS-13590_branch-2.000.patch
>
>
> The unit tests are flaky in 2.9. We should backport this.






[jira] [Updated] (HDFS-13590) Backport HDFS-12378 to branch-2

2018-05-18 Thread Lukas Majercak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-13590:
--
Status: In Progress  (was: Patch Available)

> Backport HDFS-12378 to branch-2
> ---
>
> Key: HDFS-13590
> URL: https://issues.apache.org/jira/browse/HDFS-13590
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, test
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.9.2, 2.9.1, 2.9.0
>
> Attachments: HDFS-13590_branch-2.000.patch
>
>
> The unit tests are flaky in 2.9. We should backport this.






[jira] [Updated] (HDFS-13590) Backport HDFS-12378 to branch-2

2018-05-18 Thread Lukas Majercak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-13590:
--
Status: Patch Available  (was: In Progress)

> Backport HDFS-12378 to branch-2
> ---
>
> Key: HDFS-13590
> URL: https://issues.apache.org/jira/browse/HDFS-13590
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, test
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.9.2, 2.9.1, 2.9.0
>
> Attachments: HDFS-13590_branch-2.000.patch
>
>
> The unit tests are flaky in 2.9. We should backport this.






[jira] [Work started] (HDFS-13590) Backport HDFS-12378 to branch-2

2018-05-18 Thread Lukas Majercak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13590 started by Lukas Majercak.
-
> Backport HDFS-12378 to branch-2
> ---
>
> Key: HDFS-13590
> URL: https://issues.apache.org/jira/browse/HDFS-13590
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, test
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.9.0, 2.9.1, 2.9.2
>
> Attachments: HDFS-13590_branch-2.000.patch
>
>
> The unit tests are flaky in 2.9. We should backport this.






[jira] [Work stopped] (HDFS-13590) Backport HDFS-12378 to branch-2

2018-05-18 Thread Lukas Majercak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13590 stopped by Lukas Majercak.
-
> Backport HDFS-12378 to branch-2
> ---
>
> Key: HDFS-13590
> URL: https://issues.apache.org/jira/browse/HDFS-13590
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, test
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.9.0, 2.9.1, 2.9.2
>
> Attachments: HDFS-13590_branch-2.000.patch
>
>
> The unit tests are flaky in 2.9. We should backport this.






[jira] [Commented] (HDFS-13593) TestBlockReaderLocalLegacy#testBlockReaderLocalLegacyWithAppend fails on Windows

2018-05-18 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480891#comment-16480891
 ] 

Giovanni Matteo Fumarola commented on HDFS-13593:
-

The 
[test|https://builds.apache.org/job/PreCommit-HDFS-Build/24260/testReport/org.apache.hadoop.hdfs.client.impl/TestBlockReaderLocalLegacy/]
 is shown as passing by Yetus. [~elgoiri] I think we can commit this one without a 
problem.

 

 

> TestBlockReaderLocalLegacy#testBlockReaderLocalLegacyWithAppend fails on 
> Windows
> 
>
> Key: HDFS-13593
> URL: https://issues.apache.org/jira/browse/HDFS-13593
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13593.000.patch
>
>
> https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.client.impl/TestBlockReaderLocalLegacy/testBlockReaderLocalLegacyWithAppend/
>  shows error message:
> {code:java}
> Cannot remove data directory: 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs\datapath
>  
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs\data':
>  
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs\data
>  permissions: drwx
> path 
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs':
>  
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs
>  permissions: drwx
> path 
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4':
>  
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4
>  permissions: drwx
> path 
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data':
>  
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data
>  permissions: drwx
> path 
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test': 
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target': 
>  absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs': 
>  absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project': 
>  absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win\s': 
>  absolute:F:\short\hadoop-trunk-win\s
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win': 
>  absolute:F:\short\hadoop-trunk-win
>  permissions: drwx
> path 'F:\short': 
>  absolute:F:\short
>  permissions: drwx
> path 'F:\': 
>  absolute:F:\
>  permissions: dr-x
> {code}






[jira] [Updated] (HDFS-13592) TestNameNodePrunesMissingStorages#testNameNodePrunesUnreportedStorages does not shut down cluster properly

2018-05-18 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13592:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.3
   2.9.2
   3.1.1
   3.2.0
   2.10.0
   Status: Resolved  (was: Patch Available)

Thanks [~huanbang1993] for the fix and [~giovanni.fumarola] for the review.
Committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9.

> TestNameNodePrunesMissingStorages#testNameNodePrunesUnreportedStorages does 
> not shut down cluster properly
> --
>
> Key: HDFS-13592
> URL: https://issues.apache.org/jira/browse/HDFS-13592
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13592.000.patch
>
>
> Without cluster shutdown in 
> TestNameNodePrunesMissingStorages#testNameNodePrunesUnreportedStorages, the 
> below two tests fail (referring to 
> https://builds.apache.org/job/hadoop-trunk-win/469/testReport/)
> * 
> [TestNameNodePrunesMissingStorages#testUnusedStorageIsPruned|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestNameNodePrunesMissingStorages/testUnusedStorageIsPruned/]
> * 
> [TestNameNodePrunesMissingStorages#testRemovingStorageDoesNotProduceZombies|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestNameNodePrunesMissingStorages/testRemovingStorageDoesNotProduceZombies/]






[jira] [Commented] (HDFS-13592) TestNameNodePrunesMissingStorages#testNameNodePrunesUnreportedStorages does not shut down cluster properly

2018-05-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480872#comment-16480872
 ] 

Íñigo Goiri commented on HDFS-13592:


bq. We should write a comment in MiniDFSCluster to make sure the new unit tests 
follow the rule of closing the object.

Yes, this is an issue all over the code.
An option would be to make it closeable so the compiler shows a warning if it 
is not closed.
We can investigate that in a separate JIRA.
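To illustrate the closeable idea: if the cluster object implemented AutoCloseable, try-with-resources would guarantee shutdown even when the test body throws, and IDEs/static analysis could flag leaked instances. A minimal sketch with a stand-in class (the names below are illustrative, not MiniDFSCluster's real API):

```java
// Stand-in for MiniDFSCluster; the real class would delegate close() to
// its existing shutdown() method. All names here are illustrative only.
class MiniCluster implements AutoCloseable {
    private boolean running = true;

    boolean isRunning() {
        return running;
    }

    @Override
    public void close() {
        // Would call shutdown() on the real cluster.
        running = false;
    }
}

class TryWithResourcesDemo {
    static boolean demo() {
        MiniCluster observed;
        try (MiniCluster cluster = new MiniCluster()) {
            observed = cluster;
            // test logic against the cluster would run here
        }
        // close() has already run automatically at this point.
        return observed.isRunning();
    }
}
```

Tests written this way cannot forget the shutdown, which is the class of bug behind the Windows failures discussed in this thread.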

> TestNameNodePrunesMissingStorages#testNameNodePrunesUnreportedStorages does 
> not shut down cluster properly
> --
>
> Key: HDFS-13592
> URL: https://issues.apache.org/jira/browse/HDFS-13592
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13592.000.patch
>
>
> Without cluster shutdown in 
> TestNameNodePrunesMissingStorages#testNameNodePrunesUnreportedStorages, the 
> below two tests fail (referring to 
> https://builds.apache.org/job/hadoop-trunk-win/469/testReport/)
> * 
> [TestNameNodePrunesMissingStorages#testUnusedStorageIsPruned|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestNameNodePrunesMissingStorages/testUnusedStorageIsPruned/]
> * 
> [TestNameNodePrunesMissingStorages#testRemovingStorageDoesNotProduceZombies|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestNameNodePrunesMissingStorages/testRemovingStorageDoesNotProduceZombies/]






[jira] [Commented] (HDFS-13588) Fix TestFsDatasetImpl test failures on Windows

2018-05-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480868#comment-16480868
 ] 

Íñigo Goiri commented on HDFS-13588:


[~surmountian], can you give more insights on this fix?

> Fix TestFsDatasetImpl test failures on Windows
> --
>
> Key: HDFS-13588
> URL: https://issues.apache.org/jira/browse/HDFS-13588
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13588-branch-2.000.patch, HDFS-13588.000.patch
>
>
> Some test cases of TestFsDatasetImpl failed on Windows due to:
>  # using File#setWritable interface;
>  # test directory conflict between test cases (details in HDFS-13408);
>  






[jira] [Commented] (HDFS-13587) TestQuorumJournalManager fails on Windows

2018-05-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480865#comment-16480865
 ] 

Íñigo Goiri commented on HDFS-13587:


As I mentioned before, I'm a little concerned about having to randomly add:
{code}
DefaultMetricsSystem.setMiniClusterMode(true);
{code}
Is there a more generic way to manage that?
Originally, {{getBaseDirectory()}} took care of this; should we do something 
similar for the random path generation?
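One generic option is a shared test base class that flips mini-cluster mode once for every subclass, instead of each test remembering the call. A sketch with a stand-in for DefaultMetricsSystem (all class names below are made up for illustration):

```java
// Stand-in for DefaultMetricsSystem; only the flag matters for this sketch.
class MetricsEnv {
    static boolean miniClusterMode = false;

    static void setMiniClusterMode(boolean mode) {
        miniClusterMode = mode;
    }
}

// Every cluster test would extend this. With JUnit the static initializer
// would be a @BeforeClass method; a static block keeps the sketch runnable.
abstract class ClusterTestBase {
    static {
        MetricsEnv.setMiniClusterMode(true);
    }
}

class JournalManagerTest extends ClusterTestBase {
    boolean metricsConfigured() {
        // Runs with mini-cluster mode already enabled by the base class.
        return MetricsEnv.miniClusterMode;
    }
}
```

Instantiating any subclass triggers the base-class initializer, so no individual test needs the explicit setMiniClusterMode call.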

> TestQuorumJournalManager fails on Windows
> -
>
> Key: HDFS-13587
> URL: https://issues.apache.org/jira/browse/HDFS-13587
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>  Labels: Windows
> Attachments: HDFS-13587.000.patch, HDFS-13587.001.patch
>
>
> There are 12 test failures in TestQuorumJournalManager on Windows. Local run 
> shows:
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
> [ERROR] Tests run: 21, Failures: 0, Errors: 12, Skipped: 0, Time elapsed: 
> 106.81 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
> [ERROR] 
> testCrashBetweenSyncLogAndPersistPaxosData(org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager)
>   Time elapsed: 1.93 s  <<< ERROR!
> org.apache.hadoop.hdfs.qjournal.client.QuorumException:
> Could not format one or more JournalNodes. 2 successful responses:
> 127.0.0.1:27044: null [success]
> 127.0.0.1:27064: null [success]
> 1 exceptions thrown:
> 127.0.0.1:27054: Directory 
> E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\journalnode-1\test-journal
>  is in an inconsistent state: Can't format the storage directory because the 
> current directory is not empty.
> at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:498)
> at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:574)
> at 
> org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:185)
> at 
> org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:221)
> at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:157)
> at 
> org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:145)
> at 
> org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25419)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1889)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606)
> at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
> at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286)
> at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.format(QuorumJournalManager.java:212)
> at 
> org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager.setup(TestQuorumJournalManager.java:109)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at 

[jira] [Commented] (HDFS-12837) Intermittent failure TestReencryptionWithKMS#testReencryptionKMSDown

2018-05-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480864#comment-16480864
 ] 

genericqa commented on HDFS-12837:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-12837 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914163/HDFS-12837.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f16ce0765adc 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6e99686 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24261/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24261/testReport/ |
| Max. process+thread count | 3136 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-13592) TestNameNodePrunesMissingStorages#testNameNodePrunesUnreportedStorages does not shut down cluster properly

2018-05-18 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480863#comment-16480863
 ] 

Giovanni Matteo Fumarola commented on HDFS-13592:
-

Thanks [~huanbang1993] for working on this.

LGTM +1.

However, most of the unit tests that failed on Windows were missing the close 
clause. We should add a comment in {{MiniDFSCluster}} to make sure new 
unit tests follow the rule of closing the object.

> TestNameNodePrunesMissingStorages#testNameNodePrunesUnreportedStorages does 
> not shut down cluster properly
> --
>
> Key: HDFS-13592
> URL: https://issues.apache.org/jira/browse/HDFS-13592
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13592.000.patch
>
>
> Without cluster shutdown in 
> TestNameNodePrunesMissingStorages#testNameNodePrunesUnreportedStorages, the 
> below two tests fail (referring to 
> https://builds.apache.org/job/hadoop-trunk-win/469/testReport/)
> * 
> [TestNameNodePrunesMissingStorages#testUnusedStorageIsPruned|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestNameNodePrunesMissingStorages/testUnusedStorageIsPruned/]
> * 
> [TestNameNodePrunesMissingStorages#testRemovingStorageDoesNotProduceZombies|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestNameNodePrunesMissingStorages/testRemovingStorageDoesNotProduceZombies/]






[jira] [Commented] (HDFS-13480) RBF: Separate namenodeHeartbeat and routerHeartbeat to different config key

2018-05-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480861#comment-16480861
 ] 

Íñigo Goiri commented on HDFS-13480:


The error in TestRouterQuota seems suspicious.
BTW, keep in mind we are still in lockdown until we figure out what we do with 
HDFS-12615.
We also need to add a design doc to HDFS-13575.

> RBF: Separate namenodeHeartbeat and routerHeartbeat to different config key
> ---
>
> Key: HDFS-13480
> URL: https://issues.apache.org/jira/browse/HDFS-13480
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Attachments: HDFS-13480.001.patch, HDFS-13480.002.patch, 
> HDFS-13480.002.patch, HDFS-13480.003.patch, HDFS-13480.004.patch
>
>
> Now, if I enable heartbeat.enable but do not want to monitor any namenode, I 
> get an ERROR log like:
> {code:java}
> [2018-04-19T14:00:03.057+08:00] [ERROR] 
> federation.router.Router.serviceInit(Router.java 214) [main] : Heartbeat is 
> enabled but there are no namenodes to monitor
> {code}
> and if I disable heartbeat.enable, we cannot get any mount table updates, 
> because of the following logic in Router.java:
> {code:java}
> if (conf.getBoolean(
> RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE,
> RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE_DEFAULT)) {
>   // Create status updater for each monitored Namenode
>   this.namenodeHeartbeatServices = createNamenodeHeartbeatServices();
>   for (NamenodeHeartbeatService hearbeatService :
>   this.namenodeHeartbeatServices) {
> addService(hearbeatService);
>   }
>   if (this.namenodeHeartbeatServices.isEmpty()) {
> LOG.error("Heartbeat is enabled but there are no namenodes to 
> monitor");
>   }
>   // Periodically update the router state
>   this.routerHeartbeatService = new RouterHeartbeatService(this);
>   addService(this.routerHeartbeatService);
> }
> {code}
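A possible shape for the proposed split, sketched with java.util.Properties instead of Hadoop's Configuration. The key names here are assumptions for illustration and may differ from what the patch finally commits:

```java
import java.util.Properties;

// Guard the namenode monitors and the router state updater with separate,
// hypothetical keys, so disabling one no longer disables the other.
class RouterHeartbeatConfig {
    static final String NN_HEARTBEAT_KEY =
        "dfs.federation.router.namenode.heartbeat.enable";
    static final String ROUTER_HEARTBEAT_KEY =
        "dfs.federation.router.heartbeat.enable";

    final boolean monitorNamenodes;   // start NamenodeHeartbeatServices?
    final boolean updateRouterState;  // start RouterHeartbeatService?

    RouterHeartbeatConfig(Properties conf) {
        monitorNamenodes =
            Boolean.parseBoolean(conf.getProperty(NN_HEARTBEAT_KEY, "true"));
        updateRouterState =
            Boolean.parseBoolean(conf.getProperty(ROUTER_HEARTBEAT_KEY, "true"));
    }
}
```

With two keys, a router that monitors no namenodes can still publish its own state, avoiding both the spurious ERROR and the missing mount table updates described above.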






[jira] [Commented] (HDFS-13591) TestDFSShell#testSetrepLow fails on Windows

2018-05-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480860#comment-16480860
 ] 

Íñigo Goiri commented on HDFS-13591:


I'm not sure this is the right fix; hardcoding \r\r\n seems like a bad approach.
Can we fix the source?
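One way to fix the source is to normalize line separators on both sides of the comparison rather than hard-code a platform-specific sequence. A sketch (the helper name is made up):

```java
// Normalize any CR/LF mixture (including the "\r\r\n" sequence seen on
// Windows) to a single '\n' before comparing expected and actual output.
class LineEndings {
    static String normalize(String s) {
        // One or more carriage returns, optionally followed by a line feed,
        // collapse to a single newline.
        return s.replaceAll("\\r+\\n?", "\n");
    }
}
```

Comparing normalize(expected) against normalize(actual) makes the assertion line-ending-agnostic on every platform.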

> TestDFSShell#testSetrepLow fails on Windows
> ---
>
> Key: HDFS-13591
> URL: https://issues.apache.org/jira/browse/HDFS-13591
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13591.000.patch
>
>
> https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs/TestDFSShell/testSetrepLow/
>  shows
> {code:java}
> Error message is not the expected error message 
> expected:<...testFileForSetrepLow[]
> > but was:<...testFileForSetrepLow[
> ]
> >
> {code}
>  






[jira] [Commented] (HDFS-13593) TestBlockReaderLocalLegacy#testBlockReaderLocalLegacyWithAppend fails on Windows

2018-05-18 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480859#comment-16480859
 ] 

Giovanni Matteo Fumarola commented on HDFS-13593:
-

Thanks [~huanbang1993] for working on this. I shared the same comment as 
[~elgoiri].

> TestBlockReaderLocalLegacy#testBlockReaderLocalLegacyWithAppend fails on 
> Windows
> 
>
> Key: HDFS-13593
> URL: https://issues.apache.org/jira/browse/HDFS-13593
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13593.000.patch
>
>
> https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.client.impl/TestBlockReaderLocalLegacy/testBlockReaderLocalLegacyWithAppend/
>  shows error message:
> {code:java}
> Cannot remove data directory: 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs\datapath
>  
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs\data':
>  
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs\data
>  permissions: drwx
> path 
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs':
>  
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs
>  permissions: drwx
> path 
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4':
>  
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4
>  permissions: drwx
> path 
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data':
>  
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data
>  permissions: drwx
> path 
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test': 
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target': 
>  absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs': 
>  absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project': 
>  absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win\s': 
>  absolute:F:\short\hadoop-trunk-win\s
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win': 
>  absolute:F:\short\hadoop-trunk-win
>  permissions: drwx
> path 'F:\short': 
>  absolute:F:\short
>  permissions: drwx
> path 'F:\': 
>  absolute:F:\
>  permissions: dr-x
> {code}






[jira] [Commented] (HDFS-13592) TestNameNodePrunesMissingStorages#testNameNodePrunesUnreportedStorages does not shut down cluster properly

2018-05-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480857#comment-16480857
 ] 

Íñigo Goiri commented on HDFS-13592:


[^HDFS-13592.000.patch] shuts down the cluster; a straightforward fix.
TestWebHdfsTimeouts and TestDataNodeVolumeFailureReporting are the usual suspects.
The unit test being fixed passes 
[here|https://builds.apache.org/job/PreCommit-HDFS-Build/24259/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestNameNodePrunesMissingStorages/].

> TestNameNodePrunesMissingStorages#testNameNodePrunesUnreportedStorages does 
> not shut down cluster properly
> --
>
> Key: HDFS-13592
> URL: https://issues.apache.org/jira/browse/HDFS-13592
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13592.000.patch
>
>
> Without cluster shutdown in 
> TestNameNodePrunesMissingStorages#testNameNodePrunesUnreportedStorages, the 
> below two tests fail (referring to 
> https://builds.apache.org/job/hadoop-trunk-win/469/testReport/)
> * 
> [TestNameNodePrunesMissingStorages#testUnusedStorageIsPruned|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestNameNodePrunesMissingStorages/testUnusedStorageIsPruned/]
> * 
> [TestNameNodePrunesMissingStorages#testRemovingStorageDoesNotProduceZombies|https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestNameNodePrunesMissingStorages/testRemovingStorageDoesNotProduceZombies/]






[jira] [Comment Edited] (HDFS-13593) TestBlockReaderLocalLegacy#testBlockReaderLocalLegacyWithAppend fails on Windows

2018-05-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480855#comment-16480855
 ] 

Íñigo Goiri edited comment on HDFS-13593 at 5/18/18 4:16 PM:
-

[^HDFS-13593.000.patch] fixes the standard issue solved by HDFS-13408.
TestDataNodeVolumeFailure has been consistently failing lately and it's not 
related to this JIRA.
+1



was (Author: elgoiri):
[^HDFS-13593.000.patch] fixes the standard issue solved by HDFS-13408.
TestDataNodeVolumeFailure has been consistently fialing lately and it's not 
related to this JIRA.
+1


> TestBlockReaderLocalLegacy#testBlockReaderLocalLegacyWithAppend fails on 
> Windows
> 
>
> Key: HDFS-13593
> URL: https://issues.apache.org/jira/browse/HDFS-13593
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13593.000.patch
>
>
> https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.client.impl/TestBlockReaderLocalLegacy/testBlockReaderLocalLegacyWithAppend/
>  shows error message:
> {code:java}
> Cannot remove data directory: 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs\datapath
>  
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs\data':
>  
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs\data
>  permissions: drwx
> path 
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs':
>  
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs
>  permissions: drwx
> path 
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4':
>  
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4
>  permissions: drwx
> path 
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data':
>  
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data
>  permissions: drwx
> path 
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test': 
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target': 
>  absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs': 
>  absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project': 
>  absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win\s': 
>  absolute:F:\short\hadoop-trunk-win\s
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win': 
>  absolute:F:\short\hadoop-trunk-win
>  permissions: drwx
> path 'F:\short': 
>  absolute:F:\short
>  permissions: drwx
> path 'F:\': 
>  absolute:F:\
>  permissions: dr-x
> {code}






[jira] [Commented] (HDFS-13593) TestBlockReaderLocalLegacy#testBlockReaderLocalLegacyWithAppend fails on Windows

2018-05-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480855#comment-16480855
 ] 

Íñigo Goiri commented on HDFS-13593:


[^HDFS-13593.000.patch] fixes the standard issue solved by HDFS-13408.
TestDataNodeVolumeFailure has been consistently failing lately and it's not 
related to this JIRA.
+1


> TestBlockReaderLocalLegacy#testBlockReaderLocalLegacyWithAppend fails on 
> Windows
> 
>
> Key: HDFS-13593
> URL: https://issues.apache.org/jira/browse/HDFS-13593
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13593.000.patch
>
>
> https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs.client.impl/TestBlockReaderLocalLegacy/testBlockReaderLocalLegacyWithAppend/
>  shows error message:
> {code:java}
> Cannot remove data directory: 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs\datapath
>  
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs\data':
>  
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs\data
>  permissions: drwx
> path 
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs':
>  
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4\dfs
>  permissions: drwx
> path 
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4':
>  
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\4
>  permissions: drwx
> path 
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data':
>  
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data
>  permissions: drwx
> path 
> 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test': 
>  
> absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target': 
>  absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs': 
>  absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win\s\hadoop-hdfs-project': 
>  absolute:F:\short\hadoop-trunk-win\s\hadoop-hdfs-project
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win\s': 
>  absolute:F:\short\hadoop-trunk-win\s
>  permissions: drwx
> path 'F:\short\hadoop-trunk-win': 
>  absolute:F:\short\hadoop-trunk-win
>  permissions: drwx
> path 'F:\short': 
>  absolute:F:\short
>  permissions: drwx
> path 'F:\': 
>  absolute:F:\
>  permissions: dr-x
> {code}






[jira] [Commented] (HDDS-85) Send Container State Info while sending the container report from Datanode to SCM

2018-05-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-85?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480787#comment-16480787
 ] 

genericqa commented on HDDS-85:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
3s{color} | {color:red} hadoop-hdds/common in trunk has 19 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 27m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
16s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 15s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}150m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeysRatis |
|   | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.container.common.impl.TestContainerPersistence |
|   | 

[jira] [Commented] (HDDS-88) Create separate message structure to represent ports in DatanodeDetails

2018-05-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-88?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480771#comment-16480771
 ] 

genericqa commented on HDDS-88:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 22 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
1s{color} | {color:red} hadoop-hdds/common in trunk has 19 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
46s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
44s{color} | {color:red} hadoop-ozone/ozone-manager in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 30m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 30m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
10s{color} | {color:red} hadoop-hdds/common generated 1 new + 19 unchanged - 0 
fixed = 20 total (was 19) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
16s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 18s{color} 
| {color:red} 

[jira] [Commented] (HDDS-78) Add per volume level storage stats in SCM.

2018-05-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-78?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480705#comment-16480705
 ] 

genericqa commented on HDDS-78:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m  1s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdds.scm.container.closer.TestContainerCloser |
|   | hadoop.ozone.container.replication.TestContainerSupervisor |
|   | hadoop.ozone.container.common.TestEndPoint |
|   | hadoop.hdds.scm.node.TestNodeManager |
|   | hadoop.hdds.scm.node.TestContainerPlacement |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-78 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924119/HDDS-78.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fc888e0a9fa8 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6e99686 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDDS-Build/139/artifact/out/branch-findbugs-hadoop-hdds_server-scm-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/139/artifact/out/patch-unit-hadoop-hdds_server-scm.txt
 |
|  Test Results | 

[jira] [Updated] (HDDS-45) Removal of old OzoneRestClient

2018-05-18 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-45?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-45:

Attachment: HDDS-45.001.patch

> Removal of old OzoneRestClient
> --
>
> Key: HDDS-45
> URL: https://issues.apache.org/jira/browse/HDDS-45
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nanda kumar
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-45.001.patch
>
>
> Once the new REST-based OzoneClient is ready, the old OzoneRestClient can be 
> removed. This jira tracks that removal.






[jira] [Updated] (HDDS-45) Removal of old OzoneRestClient

2018-05-18 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-45?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-45:

Attachment: (was: HDDS-45.001.patch)

> Removal of old OzoneRestClient
> --
>
> Key: HDDS-45
> URL: https://issues.apache.org/jira/browse/HDDS-45
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nanda kumar
>Assignee: Lokesh Jain
>Priority: Major
>
> Once the new REST-based OzoneClient is ready, the old OzoneRestClient can be 
> removed. This jira tracks that removal.






[jira] [Updated] (HDDS-45) Removal of old OzoneRestClient

2018-05-18 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-45?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-45:

Attachment: HDDS-45.001.patch

> Removal of old OzoneRestClient
> --
>
> Key: HDDS-45
> URL: https://issues.apache.org/jira/browse/HDDS-45
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nanda kumar
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-45.001.patch
>
>
> Once the new REST-based OzoneClient is ready, the old OzoneRestClient can be 
> removed. This jira tracks that removal.






[jira] [Commented] (HDFS-12837) Intermittent failure TestReencryptionWithKMS#testReencryptionKMSDown

2018-05-18 Thread Zsolt Venczel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480649#comment-16480649
 ] 

Zsolt Venczel commented on HDFS-12837:
--

Thanks [~xiaochen] for the patch!

As far as I can see, by running the test suite several times, your solution gets 
rid of the race condition efficiently.

What do you think, would it be more concise to change the
{code:java}
synchronized (this) {
{code}
constructs to
{code:java}
public synchronized void
{code}
methods?
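For what it's worth, the equivalence can be illustrated with a small stand-alone sketch (the class and counter below are hypothetical, not taken from the patch): a method whose entire body is a `synchronized (this)` block locks the same monitor as a method declared `synchronized`, so the rewrite is purely stylistic.

```java
public class SyncStyles {
    private int count = 0;

    // Block form: the whole body is wrapped in synchronized (this).
    public void incrementBlock() {
        synchronized (this) {
            count++;
        }
    }

    // Method form: the suggested, more concise equivalent.
    public synchronized void incrementMethod() {
        count++;
    }

    public synchronized int getCount() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        SyncStyles s = new SyncStyles();
        Runnable work = () -> {
            for (int i = 0; i < 1000; i++) {
                s.incrementBlock();
                s.incrementMethod();
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Both forms acquire the monitor of the same object, so the two
        // threads never lose an increment.
        System.out.println(s.getCount()); // prints 4000
    }
}
```

One minor consideration in favor of the method form: the synchronization then shows up in the method signature, which can make the locking discipline easier to review.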
Also, by running this test suite multiple times, I found another test that 
behaves in a flaky way, with the following in the logs:
{code:java}
2018-05-18 07:08:35 [ERROR] 
testReencryptionBasic(org.apache.hadoop.hdfs.server.namenode.TestReencryption)  
Time elapsed: 3.731 s  <<< FAILURE!
2018-05-18 07:08:35 java.lang.AssertionError
2018-05-18 07:08:35 at org.junit.Assert.fail(Assert.java:86)
2018-05-18 07:08:35 at org.junit.Assert.assertTrue(Assert.java:41)
2018-05-18 07:08:35 at org.junit.Assert.assertTrue(Assert.java:52)
2018-05-18 07:08:35 at 
org.apache.hadoop.hdfs.server.namenode.TestReencryption.verifyZoneStatus(TestReencryption.java:604)
2018-05-18 07:08:35 at 
org.apache.hadoop.hdfs.server.namenode.TestReencryption.testReencryptionBasic(TestReencryption.java:194)
2018-05-18 07:08:35 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
2018-05-18 07:08:35 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2018-05-18 07:08:35 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2018-05-18 07:08:35 at java.lang.reflect.Method.invoke(Method.java:498)
2018-05-18 07:08:35 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
2018-05-18 07:08:35 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2018-05-18 07:08:35 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
2018-05-18 07:08:35 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2018-05-18 07:08:35 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
2018-05-18 07:08:35 at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
2018-05-18 07:08:35 at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
2018-05-18 07:08:35 
2018-05-18 07:08:35 [INFO] 
2018-05-18 07:08:35 [INFO] Results:
2018-05-18 07:08:35 [INFO] 
2018-05-18 07:08:35 [ERROR] Failures: 
2018-05-18 07:08:35 [ERROR]   
TestReencryption.testReencryptionBasic:194->verifyZoneStatus:604
{code}
I assume this should be treated as a separate issue.

> Intermittent failure TestReencryptionWithKMS#testReencryptionKMSDown
> 
>
> Key: HDFS-12837
> URL: https://issues.apache.org/jira/browse/HDFS-12837
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, test
>Affects Versions: 3.0.0-beta1
>Reporter: Surendra Singh Lilhore
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-12837.01.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/22112/testReport/org.apache.hadoop.hdfs.server.namenode/TestReencryptionWithKMS/testReencryptionKMSDown/






[jira] [Commented] (HDDS-78) Add per volume level storage stats in SCM.

2018-05-18 Thread Shashikant Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-78?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480618#comment-16480618
 ] 

Shashikant Banerjee commented on HDDS-78:
-

Patch v1 fixes the findbugs issue.

> Add per volume level storage stats in SCM. 
> ---
>
> Key: HDDS-78
> URL: https://issues.apache.org/jira/browse/HDDS-78
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-78.00.patch, HDDS-78.01.patch
>
>
> HDDS-38 adds Storage Statistics per Datanode in SCM. This Jira aims to add 
> per-volume, per-Datanode storage stats in SCM. These will be useful for 
> identifying failed volumes, out-of-space disks, and over- and under-utilized 
> disks, which will help in balancing the data within a datanode across 
> multiple disks as well as across the cluster.






[jira] [Updated] (HDDS-78) Add per volume level storage stats in SCM.

2018-05-18 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-78?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-78:

Attachment: HDDS-78.01.patch

> Add per volume level storage stats in SCM. 
> ---
>
> Key: HDDS-78
> URL: https://issues.apache.org/jira/browse/HDDS-78
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-78.00.patch, HDDS-78.01.patch
>
>
> HDDS-38 adds Storage Statistics per Datanode in SCM. This Jira aims to add 
> per-volume, per-Datanode storage stats in SCM. These will be useful for 
> identifying failed volumes, out-of-space disks, and over- and under-utilized 
> disks, which will help in balancing the data within a datanode across 
> multiple disks as well as across the cluster.






[jira] [Commented] (HDDS-87) Fix test failures with uninitialized storageLocation field in storageReport

2018-05-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-87?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480599#comment-16480599
 ] 

genericqa commented on HDDS-87:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 25s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.ozone.container.replication.TestContainerSupervisor |
|   | hadoop.hdds.scm.container.closer.TestContainerCloser |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-87 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924107/HDDS-87.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6c3b705b9b56 4.4.0-121-generic #145-Ubuntu SMP Fri Apr 13 
13:47:23 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6e99686 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDDS-Build/136/artifact/out/branch-findbugs-hadoop-hdds_server-scm-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/136/artifact/out/patch-unit-hadoop-hdds_server-scm.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/136/testReport/ |
| Max. process+thread count | 410 (vs. ulimit of 1) |
| modules | C: 

[jira] [Updated] (HDDS-85) Send Container State Info while sending the container report from Datanode to SCM

2018-05-18 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-85?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-85:

Attachment: HDDS-85.01.patch

> Send Container State Info while sending the container report from Datanode to 
> SCM
> -
>
> Key: HDDS-85
> URL: https://issues.apache.org/jira/browse/HDDS-85
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-85.00.patch, HDDS-85.01.patch
>
>
> While sending the container report, the container lifecycle state info is not 
> sent. This information will be required in the event of a datanode or disk 
> loss, to identify the open containers that need to be closed.






[jira] [Updated] (HDDS-85) Send Container State Info while sending the container report from Datanode to SCM

2018-05-18 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-85?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-85:

Summary: Send Container State Info while sending the container report from 
Datanode to SCM  (was: Send Container State while sending the container report 
from Datanode to SCM)

> Send Container State Info while sending the container report from Datanode to 
> SCM
> -
>
> Key: HDDS-85
> URL: https://issues.apache.org/jira/browse/HDDS-85
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-85.00.patch
>
>
> While sending the container report, the container lifecycle state info is not 
> sent. This information will be required in the event of a datanode or disk 
> loss, to identify the open containers that need to be closed.






[jira] [Updated] (HDDS-89) Create ozone-specific inline documentation as part of the build

2018-05-18 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-89?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-89:
-
Affects Version/s: (was: Acadia)

> Create ozone-specific inline documentation as part of the build
> --
>
> Key: HDDS-89
> URL: https://issues.apache.org/jira/browse/HDDS-89
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-89-HDDS-48.001.patch
>
>
> As the ozone/hdds distribution is separated from the hadoop distribution, we 
> need a separate documentation package. The idea is to make the documentation 
> available from the scm/ksm web pages; later it should also be uploaded to the 
> hadoop site together with the release artifacts.
> This patch creates the HTML pages from the existing markdown files during the 
> build process. It's an optional step, but if the documentation is available 
> it will be displayed on the scm/ksm web pages.






[jira] [Updated] (HDDS-89) Create ozone-specific inline documentation as part of the build

2018-05-18 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-89?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-89:
-
Fix Version/s: 0.2.1







[jira] [Updated] (HDDS-89) Create ozone specific inline documentation as part of the build

2018-05-18 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-89?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-89:
-
Fix Version/s: (was: Acadia)







[jira] [Updated] (HDDS-89) Create ozone specific inline documentation as part of the build

2018-05-18 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-89?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-89:
-
Affects Version/s: Acadia







[jira] [Commented] (HDDS-89) Create ozone specific inline documentation as part of the build

2018-05-18 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-89?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480525#comment-16480525
 ] 

Elek, Marton commented on HDDS-89:
--

No new documentation here, just a modification of the build process plus a very 
simple theme to display the rendered documentation. The theme can be 
replaced/improved later; in the current form we don't need to add anything to 
the license/notice files, as bootstrap/jquery/glyphicons are already used and 
handled.

To test, put hugo on your PATH and do a full build. After starting scm/ksm you 
can see a new documentation menu with the existing content. 
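The test steps above can be sketched as a small shell helper. This is illustrative only: the docs step is described as optional, and the exact maven invocation below is an assumption, not taken from the patch.

```shell
# Illustrative check: the documentation step only runs when hugo is
# available, so verify PATH before kicking off a full build.
check_hugo() {
  if command -v hugo >/dev/null 2>&1; then
    echo "hugo found: documentation will be rendered"
  else
    echo "hugo missing: documentation step will be skipped"
  fi
}

check_hugo
# Then do a full build (exact flags are an assumption):
#   mvn clean install -DskipTests
# and look for the new documentation menu on the scm/ksm web UI.
```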







[jira] [Updated] (HDDS-89) Create ozone specific inline documentation as part of the build

2018-05-18 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-89?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-89:
-
Status: Patch Available  (was: Open)







[jira] [Updated] (HDDS-89) Create ozone specific inline documentation as part of the build

2018-05-18 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-89?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-89:
-
Attachment: HDDS-89-HDDS-48.001.patch







[jira] [Created] (HDDS-89) Create ozone specific inline documentation as part of the build

2018-05-18 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-89:


 Summary: Create ozone specific inline documentation as part of the 
build
 Key: HDDS-89
 URL: https://issues.apache.org/jira/browse/HDDS-89
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager, SCM
Reporter: Elek, Marton
Assignee: Elek, Marton
 Fix For: Acadia


As the ozone/hdds distribution is separated from the hadoop distribution, we need 
a separate documentation package. The idea is to make the documentation 
available from the scm/ksm web pages; later it should also be uploaded to 
the hadoop site together with the release artifacts.

This patch creates the HTML pages from the existing markdown files in the build 
process. It is an optional step, but if the documentation is available it will 
be displayed on the scm/ksm web page.






[jira] [Assigned] (HDDS-87) Fix test failures with uninitialized storageLocation field in storageReport

2018-05-18 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-87?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee reassigned HDDS-87:
---

Assignee: Shashikant Banerjee

> Fix test failures with uninitialized storageLocation field in storageReport
> ---
>
> Key: HDDS-87
> URL: https://issues.apache.org/jira/browse/HDDS-87
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-87.00.patch
>
>







[jira] [Commented] (HDDS-87) Fix test failures with uninitialized storageLocation field in storageReport

2018-05-18 Thread Shashikant Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-87?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480516#comment-16480516
 ] 

Shashikant Banerjee commented on HDDS-87:
-

Patch v0 fixes the related test failures with uninitialized storageLocation 
field in storageReport.








[jira] [Updated] (HDDS-87) Fix test failures with uninitialized storageLocation field in storageReport

2018-05-18 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-87?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-87:

Attachment: HDDS-87.00.patch








[jira] [Updated] (HDDS-87) Fix test failures with uninitialized storageLocation field in storageReport

2018-05-18 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-87?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-87:

Status: Patch Available  (was: Open)








[jira] [Updated] (HDDS-87) Fix test failures with uninitialized storageLocation field in storageReport

2018-05-18 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-87?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-87:
--
Parent Issue: HDDS-26  (was: HDDS-76)








[jira] [Updated] (HDDS-87) Fix test failures with uninitialized storageLocation field in storageReport

2018-05-18 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-87?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-87:

Issue Type: Sub-task  (was: Bug)
Parent: HDDS-76








[jira] [Updated] (HDDS-87) Fix test failures with uninitialized storageLocation field in storageReport

2018-05-18 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-87?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-87:

Summary: Fix test failures with uninitialized storageLocation field in 
storageReport  (was: Make storageLocation field in StorageReport protoBuf 
message optional)








[jira] [Commented] (HDDS-85) Send Container State while sending the container report from Datanode to SCM

2018-05-18 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-85?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480483#comment-16480483
 ] 

genericqa commented on HDDS-85:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
55s{color} | {color:red} hadoop-hdds/common in trunk has 19 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 35s{color} | 
{color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 35s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
20s{color} | {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
18s{color} | {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
3s{color} | {color:green} common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 22s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-85 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924087/HDDS-85.00.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  javadoc  
mvninstall  shadedclient  findbugs  checkstyle  |
| uname | Linux a93236ecf58e 4.4.0-121-generic #145-Ubuntu SMP Fri Apr 13 
13:47:23 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Created] (HDFS-13594) the lock of ShortCircuitCache is held while closing the ShortCircuitReplica

2018-05-18 Thread Gang Xie (JIRA)
Gang Xie created HDFS-13594:
---

 Summary: the lock of ShortCircuitCache is held while closing the 
ShortCircuitReplica  
 Key: HDFS-13594
 URL: https://issues.apache.org/jira/browse/HDFS-13594
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 3.0.2
Reporter: Gang Xie
 Attachments: no_hdfs.svg

While profiling short-circuit (SC) reads, we found that ShortCircuitCache's lock 
is a hot spot. After looking into the code, we found that when a 
BlockReaderLocal is closed, it tries to trimEvictionMaps, and several 
ShortCircuitReplicas are closed while the lock is held. This slows down the 
close of the BlockReaderLocal, and worse, it blocks other threads from 
allocating new ShortCircuitReplicas. 

An idea to avoid this is to close the replicas asynchronously. I will do a 
prototype and measure the performance.
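A minimal sketch of the async-close idea described above. The names trimEvictionMaps and Replica are borrowed from the description for readability, but this is a standalone illustration under my own assumptions, not HDFS's actual ShortCircuitCache: evicted entries are unlinked while holding the lock, and the potentially slow close() calls run on a background executor after the lock is released.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: hold the cache lock only long enough to unlink the
// evicted replicas, then close them asynchronously so that other threads
// can allocate new replicas without waiting on slow close() calls.
public class AsyncEvictSketch {
    static final AtomicInteger closedCount = new AtomicInteger();

    static class Replica {
        void close() { closedCount.incrementAndGet(); } // stands in for munmap/fd close
    }

    final Object lock = new Object();
    final Queue<Replica> cache = new ArrayDeque<>();
    final ExecutorService closer = Executors.newSingleThreadExecutor();

    void trimEvictionMaps(int keep) {
        final Queue<Replica> evicted = new ArrayDeque<>();
        synchronized (lock) {            // lock held only to unlink entries
            while (cache.size() > keep) {
                evicted.add(cache.poll());
            }
        }
        // Close outside the lock, on the background executor.
        closer.submit(() -> evicted.forEach(Replica::close));
    }

    public static void main(String[] args) throws InterruptedException {
        AsyncEvictSketch c = new AsyncEvictSketch();
        for (int i = 0; i < 5; i++) {
            c.cache.add(new Replica());
        }
        c.trimEvictionMaps(2);           // evicts 3 replicas, closes them off-thread
        c.closer.shutdown();
        c.closer.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("closed=" + closedCount.get()); // prints "closed=3"
    }
}
```

The trade-off to measure in a real prototype is that close becomes eventually consistent: a replica may still hold its file descriptor briefly after eviction, so resource accounting has to tolerate that window.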





