[jira] [Commented] (HDFS-15597) ContentSummary.getSpaceConsumed does not consider replication

2020-10-02 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17206602#comment-17206602
 ] 

Hadoop QA commented on HDFS-15597:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m  
3s{color} |  | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} |  | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch does not contain any @author tags. 
{color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} |  | {color:green} The patch appears to include 1 new or modified 
test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
52s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
10s{color} |  | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
31s{color} |  | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 46s{color} |  | {color:green} branch has no errors when building and 
testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} |  | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} |  | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
16s{color} |  | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} |  | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
16s{color} |  | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
16s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
33s{color} |  | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
33s{color} |  | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | 
[/diff-checkstyle-hadoop-common-project_hadoop-common.txt|https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/218/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt]
 | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 
new + 54 unchanged - 0 fixed = 55 total (was 54) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} |  | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 28s{color} |  | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} |  | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} |  | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} |  | {color:green} 

[jira] [Work logged] (HDFS-15458) TestNameNodeRetryCacheMetrics fails intermittently

2020-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15458?focusedWorklogId=494235&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-494235
 ]

ASF GitHub Bot logged work on HDFS-15458:
-

Author: ASF GitHub Bot
Created on: 03/Oct/20 03:17
Start Date: 03/Oct/20 03:17
Worklog Time Spent: 10m 
  Work Description: ferhui commented on pull request #2356:
URL: https://github.com/apache/hadoop/pull/2356#issuecomment-703038301


   and 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2356/2/Yetus_20Report/



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 494235)
Time Spent: 50m  (was: 40m)

> TestNameNodeRetryCacheMetrics fails intermittently
> --
>
> Key: HDFS-15458
> URL: https://issues.apache.org/jira/browse/HDFS-15458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Ahmed Hussein
>Assignee: Hui Fei
>Priority: Major
>  Labels: pull-request-available, test
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {{TestNameNodeRetryCacheMetrics}} fails intermittently on trunk
> {code:bash}
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.604 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics
> [ERROR] 
> testRetryCacheMetrics(org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics)
>   Time elapsed: 9.512 s  <<< FAILURE!
> java.lang.AssertionError: CacheHit expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.checkMetrics(TestNameNodeRetryCacheMetrics.java:103)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.testRetryCacheMetrics(TestNameNodeRetryCacheMetrics.java:95)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HDFS-15458) TestNameNodeRetryCacheMetrics fails intermittently

2020-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15458?focusedWorklogId=494234&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-494234
 ]

ASF GitHub Bot logged work on HDFS-15458:
-

Author: ASF GitHub Bot
Created on: 03/Oct/20 03:17
Start Date: 03/Oct/20 03:17
Worklog Time Spent: 10m 
  Work Description: ferhui commented on pull request #2356:
URL: https://github.com/apache/hadoop/pull/2356#issuecomment-703038255


   Test result is 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2356/2/testReport/



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 494234)
Time Spent: 40m  (was: 0.5h)

> TestNameNodeRetryCacheMetrics fails intermittently
> --
>
> Key: HDFS-15458
> URL: https://issues.apache.org/jira/browse/HDFS-15458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Ahmed Hussein
>Assignee: Hui Fei
>Priority: Major
>  Labels: pull-request-available, test
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {{TestNameNodeRetryCacheMetrics}} fails intermittently on trunk
> {code:bash}
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.604 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics
> [ERROR] 
> testRetryCacheMetrics(org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics)
>   Time elapsed: 9.512 s  <<< FAILURE!
> java.lang.AssertionError: CacheHit expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.checkMetrics(TestNameNodeRetryCacheMetrics.java:103)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.testRetryCacheMetrics(TestNameNodeRetryCacheMetrics.java:95)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HDFS-15458) TestNameNodeRetryCacheMetrics fails intermittently

2020-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15458?focusedWorklogId=494233&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-494233
 ]

ASF GitHub Bot logged work on HDFS-15458:
-

Author: ASF GitHub Bot
Created on: 03/Oct/20 03:15
Start Date: 03/Oct/20 03:15
Worklog Time Spent: 10m 
  Work Description: ferhui commented on pull request #2356:
URL: https://github.com/apache/hadoop/pull/2356#issuecomment-703038144


   Failed tests are unrelated. 
TestNameNodeRetryCacheMetrics#testRetryCacheMetrics passes stably with the fix.
   @goiri @ayushtkn Please review again. Thanks!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 494233)
Time Spent: 0.5h  (was: 20m)

> TestNameNodeRetryCacheMetrics fails intermittently
> --
>
> Key: HDFS-15458
> URL: https://issues.apache.org/jira/browse/HDFS-15458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Ahmed Hussein
>Assignee: Hui Fei
>Priority: Major
>  Labels: pull-request-available, test
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {{TestNameNodeRetryCacheMetrics}} fails intermittently on trunk
> {code:bash}
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.604 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics
> [ERROR] 
> testRetryCacheMetrics(org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics)
>   Time elapsed: 9.512 s  <<< FAILURE!
> java.lang.AssertionError: CacheHit expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.checkMetrics(TestNameNodeRetryCacheMetrics.java:103)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.testRetryCacheMetrics(TestNameNodeRetryCacheMetrics.java:95)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian 

[jira] [Commented] (HDFS-15597) ContentSummary.getSpaceConsumed does not consider replication

2020-10-02 Thread Aihua Xu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17206583#comment-17206583
 ] 

Aihua Xu commented on HDFS-15597:
-

[~weichiu] Can you review the simple fix? Thanks.

> ContentSummary.getSpaceConsumed does not consider replication
> -
>
> Key: HDFS-15597
> URL: https://issues.apache.org/jira/browse/HDFS-15597
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfs
>Affects Versions: 2.6.0
>Reporter: Ajmal Ahammed
>Assignee: Aihua Xu
>Priority: Minor
> Attachments: HDFS-15597.patch
>
>
> I am trying to get the disk space consumed by an HDFS directory using the 
> {{ContentSummary.getSpaceConsumed}} method. I can't get the space consumption 
> correctly considering the replication factor. The replication factor is 2, so I 
> was expecting twice the actual file size from the above method.
> {code}
> ubuntu@ubuntu:~/ht$ sudo -u hdfs hdfs dfs -ls /var/lib/ubuntu
> Found 2 items
> -rw-r--r--   2 ubuntu ubuntu3145728 2020-09-08 09:55 
> /var/lib/ubuntu/size-test
> drwxrwxr-x   - ubuntu ubuntu  0 2020-09-07 06:37 /var/lib/ubuntu/test
> {code}
> But when I run the following code,
> {code}
> String path = "/etc/hadoop/conf/";
> conf.addResource(new Path(path + "core-site.xml"));
> conf.addResource(new Path(path + "hdfs-site.xml"));
> long size = 
> FileContext.getFileContext(conf).util().getContentSummary(fileStatus).getSpaceConsumed();
> System.out.println("Replication : " + fileStatus.getReplication());
> System.out.println("File size : " + size);
> {code}
> The output is
> {code}
> Replication : 0
> File size : 3145728
> {code}
> Both the file size and the replication factor seem to be incorrect.
> /etc/hadoop/conf/hdfs-site.xml contains the following config:
> {code}
>   <property>
>     <name>dfs.replication</name>
>     <value>2</value>
>   </property>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15597) ContentSummary.getSpaceConsumed does not consider replication

2020-10-02 Thread Aihua Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HDFS-15597:

Status: Patch Available  (was: Open)

Patch-1: updates the getContentSummary function to consider replication for the 
spaceConsumed field.
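
For reference, here is a minimal client-side sketch of the behavior the patch 
targets (not the patch itself); it only uses the public {{FileSystem}} / 
{{ContentSummary}} API and the file from the report:

{code:java}
// Hedged illustration only, not the attached patch: check whether spaceConsumed
// reflects replication for a replicated file.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SpaceConsumedCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
    conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));

    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/var/lib/ubuntu/size-test");   // file from the report

    FileStatus status = fs.getFileStatus(file);          // status of the file itself, not its parent directory
    ContentSummary summary = fs.getContentSummary(file);

    System.out.println("Replication   : " + status.getReplication());
    System.out.println("Length        : " + summary.getLength());        // logical size, 3145728 in the report
    System.out.println("Space consumed: " + summary.getSpaceConsumed()); // expected: length * replication
  }
}
{code}

Note that the {{Replication : 0}} in the report is what a directory 
{{FileStatus}} returns; calling {{getFileStatus}} on the file itself should 
report 2 with the configuration above.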

> ContentSummary.getSpaceConsumed does not consider replication
> -
>
> Key: HDFS-15597
> URL: https://issues.apache.org/jira/browse/HDFS-15597
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfs
>Affects Versions: 2.6.0
>Reporter: Ajmal Ahammed
>Assignee: Aihua Xu
>Priority: Minor
> Attachments: HDFS-15597.patch
>
>
> I am trying to get the disk space consumed by an HDFS directory using the 
> {{ContentSummary.getSpaceConsumed}} method. I can't get the space consumption 
> correctly considering the replication factor. The replication factor is 2, so I 
> was expecting twice the actual file size from the above method.
> {code}
> ubuntu@ubuntu:~/ht$ sudo -u hdfs hdfs dfs -ls /var/lib/ubuntu
> Found 2 items
> -rw-r--r--   2 ubuntu ubuntu3145728 2020-09-08 09:55 
> /var/lib/ubuntu/size-test
> drwxrwxr-x   - ubuntu ubuntu  0 2020-09-07 06:37 /var/lib/ubuntu/test
> {code}
> But when I run the following code,
> {code}
> String path = "/etc/hadoop/conf/";
> conf.addResource(new Path(path + "core-site.xml"));
> conf.addResource(new Path(path + "hdfs-site.xml"));
> long size = 
> FileContext.getFileContext(conf).util().getContentSummary(fileStatus).getSpaceConsumed();
> System.out.println("Replication : " + fileStatus.getReplication());
> System.out.println("File size : " + size);
> {code}
> The output is
> {code}
> Replication : 0
> File size : 3145728
> {code}
> Both the file size and the replication factor seem to be incorrect.
> /etc/hadoop/conf/hdfs-site.xml contains the following config:
> {code}
>   <property>
>     <name>dfs.replication</name>
>     <value>2</value>
>   </property>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15597) ContentSummary.getSpaceConsumed does not consider replication

2020-10-02 Thread Aihua Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HDFS-15597:

Attachment: HDFS-15597.patch

> ContentSummary.getSpaceConsumed does not consider replication
> -
>
> Key: HDFS-15597
> URL: https://issues.apache.org/jira/browse/HDFS-15597
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfs
>Affects Versions: 2.6.0
>Reporter: Ajmal Ahammed
>Assignee: Aihua Xu
>Priority: Minor
> Attachments: HDFS-15597.patch
>
>
> I am trying to get the disk space consumed by an HDFS directory using the 
> {{ContentSummary.getSpaceConsumed}} method. I can't get the space consumption 
> correctly considering the replication factor. The replication factor is 2, so I 
> was expecting twice the actual file size from the above method.
> {code}
> ubuntu@ubuntu:~/ht$ sudo -u hdfs hdfs dfs -ls /var/lib/ubuntu
> Found 2 items
> -rw-r--r--   2 ubuntu ubuntu3145728 2020-09-08 09:55 
> /var/lib/ubuntu/size-test
> drwxrwxr-x   - ubuntu ubuntu  0 2020-09-07 06:37 /var/lib/ubuntu/test
> {code}
> But when I run the following code,
> {code}
> String path = "/etc/hadoop/conf/";
> conf.addResource(new Path(path + "core-site.xml"));
> conf.addResource(new Path(path + "hdfs-site.xml"));
> long size = 
> FileContext.getFileContext(conf).util().getContentSummary(fileStatus).getSpaceConsumed();
> System.out.println("Replication : " + fileStatus.getReplication());
> System.out.println("File size : " + size);
> {code}
> The output is
> {code}
> Replication : 0
> File size : 3145728
> {code}
> Both the file size and the replication factor seem to be incorrect.
> /etc/hadoop/conf/hdfs-site.xml contains the following config:
> {code}
>   <property>
>     <name>dfs.replication</name>
>     <value>2</value>
>   </property>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15458) TestNameNodeRetryCacheMetrics fails intermittently

2020-10-02 Thread Hui Fei (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17206509#comment-17206509
 ] 

Hui Fei commented on HDFS-15458:


[~elgoiri] [~ayushtkn] Updated the GitHub PR. Please review again. Thanks.

> TestNameNodeRetryCacheMetrics fails intermittently
> --
>
> Key: HDFS-15458
> URL: https://issues.apache.org/jira/browse/HDFS-15458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Ahmed Hussein
>Assignee: Hui Fei
>Priority: Major
>  Labels: pull-request-available, test
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{TestNameNodeRetryCacheMetrics}} fails intermittently on trunk
> {code:bash}
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.604 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics
> [ERROR] 
> testRetryCacheMetrics(org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics)
>   Time elapsed: 9.512 s  <<< FAILURE!
> java.lang.AssertionError: CacheHit expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.checkMetrics(TestNameNodeRetryCacheMetrics.java:103)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.testRetryCacheMetrics(TestNameNodeRetryCacheMetrics.java:95)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15458) TestNameNodeRetryCacheMetrics fails intermittently

2020-10-02 Thread Hui Fei (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17206497#comment-17206497
 ] 

Hui Fei commented on HDFS-15458:


[~ayushtkn] Yes, you are right, HDFS-15350 causes this.
Clever fix.

> TestNameNodeRetryCacheMetrics fails intermittently
> --
>
> Key: HDFS-15458
> URL: https://issues.apache.org/jira/browse/HDFS-15458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Ahmed Hussein
>Assignee: Hui Fei
>Priority: Major
>  Labels: pull-request-available, test
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{TestNameNodeRetryCacheMetrics}} fails intermittently on trunk
> {code:bash}
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.604 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics
> [ERROR] 
> testRetryCacheMetrics(org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics)
>   Time elapsed: 9.512 s  <<< FAILURE!
> java.lang.AssertionError: CacheHit expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.checkMetrics(TestNameNodeRetryCacheMetrics.java:103)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.testRetryCacheMetrics(TestNameNodeRetryCacheMetrics.java:95)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15597) ContentSummary.getSpaceConsumed does not consider replication

2020-10-02 Thread Aihua Xu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17206492#comment-17206492
 ] 

Aihua Xu commented on HDFS-15597:
-

Let me take a look.

> ContentSummary.getSpaceConsumed does not consider replication
> -
>
> Key: HDFS-15597
> URL: https://issues.apache.org/jira/browse/HDFS-15597
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfs
>Affects Versions: 2.6.0
>Reporter: Ajmal Ahammed
>Assignee: Aihua Xu
>Priority: Minor
>
> I am trying to get the disk space consumed by an HDFS directory using the 
> {{ContentSummary.getSpaceConsumed}} method. I can't get the space consumption 
> correctly considering the replication factor. The replication factor is 2, so I 
> was expecting twice the actual file size from the above method.
> {code}
> ubuntu@ubuntu:~/ht$ sudo -u hdfs hdfs dfs -ls /var/lib/ubuntu
> Found 2 items
> -rw-r--r--   2 ubuntu ubuntu3145728 2020-09-08 09:55 
> /var/lib/ubuntu/size-test
> drwxrwxr-x   - ubuntu ubuntu  0 2020-09-07 06:37 /var/lib/ubuntu/test
> {code}
> But when I run the following code,
> {code}
> String path = "/etc/hadoop/conf/";
> conf.addResource(new Path(path + "core-site.xml"));
> conf.addResource(new Path(path + "hdfs-site.xml"));
> long size = 
> FileContext.getFileContext(conf).util().getContentSummary(fileStatus).getSpaceConsumed();
> System.out.println("Replication : " + fileStatus.getReplication());
> System.out.println("File size : " + size);
> {code}
> The output is
> {code}
> Replication : 0
> File size : 3145728
> {code}
> Both the file size and the replication factor seem to be incorrect.
> /etc/hadoop/conf/hdfs-site.xml contains the following config:
> {code}
>   <property>
>     <name>dfs.replication</name>
>     <value>2</value>
>   </property>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-15597) ContentSummary.getSpaceConsumed does not consider replication

2020-10-02 Thread Aihua Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu reassigned HDFS-15597:
---

Assignee: Aihua Xu

> ContentSummary.getSpaceConsumed does not consider replication
> -
>
> Key: HDFS-15597
> URL: https://issues.apache.org/jira/browse/HDFS-15597
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfs
>Affects Versions: 2.6.0
>Reporter: Ajmal Ahammed
>Assignee: Aihua Xu
>Priority: Minor
>
> I am trying to get the disk space consumed by an HDFS directory using the 
> {{ContentSummary.getSpaceConsumed}} method. I can't get the space consumption 
> correctly considering the replication factor. The replication factor is 2, so I 
> was expecting twice the actual file size from the above method.
> {code}
> ubuntu@ubuntu:~/ht$ sudo -u hdfs hdfs dfs -ls /var/lib/ubuntu
> Found 2 items
> -rw-r--r--   2 ubuntu ubuntu3145728 2020-09-08 09:55 
> /var/lib/ubuntu/size-test
> drwxrwxr-x   - ubuntu ubuntu  0 2020-09-07 06:37 /var/lib/ubuntu/test
> {code}
> But when I run the following code,
> {code}
> String path = "/etc/hadoop/conf/";
> conf.addResource(new Path(path + "core-site.xml"));
> conf.addResource(new Path(path + "hdfs-site.xml"));
> long size = 
> FileContext.getFileContext(conf).util().getContentSummary(fileStatus).getSpaceConsumed();
> System.out.println("Replication : " + fileStatus.getReplication());
> System.out.println("File size : " + size);
> {code}
> The output is
> {code}
> Replication : 0
> File size : 3145728
> {code}
> Both the file size and the replication factor seem to be incorrect.
> /etc/hadoop/conf/hdfs-site.xml contains the following config:
> {code}
>   <property>
>     <name>dfs.replication</name>
>     <value>2</value>
>   </property>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-10-02 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17206484#comment-17206484
 ] 

Ayush Saxena commented on HDFS-15025:
-

Thanks [~huangtianhua] for the details. From the comment I couldn't find any 
related test. Did you just run the UTs with and without the patch, and in the 
last run just DFSIO? The problem was supposed to show up with 
{{setQuotaWithStorageType}}; I am not sure you tried that as well, something 
along the lines of the sketch below.
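
For illustration, roughly this kind of check is what I mean (a sketch only: the 
path, quota value, and the ALL_NVDIMM policy name are assumptions based on the 
attached design doc; {{setQuotaByStorageType}} is the existing 
{{DistributedFileSystem}} API):

{code:java}
// Hedged sketch, not a test from the patch: exercise quota-by-storage-type with the
// new NVDIMM type. Assumes a MiniDFSCluster whose DataNodes have an [NVDIMM] volume.
DistributedFileSystem dfs = cluster.getFileSystem();
Path dir = new Path("/nvdimmQuotaTest");                       // hypothetical path
dfs.mkdirs(dir);
dfs.setStoragePolicy(dir, "ALL_NVDIMM");                       // policy name assumed from the design doc
dfs.setQuotaByStorageType(dir, StorageType.NVDIMM, 4L << 20);  // 4 MB quota on NVDIMM storage

// Writing more NVDIMM-placed replica data than that should fail with
// QuotaByStorageTypeExceededException; a broken per-type accounting for
// NVDIMM would show up here.
{code}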

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD, and it can be used 
> simultaneously with RAM, DISK, and SSD. HDFS data stored directly on NVDIMM 
> not only improves the response rate of HDFS but also ensures the 
> reliability of the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15458) TestNameNodeRetryCacheMetrics fails intermittently

2020-10-02 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17206417#comment-17206417
 ] 

Ayush Saxena commented on HDFS-15458:
-

This seems to have been broken by HDFS-15350.
Would adding just this also fix it?
{code:java}
conf.setBoolean(HdfsClientConfigKeys.Failover.RANDOM_ORDER, false);
{code}
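
For reference, a minimal sketch of where that would sit in the test setup (the 
topology and node counts here are assumptions, not the actual test code):

{code:java}
// Hedged sketch of the suggested alternative: keep the HA topology but make the
// client's NameNode ordering deterministic, so the retry-cache assertions always
// hit the NameNode the test transitions to active.
Configuration conf = new HdfsConfiguration();
// HDFS-15350 enabled dfs.client.failover.random.order by default; turning it off
// restores the pre-HDFS-15350 ordering for ConfiguredFailoverProxyProvider.
conf.setBoolean(HdfsClientConfigKeys.Failover.RANDOM_ORDER, false);

MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
    .nnTopology(MiniDFSNNTopology.simpleHATopology())
    .numDataNodes(3)
    .build();
cluster.waitActive();
cluster.transitionToActive(0);   // client requests and retries should now go to nn0 first
{code}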


> TestNameNodeRetryCacheMetrics fails intermittently
> --
>
> Key: HDFS-15458
> URL: https://issues.apache.org/jira/browse/HDFS-15458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Ahmed Hussein
>Assignee: Hui Fei
>Priority: Major
>  Labels: pull-request-available, test
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{TestNameNodeRetryCacheMetrics}} fails intermittently on trunk
> {code:bash}
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.604 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics
> [ERROR] 
> testRetryCacheMetrics(org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics)
>   Time elapsed: 9.512 s  <<< FAILURE!
> java.lang.AssertionError: CacheHit expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.checkMetrics(TestNameNodeRetryCacheMetrics.java:103)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.testRetryCacheMetrics(TestNameNodeRetryCacheMetrics.java:95)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15458) TestNameNodeRetryCacheMetrics fails intermittently

2020-10-02 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17206410#comment-17206410
 ] 

Íñigo Goiri commented on HDFS-15458:


Aren't we losing a little bit of coverage with this change?
I would prefer to still have Active/Standby if possible.

> TestNameNodeRetryCacheMetrics fails intermittently
> --
>
> Key: HDFS-15458
> URL: https://issues.apache.org/jira/browse/HDFS-15458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Ahmed Hussein
>Assignee: Hui Fei
>Priority: Major
>  Labels: pull-request-available, test
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{TestNameNodeRetryCacheMetrics}} fails intermittently on trunk
> {code:bash}
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.604 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics
> [ERROR] 
> testRetryCacheMetrics(org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics)
>   Time elapsed: 9.512 s  <<< FAILURE!
> java.lang.AssertionError: CacheHit expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.checkMetrics(TestNameNodeRetryCacheMetrics.java:103)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.testRetryCacheMetrics(TestNameNodeRetryCacheMetrics.java:95)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-5389) A Namenode that keeps only a part of the namespace in memory

2020-10-02 Thread Aihua Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-5389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu reassigned HDFS-5389:
--

Assignee: Aihua Xu  (was: Haohui Mai)

> A Namenode that keeps only a part of the namespace in memory
> 
>
> Key: HDFS-5389
> URL: https://issues.apache.org/jira/browse/HDFS-5389
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 0.23.1
>Reporter: Lin Xiao
>Assignee: Aihua Xu
>Priority: Minor
>
> *Background:*
> Currently, the NN keeps all its namespace in memory. This has had the benefit 
> that the NN code is very simple and, more importantly, helps the NN scale to 
> over 4.5K machines with 60K to 100K concurrent tasks. HDFS namespace can 
> be scaled currently using more RAM on the NN and/or using Federation, which 
> scales both namespace and performance. The current federation implementation 
> does not allow renames across volumes without data copying but there are 
> proposals to remove that limitation.
> *Motivation:*
>  Hadoop lets customers store huge amounts of data at very economical prices 
> and hence allows customers to store their data for several years. While most 
> customers perform analytics on recent  data (last hour, day, week, months, 
> quarter, year), the ability to have five year old data online for analytics 
> is very attractive for many businesses. Although one can use larger RAM in an 
> NN and/or use Federation, it is not really necessary to store the entire 
> namespace in memory since only the recent data is typically heavily accessed. 
> *Proposed Solution:*
> Store a portion of the NN's namespace in memory- the "working set" of the 
> applications that are currently operating. LSM data structures are quite 
> appropriate for maintaining the full namespace in memory. One choice is 
> Google's LevelDB open-source implementation.
> *Benefits:*
>  *  Store larger namespaces without resorting to Federated namespace volumes.
>  * Complementary to NN Federated namespace volumes,  indeed will allow a 
> single NN to easily store multiple larger volumes.
>  *  Faster cold startup - the NN does not have to read its full namespace before 
> responding to clients.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-5389) A Namenode that keeps only a part of the namespace in memory

2020-10-02 Thread Aihua Xu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-5389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17206394#comment-17206394
 ] 

Aihua Xu commented on HDFS-5389:


Seems there has been no activity on this, but it sounds like a great area to help 
scale the NN and reduce NN memory pressure. I will take a look. cc [~weichiu] and [~yzhangal]

> A Namenode that keeps only a part of the namespace in memory
> 
>
> Key: HDFS-5389
> URL: https://issues.apache.org/jira/browse/HDFS-5389
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 0.23.1
>Reporter: Lin Xiao
>Assignee: Haohui Mai
>Priority: Minor
>
> *Background:*
> Currently, the NN keeps all its namespace in memory. This has had the benefit 
> that the NN code is very simple and, more importantly, helps the NN scale to 
> over 4.5K machines with 60K to 100K concurrent tasks. HDFS namespace can 
> be scaled currently using more RAM on the NN and/or using Federation, which 
> scales both namespace and performance. The current federation implementation 
> does not allow renames across volumes without data copying but there are 
> proposals to remove that limitation.
> *Motivation:*
>  Hadoop lets customers store huge amounts of data at very economical prices 
> and hence allows customers to store their data for several years. While most 
> customers perform analytics on recent  data (last hour, day, week, months, 
> quarter, year), the ability to have five year old data online for analytics 
> is very attractive for many businesses. Although one can use larger RAM in an 
> NN and/or use Federation, it is not really necessary to store the entire 
> namespace in memory since only the recent data is typically heavily accessed. 
> *Proposed Solution:*
> Store a portion of the NN's namespace in memory- the "working set" of the 
> applications that are currently operating. LSM data structures are quite 
> appropriate for maintaining the full namespace in memory. One choice is 
> Google's LevelDB open-source implementation.
> *Benefits:*
>  *  Store larger namespaces without resorting to Federated namespace volumes.
>  * Complementary to NN Federated namespace volumes,  indeed will allow a 
> single NN to easily store multiple larger volumes.
>  *  Faster cold startup - the NN does not have to read its full namespace before 
> responding to clients.
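
As a rough sketch of the working-set idea described above (illustrative only; 
the class, key layout, and the iq80 LevelDB API usage are assumptions, not 
existing NameNode code):

{code:java}
// Hedged sketch of the proposal: a bounded in-memory working set of inodes backed
// by an on-disk LSM store (LevelDB here) that holds the full namespace.
import java.util.LinkedHashMap;
import java.util.Map;
import org.iq80.leveldb.DB;

class PartialNamespace {
  private final DB onDiskNamespace;            // full namespace, LSM-backed
  private final Map<Long, byte[]> workingSet;  // hot inodes only, LRU-bounded

  PartialNamespace(DB db, final int capacity) {
    this.onDiskNamespace = db;
    this.workingSet = new LinkedHashMap<Long, byte[]>(capacity, 0.75f, true) {
      @Override
      protected boolean removeEldestEntry(Map.Entry<Long, byte[]> eldest) {
        return size() > capacity;              // evict cold inodes from memory
      }
    };
  }

  /** Serve hot inodes from memory; fall back to the on-disk store on a miss. */
  byte[] getInode(long inodeId) {
    byte[] serialized = workingSet.get(inodeId);
    if (serialized == null) {
      serialized = onDiskNamespace.get(key(inodeId));
      if (serialized != null) {
        workingSet.put(inodeId, serialized);
      }
    }
    return serialized;
  }

  private static byte[] key(long v) {
    byte[] k = new byte[8];
    for (int i = 7; i >= 0; i--) { k[i] = (byte) (v & 0xff); v >>>= 8; }
    return k;
  }
}
{code}

The faster cold start listed in the benefits falls out of this shape: nothing 
forces the full namespace to be loaded before the store can answer lookups.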



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15042) Add more tests for ByteBufferPositionedReadable

2020-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15042?focusedWorklogId=493987&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-493987
 ]

ASF GitHub Bot logged work on HDFS-15042:
-

Author: ASF GitHub Bot
Created on: 02/Oct/20 15:07
Start Date: 02/Oct/20 15:07
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #1747:
URL: https://github.com/apache/hadoop/pull/1747#issuecomment-667206558


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  1s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m  3s |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 48s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  20m 48s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   3m  8s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 27s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 24s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 38s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 49s |  hadoop-hdfs-client in trunk failed with 
JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 43s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | +1 :green_heart: |  javadoc  |   2m 35s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   3m  6s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   7m 50s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 52s |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 17s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  21m 17s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 46s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  18m 46s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 42s |  root: The patch generated 2 new 
+ 54 unchanged - 0 fixed = 56 total (was 54)  |
   | +1 :green_heart: |  mvnsite  |   3m 57s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 12s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 36s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 45s |  hadoop-hdfs-client in the patch failed 
with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 47s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | +1 :green_heart: |  javadoc  |   2m 31s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | -1 :x: |  findbugs  |   0m 51s |  hadoop-hdfs-client in the patch failed.  
|
   | -1 :x: |  findbugs  |   1m 17s |  hadoop-hdfs in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 48s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  |   0m 52s |  hadoop-hdfs-client in the patch failed.  |
   | -1 :x: |  unit  |   1m 15s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 195m 50s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1747/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1747 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 23156885e1bd 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 

[jira] [Updated] (HDFS-15042) Add more tests for ByteBufferPositionedReadable

2020-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15042:
--
Labels: pull-request-available  (was: )

> Add more tests for ByteBufferPositionedReadable 
> 
>
> Key: HDFS-15042
> URL: https://issues.apache.org/jira/browse/HDFS-15042
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are a few corner cases of ByteBufferPositionedReadable which need to be 
> tested, mainly illegal read positions. Add them.
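
As a rough illustration of the kind of corner case meant here, a test along the following 
lines could probe an illegal (negative) read position. The file layout, cluster setup, and 
the expected exception types are assumptions for illustration, not the tests actually added 
by the patch.

{code:java}
import java.io.EOFException;
import java.nio.ByteBuffer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSTestUtil;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;

import static org.junit.Assert.fail;

public class TestPositionedReadCornerCases {

  @Test
  public void testNegativeReadPositionRejected() throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      FileSystem fs = cluster.getFileSystem();
      Path file = new Path("/corner-case-test");
      // Create a small test file so the stream has real data behind it.
      DFSTestUtil.createFile(fs, file, 1024, (short) 1, 0L);
      try (FSDataInputStream in = fs.open(file)) {
        ByteBuffer buf = ByteBuffer.allocate(64);
        try {
          in.read(-1L, buf);  // illegal read position
          fail("a negative read position should be rejected");
        } catch (EOFException | IllegalArgumentException expected) {
          // The precise exception type is implementation-defined; catching both is an assumption.
        }
      }
    } finally {
      cluster.shutdown();
    }
  }
}
{code}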



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15042) Add more tests for ByteBufferPositionedReadable

2020-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15042?focusedWorklogId=493986&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-493986
 ]

ASF GitHub Bot logged work on HDFS-15042:
-

Author: ASF GitHub Bot
Created on: 02/Oct/20 15:07
Start Date: 02/Oct/20 15:07
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #1747:
URL: https://github.com/apache/hadoop/pull/1747#issuecomment-633091515


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |  22m 24s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  4s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 29s |  trunk passed  |
   | +1 :green_heart: |  compile  |  16m 56s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 43s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m  2s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m  1s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 53s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m  0s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   7m 33s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 43s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 14s |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m 14s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 37s |  root: The patch generated 2 new 
+ 54 unchanged - 0 fixed = 56 total (was 54)  |
   | +1 :green_heart: |  mvnsite  |   3m 59s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 26s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 35s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   8m  9s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 41s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   2m 13s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 100m 16s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  5s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 259m 51s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1747/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1747 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux bd8c86a3ef21 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4d22d1c58f0 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1747/3/artifact/out/diff-checkstyle-root.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1747/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1747/3/testReport/ |
   | Max. process+thread count | 4149 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1747/3/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Commented] (HDFS-15458) TestNameNodeRetryCacheMetrics fails intermittently

2020-10-02 Thread Hui Fei (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17206203#comment-17206203
 ] 

Hui Fei commented on HDFS-15458:


[~ayushtkn][~elgoiri] Could you please take a look? Thanks

> TestNameNodeRetryCacheMetrics fails intermittently
> --
>
> Key: HDFS-15458
> URL: https://issues.apache.org/jira/browse/HDFS-15458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Ahmed Hussein
>Assignee: Hui Fei
>Priority: Major
>  Labels: pull-request-available, test
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{TestNameNodeRetryCacheMetrics}} fails intermittently on trunk
> {code:bash}
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.604 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics
> [ERROR] 
> testRetryCacheMetrics(org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics)
>   Time elapsed: 9.512 s  <<< FAILURE!
> java.lang.AssertionError: CacheHit expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.checkMetrics(TestNameNodeRetryCacheMetrics.java:103)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.testRetryCacheMetrics(TestNameNodeRetryCacheMetrics.java:95)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15458) TestNameNodeRetryCacheMetrics fails intermittently

2020-10-02 Thread Hui Fei (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hui Fei updated HDFS-15458:
---
Status: Patch Available  (was: Open)

> TestNameNodeRetryCacheMetrics fails intermittently
> --
>
> Key: HDFS-15458
> URL: https://issues.apache.org/jira/browse/HDFS-15458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Ahmed Hussein
>Assignee: Hui Fei
>Priority: Major
>  Labels: pull-request-available, test
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{TestNameNodeRetryCacheMetrics}} fails intermittently on trunk
> {code:bash}
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.604 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics
> [ERROR] 
> testRetryCacheMetrics(org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics)
>   Time elapsed: 9.512 s  <<< FAILURE!
> java.lang.AssertionError: CacheHit expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.checkMetrics(TestNameNodeRetryCacheMetrics.java:103)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.testRetryCacheMetrics(TestNameNodeRetryCacheMetrics.java:95)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15458) TestNameNodeRetryCacheMetrics fails intermittently

2020-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15458?focusedWorklogId=493932&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-493932
 ]

ASF GitHub Bot logged work on HDFS-15458:
-

Author: ASF GitHub Bot
Created on: 02/Oct/20 12:57
Start Date: 02/Oct/20 12:57
Worklog Time Spent: 10m 
  Work Description: ferhui commented on pull request #2356:
URL: https://github.com/apache/hadoop/pull/2356#issuecomment-702718625


   - Create a non-HA cluster.
   
   - Remove the unused client.
   
   - Drop the explicit cluster.waitActive(); it is already called in MiniDFSCluster#startDataNodes. (A sketch of the non-HA setup follows below.)
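
A minimal sketch of what the non-HA setup described above could look like; the data-node 
count and the rest of the configuration are assumptions, not the actual change in this pull 
request.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class NonHaClusterSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // No nnTopology(...) call: the builder brings up a single (non-HA) NameNode,
    // so the retry-cache metrics can only come from that one NameNode.
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(3)
        .build();
    try {
      // Per the comment above, startDataNodes() already waits for the cluster to
      // become active, so no extra cluster.waitActive() is needed here.
      FileSystem fs = cluster.getFileSystem();
      System.out.println("default FS: " + fs.getUri());
    } finally {
      cluster.shutdown();
    }
  }
}
{code}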



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 493932)
Time Spent: 20m  (was: 10m)

> TestNameNodeRetryCacheMetrics fails intermittently
> --
>
> Key: HDFS-15458
> URL: https://issues.apache.org/jira/browse/HDFS-15458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Ahmed Hussein
>Assignee: Hui Fei
>Priority: Major
>  Labels: pull-request-available, test
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{TestNameNodeRetryCacheMetrics}} fails intermittently on trunk
> {code:bash}
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.604 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics
> [ERROR] 
> testRetryCacheMetrics(org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics)
>   Time elapsed: 9.512 s  <<< FAILURE!
> java.lang.AssertionError: CacheHit expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.checkMetrics(TestNameNodeRetryCacheMetrics.java:103)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.testRetryCacheMetrics(TestNameNodeRetryCacheMetrics.java:95)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HDFS-15458) TestNameNodeRetryCacheMetrics fails intermittently

2020-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15458:
--
Labels: pull-request-available test  (was: test)

> TestNameNodeRetryCacheMetrics fails intermittently
> --
>
> Key: HDFS-15458
> URL: https://issues.apache.org/jira/browse/HDFS-15458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Ahmed Hussein
>Assignee: Hui Fei
>Priority: Major
>  Labels: pull-request-available, test
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{TestNameNodeRetryCacheMetrics}} fails intermittently on trunk
> {code:bash}
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.604 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics
> [ERROR] 
> testRetryCacheMetrics(org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics)
>   Time elapsed: 9.512 s  <<< FAILURE!
> java.lang.AssertionError: CacheHit expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.checkMetrics(TestNameNodeRetryCacheMetrics.java:103)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.testRetryCacheMetrics(TestNameNodeRetryCacheMetrics.java:95)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15458) TestNameNodeRetryCacheMetrics fails intermittently

2020-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15458?focusedWorklogId=493929&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-493929
 ]

ASF GitHub Bot logged work on HDFS-15458:
-

Author: ASF GitHub Bot
Created on: 02/Oct/20 12:52
Start Date: 02/Oct/20 12:52
Worklog Time Spent: 10m 
  Work Description: ferhui opened a new pull request #2356:
URL: https://github.com/apache/hadoop/pull/2356


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 493929)
Remaining Estimate: 0h
Time Spent: 10m

> TestNameNodeRetryCacheMetrics fails intermittently
> --
>
> Key: HDFS-15458
> URL: https://issues.apache.org/jira/browse/HDFS-15458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Ahmed Hussein
>Assignee: Hui Fei
>Priority: Major
>  Labels: test
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{TestNameNodeRetryCacheMetrics}} fails intermittently on trunk
> {code:bash}
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.604 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics
> [ERROR] 
> testRetryCacheMetrics(org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics)
>   Time elapsed: 9.512 s  <<< FAILURE!
> java.lang.AssertionError: CacheHit expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.checkMetrics(TestNameNodeRetryCacheMetrics.java:103)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.testRetryCacheMetrics(TestNameNodeRetryCacheMetrics.java:95)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> 

[jira] [Commented] (HDFS-15458) TestNameNodeRetryCacheMetrics fails intermittently

2020-10-02 Thread Hui Fei (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17206146#comment-17206146
 ] 

Hui Fei commented on HDFS-15458:


Found the root cause.

{code:java}
  boolean saveNamespace(final long timeWindow, final long txGap)
      throws IOException {
    String operationName = "saveNamespace";
    checkOperation(OperationCategory.UNCHECKED);
    checkSuperuserPrivilege(operationName);
{code}
The quoted check is OperationCategory.UNCHECKED, so the call does not require the active 
NameNode. The cluster is HA with two NameNodes (index 0 and 1), and the client may end up 
talking to NameNode 1 (the standby), but the unit test reads its metrics from NameNode 
index 0. That is why the test is flaky.
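
A rough sketch of the mismatch described above, using the MiniDFSCluster HA topology; the 
class name, data-node count, and printed values are placeholders for illustration, not the 
test's actual code.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.MiniDFSNNTopology;
import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;

public class HaMetricsMismatchSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Two NameNodes (index 0 and 1), as in the flaky test setup.
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .nnTopology(MiniDFSNNTopology.simpleHATopology())
        .numDataNodes(3)
        .build();
    try {
      cluster.waitActive();
      // In the scenario above, the retried calls may be served by NameNode 1 while
      // the assertion only ever inspects NameNode 0's metrics.
      FSNamesystem inspectedByTest = cluster.getNamesystem(0);  // what the test reads
      FSNamesystem whereHitsMayLand = cluster.getNamesystem(1); // where hits may accumulate
      System.out.println(inspectedByTest.getBlockPoolId() + " / "
          + whereHitsMayLand.getBlockPoolId());
    } finally {
      cluster.shutdown();
    }
  }
}
{code}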


> TestNameNodeRetryCacheMetrics fails intermittently
> --
>
> Key: HDFS-15458
> URL: https://issues.apache.org/jira/browse/HDFS-15458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Ahmed Hussein
>Assignee: Hui Fei
>Priority: Major
>  Labels: test
>
> {{TestNameNodeRetryCacheMetrics}} fails intermittently on trunk
> {code:bash}
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.604 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics
> [ERROR] 
> testRetryCacheMetrics(org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics)
>   Time elapsed: 9.512 s  <<< FAILURE!
> java.lang.AssertionError: CacheHit expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.checkMetrics(TestNameNodeRetryCacheMetrics.java:103)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.testRetryCacheMetrics(TestNameNodeRetryCacheMetrics.java:95)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-15458) TestNameNodeRetryCacheMetrics fails intermittently

2020-10-02 Thread Hui Fei (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hui Fei reassigned HDFS-15458:
--

Assignee: Hui Fei

> TestNameNodeRetryCacheMetrics fails intermittently
> --
>
> Key: HDFS-15458
> URL: https://issues.apache.org/jira/browse/HDFS-15458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Ahmed Hussein
>Assignee: Hui Fei
>Priority: Major
>  Labels: test
>
> {{TestNameNodeRetryCacheMetrics}} fails intermittently on trunk
> {code:bash}
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.604 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics
> [ERROR] 
> testRetryCacheMetrics(org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics)
>   Time elapsed: 9.512 s  <<< FAILURE!
> java.lang.AssertionError: CacheHit expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.checkMetrics(TestNameNodeRetryCacheMetrics.java:103)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.testRetryCacheMetrics(TestNameNodeRetryCacheMetrics.java:95)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15458) TestNameNodeRetryCacheMetrics fails intermittently

2020-10-02 Thread Hui Fei (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17206104#comment-17206104
 ] 

Hui Fei commented on HDFS-15458:


Interested in this issue and will dig into it.
[~ahussein] If you are not working on this, I will assign it to myself.

> TestNameNodeRetryCacheMetrics fails intermittently
> --
>
> Key: HDFS-15458
> URL: https://issues.apache.org/jira/browse/HDFS-15458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Ahmed Hussein
>Priority: Major
>  Labels: test
>
> {{TestNameNodeRetryCacheMetrics}} fails intermittently on trunk
> {code:bash}
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.604 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics
> [ERROR] 
> testRetryCacheMetrics(org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics)
>   Time elapsed: 9.512 s  <<< FAILURE!
> java.lang.AssertionError: CacheHit expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.checkMetrics(TestNameNodeRetryCacheMetrics.java:103)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics.testRetryCacheMetrics(TestNameNodeRetryCacheMetrics.java:95)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15567) [SBN Read] HDFS should expose msync() API to allow downstream applications call it explicitly.

2020-10-02 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17206015#comment-17206015
 ] 

Hadoop QA commented on HDFS-15567:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 32m 
27s{color} |  | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} |  | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch does not contain any @author tags. 
{color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} |  | {color:green} The patch appears to include 1 new or modified 
test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
6s{color} |  | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
39s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
49s{color} |  | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
33s{color} |  | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
50s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
50s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
22m 40s{color} |  | {color:green} branch has no errors when building and 
testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
12s{color} |  | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
37s{color} |  | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
15s{color} |  | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m  
0s{color} |  | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} |  | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
48s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
24s{color} |  | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
24s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
46s{color} |  | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
46s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
49s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
48s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} |  | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 33s{color} |  | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
13s{color} |  | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
38s{color} |  | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
27s{color} |  |