[jira] [Work logged] (HDFS-16213) Flaky test TestFsDatasetImpl#testDnRestartWithHardLink
[ https://issues.apache.org/jira/browse/HDFS-16213?focusedWorklogId=647748=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647748 ] ASF GitHub Bot logged work on HDFS-16213: - Author: ASF GitHub Bot Created on: 08/Sep/21 05:46 Start Date: 08/Sep/21 05:46 Worklog Time Spent: 10m Work Description: virajjasani commented on pull request #3386: URL: https://github.com/apache/hadoop/pull/3386#issuecomment-914935498 @ferhui did you run the test code twice as part of the same test? I can reproduce this failure just by running

```
@Test
public void t1() throws Exception {
  testDnRestartWithHardLink();
  testDnRestartWithHardLink();
}
```

Ideally, based on how the test is written, the test code is supposed to be idempotent, but it is not because of the replica processing done by `addReplicaThreadPool`. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647748) Time Spent: 1h 40m (was: 1.5h) > Flaky test TestFsDatasetImpl#testDnRestartWithHardLink > -- > > Key: HDFS-16213 > URL: https://issues.apache.org/jira/browse/HDFS-16213 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 1h 40m > Remaining Estimate: 0h > > Failure case: > [here|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3359/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt] > {code:java} > [ERROR] > testDnRestartWithHardLink(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl) > Time elapsed: 7.768 s <<< FAILURE![ERROR] > testDnRestartWithHardLink(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl) > Time elapsed: 7.768 s <<<
FAILURE!java.lang.AssertionError at > org.junit.Assert.fail(Assert.java:87) at > org.junit.Assert.assertTrue(Assert.java:42) at > org.junit.Assert.assertTrue(Assert.java:53) at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl.testDnRestartWithHardLink(TestFsDatasetImpl.java:1344) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) at > java.lang.Thread.run(Thread.java:748) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16209) Add description for dfs.namenode.caching.enabled
[ https://issues.apache.org/jira/browse/HDFS-16209?focusedWorklogId=647732=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647732 ] ASF GitHub Bot logged work on HDFS-16209: - Author: ASF GitHub Bot Created on: 08/Sep/21 05:00 Start Date: 08/Sep/21 05:00 Worklog Time Spent: 10m Work Description: tomscut commented on pull request #3378: URL: https://github.com/apache/hadoop/pull/3378#issuecomment-914914593 Thanks @ferhui @ayushtkn @tasanuma @virajjasani @aajisaka for your review and merge. Issue Time Tracking --- Worklog Id: (was: 647732) Time Spent: 3h 20m (was: 3h 10m) > Add description for dfs.namenode.caching.enabled > > > Key: HDFS-16209 > URL: https://issues.apache.org/jira/browse/HDFS-16209 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.1.0 >Reporter: tomscut >Assignee: tomscut >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 3h 20m > Remaining Estimate: 0h > > Namenode config: > dfs.namenode.write-lock-reporting-threshold-ms=50ms > dfs.namenode.caching.enabled=true (default) > > In fact, the caching feature is not used in our cluster, but this switch is > turned on by default (dfs.namenode.caching.enabled=true), incurring some > additional write lock overhead. We counted the number of write lock warnings in > a log file and found that rescan cache warnings account for about > 32%, which greatly affects the performance of the Namenode. > !namenode-write-lock.jpg! > > We should set 'dfs.namenode.caching.enabled' to false by default and turn it > on only when we want to use it.
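For readers following HDFS-16209: the proposal above amounts to overriding two NameNode settings. As a hedged illustration (property names are taken from the report; the 50 ms threshold is the reporter's value, not a recommended default), the corresponding hdfs-site.xml fragment might look like:

```xml
<!-- Illustrative hdfs-site.xml fragment based on the HDFS-16209 report above.
     Disables NameNode path-based caching so that periodic cache rescans do not
     take the namesystem write lock; re-enable only if the caching feature is
     actually in use. -->
<property>
  <name>dfs.namenode.caching.enabled</name>
  <value>false</value>
</property>
<property>
  <!-- Threshold used by the reporter to surface long write-lock holds. -->
  <name>dfs.namenode.write-lock-reporting-threshold-ms</name>
  <value>50</value>
</property>
```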
[jira] [Work logged] (HDFS-16209) Add description for dfs.namenode.caching.enabled
[ https://issues.apache.org/jira/browse/HDFS-16209?focusedWorklogId=647724=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647724 ] ASF GitHub Bot logged work on HDFS-16209: - Author: ASF GitHub Bot Created on: 08/Sep/21 04:40 Start Date: 08/Sep/21 04:40 Worklog Time Spent: 10m Work Description: ferhui commented on pull request #3378: URL: https://github.com/apache/hadoop/pull/3378#issuecomment-914906737 @tomscut Thanks for contribution. @ayushtkn @virajjasani @aajisaka @tasanuma Thanks for review! Merged to trunk. Issue Time Tracking --- Worklog Id: (was: 647724) Time Spent: 3h 10m (was: 3h)
[jira] [Updated] (HDFS-16209) Add description for dfs.namenode.caching.enabled
[ https://issues.apache.org/jira/browse/HDFS-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hui Fei updated HDFS-16209: --- Fix Version/s: 3.4.0 Resolution: Fixed Status: Resolved (was: Patch Available)
[jira] [Work logged] (HDFS-16209) Add description for dfs.namenode.caching.enabled
[ https://issues.apache.org/jira/browse/HDFS-16209?focusedWorklogId=647723=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647723 ] ASF GitHub Bot logged work on HDFS-16209: - Author: ASF GitHub Bot Created on: 08/Sep/21 04:39 Start Date: 08/Sep/21 04:39 Worklog Time Spent: 10m Work Description: ferhui merged pull request #3378: URL: https://github.com/apache/hadoop/pull/3378 Issue Time Tracking --- Worklog Id: (was: 647723) Time Spent: 3h (was: 2h 50m)
[jira] [Work logged] (HDFS-16213) Flaky test TestFsDatasetImpl#testDnRestartWithHardLink
[ https://issues.apache.org/jira/browse/HDFS-16213?focusedWorklogId=647722=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647722 ] ASF GitHub Bot logged work on HDFS-16213: - Author: ASF GitHub Bot Created on: 08/Sep/21 04:37 Start Date: 08/Sep/21 04:37 Worklog Time Spent: 10m Work Description: ferhui commented on pull request #3386: URL: https://github.com/apache/hadoop/pull/3386#issuecomment-914905663 @virajjasani Thanks for the contribution. I couldn't reproduce it by running the test several times. Could you show how to make TestFsDatasetImpl#testDnRestartWithHardLink fail with a simple code change? I want to dig into it. Issue Time Tracking --- Worklog Id: (was: 647722) Time Spent: 1.5h (was: 1h 20m)
[jira] [Commented] (HDFS-16200) Improve NameNode failover
[ https://issues.apache.org/jira/browse/HDFS-16200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17411600#comment-17411600 ] Hadoop QA commented on HDFS-16200: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 0s{color} | | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | | {color:green} No case conflicting files found. {color} | | {color:blue}0{color} | {color:blue} codespell {color} | {color:blue} 0m 1s{color} | | {color:blue} codespell was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | | {color:green} The patch appears to include 2 new or modified test files. 
{color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 33m 55s{color} | | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s{color} | | {color:green} trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s{color} | | {color:green} trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 1s{color} | | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s{color} | | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | | {color:green} trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | | {color:green} trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:green}+1{color} | {color:green} spotbugs {color} | {color:green} 3m 19s{color} | | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 8s{color} | | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 14s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s{color} | | {color:green} the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 20s{color} | [/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3364/2/artifact/out/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt] | {color:red} hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 1 new + 468 unchanged - 0 fixed = 469 total (was 468) {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s{color} | | {color:green} the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 9s{color} | [/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3364/2/artifact/out/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt] | {color:red} hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 1 new + 452 unchanged - 0 fixed = 453 total (was 452) {color} | | {color:green}+1{color} | {color:green} blanks {color} | {color:green} 0m 0s{color} | | {color:green} The patch has no blanks issues. 
{color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 55s{color} | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3364/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt] | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 10
[jira] [Work logged] (HDFS-16200) Improve NameNode failover
[ https://issues.apache.org/jira/browse/HDFS-16200?focusedWorklogId=647654=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647654 ] ASF GitHub Bot logged work on HDFS-16200: - Author: ASF GitHub Bot Created on: 08/Sep/21 00:52 Start Date: 08/Sep/21 00:52 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3364: URL: https://github.com/apache/hadoop/pull/3364#issuecomment-914740874 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 0s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 55s | | trunk passed | | +1 :green_heart: | compile | 1m 23s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 14s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 1s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 23s | | trunk passed | | +1 :green_heart: | javadoc | 0m 55s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 22s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 19s | | trunk passed | | +1 :green_heart: | shadedclient | 19m 8s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 14s | | the patch passed | | +1 :green_heart: | compile | 1m 20s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | -1 :x: | javac | 1m 20s | [/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3364/2/artifact/out/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 1 new + 468 unchanged - 0 fixed = 469 total (was 468) | | +1 :green_heart: | compile | 1m 9s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -1 :x: | javac | 1m 9s | [/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3364/2/artifact/out/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 1 new + 452 unchanged - 0 fixed = 453 total (was 452) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 55s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3364/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 10 new + 467 unchanged - 0 fixed = 477 total (was 467) | | +1 :green_heart: | mvnsite | 1m 16s | | the patch passed | | +1 :green_heart: | javadoc | 0m 48s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 18s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 24s | | the patch passed | | +1 :green_heart: | shadedclient | 19m 14s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 357m 58s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3364/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 1m 46s | | The patch does not generate ASF License warnings. | | | | 452m 34s | | | | Reason | Tests | |---:|:--| |
[jira] [Work logged] (HDFS-16205) Make hdfs_allowSnapshot tool cross platform
[ https://issues.apache.org/jira/browse/HDFS-16205?focusedWorklogId=647644=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647644 ] ASF GitHub Bot logged work on HDFS-16205: - Author: ASF GitHub Bot Created on: 08/Sep/21 00:21 Start Date: 08/Sep/21 00:21 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3388: URL: https://github.com/apache/hadoop/pull/3388#issuecomment-914723127 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 0s | | Docker mode activated. | | -1 :x: | docker | 0m 41s | | Docker failed to build yetus/hadoop:ef5dbc7283a. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hadoop/pull/3388 | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3388/3/console | | versions | git=2.17.1 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647644) Time Spent: 1h 20m (was: 1h 10m) > Make hdfs_allowSnapshot tool cross platform > --- > > Key: HDFS-16205 > URL: https://issues.apache.org/jira/browse/HDFS-16205 > Project: Hadoop HDFS > Issue Type: Bug > Components: libhdfs++, tools >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > Labels: pull-request-available > Time Spent: 1h 20m > Remaining Estimate: 0h > > The source files for hdfs_allowSnapshot uses *getopt* for parsing the command > line arguments. getopt is available only on Linux and thus, isn't cross > platform. 
We need to replace getopt with *boost::program_options* to make > this cross platform.
[jira] [Work logged] (HDFS-16205) Make hdfs_allowSnapshot tool cross platform
[ https://issues.apache.org/jira/browse/HDFS-16205?focusedWorklogId=647641=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647641 ] ASF GitHub Bot logged work on HDFS-16205: - Author: ASF GitHub Bot Created on: 08/Sep/21 00:18 Start Date: 08/Sep/21 00:18 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3388: URL: https://github.com/apache/hadoop/pull/3388#issuecomment-914721853 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 25m 27s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 7 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 27m 58s | | trunk passed | | +1 :green_heart: | compile | 3m 20s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 39s | | trunk passed | | +1 :green_heart: | shadedclient | 50m 27s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 50m 53s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 22s | | the patch passed | | +1 :green_heart: | compile | 3m 13s | | the patch passed | | +1 :green_heart: | cc | 3m 13s | | the patch passed | | +1 :green_heart: | golang | 3m 13s | | the patch passed | | +1 :green_heart: | javac | 3m 13s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | mvnsite | 0m 26s | | the patch passed | | +1 :green_heart: | shadedclient | 17m 37s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 34m 25s | | hadoop-hdfs-native-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. | | | | 135m 38s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3388/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3388 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell golang | | uname | Linux a1edf74303bc 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / e6c5b31708ea3d4be1b5207afd4406034d07af96 | | Default Java | Red Hat, Inc.-1.8.0_302-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3388/3/testReport/ | | Max. process+thread count | 725 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3388/3/console | | versions | git=2.27.0 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647641) Time Spent: 1h 10m (was: 1h) > Make hdfs_allowSnapshot tool cross platform > --- > > Key: HDFS-16205 > URL: https://issues.apache.org/jira/browse/HDFS-16205 > Project: Hadoop HDFS > Issue Type: Bug > Components: libhdfs++, tools >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > The source files for hdfs_allowSnapshot uses *getopt* for parsing the command > line arguments.
[jira] [Work logged] (HDFS-16205) Make hdfs_allowSnapshot tool cross platform
[ https://issues.apache.org/jira/browse/HDFS-16205?focusedWorklogId=647625=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647625 ]

ASF GitHub Bot logged work on HDFS-16205:
-
Author: ASF GitHub Bot
Created on: 08/Sep/21 05:46
Start Date: 07/Sep/21 23:48
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on pull request #3388:
URL: https://github.com/apache/hadoop/pull/3388#issuecomment-914703164

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 0s | | Docker mode activated. |
| -1 :x: | docker | 0m 32s | | Docker failed to build yetus/hadoop:ef5dbc7283a. |

| Subsystem | Report/Notes |
|--:|:-|
| GITHUB PR | https://github.com/apache/hadoop/pull/3388 |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3388/2/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
---
Worklog Id: (was: 647625)
Time Spent: 1h (was: 50m)

> Make hdfs_allowSnapshot tool cross platform
> ---
>
> Key: HDFS-16205
> URL: https://issues.apache.org/jira/browse/HDFS-16205
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: libhdfs++, tools
> Affects Versions: 3.4.0
> Reporter: Gautham Banasandra
> Assignee: Gautham Banasandra
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h
> Remaining Estimate: 0h
>
> The source files for hdfs_allowSnapshot use *getopt* for parsing the command
> line arguments. getopt is available only on Linux and thus isn't cross
> platform. We need to replace getopt with *boost::program_options* to make
> this cross platform.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16205) Make hdfs_allowSnapshot tool cross platform
[ https://issues.apache.org/jira/browse/HDFS-16205?focusedWorklogId=647624=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647624 ]

ASF GitHub Bot logged work on HDFS-16205:
-
Author: ASF GitHub Bot
Created on: 07/Sep/21 23:45
Start Date: 07/Sep/21 23:45
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on pull request #3388:
URL: https://github.com/apache/hadoop/pull/3388#issuecomment-914701043

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 20m 39s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 7 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 20m 39s | | trunk passed |
| +1 :green_heart: | compile | 2m 52s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 45s | | trunk passed |
| +1 :green_heart: | shadedclient | 38m 0s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 38m 28s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 25s | | the patch passed |
| +1 :green_heart: | compile | 2m 34s | | the patch passed |
| +1 :green_heart: | cc | 2m 34s | | the patch passed |
| +1 :green_heart: | golang | 2m 34s | | the patch passed |
| +1 :green_heart: | javac | 2m 34s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | mvnsite | 0m 27s | | the patch passed |
| +1 :green_heart: | shadedclient | 13m 29s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 31m 53s | | hadoop-hdfs-native-client in the patch passed. |
| +1 :green_heart: | asflicense | 0m 47s | | The patch does not generate ASF License warnings. |
| | | 111m 25s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3388/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3388 |
| Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell golang |
| uname | Linux f4713b8c666e 4.15.0-151-generic #157-Ubuntu SMP Fri Jul 9 23:07:57 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / e6c5b31708ea3d4be1b5207afd4406034d07af96 |
| Default Java | Red Hat, Inc.-1.8.0_302-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3388/2/testReport/ |
| Max. process+thread count | 689 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3388/2/console |
| versions | git=2.27.0 maven=3.6.3 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
---
Worklog Id: (was: 647624)
Time Spent: 50m (was: 40m)

> Make hdfs_allowSnapshot tool cross platform
> ---
>
> Key: HDFS-16205
> URL: https://issues.apache.org/jira/browse/HDFS-16205
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: libhdfs++, tools
> Affects Versions: 3.4.0
> Reporter: Gautham Banasandra
> Assignee: Gautham Banasandra
> Priority: Major
> Labels: pull-request-available
> Time Spent: 50m
> Remaining Estimate: 0h
>
> The source files for hdfs_allowSnapshot use *getopt* for parsing the command
> line arguments.
[jira] [Work logged] (HDFS-16205) Make hdfs_allowSnapshot tool cross platform
[ https://issues.apache.org/jira/browse/HDFS-16205?focusedWorklogId=647595=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647595 ]

ASF GitHub Bot logged work on HDFS-16205:
-
Author: ASF GitHub Bot
Created on: 07/Sep/21 22:02
Start Date: 07/Sep/21 22:02
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on pull request #3388:
URL: https://github.com/apache/hadoop/pull/3388#issuecomment-914656779

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 35m 23s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 7 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 33m 45s | | trunk passed |
| +1 :green_heart: | compile | 3m 22s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 31s | | trunk passed |
| +1 :green_heart: | shadedclient | 51m 35s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 51m 54s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 17s | | the patch passed |
| +1 :green_heart: | compile | 2m 37s | | the patch passed |
| +1 :green_heart: | cc | 2m 37s | | the patch passed |
| +1 :green_heart: | golang | 2m 37s | | the patch passed |
| +1 :green_heart: | javac | 2m 37s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | mvnsite | 0m 16s | | the patch passed |
| +1 :green_heart: | shadedclient | 13m 16s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 33m 21s | | hadoop-hdfs-native-client in the patch passed. |
| +1 :green_heart: | asflicense | 0m 38s | | The patch does not generate ASF License warnings. |
| | | 139m 44s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3388/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3388 |
| Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell golang |
| uname | Linux a1c04f7bc55c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / e6c5b31708ea3d4be1b5207afd4406034d07af96 |
| Default Java | Red Hat, Inc.-1.8.0_302-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3388/3/testReport/ |
| Max. process+thread count | 718 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3388/3/console |
| versions | git=2.9.5 maven=3.6.3 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
---
Worklog Id: (was: 647595)
Time Spent: 40m (was: 0.5h)

> Make hdfs_allowSnapshot tool cross platform
> ---
>
> Key: HDFS-16205
> URL: https://issues.apache.org/jira/browse/HDFS-16205
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: libhdfs++, tools
> Affects Versions: 3.4.0
> Reporter: Gautham Banasandra
> Assignee: Gautham Banasandra
> Priority: Major
> Labels: pull-request-available
> Time Spent: 40m
> Remaining Estimate: 0h
>
> The source files for hdfs_allowSnapshot use *getopt* for parsing the command
> line arguments. getopt
[jira] [Work logged] (HDFS-16205) Make hdfs_allowSnapshot tool cross platform
[ https://issues.apache.org/jira/browse/HDFS-16205?focusedWorklogId=647591=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647591 ]

ASF GitHub Bot logged work on HDFS-16205:
-
Author: ASF GitHub Bot
Created on: 07/Sep/21 21:54
Start Date: 07/Sep/21 21:54
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on pull request #3388:
URL: https://github.com/apache/hadoop/pull/3388#issuecomment-914653063

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 34m 57s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 7 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 31m 30s | | trunk passed |
| +1 :green_heart: | compile | 2m 36s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 29s | | trunk passed |
| +1 :green_heart: | shadedclient | 48m 6s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 48m 26s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 17s | | the patch passed |
| +1 :green_heart: | compile | 2m 27s | | the patch passed |
| +1 :green_heart: | cc | 2m 27s | | the patch passed |
| +1 :green_heart: | golang | 2m 27s | | the patch passed |
| +1 :green_heart: | javac | 2m 27s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | mvnsite | 0m 21s | | the patch passed |
| +1 :green_heart: | shadedclient | 13m 6s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 31m 18s | | hadoop-hdfs-native-client in the patch passed. |
| +1 :green_heart: | asflicense | 0m 36s | | The patch does not generate ASF License warnings. |
| | | 133m 27s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3388/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3388 |
| Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell golang |
| uname | Linux 9bfc9a1e44d0 4.15.0-151-generic #157-Ubuntu SMP Fri Jul 9 23:07:57 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / e6c5b31708ea3d4be1b5207afd4406034d07af96 |
| Default Java | Red Hat, Inc.-1.8.0_302-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3388/2/testReport/ |
| Max. process+thread count | 548 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3388/2/console |
| versions | git=2.9.5 maven=3.6.3 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
---
Worklog Id: (was: 647591)
Time Spent: 0.5h (was: 20m)

> Make hdfs_allowSnapshot tool cross platform
> ---
>
> Key: HDFS-16205
> URL: https://issues.apache.org/jira/browse/HDFS-16205
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: libhdfs++, tools
> Affects Versions: 3.4.0
> Reporter: Gautham Banasandra
> Assignee: Gautham Banasandra
> Priority: Major
> Labels: pull-request-available
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> The source files for hdfs_allowSnapshot use *getopt* for parsing the command
> line arguments.
[jira] [Work logged] (HDFS-16091) WebHDFS should support getSnapshotDiffReportListing
[ https://issues.apache.org/jira/browse/HDFS-16091?focusedWorklogId=647565=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647565 ]

ASF GitHub Bot logged work on HDFS-16091:
-
Author: ASF GitHub Bot
Created on: 07/Sep/21 20:37
Start Date: 07/Sep/21 20:37
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on pull request #3374:
URL: https://github.com/apache/hadoop/pull/3374#issuecomment-914614059

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 1m 19s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 12m 36s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 20m 52s | | trunk passed |
| +1 :green_heart: | compile | 5m 18s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 4m 56s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 1m 16s | | trunk passed |
| +1 :green_heart: | mvnsite | 3m 6s | | trunk passed |
| +1 :green_heart: | javadoc | 2m 20s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 3m 2s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 7m 3s | | trunk passed |
| +1 :green_heart: | shadedclient | 15m 1s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 37s | | the patch passed |
| +1 :green_heart: | compile | 5m 3s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| -1 :x: | javac | 5m 3s | [/results-compile-javac-hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3374/4/artifact/out/results-compile-javac-hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 1 new + 652 unchanged - 1 fixed = 653 total (was 653) |
| +1 :green_heart: | compile | 4m 49s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| -1 :x: | javac | 4m 49s | [/results-compile-javac-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3374/4/artifact/out/results-compile-javac-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 1 new + 631 unchanged - 1 fixed = 632 total (was 632) |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 7s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3374/4/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 1 new + 258 unchanged - 0 fixed = 259 total (was 258) |
| +1 :green_heart: | mvnsite | 2m 43s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 56s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 2m 45s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 7m 17s | | the patch passed |
| +1 :green_heart: | shadedclient | 14m 22s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 21s | | hadoop-hdfs-client in the patch passed. |
| -1 :x: | unit | 410m 3s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3374/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| -1 :x: | unit | 33m 33s |
[jira] [Work logged] (HDFS-15160) ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock
[ https://issues.apache.org/jira/browse/HDFS-15160?focusedWorklogId=647527=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647527 ]

ASF GitHub Bot logged work on HDFS-15160:
-
Author: ASF GitHub Bot
Created on: 07/Sep/21 19:07
Start Date: 07/Sep/21 19:07
Worklog Time Spent: 10m
Work Description: amahussein commented on a change in pull request #3200:
URL: https://github.com/apache/hadoop/pull/3200#discussion_r703761237

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
## @@ -201,16 +201,16 @@ public Block getStoredBlock(String bpid, long blkid)

```
   * The deepCopyReplica call doesn't use the dataset lock since it will lead to a
   * potential deadlock with the {@link FsVolumeList#addBlockPool} call.
   */
+  @SuppressWarnings("unchecked")
   @Override
   public Set<? extends Replica> deepCopyReplica(String bpid) throws IOException {
-    Set<? extends Replica> replicas = null;
+    Set<? extends Replica> replicas;
```

Review comment:
Thanks @brahmareddybattula. I think we can apply some changes to trunk, given that the type cast is unchecked and causes some javac warnings. Perhaps we can file another refactoring Jira to address these issues on trunk.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
---
Worklog Id: (was: 647527)
Time Spent: 5h 20m (was: 5h 10m)

> ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl
> methods should use datanode readlock
> ---
>
> Key: HDFS-15160
> URL: https://issues.apache.org/jira/browse/HDFS-15160
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: datanode
> Affects Versions: 3.3.0
> Reporter: Stephen O'Donnell
> Assignee: Stephen O'Donnell
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15160-branch-3.3-001.patch, HDFS-15160.001.patch,
> HDFS-15160.002.patch, HDFS-15160.003.patch, HDFS-15160.004.patch,
> HDFS-15160.005.patch, HDFS-15160.006.patch, HDFS-15160.007.patch,
> HDFS-15160.008.patch, HDFS-15160.branch-3-3.001.patch,
> image-2020-04-10-17-18-08-128.png, image-2020-04-10-17-18-55-938.png
>
> Time Spent: 5h 20m
> Remaining Estimate: 0h
>
> Now we have HDFS-15150, we can start to move some DN operations to use the
> read lock rather than the write lock to improve concurrency. The first step
> is to make the changes to ReplicaMap, as many other methods make calls to it.
> This Jira switches read operations against the volume map to use the read lock
> rather than the write lock.
> Additionally, some methods make a call to replicaMap.replicas() (e.g.
> getBlockReports, getFinalizedBlocks, deepCopyReplica) and only use the result
> in a read-only fashion, so they can also be switched to using a read lock.
> Next is the directory scanner and disk balancer, which only require a read
> lock.
> Finally (for this Jira) are various "low hanging fruit" items in BlockSender
> and FsDatasetImpl where it is fairly obvious they only need a read lock.
> For now, I have avoided changing anything which looks too risky, as I think
> it's better to do any larger refactoring or risky changes each in their own
> Jira.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
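The locking split the Jira above describes — read-only operations such as `replicas()` taking a shared read lock while mutations keep the exclusive write lock — can be sketched with a `ReentrantReadWriteLock`. This is a hypothetical, simplified stand-in for the DataNode's ReplicaMap, not Hadoop code: the class name, method names, and the use of plain `String` values as "replicas" are all illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Simplified sketch: readers (get, replicas) share the read lock and can run
// concurrently; writers (add) take the exclusive write lock.
class SimpleReplicaMap {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    // bpid -> (blockId -> replica state)
    private final Map<String, Map<Long, String>> map = new HashMap<>();

    String get(String bpid, long blockId) {
        lock.readLock().lock();  // shared: many readers may hold this at once
        try {
            Map<Long, String> pool = map.get(bpid);
            return pool == null ? null : pool.get(blockId);
        } finally {
            lock.readLock().unlock();
        }
    }

    Collection<String> replicas(String bpid) {
        lock.readLock().lock();  // read-only copy, so the read lock suffices
        try {
            Map<Long, String> pool = map.get(bpid);
            return pool == null ? Collections.emptyList()
                                : new ArrayList<>(pool.values());
        } finally {
            lock.readLock().unlock();
        }
    }

    void add(String bpid, long blockId, String replica) {
        lock.writeLock().lock();  // exclusive: mutations still block everyone
        try {
            map.computeIfAbsent(bpid, k -> new HashMap<>()).put(blockId, replica);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        SimpleReplicaMap m = new SimpleReplicaMap();
        m.add("bp-1", 1001L, "FINALIZED");
        System.out.println(m.get("bp-1", 1001L));
        System.out.println(m.replicas("bp-1").size());
    }
}
```

The design point of the Jira is exactly this distinction: operations that only read the volume map were previously serialized behind the single write lock, and moving them to the read lock lets them proceed concurrently.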
[jira] [Commented] (HDFS-15230) Sanity check should not assume key base name can be derived from version name
[ https://issues.apache.org/jira/browse/HDFS-15230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17411426#comment-17411426 ]

Jason Wen commented on HDFS-15230:
--

+1 for fixing this issue. We also hit the same issue with our custom KeyProvider. We can keep this sanity check, but there should be a configuration option to skip it.

> Sanity check should not assume key base name can be derived from version name
> -
>
> Key: HDFS-15230
> URL: https://issues.apache.org/jira/browse/HDFS-15230
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Wei-Chiu Chuang
> Priority: Major
>
> HDFS-14884 checks if the encryption info of a file matches the encryption
> zone key.
> {code}
> if (!KeyProviderCryptoExtension.
>     getBaseName(keyVersionName).equals(zoneKeyName)) {
>   throw new IllegalArgumentException(String.format(
>       "KeyVersion '%s' does not belong to the key '%s'",
>       keyVersionName, zoneKeyName));
> }
> {code}
> Here it assumes the "base name" can be derived from the key version name, and
> that the base name should be the same as the zone key.
> However, there is no published definition of what a key version name should
> be. While the code works for the built-in JKS key provider, it may not work for
> other kinds of key providers. (Specifically, it breaks Cloudera's KeyTrustee
> KMS KeyProvider.)

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
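The sanity check quoted above only works when a key version name encodes its base name, as the built-in JKS provider's `name@version` convention does. The sketch below illustrates that assumption and how an opaque version name from a custom provider breaks the check. It is a standalone illustration: `getBaseName` here mirrors the name-splitting convention but is not Hadoop's actual `KeyProviderCryptoExtension` class.

```java
// Sketch of the base-name assumption: split the version name at the last '@'.
// A custom KeyProvider whose version names carry no '@' (or no key name at
// all) fails this derivation even for perfectly valid versions.
class KeyVersionCheck {
    static String getBaseName(String versionName) {
        int i = versionName.lastIndexOf('@');
        if (i == -1) {
            throw new IllegalArgumentException(
                "No version in key path " + versionName);
        }
        return versionName.substring(0, i);
    }

    static boolean belongsToZoneKey(String keyVersionName, String zoneKeyName) {
        return getBaseName(keyVersionName).equals(zoneKeyName);
    }

    public static void main(String[] args) {
        // Works for the "name@version" convention used by the JKS provider.
        System.out.println(belongsToZoneKey("myzonekey@0", "myzonekey"));
        // An opaque version id from a hypothetical custom provider is rejected
        // even though the version may genuinely belong to the zone key.
        try {
            belongsToZoneKey("opaque-version-id-1234", "myzonekey");
        } catch (IllegalArgumentException e) {
            System.out.println("check rejected: " + e.getMessage());
        }
    }
}
```

This is why the comment above argues for a configuration option: the check is only valid for providers that follow the naming convention it assumes.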
[jira] [Work logged] (HDFS-16187) SnapshotDiff behaviour with Xattrs and Acls is not consistent across NN restarts with checkpointing
[ https://issues.apache.org/jira/browse/HDFS-16187?focusedWorklogId=647484=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647484 ]

ASF GitHub Bot logged work on HDFS-16187:
-
Author: ASF GitHub Bot
Created on: 07/Sep/21 17:00
Start Date: 07/Sep/21 17:00
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on pull request #3340:
URL: https://github.com/apache/hadoop/pull/3340#issuecomment-914471802

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 1m 4s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 33m 36s | | trunk passed |
| +1 :green_heart: | compile | 1m 23s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 1m 14s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 1m 0s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 22s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 55s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 25s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 20s | | trunk passed |
| +1 :green_heart: | shadedclient | 18m 52s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 17s | | the patch passed |
| +1 :green_heart: | compile | 1m 18s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 1m 18s | | the patch passed |
| +1 :green_heart: | compile | 1m 10s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 1m 10s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 53s | | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 175 unchanged - 1 fixed = 175 total (was 176) |
| +1 :green_heart: | mvnsite | 1m 16s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 49s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 17s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 23s | | the patch passed |
| +1 :green_heart: | shadedclient | 18m 47s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| -1 :x: | unit | 235m 21s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3340/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| -1 :x: | asflicense | 1m 8s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3340/4/artifact/out/results-asflicense.txt) | The patch generated 60 ASF License warnings. |
| | | 328m 11s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.TestFileChecksum |
| | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
| | hadoop.hdfs.TestTrashWithSecureEncryptionZones |
| | hadoop.hdfs.TestHDFSFileSystemContract |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3340/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3340 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 170958576ff8 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / c7d160dfe4e96c18480406aa51dbd3248931d69f |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions |
[jira] [Commented] (HDFS-16207) Remove NN logs stack trace for non-existent xattr query
[ https://issues.apache.org/jira/browse/HDFS-16207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17411293#comment-17411293 ]

Ahmed Hussein commented on HDFS-16207:
--

I submitted a PR to suppress the stack trace by:
# creating a new IOException subclass, {{XAttrNotFoundException}}
# adding {{XAttrNotFoundException}} to the list of "{{Tersed}}" exceptions in the {{clientRpcServer}}

[~kihwal], can you please take a look at the PR?

> Remove NN logs stack trace for non-existent xattr query
> ---
>
> Key: HDFS-16207
> URL: https://issues.apache.org/jira/browse/HDFS-16207
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 3.4.0, 2.10.2, 3.3.2, 3.2.4
> Reporter: Ahmed Hussein
> Assignee: Ahmed Hussein
> Priority: Major
> Labels: pull-request-available
> Time Spent: 20m
> Remaining Estimate: 0h
>
> The NN logs a full stack trace every time getXAttrs is called for a
> non-existent xattr. The logging adds no value. The increased logging
> load may harm performance. Something is now probing for xattrs, resulting in
> many lines of:
> {code:bash}
> 2021-09-02 13:48:03,340 [IPC Server handler 5 on default port 59951] INFO ipc.Server (Server.java:logException(3149)) - IPC Server handler 5 on default port 59951, call Call#17 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.getXAttrs from 127.0.0.1:59961
> java.io.IOException: At least one of the attributes provided was not found.
> at org.apache.hadoop.hdfs.server.namenode.FSDirXAttrOp.getXAttrs(FSDirXAttrOp.java:134)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getXAttrs(FSNamesystem.java:8472)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getXAttrs(NameNodeRpcServer.java:2317)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getXAttrs(ClientNamenodeProtocolServerSideTranslatorPB.java:1745)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:604)
> at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:572)
> at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:556)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1155)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1083)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1900)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:3088)
> {code}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
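The "terse exception" approach the comment above describes — register the new exception class with the RPC server so that only its message is logged, not the full stack trace — can be sketched as follows. The class and method names here are illustrative assumptions, not Hadoop's actual `ipc.Server` API.

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.Set;

// Hypothetical sketch of a "terse exception" list: exception classes in TERSE
// are considered routine and logged as a single line; everything else keeps
// its full stack trace.
class TerseLogger {
    // Stand-in for the server's registered terse-exception set.
    private static final Set<Class<? extends Throwable>> TERSE =
        Set.of(XAttrNotFoundException.class);

    static String format(Throwable t) {
        if (TERSE.contains(t.getClass())) {
            // Routine failure: one line, no stack trace.
            return t.getClass().getSimpleName() + ": " + t.getMessage();
        }
        // Unexpected failure: keep the full stack trace.
        StringWriter sw = new StringWriter();
        t.printStackTrace(new PrintWriter(sw));
        return sw.toString();
    }

    // Sketch of the dedicated subclass: giving the failure its own type is
    // what makes it possible to single it out for terse logging.
    static class XAttrNotFoundException extends IOException {
        XAttrNotFoundException(String msg) { super(msg); }
    }

    public static void main(String[] args) {
        System.out.println(format(new XAttrNotFoundException(
            "At least one of the attributes provided was not found.")));
    }
}
```

The design point is that a generic `IOException` cannot be suppressed without also suppressing genuinely unexpected I/O errors; a dedicated subclass lets the server distinguish the routine case.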
[jira] [Commented] (HDFS-16196) Namesystem#completeFile method will log incorrect path information when router to access
[ https://issues.apache.org/jira/browse/HDFS-16196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17411230#comment-17411230 ]

Hadoop QA commented on HDFS-16196:
--

| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 3s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 33m 50s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 37s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 24s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 4s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 48s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 23m 41s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are enabled, using SpotBugs. {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green} 3m 20s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 20s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 28s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 18s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 57s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite
{color} | {color:green} 1m 22s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 48s{color} | {color:green}{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private
[jira] [Work logged] (HDFS-16203) Discover datanodes with unbalanced block pool usage by the standard deviation
[ https://issues.apache.org/jira/browse/HDFS-16203?focusedWorklogId=647324&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647324 ] ASF GitHub Bot logged work on HDFS-16203: - Author: ASF GitHub Bot Created on: 07/Sep/21 11:56 Start Date: 07/Sep/21 11:56 Worklog Time Spent: 10m Work Description: tomscut commented on pull request #3366: URL: https://github.com/apache/hadoop/pull/3366#issuecomment-914240125 Hi @jojochuang @tasanuma @ayushtkn @ferhui @Hexiaoqiao , could you please review this PR? Thanks a lot. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647324) Time Spent: 1h 10m (was: 1h) > Discover datanodes with unbalanced block pool usage by the standard deviation > - > > Key: HDFS-16203 > URL: https://issues.apache.org/jira/browse/HDFS-16203 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: tomscut >Assignee: tomscut >Priority: Major > Labels: pull-request-available > Attachments: image-2021-09-01-19-16-27-172.png > > Time Spent: 1h 10m > Remaining Estimate: 0h > > *Discover datanodes with unbalanced volume usage by the standard deviation.* > *In some scenarios, datanode disk usage can become unbalanced:* > 1. A damaged disk is repaired and brought online again. > 2. Disks are added to some Datanodes. > 3. Some disks are damaged, resulting in slow data writing. > 4. Custom volume choosing policies are used. > In the case of unbalanced disk usage, a sudden increase in datanode write > traffic may result in busy disk I/O on volumes with low usage, resulting in > decreased throughput across datanodes. > We need to find these nodes in time to run the disk balancer or take other action. 
> Based on the volume usage of each datanode, we can calculate the standard > deviation of the volume usage. The more unbalanced the volumes, the higher the > standard deviation. > *We can display the result on the namenode web UI, and then sort directly > to find the nodes whose volume usage is unbalanced.* > *{color:#172b4d}This interface is only used to obtain metrics and does not > adversely affect namenode performance.{color}* > > {color:#172b4d}!image-2021-09-01-19-16-27-172.png|width=581,height=216!{color} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
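The metric described in the issue can be sketched in a few lines of Python. This is an illustrative sketch only, not the HDFS implementation; the function names and node data are invented for the example:

```python
# Illustrative sketch: rank datanodes by how unbalanced their per-volume
# usage is, using the standard deviation of the volume usage ratios.
import math

def volume_usage_std_dev(usages):
    """Population standard deviation of one datanode's volume usage ratios."""
    mean = sum(usages) / len(usages)
    return math.sqrt(sum((u - mean) ** 2 for u in usages) / len(usages))

def rank_unbalanced(datanodes):
    """Sort datanodes (name -> list of volume usage ratios), most unbalanced first."""
    return sorted(datanodes, key=lambda dn: volume_usage_std_dev(datanodes[dn]),
                  reverse=True)

nodes = {
    "dn1": [0.50, 0.52, 0.51],   # balanced volumes: std dev near zero
    "dn2": [0.90, 0.10, 0.85],   # e.g. a freshly replaced disk at 10% usage
}
print(rank_unbalanced(nodes))    # → ['dn2', 'dn1']
```

Sorting by this value on the web UI, as proposed above, surfaces the nodes that are candidates for the disk balancer.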
[jira] [Work logged] (HDFS-16209) Add description for dfs.namenode.caching.enabled
[ https://issues.apache.org/jira/browse/HDFS-16209?focusedWorklogId=647323&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647323 ] ASF GitHub Bot logged work on HDFS-16209: - Author: ASF GitHub Bot Created on: 07/Sep/21 11:55 Start Date: 07/Sep/21 11:55 Worklog Time Spent: 10m Work Description: tomscut commented on pull request #3378: URL: https://github.com/apache/hadoop/pull/3378#issuecomment-914239291 @ayushtkn @ferhui Could you please review again? Thanks. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647323) Time Spent: 2h 50m (was: 2h 40m) > Add description for dfs.namenode.caching.enabled > > > Key: HDFS-16209 > URL: https://issues.apache.org/jira/browse/HDFS-16209 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.1.0 >Reporter: tomscut >Assignee: tomscut >Priority: Major > Labels: pull-request-available > Time Spent: 2h 50m > Remaining Estimate: 0h > > Namenode config: > dfs.namenode.write-lock-reporting-threshold-ms=50ms > dfs.namenode.caching.enabled=true (default) > > In fact, the caching feature is not used in our cluster, but this switch is > turned on by default (dfs.namenode.caching.enabled=true), incurring some > additional write lock overhead. We counted the write lock warnings in > a log file and found that rescan cache warnings account for about > 32%, which greatly affects the performance of the Namenode. > !namenode-write-lock.jpg! > > We should set 'dfs.namenode.caching.enabled' to false by default and turn it > on when we want to use it. 
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
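Until the default changes, the workaround described in this issue can be applied per cluster in hdfs-site.xml. The property name is taken from the issue; the snippet below is simply the standard Hadoop configuration format:

```xml
<!-- hdfs-site.xml: disable the centralized cache management rescan when
     the caching feature is not used, avoiding the write-lock overhead
     described above. -->
<property>
  <name>dfs.namenode.caching.enabled</name>
  <value>false</value>
</property>
```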
[jira] [Work logged] (HDFS-16210) RBF: Add the option of refreshCallQueue to RouterAdmin
[ https://issues.apache.org/jira/browse/HDFS-16210?focusedWorklogId=647322&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647322 ] ASF GitHub Bot logged work on HDFS-16210: - Author: ASF GitHub Bot Created on: 07/Sep/21 11:51 Start Date: 07/Sep/21 11:51 Worklog Time Spent: 10m Work Description: ferhui commented on pull request #3379: URL: https://github.com/apache/hadoop/pull/3379#issuecomment-914237031 @symious Thanks. Will commit if no other comments. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647322) Time Spent: 2h (was: 1h 50m) > RBF: Add the option of refreshCallQueue to RouterAdmin > -- > > Key: HDFS-16210 > URL: https://issues.apache.org/jira/browse/HDFS-16210 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Janus Chow >Assignee: Janus Chow >Priority: Major > Labels: pull-request-available > Time Spent: 2h > Remaining Estimate: 0h > > We enabled FairCallQueue on the RouterRpcServer, but the Router cannot > refresh its call queue as the NameNode does. > This ticket is to enable refreshCallQueue for the Router so that we don't > have to restart the Routers when updating FairCallQueue configurations. > > The option does not refresh the call queues of the NameNodes; it only > refreshes the call queue of the Router itself. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16091) WebHDFS should support getSnapshotDiffReportListing
[ https://issues.apache.org/jira/browse/HDFS-16091?focusedWorklogId=647319=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647319 ] ASF GitHub Bot logged work on HDFS-16091: - Author: ASF GitHub Bot Created on: 07/Sep/21 11:41 Start Date: 07/Sep/21 11:41 Worklog Time Spent: 10m Work Description: iwasakims commented on pull request #3374: URL: https://github.com/apache/hadoop/pull/3374#issuecomment-914231666 Following junit tests are not related to WebHDFS. I could not reproduce the test failure on my local. * hadoop.hdfs.server.namenode.ha.TestObserverNode * hadoop.hdfs.server.mover.TestMover * hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647319) Time Spent: 1h 10m (was: 1h) > WebHDFS should support getSnapshotDiffReportListing > --- > > Key: HDFS-16091 > URL: https://issues.apache.org/jira/browse/HDFS-16091 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki >Priority: Major > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > When there are millions of diffs between two snapshots, the old > getSnapshotDiffReport() isn't scalable. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
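For reference, a paged snapshot-diff request goes against the NameNode's WebHDFS HTTP endpoint. The sketch below only builds such a URL: the op name GETSNAPSHOTDIFFLISTING comes from the patch under review, while the exact query parameter names and the default port are assumptions for illustration:

```python
# Sketch: construct a WebHDFS URL for the paged snapshot-diff operation.
# The parameter names (oldsnapshotname, snapshotname) mirror the existing
# GETSNAPSHOTDIFF op and are assumptions here, not verified against the patch.
from urllib.parse import urlencode

def snapshot_diff_listing_url(host, path, old_snapshot, new_snapshot, port=9870):
    """Build a GETSNAPSHOTDIFFLISTING request URL for a snapshottable path."""
    params = {
        "op": "GETSNAPSHOTDIFFLISTING",
        "oldsnapshotname": old_snapshot,
        "snapshotname": new_snapshot,
    }
    return f"http://{host}:{port}/webhdfs/v1{path}?{urlencode(params)}"

print(snapshot_diff_listing_url("nn.example.com", "/data", "s1", "s2"))
# → http://nn.example.com:9870/webhdfs/v1/data?op=GETSNAPSHOTDIFFLISTING&oldsnapshotname=s1&snapshotname=s2
```

A client would then page through the listing response rather than fetching millions of diff entries in one report, which is the scalability point of this issue.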
[jira] [Work logged] (HDFS-16198) Short circuit read leaks Slot objects when InvalidToken exception is thrown
[ https://issues.apache.org/jira/browse/HDFS-16198?focusedWorklogId=647317=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647317 ] ASF GitHub Bot logged work on HDFS-16198: - Author: ASF GitHub Bot Created on: 07/Sep/21 11:38 Start Date: 07/Sep/21 11:38 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3359: URL: https://github.com/apache/hadoop/pull/3359#issuecomment-914229837 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 47s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 49s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 21m 21s | | trunk passed | | +1 :green_heart: | compile | 5m 28s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 5m 11s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 10s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 25s | | trunk passed | | +1 :green_heart: | javadoc | 1m 35s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 11s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 5m 53s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 40s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 11s | | the patch passed | | +1 :green_heart: | compile | 5m 9s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 5m 9s | | the patch passed | | +1 :green_heart: | compile | 4m 50s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 4m 50s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 7s | | the patch passed | | +1 :green_heart: | mvnsite | 2m 10s | | the patch passed | | +1 :green_heart: | javadoc | 1m 20s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 57s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 5m 49s | | the patch passed | | +1 :green_heart: | shadedclient | 17m 11s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 25s | | hadoop-hdfs-client in the patch passed. | | +1 :green_heart: | unit | 240m 6s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 41s | | The patch does not generate ASF License warnings. 
| | | | 360m 15s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3359/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3359 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux c41024524093 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 9849fa31c91a1aa8fa4752955e3f53a5821dcc44 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3359/5/testReport/ | | Max. process+thread count | 3383 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-client
[jira] [Work logged] (HDFS-16203) Discover datanodes with unbalanced block pool usage by the standard deviation
[ https://issues.apache.org/jira/browse/HDFS-16203?focusedWorklogId=647315=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647315 ] ASF GitHub Bot logged work on HDFS-16203: - Author: ASF GitHub Bot Created on: 07/Sep/21 11:33 Start Date: 07/Sep/21 11:33 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3366: URL: https://github.com/apache/hadoop/pull/3366#issuecomment-914226922 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 57s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | jshint | 0m 0s | | jshint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 33s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 40s | | trunk passed | | +1 :green_heart: | compile | 5m 13s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 4m 59s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 16s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 20s | | trunk passed | | +1 :green_heart: | javadoc | 1m 35s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 6s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 5m 35s | | trunk passed | | +1 :green_heart: | shadedclient | 18m 12s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 14s | | the patch passed | | +1 :green_heart: | compile | 5m 17s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 5m 17s | | the patch passed | | +1 :green_heart: | compile | 4m 57s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 4m 57s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 3s | | hadoop-hdfs-project: The patch generated 0 new + 113 unchanged - 9 fixed = 113 total (was 122) | | +1 :green_heart: | mvnsite | 2m 9s | | the patch passed | | +1 :green_heart: | javadoc | 1m 23s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 54s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 5m 48s | | the patch passed | | +1 :green_heart: | shadedclient | 18m 1s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 19s | | hadoop-hdfs-client in the patch passed. | | +1 :green_heart: | unit | 235m 9s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 48s | | The patch does not generate ASF License warnings. 
| | | | 355m 21s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3366/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3366 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell jshint | | uname | Linux 82d18bc354e2 4.15.0-151-generic #157-Ubuntu SMP Fri Jul 9 23:07:57 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 9f2a186f9e219167278537b7f0d4d0312f110942 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results |
[jira] [Work logged] (HDFS-16213) Flaky test TestFsDatasetImpl#testDnRestartWithHardLink
[ https://issues.apache.org/jira/browse/HDFS-16213?focusedWorklogId=647299=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647299 ] ASF GitHub Bot logged work on HDFS-16213: - Author: ASF GitHub Bot Created on: 07/Sep/21 11:10 Start Date: 07/Sep/21 11:10 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3386: URL: https://github.com/apache/hadoop/pull/3386#issuecomment-914213445 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 43s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 30m 56s | | trunk passed | | +1 :green_heart: | compile | 1m 23s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 16s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 3s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 24s | | trunk passed | | +1 :green_heart: | javadoc | 0m 58s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 30s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 8s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 23s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 10s | | the patch passed | | +1 :green_heart: | compile | 1m 16s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 16s | | the patch passed | | +1 :green_heart: | compile | 1m 7s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 7s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 54s | | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 241 unchanged - 3 fixed = 241 total (was 244) | | +1 :green_heart: | mvnsite | 1m 14s | | the patch passed | | +1 :green_heart: | javadoc | 0m 49s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 22s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 8s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 7s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 248m 51s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3386/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 44s | | The patch does not generate ASF License warnings. 
| | | | 333m 27s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.mover.TestMover | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3386/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3386 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux de7642c3d999 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 65c59d6601a9946d87cd49675516123ccc1a4a80 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3386/6/testReport/ | | Max. process+thread
[jira] [Work logged] (HDFS-16210) RBF: Add the option of refreshCallQueue to RouterAdmin
[ https://issues.apache.org/jira/browse/HDFS-16210?focusedWorklogId=647296=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647296 ] ASF GitHub Bot logged work on HDFS-16210: - Author: ASF GitHub Bot Created on: 07/Sep/21 11:07 Start Date: 07/Sep/21 11:07 Worklog Time Spent: 10m Work Description: symious commented on pull request #3379: URL: https://github.com/apache/hadoop/pull/3379#issuecomment-914211728 > @symious Could you please change the title of PR and JIRA to "RBF: Add xxx", it means that it's for RBF and it convenient for others to track RBF Sure, updated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647296) Time Spent: 1h 50m (was: 1h 40m) > RBF: Add the option of refreshCallQueue to RouterAdmin > -- > > Key: HDFS-16210 > URL: https://issues.apache.org/jira/browse/HDFS-16210 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Janus Chow >Assignee: Janus Chow >Priority: Major > Labels: pull-request-available > Time Spent: 1h 50m > Remaining Estimate: 0h > > We enabled FairCallQueue to RouterRpcServer, but Router can not > refreshCallQueue as NameNode does. > This ticket is to enable the refreshCallQueue for Router so that we don't > have to restart the Routers when updating FairCallQueue configurations. > > The option is not to refreshCallQueue to NameNodes, just trying to refresh > the callQueue of Router itself. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16091) WebHDFS should support getSnapshotDiffReportListing
[ https://issues.apache.org/jira/browse/HDFS-16091?focusedWorklogId=647295=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647295 ] ASF GitHub Bot logged work on HDFS-16091: - Author: ASF GitHub Bot Created on: 07/Sep/21 11:06 Start Date: 07/Sep/21 11:06 Worklog Time Spent: 10m Work Description: iwasakims commented on pull request #3374: URL: https://github.com/apache/hadoop/pull/3374#issuecomment-914211171 ``` hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java:1490:50:[unchecked] unchecked conversion ``` This can not be addressed as far as we are using TreeList of commons-collections 3 which does not support type parameters. We are already ignoring same warning for DistributedFileSystem. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647295) Time Spent: 1h (was: 50m) > WebHDFS should support getSnapshotDiffReportListing > --- > > Key: HDFS-16091 > URL: https://issues.apache.org/jira/browse/HDFS-16091 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki >Priority: Major > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > When there are millions of diffs between two snapshots, the old > getSnapshotDiffReport() isn't scalable. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-16210) RBF: Add the option of refreshCallQueue to RouterAdmin
[ https://issues.apache.org/jira/browse/HDFS-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Janus Chow updated HDFS-16210: -- Summary: RBF: Add the option of refreshCallQueue to RouterAdmin (was: Add the option of refreshCallQueue to RouterAdmin) > RBF: Add the option of refreshCallQueue to RouterAdmin > -- > > Key: HDFS-16210 > URL: https://issues.apache.org/jira/browse/HDFS-16210 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Janus Chow >Assignee: Janus Chow >Priority: Major > Labels: pull-request-available > Time Spent: 1h 40m > Remaining Estimate: 0h > > We enabled FairCallQueue to RouterRpcServer, but Router can not > refreshCallQueue as NameNode does. > This ticket is to enable the refreshCallQueue for Router so that we don't > have to restart the Routers when updating FairCallQueue configurations. > > The option is not to refreshCallQueue to NameNodes, just trying to refresh > the callQueue of Router itself. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16209) Add description for dfs.namenode.caching.enabled
[ https://issues.apache.org/jira/browse/HDFS-16209?focusedWorklogId=647291=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647291 ] ASF GitHub Bot logged work on HDFS-16209: - Author: ASF GitHub Bot Created on: 07/Sep/21 11:02 Start Date: 07/Sep/21 11:02 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3378: URL: https://github.com/apache/hadoop/pull/3378#issuecomment-914208384 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 5s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 56s | | trunk passed | | +1 :green_heart: | compile | 1m 27s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 20s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 0s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 31s | | trunk passed | | +1 :green_heart: | javadoc | 1m 0s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 28s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 7s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 29s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 15s | | the patch passed | | +1 :green_heart: | compile | 1m 17s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 17s | | the patch passed | | +1 :green_heart: | compile | 1m 7s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 7s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 50s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 12s | | the patch passed | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 0m 46s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 17s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 13s | | the patch passed | | +1 :green_heart: | shadedclient | 18m 30s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 236m 58s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3378/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 46s | | The patch does not generate ASF License warnings. 
| | | | 325m 34s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3378/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3378 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml markdownlint | | uname | Linux bb662cbf55f8 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 1c453e212e3bc3da9203ee3524092574965cc8fd | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions |
[jira] [Work logged] (HDFS-16091) WebHDFS should support getSnapshotDiffReportListing
[ https://issues.apache.org/jira/browse/HDFS-16091?focusedWorklogId=647290&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647290 ] ASF GitHub Bot logged work on HDFS-16091: - Author: ASF GitHub Bot Created on: 07/Sep/21 10:55 Start Date: 07/Sep/21 10:55 Worklog Time Spent: 10m Work Description: iwasakims commented on pull request #3374: URL: https://github.com/apache/hadoop/pull/3374#issuecomment-914203666 ``` ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java:1352: case GETSNAPSHOTDIFFLISTING: {:34: Avoid nested blocks. [AvoidNestedBlocks] ``` I filed [HADOOP-17897](https://issues.apache.org/jira/browse/HADOOP-17897) for this checkstyle warning. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647290) Time Spent: 50m (was: 40m) > WebHDFS should support getSnapshotDiffReportListing > --- > > Key: HDFS-16091 > URL: https://issues.apache.org/jira/browse/HDFS-16091 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki >Priority: Major > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > > When there are millions of diffs between two snapshots, the old > getSnapshotDiffReport() isn't scalable. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
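For context on the warning quoted above: checkstyle's AvoidNestedBlocks check flags a freestanding brace block inside a `case` label unless the check is configured with `allowInSwitchCase=true`. A minimal, hypothetical reproduction of the flagged pattern (not the actual NamenodeWebHdfsMethods code):

```java
public class NestedCaseBlock {
  static String handle(int op) {
    switch (op) {
      case 1: { // checkstyle AvoidNestedBlocks fires on this block by default
        // The braces are legal Java; they only scope `result` to this case
        // arm, which is why allowInSwitchCase=true exists as a config option.
        String result = "snapshot-diff-listing";
        return result;
      }
      default:
        return "unknown";
    }
  }
}
```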
[jira] [Work logged] (HDFS-16210) Add the option of refreshCallQueue to RouterAdmin
[ https://issues.apache.org/jira/browse/HDFS-16210?focusedWorklogId=647287&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647287 ] ASF GitHub Bot logged work on HDFS-16210: - Author: ASF GitHub Bot Created on: 07/Sep/21 10:47 Start Date: 07/Sep/21 10:47 Worklog Time Spent: 10m Work Description: ferhui commented on pull request #3379: URL: https://github.com/apache/hadoop/pull/3379#issuecomment-914199242 @symious Could you please change the titles of the PR and the JIRA to "RBF: Add xxx"? That marks the change as RBF work and makes it convenient for others to track RBF changes. Issue Time Tracking --- Worklog Id: (was: 647287) Time Spent: 1h 40m (was: 1.5h) > Add the option of refreshCallQueue to RouterAdmin > - > > Key: HDFS-16210 > URL: https://issues.apache.org/jira/browse/HDFS-16210 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Janus Chow >Assignee: Janus Chow >Priority: Major > Labels: pull-request-available > Time Spent: 1h 40m > Remaining Estimate: 0h > > We enabled FairCallQueue on the RouterRpcServer, but the Router cannot > refresh its call queue as the NameNode does. > This ticket enables refreshCallQueue for the Router so that we don't > have to restart the Routers when updating FairCallQueue configurations. > > The option does not refresh the call queues of the NameNodes; it only > refreshes the call queue of the Router itself.
[jira] [Commented] (HDFS-16044) Fix getListing call getLocatedBlocks even source is a directory
[ https://issues.apache.org/jira/browse/HDFS-16044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17411098#comment-17411098 ] Hadoop QA commented on HDFS-16044: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 46s{color} | | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | | {color:green} No case conflicting files found. {color} | | {color:blue}0{color} | {color:blue} codespell {color} | {color:blue} 0m 1s{color} | | {color:blue} codespell was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 33m 16s{color} | [/branch-mvninstall-root.txt|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/trunk/10/artifact/out/branch-mvninstall-root.txt] | {color:red} root in trunk failed. 
{color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s{color} | | {color:green} trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | | {color:green} trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | | {color:green} trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | | {color:green} trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:green}+1{color} | {color:green} spotbugs {color} | {color:green} 2m 42s{color} | | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 2s{color} | | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 55s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | | {color:green} the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | | {color:green} the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} blanks {color} | {color:green} 0m 0s{color} | | {color:green} The patch has no blanks issues. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | | {color:green} the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | | {color:green} the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:green}+1{color} | {color:green} spotbugs {color} | {color:green} 2m 40s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 54s{color} | | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | || || || || {color:brown} Other Tests {color} || || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 24s{color} | | {color:green} hadoop-hdfs-client in the patch
[jira] [Created] (HDFS-16215) File read fails with CannotObtainBlockLengthException after Namenode is restarted
Srinivasu Majeti created HDFS-16215: --- Summary: File read fails with CannotObtainBlockLengthException after Namenode is restarted Key: HDFS-16215 URL: https://issues.apache.org/jira/browse/HDFS-16215 Project: Hadoop HDFS Issue Type: Bug Components: datanode Affects Versions: 3.3.1, 3.2.2 Reporter: Srinivasu Majeti When a file is being written by a first client (fsck shows it as OPENFORWRITE), an HDFS outage happens and the cluster is brought back up, the first client is disconnected; when a new client then tries to open the file, we see "Cannot obtain block length for" as shown below. {code:java} /tmp/hosts7 134217728 bytes, replicated: replication=3, 1 block(s), OPENFORWRITE: OK 0. BP-1958960150-172.25.40.87-1628677864204:blk_1073745252_4430 len=134217728 Live_repl=3 [DatanodeInfoWithStorage[172.25.36.14:9866,DS-6357ab37-84ae-4c7c-8794-fef905bcde05,DISK], DatanodeInfoWithStorage[172.25.33.132:9866,DS-92e75140-d066-4ab5-b250-dbfd329289c5,DISK], DatanodeInfoWithStorage[172.25.40.70:9866,DS-1e280bcd-a2ce-4320-9ebb-33fc903d3a47,DISK]] Under Construction Block: 1. 
BP-1958960150-172.25.40.87-1628677864204:blk_1073745253_4431 len=0 Expected_repl=3 [DatanodeInfoWithStorage[172.25.36.14:9866,DS-6357ab37-84ae-4c7c-8794-fef905bcde05,DISK], DatanodeInfoWithStorage[172.25.33.132:9866,DS-92e75140-d066-4ab5-b250-dbfd329289c5,DISK], DatanodeInfoWithStorage[172.25.40.70:9866,DS-1e280bcd-a2ce-4320-9ebb-33fc903d3a47,DISK]] [root@c1265-node2 ~]# hdfs dfs -get /tmp/hosts7 get: Cannot obtain block length for LocatedBlock{BP-1958960150-172.25.40.87-1628677864204:blk_1073745253_4431; getBlockSize()=0; corrupt=false; offset=134217728; locs=[DatanodeInfoWithStorage[172.25.40.70:9866,DS-1e280bcd-a2ce-4320-9ebb-33fc903d3a47,DISK], DatanodeInfoWithStorage[172.25.33.132:9866,DS-92e75140-d066-4ab5-b250-dbfd329289c5,DISK], DatanodeInfoWithStorage[172.25.36.14:9866,DS-6357ab37-84ae-4c7c-8794-fef905bcde05,DISK]]} *Exception trace from the logs:* Exception in thread "main" org.apache.hadoop.hdfs.CannotObtainBlockLengthException: Cannot obtain block length for LocatedBlock{BP-1958960150-172.25.40.87-1628677864204:blk_1073742720_1896; getBlockSize()=0; corrupt=false; offset=134217728; locs=[DatanodeInfoWithStorage[172.25.33.140:9866,DS-92e75140-d066-4ab5-b250-dbfd329289c5,DISK], DatanodeInfoWithStorage[172.25.40.87:9866,DS-1e280bcd-a2ce-4320-9ebb-33fc903d3a47,DISK], DatanodeInfoWithStorage[172.25.36.17:9866,DS-6357ab37-84ae-4c7c-8794-fef905bcde05,DISK]]} at org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:363) at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:270) at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:201) at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:185) at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1006) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316) at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:312) at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:324) at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:949) {code}
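The failure mode reported above can be summarized as follows: for a file whose last block is still under construction, the client cannot trust the recorded length of 0 and must ask the datanodes for the replica length; if no datanode can answer, the open fails rather than guessing a length. A simplified, hypothetical model of that decision (the real logic lives in DFSInputStream.readBlockLength, which the stack trace references; this class and its method are illustrative only):

```java
import java.io.IOException;
import java.util.List;

public class BlockLengthCheck {
  /**
   * Lengths reported by each queried datanode; -1 models "no answer"
   * (e.g. the replica pipeline was lost across the NameNode restart).
   * If no datanode can report a length, fail loudly instead of
   * returning a guessed block length.
   */
  static long resolveLastBlockLength(List<Long> replies) throws IOException {
    for (long len : replies) {
      if (len >= 0) {
        return len; // first datanode that knows the length wins
      }
    }
    throw new IOException("Cannot obtain block length");
  }
}
```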
[jira] [Created] (HDFS-16214) Lock optimization for large deletes, no locks on the collection block
Xiangyi Zhu created HDFS-16214: -- Summary: Lock optimization for large deletes, no locks on the collection block Key: HDFS-16214 URL: https://issues.apache.org/jira/browse/HDFS-16214 Project: Hadoop HDFS Issue Type: Improvement Components: namenode Affects Versions: 3.4.0 Reporter: Xiangyi Zhu The cost of a large delete comes mainly from three steps: collecting blocks, deleting inodes from the InodeMap, and deleting blocks. The current delete runs in two major phases. Phase 1 acquires the lock, collects the blocks and inodes, deletes the inodes, and releases the lock. Phase 2 acquires the lock, deletes the blocks, and releases the lock. Phase 2 already deletes blocks in batches, which bounds the lock hold time, and the blocks could also be deleted asynchronously. Phase 1, however, still holds the lock for a long time.
For phase 1 we can collect the blocks without holding the lock, as follows. Step 1: acquire the lock, call parent.removeChild, write the editLog entry, release the lock. Step 2: without the lock, collect the blocks. Step 3: acquire the lock, update the quota, release the lease, release the lock. Step 4: acquire the lock, delete the inodes from the InodeMap, release the lock. Step 5: acquire the lock, delete the blocks, release the lock.
This process raises two problems: 1. Suppose the file /a/b/c is open and the directory /a/b is deleted. If the delete has reached the block-collection stage, that step holds no lock and the delete of /a/b has already been written to the editLog; if the client now issues complete or addBlock for /a/b/c, the editLog order becomes delete /a/b followed by complete /a/b/c. When the standby node replays the editLog, /a/b/c has already been deleted, so replaying the complete of /a/b/c fails. 2. If a delete has reached the block-collection stage and the administrator then runs saveNamespace and restarts the NameNode, an inode already removed from its parent's childList may remain in the InodeMap.
To solve these problems, step 1 additionally adds the inode being deleted to a set. When a file write operation arrives (a WriteFileOp that would write a logAllocateBlockId/logCloseFile editLog entry), check whether the file or one of its ancestor inodes is in the set, and throw a FileNotFoundException if it is. In addition, saveNamespace must wait until all inodes have been removed from the set before it runs.
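The five steps proposed above can be sketched with a guard set that rejects writes under a subtree whose delete is in flight. This is a minimal, hypothetical illustration of the locking pattern, not the actual NameNode code; the `duringBlockCollection` callback stands in for the long, unlocked subtree walk of step 2:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.locks.ReentrantLock;

public class PhasedDelete {
  private final ReentrantLock fsLock = new ReentrantLock();
  // Inodes with a delete in flight; writes under them must be rejected.
  private final Set<String> deleting = new HashSet<>();

  /** True if path, or any ancestor of path, has a delete in flight. */
  public boolean isBeingDeleted(String path) {
    fsLock.lock();
    try {
      for (String p : deleting) {
        if (path.equals(p) || path.startsWith(p + "/")) {
          return true;
        }
      }
      return false;
    } finally {
      fsLock.unlock();
    }
  }

  /**
   * Step 1 (locked): detach from parent, log the edit, mark deleting.
   * Step 2 (unlocked): collect blocks -- modeled by the callback.
   * Steps 3-5 (locked, short): quota/lease/InodeMap/block cleanup, unmark.
   */
  public void delete(String path, Runnable duringBlockCollection) {
    fsLock.lock();
    try {
      deleting.add(path); // parent.removeChild + editLog write would go here
    } finally {
      fsLock.unlock();
    }

    if (duringBlockCollection != null) {
      duringBlockCollection.run(); // long subtree walk, no lock held
    }

    fsLock.lock();
    try {
      deleting.remove(path); // quota, leases, InodeMap, blocks cleaned up
    } finally {
      fsLock.unlock();
    }
  }
}
```

While the unlocked collection runs, `isBeingDeleted` lets write paths (the WriteFileOp check described above) detect and reject operations under the deleted directory, and a saveNamespace barrier can simply wait for the set to drain.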
[jira] [Assigned] (HDFS-16214) Lock optimization for large deletes, no locks on the collection block
[ https://issues.apache.org/jira/browse/HDFS-16214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiangyi Zhu reassigned HDFS-16214: -- Assignee: Xiangyi Zhu > Lock optimization for large deletes, no locks on the collection block > --- > > Key: HDFS-16214 > URL: https://issues.apache.org/jira/browse/HDFS-16214 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 3.4.0 >Reporter: Xiangyi Zhu >Assignee: Xiangyi Zhu >Priority: Major
[jira] [Work logged] (HDFS-16205) Make hdfs_allowSnapshot tool cross platform
[ https://issues.apache.org/jira/browse/HDFS-16205?focusedWorklogId=647232=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647232 ] ASF GitHub Bot logged work on HDFS-16205: - Author: ASF GitHub Bot Created on: 07/Sep/21 08:40 Start Date: 07/Sep/21 08:40 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3388: URL: https://github.com/apache/hadoop/pull/3388#issuecomment-914108558 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 47m 47s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 7 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 37m 21s | | trunk passed | | +1 :green_heart: | compile | 2m 56s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 25s | | trunk passed | | +1 :green_heart: | shadedclient | 56m 58s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 57m 15s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 15s | | the patch passed | | +1 :green_heart: | compile | 2m 54s | | the patch passed | | +1 :green_heart: | cc | 2m 54s | | the patch passed | | +1 :green_heart: | golang | 2m 54s | | the patch passed | | +1 :green_heart: | javac | 2m 54s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | mvnsite | 0m 16s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 7s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 56m 5s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3388/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt) | hadoop-hdfs-native-client in the patch failed. | | +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. | | | | 183m 16s | | | | Reason | Tests | |---:|:--| | Failed CTEST tests | test_libhdfs_threaded_hdfspp_test_shim_static | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3388/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3388 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell golang | | uname | Linux 4b0fe4618deb 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 0fe3315bc042468e3c0040f614ffa754c8515245 | | Default Java | Red Hat, Inc.-1.8.0_302-b08 | | CTEST | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3388/1/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3388/1/testReport/ | | Max. process+thread count | 571 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3388/1/console | | versions | git=2.9.5 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. 
Issue Time Tracking --- Worklog Id: (was: 647232) Time Spent: 20m (was: 10m) > Make hdfs_allowSnapshot tool cross platform > --- > > Key: HDFS-16205 > URL:
[jira] [Commented] (HDFS-16044) Fix getListing call getLocatedBlocks even source is a directory
[ https://issues.apache.org/jira/browse/HDFS-16044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17411018#comment-17411018 ] Hadoop QA commented on HDFS-16044: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 47s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} {color} | {color:green} 0m 0s{color} | {color:green}test4tests{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 32m 2s{color} | {color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/712/artifact/out/branch-mvninstall-root.txt{color} | {color:red} root in trunk failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 29s{color} | {color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/712/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt{color} | {color:red} hadoop-hdfs-client in trunk failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 29s{color} | {color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/712/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt{color} | {color:red} hadoop-hdfs-client in trunk failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. 
{color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 27s{color} | {color:orange}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/712/artifact/out/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt{color} | {color:orange} The patch fails to run checkstyle in hadoop-hdfs-client {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 28s{color} | {color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/712/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-client.txt{color} | {color:red} hadoop-hdfs-client in trunk failed. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 1m 32s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 29s{color} | {color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/712/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt{color} | {color:red} hadoop-hdfs-client in trunk failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 29s{color} | {color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/712/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt{color} | {color:red} hadoop-hdfs-client in trunk failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 3m 1s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are enabled, using SpotBugs. 
{color} | | {color:red}-1{color} | {color:red} spotbugs {color} | {color:red} 0m 29s{color} | {color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/712/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt{color} | {color:red} hadoop-hdfs-client in trunk failed. {color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 22s{color} | {color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/712/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-client.txt{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 22s{color} | {color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/712/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt{color} | {color:red} hadoop-hdfs-client in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 22s{color} |
[jira] [Work logged] (HDFS-16210) Add the option of refreshCallQueue to RouterAdmin
[ https://issues.apache.org/jira/browse/HDFS-16210?focusedWorklogId=647199=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647199 ] ASF GitHub Bot logged work on HDFS-16210: - Author: ASF GitHub Bot Created on: 07/Sep/21 07:12 Start Date: 07/Sep/21 07:12 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3379: URL: https://github.com/apache/hadoop/pull/3379#issuecomment-914051141 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 39s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 40s | | trunk passed | | +1 :green_heart: | compile | 0m 44s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 25s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 43s | | trunk passed | | +1 :green_heart: | javadoc | 0m 37s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 52s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 25s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 47s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 34s | | the patch passed | | +1 :green_heart: | compile | 0m 35s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 35s | | the patch passed | | +1 :green_heart: | compile | 0m 29s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 29s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 17s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 35s | | the patch passed | | +1 :green_heart: | javadoc | 0m 32s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 48s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 21s | | the patch passed | | +1 :green_heart: | shadedclient | 14m 41s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 20m 25s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. 
| | | | 95m 11s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3379/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3379 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux fa3e9f00b16d 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 3ade7622250b39d226330984a02e865dd27bc2aa | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3379/3/testReport/ | | Max. process+thread count | 2603 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3379/3/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
[jira] [Updated] (HDFS-16187) SnapshotDiff behaviour with Xattrs and Acls is not consistent across NN restarts with checkpointing
[ https://issues.apache.org/jira/browse/HDFS-16187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Srinivasu Majeti updated HDFS-16187:

Description:
The below test shows that the snapshot diff across snapshots is not consistent with Xattrs (the EZ here setting the Xattr) across NN restarts with a checkpointed FsImage.
{code:java}
@Test
public void testEncryptionZonesWithSnapshots() throws Exception {
  final Path snapshottable = new Path("/zones");
  fsWrapper.mkdir(snapshottable, FsPermission.getDirDefault(), true);
  dfsAdmin.allowSnapshot(snapshottable);
  dfsAdmin.createEncryptionZone(snapshottable, TEST_KEY, NO_TRASH);
  fs.createSnapshot(snapshottable, "snap1");
  SnapshotDiffReport report =
      fs.getSnapshotDiffReport(snapshottable, "snap1", "");
  Assert.assertEquals(0, report.getDiffList().size());
  report = fs.getSnapshotDiffReport(snapshottable, "snap1", "");
  System.out.println(report);
  Assert.assertEquals(0, report.getDiffList().size());
  fs.setSafeMode(SafeModeAction.SAFEMODE_ENTER);
  fs.saveNamespace();
  fs.setSafeMode(SafeModeAction.SAFEMODE_LEAVE);
  cluster.restartNameNode(true);
  report = fs.getSnapshotDiffReport(snapshottable, "snap1", "");
  Assert.assertEquals(0, report.getDiffList().size());
}{code}
{code:java}
Pre Restart:
Difference between snapshot snap1 and current directory under directory /zones:

Post Restart:
Difference between snapshot snap1 and current directory under directory /zones:
M .{code}
The side effect of this behavior: distcp with snapshot diff would fail with the error below, complaining that the target cluster has some data changed.
{code:java}
WARN tools.DistCp: The target has been modified since snapshot x
{code}

was:
The below test shows that the snapshot diff across snapshots is not consistent with Xattrs (the EZ here setting the Xattr) across NN restarts with a checkpointed FsImage.
> SnapshotDiff behaviour with Xattrs and Acls is not consistent across NN
> restarts with checkpointing
> ---
>
>                 Key: HDFS-16187
>                 URL: https://issues.apache.org/jira/browse/HDFS-16187
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: snapshots
>            Reporter: Srinivasu Majeti
>            Assignee: Shashikant Banerjee
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> The below test shows that the snapshot diff across snapshots is not
> consistent with Xattrs (the EZ here setting the Xattr) across NN restarts with a
> checkpointed FsImage.
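The distcp failure reported in HDFS-16187 surfaces during a snapshot-diff driven incremental copy. A minimal sketch of such an invocation, with hypothetical cluster URIs and a hypothetical second snapshot name (only `snap1` appears in the issue):

```shell
# Incremental copy driven by the diff between two snapshots of the source dir.
# -diff requires -update, and also requires that the target has not changed
# since snap1 was taken; a spurious "M ." entry in the target's snapshot diff
# after a NameNode restart makes DistCp abort with:
#   WARN tools.DistCp: The target has been modified since snapshot snap1
hadoop distcp -update -diff snap1 snap2 \
    hdfs://src-cluster/zones hdfs://dst-cluster/zones
```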
[jira] [Commented] (HDFS-16044) Fix getListing call getLocatedBlocks even source is a directory
[ https://issues.apache.org/jira/browse/HDFS-16044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17410945#comment-17410945 ]

Hadoop QA commented on HDFS-16044:
--

| (/) *{color:green}+1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 45s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green} 0m 0s{color} | {color:green}test4tests{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 40s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 32s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 18m 29s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are enabled, using SpotBugs. {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green} 2m 42s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 46s{color} | {color:green}{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green} 2m 32s{color} | {color:green}{color} |