[jira] [Updated] (HDFS-16231) Fix TestDataNodeMetrics#testReceivePacketSlowMetrics
[ https://issues.apache.org/jira/browse/HDFS-16231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haiyang Hu updated HDFS-16231: -- Description: TestDataNodeMetrics#testReceivePacketSlowMetrics fails with stacktrace: {code:java} java.lang.AssertionError: Expected exactly one metric for name TotalPacketsReceived Expected :1 Actual :0 at org.junit.Assert.fail(Assert.java:89) at org.junit.Assert.failNotEquals(Assert.java:835) at org.junit.Assert.assertEquals(Assert.java:647) at org.apache.hadoop.test.MetricsAsserts.checkCaptured(MetricsAsserts.java:278) at org.apache.hadoop.test.MetricsAsserts.getLongCounter(MetricsAsserts.java:237) at org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics.testReceivePacketSlowMetrics(TestDataNodeMetrics.java:200) {code} {code:java} // Error MetricsName in current code,e.g TotalPacketsReceived,TotalPacketsSlowWriteToMirror,TotalPacketsSlowWriteToDisk,TotalPacketsSlowWriteToOsCache MetricsRecordBuilder dnMetrics = getMetrics(datanode.getMetrics().name()); assertTrue("More than 1 packet received", getLongCounter("TotalPacketsReceived", dnMetrics) > 1L); assertTrue("More than 1 slow packet to mirror", getLongCounter("TotalPacketsSlowWriteToMirror", dnMetrics) > 1L); assertCounter("TotalPacketsSlowWriteToDisk", 1L, dnMetrics); assertCounter("TotalPacketsSlowWriteToOsCache", 0L, dnMetrics); {code} was: TestDataNodeMetrics#testReceivePacketSlowMetrics fails with stacktrace: {code:java} java.lang.AssertionError: Expected exactly one metric for name TotalPacketsReceived Expected :1 Actual :0 at org.junit.Assert.fail(Assert.java:89) at org.junit.Assert.failNotEquals(Assert.java:835) at org.junit.Assert.assertEquals(Assert.java:647) at org.apache.hadoop.test.MetricsAsserts.checkCaptured(MetricsAsserts.java:278) at org.apache.hadoop.test.MetricsAsserts.getLongCounter(MetricsAsserts.java:237) at org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics.testReceivePacketSlowMetrics(TestDataNodeMetrics.java:200) {code} > 
Fix TestDataNodeMetrics#testReceivePacketSlowMetrics > > > Key: HDFS-16231 > URL: https://issues.apache.org/jira/browse/HDFS-16231 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Haiyang Hu >Assignee: Haiyang Hu >Priority: Major > > TestDataNodeMetrics#testReceivePacketSlowMetrics fails with stacktrace: > {code:java} > java.lang.AssertionError: Expected exactly one metric for name > TotalPacketsReceived > Expected :1 > Actual :0 > > at org.junit.Assert.fail(Assert.java:89) > at org.junit.Assert.failNotEquals(Assert.java:835) > at org.junit.Assert.assertEquals(Assert.java:647) > at > org.apache.hadoop.test.MetricsAsserts.checkCaptured(MetricsAsserts.java:278) > at > org.apache.hadoop.test.MetricsAsserts.getLongCounter(MetricsAsserts.java:237) > at > org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics.testReceivePacketSlowMetrics(TestDataNodeMetrics.java:200) > {code} > {code:java} > // Error MetricsName in current code,e.g > TotalPacketsReceived,TotalPacketsSlowWriteToMirror,TotalPacketsSlowWriteToDisk,TotalPacketsSlowWriteToOsCache > MetricsRecordBuilder dnMetrics = > getMetrics(datanode.getMetrics().name()); > assertTrue("More than 1 packet received", > getLongCounter("TotalPacketsReceived", dnMetrics) > 1L); > assertTrue("More than 1 slow packet to mirror", > getLongCounter("TotalPacketsSlowWriteToMirror", dnMetrics) > 1L); > assertCounter("TotalPacketsSlowWriteToDisk", 1L, dnMetrics); > assertCounter("TotalPacketsSlowWriteToOsCache", 0L, dnMetrics); > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
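The failure above comes from asserting on counter names that the DataNode never registers: MetricsAsserts.getLongCounter requires exactly one captured metric with the requested name, and zero matches fires the "Expected exactly one metric" assertion. The following is a minimal, self-contained sketch of that lookup contract (plain Java, no Hadoop dependency; the helper, the map, and the registered counter name are illustrative stand-ins, not the real MetricsAsserts API):

```java
import java.util.HashMap;
import java.util.Map;

public class MetricLookupDemo {
    // Simplified stand-in for MetricsAsserts.getLongCounter: the lookup
    // demands exactly one registered metric under the requested name.
    static long getLongCounter(String name, Map<String, Long> captured) {
        if (!captured.containsKey(name)) {
            // Mirrors "Expected exactly one metric ... Expected :1 Actual :0"
            throw new AssertionError(
                "Expected exactly one metric for name " + name);
        }
        return captured.get(name);
    }

    public static void main(String[] args) {
        Map<String, Long> captured = new HashMap<>();
        // Suppose the DataNode registers a counter under a different name
        // than the one the test asserts on (hypothetical name below).
        captured.put("PacketsReceived", 5L);

        try {
            // The test's non-existent name finds nothing, reproducing
            // the AssertionError seen in the stack trace above.
            getLongCounter("TotalPacketsReceived", captured);
        } catch (AssertionError expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```

The fix is therefore to make the names asserted by the test match the counter names the DataNode actually registers, rather than to change the lookup.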
[jira] [Updated] (HDFS-16231) Fix TestDataNodeMetrics#testReceivePacketSlowMetrics
[ https://issues.apache.org/jira/browse/HDFS-16231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haiyang Hu updated HDFS-16231: -- Description: TestDataNodeMetrics#testReceivePacketSlowMetrics fails with stacktrace: {code:java} java.lang.AssertionError: Expected exactly one metric for name TotalPacketsReceived Expected :1 Actual :0 at org.junit.Assert.fail(Assert.java:89) at org.junit.Assert.failNotEquals(Assert.java:835) at org.junit.Assert.assertEquals(Assert.java:647) at org.apache.hadoop.test.MetricsAsserts.checkCaptured(MetricsAsserts.java:278) at org.apache.hadoop.test.MetricsAsserts.getLongCounter(MetricsAsserts.java:237) at org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics.testReceivePacketSlowMetrics(TestDataNodeMetrics.java:200) {code} was: TestDataNodeMetrics#testReceivePacketSlowMetrics fails with stacktrace: java.lang.AssertionError: Expected exactly one metric for name TotalPacketsReceived Expected :1 Actual :0 at org.junit.Assert.fail(Assert.java:89) at org.junit.Assert.failNotEquals(Assert.java:835) at org.junit.Assert.assertEquals(Assert.java:647) at org.apache.hadoop.test.MetricsAsserts.checkCaptured(MetricsAsserts.java:278) at org.apache.hadoop.test.MetricsAsserts.getLongCounter(MetricsAsserts.java:237) at org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics.testReceivePacketSlowMetrics(TestDataNodeMetrics.java:200) > Fix TestDataNodeMetrics#testReceivePacketSlowMetrics > > > Key: HDFS-16231 > URL: https://issues.apache.org/jira/browse/HDFS-16231 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Haiyang Hu >Assignee: Haiyang Hu >Priority: Major > > TestDataNodeMetrics#testReceivePacketSlowMetrics fails with stacktrace: > {code:java} > java.lang.AssertionError: Expected exactly one metric for name > TotalPacketsReceived > Expected :1 > Actual :0 > > at org.junit.Assert.fail(Assert.java:89) > at org.junit.Assert.failNotEquals(Assert.java:835) > at org.junit.Assert.assertEquals(Assert.java:647) > at > 
org.apache.hadoop.test.MetricsAsserts.checkCaptured(MetricsAsserts.java:278) > at > org.apache.hadoop.test.MetricsAsserts.getLongCounter(MetricsAsserts.java:237) > at > org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics.testReceivePacketSlowMetrics(TestDataNodeMetrics.java:200) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-16231) Fix TestDataNodeMetrics#testReceivePacketSlowMetrics
[ https://issues.apache.org/jira/browse/HDFS-16231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haiyang Hu updated HDFS-16231: -- Description: TestDataNodeMetrics#testReceivePacketSlowMetrics fails with stacktrace: java.lang.AssertionError: Expected exactly one metric for name TotalPacketsReceived Expected :1 Actual :0 at org.junit.Assert.fail(Assert.java:89) at org.junit.Assert.failNotEquals(Assert.java:835) at org.junit.Assert.assertEquals(Assert.java:647) at org.apache.hadoop.test.MetricsAsserts.checkCaptured(MetricsAsserts.java:278) at org.apache.hadoop.test.MetricsAsserts.getLongCounter(MetricsAsserts.java:237) at org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics.testReceivePacketSlowMetrics(TestDataNodeMetrics.java:200) was: TestMover#testMoverWithStripedFile fails intermittently with stacktrace: > Fix TestDataNodeMetrics#testReceivePacketSlowMetrics > > > Key: HDFS-16231 > URL: https://issues.apache.org/jira/browse/HDFS-16231 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Haiyang Hu >Assignee: Haiyang Hu >Priority: Major > > TestDataNodeMetrics#testReceivePacketSlowMetrics fails with stacktrace: > java.lang.AssertionError: Expected exactly one metric for name > TotalPacketsReceived > Expected :1 > Actual :0 > > at org.junit.Assert.fail(Assert.java:89) > at org.junit.Assert.failNotEquals(Assert.java:835) > at org.junit.Assert.assertEquals(Assert.java:647) > at > org.apache.hadoop.test.MetricsAsserts.checkCaptured(MetricsAsserts.java:278) > at > org.apache.hadoop.test.MetricsAsserts.getLongCounter(MetricsAsserts.java:237) > at > org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics.testReceivePacketSlowMetrics(TestDataNodeMetrics.java:200) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-16231) Fix TestDataNodeMetrics#testReceivePacketSlowMetrics
[ https://issues.apache.org/jira/browse/HDFS-16231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haiyang Hu updated HDFS-16231: -- Description: TestMover#testMoverWithStripedFile fails intermittently with stacktrace: > Fix TestDataNodeMetrics#testReceivePacketSlowMetrics > > > Key: HDFS-16231 > URL: https://issues.apache.org/jira/browse/HDFS-16231 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Haiyang Hu >Assignee: Haiyang Hu >Priority: Major > > TestMover#testMoverWithStripedFile fails intermittently with stacktrace: -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-16231) Fix TestDataNodeMetrics#testReceivePacketSlowMetrics
[ https://issues.apache.org/jira/browse/HDFS-16231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haiyang Hu updated HDFS-16231: -- Parent: HDFS-15646 Issue Type: Sub-task (was: Bug) > Fix TestDataNodeMetrics#testReceivePacketSlowMetrics > > > Key: HDFS-16231 > URL: https://issues.apache.org/jira/browse/HDFS-16231 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Haiyang Hu >Assignee: Haiyang Hu >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-16231) Fix TestDataNodeMetrics#testReceivePacketSlowMetrics
[ https://issues.apache.org/jira/browse/HDFS-16231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haiyang Hu updated HDFS-16231: -- Issue Type: Bug (was: Task) > Fix TestDataNodeMetrics#testReceivePacketSlowMetrics > > > Key: HDFS-16231 > URL: https://issues.apache.org/jira/browse/HDFS-16231 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Haiyang Hu >Assignee: Haiyang Hu >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-16231) Fix TestDataNodeMetrics#testReceivePacketSlowMetrics
Haiyang Hu created HDFS-16231: - Summary: Fix TestDataNodeMetrics#testReceivePacketSlowMetrics Key: HDFS-16231 URL: https://issues.apache.org/jira/browse/HDFS-16231 Project: Hadoop HDFS Issue Type: Task Reporter: Haiyang Hu Assignee: Haiyang Hu -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16205) Make hdfs_allowSnapshot tool cross platform
[ https://issues.apache.org/jira/browse/HDFS-16205?focusedWorklogId=652676=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-652676 ] ASF GitHub Bot logged work on HDFS-16205: - Author: ASF GitHub Bot Created on: 19/Sep/21 05:21 Start Date: 19/Sep/21 05:21 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3388: URL: https://github.com/apache/hadoop/pull/3388#issuecomment-922417309 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 42m 27s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 7 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 41m 29s | | trunk passed | | +1 :green_heart: | compile | 2m 34s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 25s | | trunk passed | | +1 :green_heart: | shadedclient | 65m 7s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 65m 25s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 14s | | the patch passed | | +1 :green_heart: | compile | 2m 31s | | the patch passed | | +1 :green_heart: | cc | 2m 31s | | the patch passed | | +1 :green_heart: | golang | 2m 31s | | the patch passed | | +1 :green_heart: | javac | 2m 31s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | mvnsite | 0m 15s | | the patch passed | | +1 :green_heart: | shadedclient | 26m 20s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 47m 35s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3388/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt) | hadoop-hdfs-native-client in the patch failed. | | +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. | | | | 187m 21s | | | | Reason | Tests | |---:|:--| | Failed CTEST tests | test_libhdfs_threaded_hdfspp_test_shim_static | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3388/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3388 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell golang | | uname | Linux cc3fa12e560d 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 8ac9b188f910fe5c33474ba8acec6c352ba12eec | | Default Java | Red Hat, Inc.-1.8.0_302-b08 | | CTEST | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3388/4/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3388/4/testReport/ | | Max. process+thread count | 519 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3388/4/console | | versions | git=2.9.5 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. 
-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 652676) Time Spent: 1.5h (was: 1h 20m) > Make hdfs_allowSnapshot tool cross platform > --- > > Key: HDFS-16205 > URL:
[jira] [Work logged] (HDFS-16220) [FGL]Configurable INodeMap#NAMESPACE_KEY_DEPTH_RANGES_STATIC
[ https://issues.apache.org/jira/browse/HDFS-16220?focusedWorklogId=652674=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-652674 ] ASF GitHub Bot logged work on HDFS-16220: - Author: ASF GitHub Bot Created on: 19/Sep/21 03:26 Start Date: 19/Sep/21 03:26 Worklog Time Spent: 10m Work Description: xinglin commented on a change in pull request #3417: URL: https://github.com/apache/hadoop/pull/3417#discussion_r711675605 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeMap.java ## @@ -191,8 +202,28 @@ static INodeMap newInstance(INodeDirectory rootDir, return map.iterator(); } - private INodeMap(INodeDirectory rootDir, FSNamesystem ns) { + public Set rangeKeys() { +return ((PartitionedGSet)map).entryKeySet(); + } + + private static void setStaticField(Configuration conf) { +numSpaceKeyDepth = conf.getInt(DFSConfigKeys.DFS_NAMENODE_INOD_NAMESPACE_KEY_DEPTH, +DFSConfigKeys.DFS_NAMENODE_INOD_NAMESPACE_KEY_DEPTH_DEFAULT); +if (numSpaceKeyDepth < 1) { + numSpaceKeyDepth = DFSConfigKeys.DFS_NAMENODE_INOD_NAMESPACE_KEY_DEPTH_DEFAULT; +} + +numRanges = conf.getLong(DFSConfigKeys.DFS_NAMENODE_INOD_NUM_RANGES, Review comment: Can we add a check here to make sure numRanges is power of 2? thanks, -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 652674) Time Spent: 2h 40m (was: 2.5h) > [FGL]Configurable INodeMap#NAMESPACE_KEY_DEPTH_RANGES_STATIC > > > Key: HDFS-16220 > URL: https://issues.apache.org/jira/browse/HDFS-16220 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs, namenode >Reporter: JiangHua Zhu >Assignee: JiangHua Zhu >Priority: Major > Labels: pull-request-available > Attachments: debug1.jpg, debug2.jpg > > Time Spent: 2h 40m > Remaining Estimate: 0h > > In INodeMap, NAMESPACE_KEY_DEPTH and NUM_RANGES_STATIC are a fixed value, we > should make it configurable. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
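The review comment above asks for a check that numRanges is a power of two. One common way to express that check, shown here as a standalone sketch rather than the actual HDFS patch, is the bit trick `(n & (n - 1)) == 0` for positive n:

```java
public class PowerOfTwoCheck {
    // True iff n is a positive power of two: such values have exactly one
    // set bit, so clearing the lowest set bit via n & (n - 1) yields zero.
    static boolean isPowerOfTwo(long n) {
        return n > 0 && (n & (n - 1)) == 0;
    }

    public static void main(String[] args) {
        // Hypothetical validation of a configured range count before use:
        long numRanges = 256;
        if (!isPowerOfTwo(numRanges)) {
            throw new IllegalArgumentException(
                "numRanges must be a power of 2, got " + numRanges);
        }
        System.out.println(numRanges + " is a power of 2");
    }
}
```

In the actual patch this would replace a silent fallback with an explicit configuration error, so a misconfigured numRanges fails fast instead of producing a skewed key partitioning.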
[jira] [Work logged] (HDFS-16230) Minor bug in TestStorageRestore
[ https://issues.apache.org/jira/browse/HDFS-16230?focusedWorklogId=652668=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-652668 ] ASF GitHub Bot logged work on HDFS-16230: - Author: ASF GitHub Bot Created on: 19/Sep/21 00:25 Start Date: 19/Sep/21 00:25 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3455: URL: https://github.com/apache/hadoop/pull/3455#issuecomment-922393572 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 52s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 43s | | trunk passed | | +1 :green_heart: | compile | 1m 22s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 14s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 0s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 23s | | trunk passed | | +1 :green_heart: | javadoc | 0m 57s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 30s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 10s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 25s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 12s | | the patch passed | | +1 :green_heart: | compile | 1m 14s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 14s | | the patch passed | | +1 :green_heart: | compile | 1m 7s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 7s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 51s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 15s | | the patch passed | | +1 :green_heart: | javadoc | 0m 46s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 19s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 10s | | the patch passed | | +1 :green_heart: | shadedclient | 21m 6s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 227m 10s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 45s | | The patch does not generate ASF License warnings. 
| | | | 322m 34s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3455/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3455 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux cee30baef7a2 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / ae46a1a5c9afa6306f688f3dd8ac143278d5e0d5 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3455/2/testReport/ | | Max. process+thread count | 2848 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3455/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This
[jira] [Work logged] (HDFS-16230) Minor bug in TestStorageRestore
[ https://issues.apache.org/jira/browse/HDFS-16230?focusedWorklogId=652653=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-652653 ] ASF GitHub Bot logged work on HDFS-16230: - Author: ASF GitHub Bot Created on: 18/Sep/21 19:05 Start Date: 18/Sep/21 19:05 Worklog Time Spent: 10m Work Description: thomasleplus commented on a change in pull request #3455: URL: https://github.com/apache/hadoop/pull/3455#discussion_r711628321 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStorageRestore.java ## @@ -294,7 +294,7 @@ public void testDfsAdminCmd() throws Exception { restore = fsi.getStorage().getRestoreFailedStorage(); assertTrue("After check call restore is " + restore, restore); String commandOutput = cmdResult.getCommandOutput(); - commandOutput.trim(); + commandOutput = commandOutput.trim(); assertTrue(commandOutput.contains("restoreFailedStorage is set to true")); Review comment: I pushed the change as requested. Thanks for your patience. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 652653) Time Spent: 1.5h (was: 1h 20m) > Minor bug in TestStorageRestore > --- > > Key: HDFS-16230 > URL: https://issues.apache.org/jira/browse/HDFS-16230 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Thomas Leplus >Priority: Trivial > Labels: pull-request-available > Time Spent: 1.5h > Remaining Estimate: 0h > > Strings being immutable, you need to use the trim() method return value. 
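The root cause discussed in this thread, `String.trim()` returning a new instance rather than mutating the receiver, can be reproduced with a minimal, self-contained sketch (plain Java, no Hadoop dependency; the class name and sample string are illustrative):

```java
public class TrimBugDemo {
    // Returns true if calling trim() without assigning its result
    // leaves the original string unchanged (it always does).
    static boolean trimWithoutAssignmentIsNoOp(String s) {
        String before = s;
        s.trim();                 // result discarded -- the buggy pattern
        return s.equals(before);  // the original reference is untouched
    }

    public static void main(String[] args) {
        String commandOutput = "  restoreFailedStorage is set to true  ";

        // Buggy pattern from the original test: trim() is called but
        // its return value is thrown away, so nothing is trimmed.
        commandOutput.trim();
        System.out.println("[" + commandOutput + "]");

        // Fixed pattern: Strings are immutable, so the trimmed copy
        // must be assigned back before it is used.
        commandOutput = commandOutput.trim();
        System.out.println("[" + commandOutput + "]");
    }
}
```

In this particular test the subsequent `contains(...)` assertion happened to pass regardless, which is why the discarded return value went unnoticed until now.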
[jira] [Work logged] (HDFS-16230) Minor bug in TestStorageRestore
[ https://issues.apache.org/jira/browse/HDFS-16230?focusedWorklogId=652652=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-652652 ] ASF GitHub Bot logged work on HDFS-16230: - Author: ASF GitHub Bot Created on: 18/Sep/21 18:59 Start Date: 18/Sep/21 18:59 Worklog Time Spent: 10m Work Description: thomasleplus commented on a change in pull request #3455: URL: https://github.com/apache/hadoop/pull/3455#discussion_r711627758 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStorageRestore.java ## @@ -294,7 +294,7 @@ public void testDfsAdminCmd() throws Exception { restore = fsi.getStorage().getRestoreFailedStorage(); assertTrue("After check call restore is " + restore, restore); String commandOutput = cmdResult.getCommandOutput(); - commandOutput.trim(); + commandOutput = commandOutput.trim(); assertTrue(commandOutput.contains("restoreFailedStorage is set to true")); Review comment: Yes, they are quick! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 652652) Time Spent: 1h 20m (was: 1h 10m) > Minor bug in TestStorageRestore > --- > > Key: HDFS-16230 > URL: https://issues.apache.org/jira/browse/HDFS-16230 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Thomas Leplus >Priority: Trivial > Labels: pull-request-available > Time Spent: 1h 20m > Remaining Estimate: 0h > > Strings being immutable, you need to use the trim() method return value. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16230) Minor bug in TestStorageRestore
[ https://issues.apache.org/jira/browse/HDFS-16230?focusedWorklogId=652651=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-652651 ] ASF GitHub Bot logged work on HDFS-16230: - Author: ASF GitHub Bot Created on: 18/Sep/21 18:56 Start Date: 18/Sep/21 18:56 Worklog Time Spent: 10m Work Description: ayushtkn commented on a change in pull request #3455: URL: https://github.com/apache/hadoop/pull/3455#discussion_r711627442 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStorageRestore.java ## @@ -294,7 +294,7 @@ public void testDfsAdminCmd() throws Exception { restore = fsi.getStorage().getRestoreFailedStorage(); assertTrue("After check call restore is " + restore, restore); String commandOutput = cmdResult.getCommandOutput(); - commandOutput.trim(); + commandOutput = commandOutput.trim(); assertTrue(commandOutput.contains("restoreFailedStorage is set to true")); Review comment: Your repo seems back https://github.com/thomasleplus/hadoop/tree/patch-1 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 652651) Time Spent: 1h 10m (was: 1h) > Minor bug in TestStorageRestore > --- > > Key: HDFS-16230 > URL: https://issues.apache.org/jira/browse/HDFS-16230 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Thomas Leplus >Priority: Trivial > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > Strings being immutable, you need to use the trim() method return value. 
[jira] [Work logged] (HDFS-16230) Minor bug in TestStorageRestore
[ https://issues.apache.org/jira/browse/HDFS-16230?focusedWorklogId=652649=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-652649 ] ASF GitHub Bot logged work on HDFS-16230: - Author: ASF GitHub Bot Created on: 18/Sep/21 18:43 Start Date: 18/Sep/21 18:43 Worklog Time Spent: 10m Work Description: thomasleplus commented on a change in pull request #3455: URL: https://github.com/apache/hadoop/pull/3455#discussion_r711626143 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStorageRestore.java ## @@ -294,7 +294,7 @@ public void testDfsAdminCmd() throws Exception { restore = fsi.getStorage().getRestoreFailedStorage(); assertTrue("After check call restore is " + restore, restore); String commandOutput = cmdResult.getCommandOutput(); - commandOutput.trim(); + commandOutput = commandOutput.trim(); assertTrue(commandOutput.contains("restoreFailedStorage is set to true")); Review comment: It says I have to ask GitHub support to restore a forked repo. I've created a ticket to request that, let's see if they can help. Thanks -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 652649) Time Spent: 1h (was: 50m) > Minor bug in TestStorageRestore > --- > > Key: HDFS-16230 > URL: https://issues.apache.org/jira/browse/HDFS-16230 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Thomas Leplus >Priority: Trivial > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > Strings being immutable, you need to use the trim() method return value. 
[jira] [Work logged] (HDFS-16230) Minor bug in TestStorageRestore
[ https://issues.apache.org/jira/browse/HDFS-16230?focusedWorklogId=652648=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-652648 ] ASF GitHub Bot logged work on HDFS-16230: - Author: ASF GitHub Bot Created on: 18/Sep/21 18:27 Start Date: 18/Sep/21 18:27 Worklog Time Spent: 10m Work Description: ayushtkn commented on a change in pull request #3455: URL: https://github.com/apache/hadoop/pull/3455#discussion_r711624343 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStorageRestore.java ## @@ -294,7 +294,7 @@ public void testDfsAdminCmd() throws Exception { restore = fsi.getStorage().getRestoreFailedStorage(); assertTrue("After check call restore is " + restore, restore); String commandOutput = cmdResult.getCommandOutput(); - commandOutput.trim(); + commandOutput = commandOutput.trim(); assertTrue(commandOutput.contains("restoreFailedStorage is set to true")); Review comment: See, if you can restore the repository https://docs.github.com/en/repositories/creating-and-managing-repositories/restoring-a-deleted-repository -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 652648) Time Spent: 50m (was: 40m) > Minor bug in TestStorageRestore > --- > > Key: HDFS-16230 > URL: https://issues.apache.org/jira/browse/HDFS-16230 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Thomas Leplus >Priority: Trivial > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > > Strings being immutable, you need to use the trim() method return value. 
[jira] [Work logged] (HDFS-16230) Minor bug in TestStorageRestore
[ https://issues.apache.org/jira/browse/HDFS-16230?focusedWorklogId=652645=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-652645 ] ASF GitHub Bot logged work on HDFS-16230: - Author: ASF GitHub Bot Created on: 18/Sep/21 18:04 Start Date: 18/Sep/21 18:04 Worklog Time Spent: 10m Work Description: thomasleplus commented on a change in pull request #3455: URL: https://github.com/apache/hadoop/pull/3455#discussion_r711621922 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStorageRestore.java ## @@ -294,7 +294,7 @@ public void testDfsAdminCmd() throws Exception { restore = fsi.getStorage().getRestoreFailedStorage(); assertTrue("After check call restore is " + restore, restore); String commandOutput = cmdResult.getCommandOutput(); - commandOutput.trim(); + commandOutput = commandOutput.trim(); assertTrue(commandOutput.contains("restoreFailedStorage is set to true")); Review comment: Indeed. Now I realize that deleting my forked repo rather hastily was a mistake :( I've tried various ways to edit this PR branch but didn't succeed. It seems that it leaves us with two options: either I close this PR and open a new one, or someone with the right permissions edit it. Let me know what you think is best. Sorry for the inconvenience. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 652645) Time Spent: 40m (was: 0.5h) > Minor bug in TestStorageRestore > --- > > Key: HDFS-16230 > URL: https://issues.apache.org/jira/browse/HDFS-16230 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Thomas Leplus >Priority: Trivial > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > > Strings being immutable, you need to use the trim() method return value.
[jira] [Work logged] (HDFS-16227) testMoverWithStripedFile fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-16227?focusedWorklogId=652595=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-652595 ] ASF GitHub Bot logged work on HDFS-16227: - Author: ASF GitHub Bot Created on: 18/Sep/21 11:03 Start Date: 18/Sep/21 11:03 Worklog Time Spent: 10m Work Description: ferhui commented on pull request #3429: URL: https://github.com/apache/hadoop/pull/3429#issuecomment-922259640 @virajjasani Thanks for contribution. @tasanuma @goiri @jojochuang Thanks for review! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 652595) Time Spent: 3h 10m (was: 3h) > testMoverWithStripedFile fails intermittently > - > > Key: HDFS-16227 > URL: https://issues.apache.org/jira/browse/HDFS-16227 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 3h 10m > Remaining Estimate: 0h > > TestMover#testMoverWithStripedFile fails intermittently with stacktrace: > {code:java} > [ERROR] > testMoverWithStripedFile(org.apache.hadoop.hdfs.server.mover.TestMover) Time > elapsed: 48.439 s <<< FAILURE![ERROR] > testMoverWithStripedFile(org.apache.hadoop.hdfs.server.mover.TestMover) Time > elapsed: 48.439 s <<< FAILURE!java.lang.AssertionError: expected: > but was: at org.junit.Assert.fail(Assert.java:89) at > org.junit.Assert.failNotEquals(Assert.java:835) at > org.junit.Assert.assertEquals(Assert.java:120) at > org.junit.Assert.assertEquals(Assert.java:146) at > org.apache.hadoop.hdfs.server.mover.TestMover.testMoverWithStripedFile(TestMover.java:965) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) at > java.lang.Thread.run(Thread.java:748) > {code} > e.g > https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3386/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
[jira] [Updated] (HDFS-16227) testMoverWithStripedFile fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hui Fei updated HDFS-16227: --- Fix Version/s: 3.4.0 Resolution: Fixed Status: Resolved (was: Patch Available) > testMoverWithStripedFile fails intermittently > - > > Key: HDFS-16227 > URL: https://issues.apache.org/jira/browse/HDFS-16227 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 3h 10m > Remaining Estimate: 0h > > TestMover#testMoverWithStripedFile fails intermittently with stacktrace: > {code:java} > [ERROR] > testMoverWithStripedFile(org.apache.hadoop.hdfs.server.mover.TestMover) Time > elapsed: 48.439 s <<< FAILURE![ERROR] > testMoverWithStripedFile(org.apache.hadoop.hdfs.server.mover.TestMover) Time > elapsed: 48.439 s <<< FAILURE!java.lang.AssertionError: expected: > but was: at org.junit.Assert.fail(Assert.java:89) at > org.junit.Assert.failNotEquals(Assert.java:835) at > org.junit.Assert.assertEquals(Assert.java:120) at > org.junit.Assert.assertEquals(Assert.java:146) at > org.apache.hadoop.hdfs.server.mover.TestMover.testMoverWithStripedFile(TestMover.java:965) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) 
> at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) at > java.lang.Thread.run(Thread.java:748) > {code} > e.g > https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3386/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
[jira] [Work logged] (HDFS-16227) testMoverWithStripedFile fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-16227?focusedWorklogId=652594=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-652594 ] ASF GitHub Bot logged work on HDFS-16227: - Author: ASF GitHub Bot Created on: 18/Sep/21 11:02 Start Date: 18/Sep/21 11:02 Worklog Time Spent: 10m Work Description: ferhui merged pull request #3429: URL: https://github.com/apache/hadoop/pull/3429 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 652594) Time Spent: 3h (was: 2h 50m) > testMoverWithStripedFile fails intermittently > - > > Key: HDFS-16227 > URL: https://issues.apache.org/jira/browse/HDFS-16227 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 3h > Remaining Estimate: 0h > > TestMover#testMoverWithStripedFile fails intermittently with stacktrace: > {code:java} > [ERROR] > testMoverWithStripedFile(org.apache.hadoop.hdfs.server.mover.TestMover) Time > elapsed: 48.439 s <<< FAILURE![ERROR] > testMoverWithStripedFile(org.apache.hadoop.hdfs.server.mover.TestMover) Time > elapsed: 48.439 s <<< FAILURE!java.lang.AssertionError: expected: > but was: at org.junit.Assert.fail(Assert.java:89) at > org.junit.Assert.failNotEquals(Assert.java:835) at > org.junit.Assert.assertEquals(Assert.java:120) at > org.junit.Assert.assertEquals(Assert.java:146) at > org.apache.hadoop.hdfs.server.mover.TestMover.testMoverWithStripedFile(TestMover.java:965) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) at > java.lang.Thread.run(Thread.java:748) > {code} > e.g > https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3386/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
[jira] [Work logged] (HDFS-16230) Minor bug in TestStorageRestore
[ https://issues.apache.org/jira/browse/HDFS-16230?focusedWorklogId=652578=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-652578 ] ASF GitHub Bot logged work on HDFS-16230: - Author: ASF GitHub Bot Created on: 18/Sep/21 09:17 Start Date: 18/Sep/21 09:17 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3455: URL: https://github.com/apache/hadoop/pull/3455#issuecomment-922246612 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 46s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 52s | | trunk passed | | +1 :green_heart: | compile | 1m 24s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 15s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 1s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 22s | | trunk passed | | +1 :green_heart: | javadoc | 0m 56s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 24s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 8s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 28s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 14s | | the patch passed | | +1 :green_heart: | compile | 1m 15s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 15s | | the patch passed | | +1 :green_heart: | compile | 1m 8s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 8s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 50s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 13s | | the patch passed | | +1 :green_heart: | javadoc | 0m 46s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 19s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 5s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 49s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 226m 53s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 45s | | The patch does not generate ASF License warnings. 
| | | | 321m 55s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3455/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3455 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 4d4abcd5db1c 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 6189a8e38d2f0b6ab340b0e4a76a27da362f013e | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3455/1/testReport/ | | Max. process+thread count | 2942 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3455/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated.
[jira] [Work logged] (HDFS-16107) Split RPC configuration to isolate RPC
[ https://issues.apache.org/jira/browse/HDFS-16107?focusedWorklogId=652575=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-652575 ] ASF GitHub Bot logged work on HDFS-16107: - Author: ASF GitHub Bot Created on: 18/Sep/21 08:24 Start Date: 18/Sep/21 08:24 Worklog Time Spent: 10m Work Description: jianghuazhu commented on a change in pull request #3170: URL: https://github.com/apache/hadoop/pull/3170#discussion_r711552586 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java ## @@ -3192,23 +3196,41 @@ protected Server(String bindAddress, int port, if (queueSizePerHandler != -1) { this.maxQueueSize = handlerCount * queueSizePerHandler; } else { - this.maxQueueSize = handlerCount * conf.getInt( - CommonConfigurationKeys.IPC_SERVER_HANDLER_QUEUE_SIZE_KEY, - CommonConfigurationKeys.IPC_SERVER_HANDLER_QUEUE_SIZE_DEFAULT); + this.maxQueueSize = conf.getInt(getQueueClassPrefix() + "." + + CommonConfigurationKeys.SERVER_HANDLER_QUEUE_SIZE_KEY, 0); + if (this.maxQueueSize < 1) { +this.maxQueueSize = handlerCount * conf.getInt( +CommonConfigurationKeys.IPC_SERVER_HANDLER_QUEUE_SIZE_KEY, +CommonConfigurationKeys.IPC_SERVER_HANDLER_QUEUE_SIZE_DEFAULT); + } +} +int tmpMaxRespSize = conf.getInt(getQueueClassPrefix() + "." 
+ +CommonConfigurationKeys.SERVER_RPC_MAX_RESPONSE_SIZE_KEY, 0); +if (tmpMaxRespSize < 1) { + this.maxRespSize = conf.getInt( + CommonConfigurationKeys.IPC_SERVER_RPC_MAX_RESPONSE_SIZE_KEY, + CommonConfigurationKeys.IPC_SERVER_RPC_MAX_RESPONSE_SIZE_DEFAULT); +} else { + this.maxRespSize = tmpMaxRespSize; } -this.maxRespSize = conf.getInt( -CommonConfigurationKeys.IPC_SERVER_RPC_MAX_RESPONSE_SIZE_KEY, -CommonConfigurationKeys.IPC_SERVER_RPC_MAX_RESPONSE_SIZE_DEFAULT); if (numReaders != -1) { this.readThreads = numReaders; } else { - this.readThreads = conf.getInt( - CommonConfigurationKeys.IPC_SERVER_RPC_READ_THREADS_KEY, - CommonConfigurationKeys.IPC_SERVER_RPC_READ_THREADS_DEFAULT); + this.readThreads = conf.getInt(getQueueClassPrefix() + "." + + CommonConfigurationKeys.SERVER_RPC_READ_THREADS_KEY, 0); Review comment: Thanks @tomscut for the comment. Suppose there is an RPC service on port 8020 and we want its read thread pool size (ipc.8020.server.read.threadpool.size) to differ from that of other RPC servers. The logic here is: 1. First check whether ipc.8020.server.read.threadpool.size has been set. If it has not, the value we read is the sentinel 0, indicating that the shared configuration (ipc.server.read.threadpool.size) should be used. 2. In that case we fall back to the shared ipc.server.read.threadpool.size. At present, CommonConfigurationKeys.IPC_SERVER_RPC_READ_THREADS_DEFAULT is 1, so if we used that constant as the per-port default we would still get a valid-looking value even when ipc.8020.server.read.threadpool.size is not set, which may cause ambiguity. This is my thinking; happy to keep discussing. @tomscut -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 652575) Time Spent: 3h 40m (was: 3.5h) > Split RPC configuration to isolate RPC > -- > > Key: HDFS-16107 > URL: https://issues.apache.org/jira/browse/HDFS-16107 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: JiangHua Zhu >Assignee: JiangHua Zhu >Priority: Minor > Labels: pull-request-available > Time Spent: 3h 40m > Remaining Estimate: 0h > > For RPC of different ports, there are some common configurations, such as: > ipc.server.read.threadpool.size > ipc.server.read.connection-queue.size > ipc.server.handler.queue.size > Once we configure these values, it will affect all requests (including client > and requests within the cluster). > It is necessary for us to split these configurations to adapt to different > ports, such as: > ipc.8020.server.read.threadpool.size > ipc.8021.server.read.threadpool.size > ipc.8020.server.read.connection-queue.size > ipc.8021.server.read.connection-queue.size > The advantage of this is to isolate the RPC to deal with the pressure of > requests from all sides.
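The per-port lookup order described in the review comment can be sketched without Hadoop itself. In this sketch a plain Map stands in for Hadoop's Configuration class, the key names follow the proposal in the issue, and a sentinel default of 0 marks "per-port key unset, fall back to the shared key"; the class and method names are illustrative, not the patch's actual code:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the HDFS-16107 fallback: prefer ipc.<port>.server.read.threadpool.size,
// and only consult the shared ipc.server.read.threadpool.size when the
// port-specific key is absent (read as the sentinel 0).
public class PortConfDemo {
    static int readThreads(Map<String, String> conf, int port) {
        String portKey = "ipc." + port + ".server.read.threadpool.size";
        int v = Integer.parseInt(conf.getOrDefault(portKey, "0"));
        if (v >= 1) {
            return v; // per-port override wins
        }
        // Fall back to the shared key; 1 mirrors the shared default.
        return Integer.parseInt(
            conf.getOrDefault("ipc.server.read.threadpool.size", "1"));
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("ipc.8020.server.read.threadpool.size", "4");
        System.out.println(readThreads(conf, 8020)); // per-port value: 4
        System.out.println(readThreads(conf, 8021)); // shared default: 1
    }
}
```

Using 0 as the per-port default (rather than IPC_SERVER_RPC_READ_THREADS_DEFAULT) is the point of the comment: a real default of 1 would be indistinguishable from an explicit per-port setting of 1.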
[jira] [Work logged] (HDFS-16230) Minor bug in TestStorageRestore
[ https://issues.apache.org/jira/browse/HDFS-16230?focusedWorklogId=652553=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-652553 ] ASF GitHub Bot logged work on HDFS-16230: - Author: ASF GitHub Bot Created on: 18/Sep/21 06:41 Start Date: 18/Sep/21 06:41 Worklog Time Spent: 10m Work Description: ayushtkn commented on a change in pull request #3455: URL: https://github.com/apache/hadoop/pull/3455#discussion_r711520715 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStorageRestore.java ## @@ -294,7 +294,7 @@ public void testDfsAdminCmd() throws Exception { restore = fsi.getStorage().getRestoreFailedStorage(); assertTrue("After check call restore is " + restore, restore); String commandOutput = cmdResult.getCommandOutput(); - commandOutput.trim(); + commandOutput = commandOutput.trim(); assertTrue(commandOutput.contains("restoreFailedStorage is set to true")); Review comment: I think we can remove ``commandOutput.trim();`` itself. The ``assertTrue`` is anyway checking ``contains`` which will pass irrespective of this ``trim`` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 652553) Time Spent: 20m (was: 10m) > Minor bug in TestStorageRestore > --- > > Key: HDFS-16230 > URL: https://issues.apache.org/jira/browse/HDFS-16230 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Thomas Leplus >Priority: Trivial > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > Strings being immutable, you need to use the trim() method return value. 