[jira] [Work logged] (HDFS-16088) Standby NameNode process getLiveDatanodeStorageReport request to reduce Active load
[ https://issues.apache.org/jira/browse/HDFS-16088?focusedWorklogId=618312=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-618312 ] ASF GitHub Bot logged work on HDFS-16088: - Author: ASF GitHub Bot Created on: 03/Jul/21 02:21 Start Date: 03/Jul/21 02:21 Worklog Time Spent: 10m Work Description: tomscut commented on pull request #3140: URL: https://github.com/apache/hadoop/pull/3140#issuecomment-873329818 > @tomscut Thanks for contribution. > I see that getLiveDatanodeStorageReport and getBlocks mostly have the same code. Better to extract them into a new method and it will be more clean. Hi @ferhui , I fixed it. Could you please take a look when you have time? Thanks. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 618312) Time Spent: 1.5h (was: 1h 20m) > Standby NameNode process getLiveDatanodeStorageReport request to reduce > Active load > --- > > Key: HDFS-16088 > URL: https://issues.apache.org/jira/browse/HDFS-16088 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Attachments: standyby-ipcserver.jpg > > Time Spent: 1.5h > Remaining Estimate: 0h > > As with HDFS-13183, NameNodeConnector#getLiveDatanodeStorageReport() can also > request to SNN to reduce the ANN load. > There are two points that need to be mentioned: > 1. FSNamesystem#getLiveDatanodeStorageReport() is > OperationCategory.UNCHECKED, so we can access SNN directly. > 2. We can share the same UT(testBalancerRequestSBNWithHA) with > NameNodeConnector#getBlocks(). 
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
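The refactor requested in the review above — getBlocks() and getLiveDatanodeStorageReport() sharing one "try each NameNode proxy until one succeeds" loop — can be sketched roughly as follows. This is an illustrative reconstruction, not the actual Hadoop code: the class and method names (NnProxyCaller, callWithFailover) are hypothetical, and the real NameNodeConnector logic is more involved.

```java
import java.io.IOException;
import java.util.List;
import java.util.function.Function;

/**
 * Hypothetical sketch of extracting the shared proxy-failover loop into one
 * generic helper, so getBlocks() and getLiveDatanodeStorageReport() no longer
 * duplicate it. Not the real NameNodeConnector API.
 */
public class NnProxyCaller<P> {
  private final List<P> proxies; // e.g. Active NN first, then Standby NNs
  private int currentIdx = 0;    // remember which proxy answered last time

  public NnProxyCaller(List<P> proxies) {
    this.proxies = proxies;
  }

  /** Try the remembered proxy first, then fall through the remaining ones. */
  public <R> R callWithFailover(Function<P, R> rpc) throws IOException {
    IOException last = null;
    for (int i = 0; i < proxies.size(); i++) {
      int idx = (currentIdx + i) % proxies.size();
      try {
        R result = rpc.apply(proxies.get(idx));
        currentIdx = idx; // stick with the proxy that worked
        return result;
      } catch (RuntimeException e) {
        last = new IOException("proxy " + idx + " failed", e);
      }
    }
    throw last != null ? last : new IOException("no proxies configured");
  }
}
```

With a helper like this, both RPCs become one-liners over the same failover policy, which is the cleanup the reviewer asked for.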
[jira] [Comment Edited] (HDFS-16088) Standby NameNode process getLiveDatanodeStorageReport request to reduce Active load
[ https://issues.apache.org/jira/browse/HDFS-16088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17373872#comment-17373872 ] tomscut edited comment on HDFS-16088 at 7/3/21, 2:19 AM: - Hi [~hexiaoqiao], I added an unit test for this. Could you please take a look? Thanks a lot. We forward this request to Standby and it worked fine. !standyby-ipcserver.jpg|width=549,height=129! was (Author: tomscut): Hi [~hexiaoqiao], I added an unit test for this. Could you please take a look? Thanks a lot. We forward this request to Standby and it worked fine. [^standyby-ipcserver.log] > Standby NameNode process getLiveDatanodeStorageReport request to reduce > Active load > --- > > Key: HDFS-16088 > URL: https://issues.apache.org/jira/browse/HDFS-16088 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Attachments: standyby-ipcserver.jpg > > Time Spent: 1h 20m > Remaining Estimate: 0h > > As with HDFS-13183, NameNodeConnector#getLiveDatanodeStorageReport() can also > request to SNN to reduce the ANN load. > There are two points that need to be mentioned: > 1. FSNamesystem#getLiveDatanodeStorageReport() is > OperationCategory.UNCHECKED, so we can access SNN directly. > 2. We can share the same UT(testBalancerRequestSBNWithHA) with > NameNodeConnector#getBlocks(). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-16088) Standby NameNode process getLiveDatanodeStorageReport request to reduce Active load
[ https://issues.apache.org/jira/browse/HDFS-16088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] tomscut updated HDFS-16088: --- Attachment: standyby-ipcserver.jpg > Standby NameNode process getLiveDatanodeStorageReport request to reduce > Active load > --- > > Key: HDFS-16088 > URL: https://issues.apache.org/jira/browse/HDFS-16088 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Attachments: standyby-ipcserver.jpg > > Time Spent: 1h 20m > Remaining Estimate: 0h > > As with HDFS-13183, NameNodeConnector#getLiveDatanodeStorageReport() can also > request to SNN to reduce the ANN load. > There are two points that need to be mentioned: > 1. FSNamesystem#getLiveDatanodeStorageReport() is > OperationCategory.UNCHECKED, so we can access SNN directly. > 2. We can share the same UT(testBalancerRequestSBNWithHA) with > NameNodeConnector#getBlocks(). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-16088) Standby NameNode process getLiveDatanodeStorageReport request to reduce Active load
[ https://issues.apache.org/jira/browse/HDFS-16088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17373872#comment-17373872 ] tomscut commented on HDFS-16088: Hi [~hexiaoqiao], I added an unit test for this. Could you please take a look? Thanks a lot. We forward this request to Standby and it worked fine. [^standyby-ipcserver.log] > Standby NameNode process getLiveDatanodeStorageReport request to reduce > Active load > --- > > Key: HDFS-16088 > URL: https://issues.apache.org/jira/browse/HDFS-16088 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Attachments: standyby-ipcserver.log > > Time Spent: 1h 20m > Remaining Estimate: 0h > > As with HDFS-13183, NameNodeConnector#getLiveDatanodeStorageReport() can also > request to SNN to reduce the ANN load. > There are two points that need to be mentioned: > 1. FSNamesystem#getLiveDatanodeStorageReport() is > OperationCategory.UNCHECKED, so we can access SNN directly. > 2. We can share the same UT(testBalancerRequestSBNWithHA) with > NameNodeConnector#getBlocks(). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-16088) Standby NameNode process getLiveDatanodeStorageReport request to reduce Active load
[ https://issues.apache.org/jira/browse/HDFS-16088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] tomscut updated HDFS-16088: --- Attachment: (was: standyby-ipcserver.log) > Standby NameNode process getLiveDatanodeStorageReport request to reduce > Active load > --- > > Key: HDFS-16088 > URL: https://issues.apache.org/jira/browse/HDFS-16088 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Time Spent: 1h 20m > Remaining Estimate: 0h > > As with HDFS-13183, NameNodeConnector#getLiveDatanodeStorageReport() can also > request to SNN to reduce the ANN load. > There are two points that need to be mentioned: > 1. FSNamesystem#getLiveDatanodeStorageReport() is > OperationCategory.UNCHECKED, so we can access SNN directly. > 2. We can share the same UT(testBalancerRequestSBNWithHA) with > NameNodeConnector#getBlocks(). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-16088) Standby NameNode process getLiveDatanodeStorageReport request to reduce Active load
[ https://issues.apache.org/jira/browse/HDFS-16088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] tomscut updated HDFS-16088: --- Attachment: standyby-ipcserver.log > Standby NameNode process getLiveDatanodeStorageReport request to reduce > Active load > --- > > Key: HDFS-16088 > URL: https://issues.apache.org/jira/browse/HDFS-16088 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Attachments: standyby-ipcserver.log > > Time Spent: 1h 20m > Remaining Estimate: 0h > > As with HDFS-13183, NameNodeConnector#getLiveDatanodeStorageReport() can also > request to SNN to reduce the ANN load. > There are two points that need to be mentioned: > 1. FSNamesystem#getLiveDatanodeStorageReport() is > OperationCategory.UNCHECKED, so we can access SNN directly. > 2. We can share the same UT(testBalancerRequestSBNWithHA) with > NameNodeConnector#getBlocks(). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
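The description's first point — that getLiveDatanodeStorageReport() is OperationCategory.UNCHECKED and can therefore be served by the Standby directly — rests on the NameNode's HA operation check. A heavily condensed sketch of that check follows; it is modeled on the idea behind HAState#checkOperation but is not the real Hadoop code, and it deliberately ignores categories (such as JOURNAL) and Observer-read behavior that the real implementation handles.

```java
/**
 * Simplified illustration (not the real HAState code): a Standby NameNode
 * rejects categorized READ/WRITE operations, but UNCHECKED operations pass,
 * which is why getLiveDatanodeStorageReport() can be sent to the SNN.
 */
public class HaOperationCheck {
  public enum HAServiceState { ACTIVE, STANDBY }
  public enum OperationCategory { READ, WRITE, UNCHECKED }

  public static class StandbyException extends Exception {
    public StandbyException(String msg) { super(msg); }
  }

  /** Throws when a Standby receives an operation it must not serve. */
  public static void checkOperation(HAServiceState state, OperationCategory op)
      throws StandbyException {
    if (state == HAServiceState.STANDBY && op != OperationCategory.UNCHECKED) {
      throw new StandbyException(
          "Operation category " + op + " is not supported in state " + state);
    }
  }
}
```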
[jira] [Work logged] (HDFS-16109) Fix some flaky unit tests since they often time out
[ https://issues.apache.org/jira/browse/HDFS-16109?focusedWorklogId=618310=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-618310 ] ASF GitHub Bot logged work on HDFS-16109: - Author: ASF GitHub Bot Created on: 03/Jul/21 01:50 Start Date: 03/Jul/21 01:50 Worklog Time Spent: 10m Work Description: tomscut opened a new pull request #3172: URL: https://github.com/apache/hadoop/pull/3172 JIRA: [HDFS-16109](https://issues.apache.org/jira/browse/HDFS-16109) Increase timeout for TestBootstrapStandby, TestFsVolumeList and TestDecommissionWithBackoffMonitor since they offen timeout. TestBootstrapStandby: ``` [ERROR] Tests run: 8, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 159.474 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby [ERROR] testRateThrottling(org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby) Time elapsed: 31.262 s <<< ERROR! org.junit.runners.model.TestTimedOutException: test timed out after 3 milliseconds at java.io.RandomAccessFile.writeBytes(Native Method) at java.io.RandomAccessFile.write(RandomAccessFile.java:512) at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:947) at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:910) at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:699) at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:642) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:387) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:243) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1224) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:795) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:673) at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:760) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:1014) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:989) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1763) at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2261) at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2231) at org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby.testRateThrottling(TestBootstrapStandby.java:297) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) ``` TestFsVolumeList: ``` [ERROR] Tests run: 12, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 190.294 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList [ERROR] testAddRplicaProcessorForAddingReplicaInMap(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList) Time elapsed: 60.028 s <<< ERROR! 
org.junit.runners.model.TestTimedOutException: test timed out after 6 milliseconds at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:429) at java.util.concurrent.FutureTask.get(FutureTask.java:191) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList.testAddRplicaProcessorForAddingReplicaInMap(TestFsVolumeList.java:395) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at
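The FailOnTimeout and FutureTask frames in the stack traces above show mechanically what a `@Test(timeout=...)` failure is: JUnit 4 runs the test body on a separate thread and waits on a FutureTask for the configured number of milliseconds, so raising the timeout simply lengthens that wait. A condensed stdlib re-creation of the mechanism (illustrative only, not the JUnit source):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

/**
 * Minimal re-creation of JUnit 4's FailOnTimeout idea: run the test body on
 * another thread and bound the wait, as in the FutureTask.get frames above.
 */
public class TimeoutRunner {
  public static class TestTimedOut extends Exception {
    public TestTimedOut(long ms) {
      super("test timed out after " + ms + " milliseconds");
    }
  }

  /** Run body on a separate thread; fail if it exceeds timeoutMs. */
  public static void runWithTimeout(Runnable body, long timeoutMs)
      throws Exception {
    FutureTask<Void> task = new FutureTask<>(body, null);
    Thread t = new Thread(task, "test-body");
    t.setDaemon(true);
    t.start();
    try {
      task.get(timeoutMs, TimeUnit.MILLISECONDS); // the bounded wait
    } catch (TimeoutException e) {
      t.interrupt();
      throw new TestTimedOut(timeoutMs);
    } catch (ExecutionException e) {
      // Propagate the test body's own failure rather than the wrapper.
      throw new Exception(e.getCause());
    }
  }
}
```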
[jira] [Updated] (HDFS-16109) Fix some flaky unit tests since they often time out
[ https://issues.apache.org/jira/browse/HDFS-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDFS-16109: -- Labels: pull-request-available (was: ) > Fix flaky some unit tests since they offen timeout > -- > > Key: HDFS-16109 > URL: https://issues.apache.org/jira/browse/HDFS-16109 > Project: Hadoop HDFS > Issue Type: Wish >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Increase timeout for TestBootstrapStandby, TestFsVolumeList and > TestDecommissionWithBackoffMonitor since they offen timeout. > > TestBootstrapStandby: > {code:java} > [ERROR] Tests run: 8, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: > 159.474 s <<< FAILURE! - in > org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby[ERROR] Tests > run: 8, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 159.474 s <<< > FAILURE! - in > org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby[ERROR] > testRateThrottling(org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby) > Time elapsed: 31.262 s <<< > ERROR!org.junit.runners.model.TestTimedOutException: test timed out after > 3 milliseconds at java.io.RandomAccessFile.writeBytes(Native Method) at > java.io.RandomAccessFile.write(RandomAccessFile.java:512) at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:947) > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:910) > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:699) > at > org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:642) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:387) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:243) > at > 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1224) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:795) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:673) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:760) > at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:1014) > at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:989) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1763) > at > org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2261) > at > org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2231) > at > org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby.testRateThrottling(TestBootstrapStandby.java:297) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) at > java.lang.Thread.run(Thread.java:748) > {code} > TestFsVolumeList: > {code:java} > [ERROR] Tests run: 12, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: > 190.294 s <<< 
FAILURE! - in > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList[ERROR] > Tests run: 12, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 190.294 s > <<< FAILURE! - in > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList[ERROR] > testAddRplicaProcessorForAddingReplicaInMap(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList) > Time elapsed: 60.028 s <<< > ERROR!org.junit.runners.model.TestTimedOutException: test timed out after > 6 milliseconds at sun.misc.Unsafe.park(Native Method) at > java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at > java.util.concurrent.FutureTask.awaitDone(FutureTask.java:429) at > java.util.concurrent.FutureTask.get(FutureTask.java:191) at >
[jira] [Created] (HDFS-16109) Fix some flaky unit tests since they often time out
tomscut created HDFS-16109: -- Summary: Fix flaky some unit tests since they offen timeout Key: HDFS-16109 URL: https://issues.apache.org/jira/browse/HDFS-16109 Project: Hadoop HDFS Issue Type: Wish Reporter: tomscut Assignee: tomscut Increase timeout for TestBootstrapStandby, TestFsVolumeList and TestDecommissionWithBackoffMonitor since they offen timeout. TestBootstrapStandby: {code:java} [ERROR] Tests run: 8, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 159.474 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby[ERROR] Tests run: 8, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 159.474 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby[ERROR] testRateThrottling(org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby) Time elapsed: 31.262 s <<< ERROR!org.junit.runners.model.TestTimedOutException: test timed out after 3 milliseconds at java.io.RandomAccessFile.writeBytes(Native Method) at java.io.RandomAccessFile.write(RandomAccessFile.java:512) at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:947) at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:910) at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:699) at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:642) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:387) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:243) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1224) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:795) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:673) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:760) at 
org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:1014) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:989) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1763) at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2261) at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2231) at org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby.testRateThrottling(TestBootstrapStandby.java:297) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) {code} TestFsVolumeList: {code:java} [ERROR] Tests run: 12, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 190.294 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList[ERROR] Tests run: 12, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 190.294 s <<< FAILURE! 
- in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList[ERROR] testAddRplicaProcessorForAddingReplicaInMap(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList) Time elapsed: 60.028 s <<< ERROR!org.junit.runners.model.TestTimedOutException: test timed out after 6 milliseconds at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:429) at java.util.concurrent.FutureTask.get(FutureTask.java:191) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList.testAddRplicaProcessorForAddingReplicaInMap(TestFsVolumeList.java:395) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at
[jira] [Commented] (HDFS-14529) NPE while Loading the Editlogs
[ https://issues.apache.org/jira/browse/HDFS-14529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17373709#comment-17373709 ] Wei-Chiu Chuang commented on HDFS-14529: Another possibility to hit this error (without snapshots) is a race between rename and setTimes. getBlockLocations had a data race in which the path was resolved to an IIP, the lock was released, a rename moved the file, and the stale IIP could no longer reach it. HDFS-13901 fixed that. > NPE while Loading the Editlogs > -- > > Key: HDFS-14529 > URL: https://issues.apache.org/jira/browse/HDFS-14529 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 3.1.1 >Reporter: Harshakiran Reddy >Assignee: Ayush Saxena >Priority: Major > > {noformat} > 2019-05-31 15:15:42,397 ERROR namenode.FSEditLogLoader: Encountered exception > on operation TimesOp [length=0, > path=/testLoadSpace/dir0/dir0/dir0/dir2/_file_9096763, mtime=-1, > atime=1559294343288, opCode=OP_TIMES, txid=18927893] > java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.unprotectedSetTimes(FSDirAttrOp.java:490) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:711) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:286) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:181) > at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:924) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:771) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:331) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1105) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:726) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.doRecovery(NameNode.java:1558) > at > 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1640) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1725){noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14529) NPE while Loading the Editlogs
[ https://issues.apache.org/jira/browse/HDFS-14529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17373706#comment-17373706 ] Wei-Chiu Chuang commented on HDFS-14529: We encountered this bug again, and it is reproducible for this set of fsimage/edit logs. We added debug logs and found that the IIP has a few missing components. It was supposed to have 8 components in the path but only 6 were found; two were null. It is likely caused by files already deleted from snapshots. Somehow the active NN keeps the file in memory, so the standby NameNode crashes upon loading the edits. Comparing this method with other similar methods, I think we should check iip.getLastINode() for null and throw FileNotFoundException. There are other places in the code where we could add the null check as well. Loading also failed several times on other edit log ops (mkdir, rename, renameSnapshot). {noformat} 21/07/02 11:39:39 ERROR namenode.FSEditLogLoader: AssertionError caught in unprotectedSetTimes: iip=INodesInPath: path = /apps/hive/warehouse/ea_common.db/sls_blng_rw/ins_gmt_dt=2021-06-22/part-1-087de2ec-7888-4f2b-bea6-3702c69cf953.c000 inodes = [, apps, hive, warehouse, ea_common.db, sls_blng_rw, null, null], length=8 isSnapshot= false snapshotId= 8014, lastINode=null, mtime=-1, atime=1624825911021, force? 
true java.lang.AssertionError: i = 6 != 8, this=INodesInPath: path = /apps/hive/warehouse/ea_common.db/sls_blng_rw/ins_gmt_dt=2021-06-22/part-1-087de2ec-7888-4f2b-bea6-3702c69cf953.c000 inodes = [, apps, hive, warehouse, ea_common.db, sls_blng_rw, null, null], length=8 isSnapshot= false snapshotId= 8014 at org.apache.hadoop.hdfs.server.namenode.INodesInPath.validate(INodesInPath.java:488) at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.unprotectedSetTimes(FSDirAttrOp.java:355) at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:631) {noformat} > NPE while Loading the Editlogs > -- > > Key: HDFS-14529 > URL: https://issues.apache.org/jira/browse/HDFS-14529 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 3.1.1 >Reporter: Harshakiran Reddy >Assignee: Ayush Saxena >Priority: Major > > {noformat} > 2019-05-31 15:15:42,397 ERROR namenode.FSEditLogLoader: Encountered exception > on operation TimesOp [length=0, > path=/testLoadSpace/dir0/dir0/dir0/dir2/_file_9096763, mtime=-1, > atime=1559294343288, opCode=OP_TIMES, txid=18927893] > java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.unprotectedSetTimes(FSDirAttrOp.java:490) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:711) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:286) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:181) > at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:924) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:771) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:331) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1105) > at > 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:726) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.doRecovery(NameNode.java:1558) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1640) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1725){noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
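The guard proposed in the comments above — check iip.getLastINode() for null and raise FileNotFoundException instead of letting edit-log replay die with an NPE or AssertionError — can be sketched as follows. This is an illustrative reconstruction, not the committed patch: the Iip stand-in class is hypothetical, and the real FSDirAttrOp#unprotectedSetTimes operates on INodesInPath.

```java
import java.io.FileNotFoundException;

/**
 * Sketch of the proposed null check: fail cleanly with FileNotFoundException
 * when the resolved path no longer reaches an inode, instead of NPE'ing
 * during edit-log replay. Illustrative only; not the actual Hadoop code.
 */
public class SetTimesGuard {
  /** Minimal stand-in for INodesInPath: just the path and last resolved inode. */
  public static class Iip {
    final String path;
    final Object lastINode; // null when the path no longer resolves

    public Iip(String path, Object lastINode) {
      this.path = path;
      this.lastINode = lastINode;
    }
  }

  public static void unprotectedSetTimes(Iip iip, long mtime, long atime)
      throws FileNotFoundException {
    if (iip.lastINode == null) {
      // The file was deleted or renamed away (possibly via a snapshot path),
      // so report "not found" rather than crash the loading NameNode.
      throw new FileNotFoundException("File does not exist: " + iip.path);
    }
    // ... the real implementation would apply mtime/atime to the inode here ...
  }
}
```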
[jira] [Resolved] (HDFS-11528) NameNode load EditRecords throws NPE
[ https://issues.apache.org/jira/browse/HDFS-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang resolved HDFS-11528. Resolution: Duplicate I'll resolve this one and use HDFS-14529 for further discussion. > NameNode load EditRecords throws NPE > > > Key: HDFS-11528 > URL: https://issues.apache.org/jira/browse/HDFS-11528 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.1 >Reporter: Shangwen Tang >Priority: Major > > this is mylog > {noformat} > [2017-03-13T19:18:02.187+08:00] [ERROR] > server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java 242) > [main] : Encountered exception on operation TimesOp [length=0, > path=/user/spark/log/application_1487848228144_0004, mtime=-1, > atime=1489392253959, opCode=OP_TIMES, txid=26215] > java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.unprotectedSetTimes(FSDirAttrOp.java:473) > at > org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.unprotectedSetTimes(FSDirAttrOp.java:299) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:629) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:234) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:143) > at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:837) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:692) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:294) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:980) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:686) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:589) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:649) > at 
org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:816) > at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:800) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1498) > at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1564) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16108) Incorrect log placeholders used in JournalNodeSyncer
[ https://issues.apache.org/jira/browse/HDFS-16108?focusedWorklogId=618148&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-618148 ] ASF GitHub Bot logged work on HDFS-16108: - Author: ASF GitHub Bot Created on: 02/Jul/21 15:06 Start Date: 02/Jul/21 15:06 Worklog Time Spent: 10m Work Description: virajjasani commented on pull request #3169: URL: https://github.com/apache/hadoop/pull/3169#issuecomment-873068042 @ferhui could you please take a look? Thanks -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 618148) Time Spent: 1h 10m (was: 1h) > Incorrect log placeholders used in JournalNodeSyncer > > > Key: HDFS-16108 > URL: https://issues.apache.org/jira/browse/HDFS-16108 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Minor > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > The Journal sync thread uses incorrect log placeholders in 2 places: > # When it fails to create the dir for downloading log segments > # When it fails to move the tmp editFile to the current dir > Since these failure logs are important for debugging JN sync issues, we should fix > these incorrect placeholders. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16108) Incorrect log placeholders used in JournalNodeSyncer
[ https://issues.apache.org/jira/browse/HDFS-16108?focusedWorklogId=618144=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-618144 ] ASF GitHub Bot logged work on HDFS-16108: - Author: ASF GitHub Bot Created on: 02/Jul/21 14:51 Start Date: 02/Jul/21 14:51 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3169: URL: https://github.com/apache/hadoop/pull/3169#issuecomment-873057373 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 32s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 30m 32s | | trunk passed | | +1 :green_heart: | compile | 1m 23s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 16s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 2s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 23s | | trunk passed | | +1 :green_heart: | javadoc | 0m 55s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 30s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 6s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 9s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 12s | | the patch passed | | +1 :green_heart: | compile | 1m 16s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 16s | | the patch passed | | +1 :green_heart: | compile | 1m 9s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 9s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 53s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 14s | | the patch passed | | +1 :green_heart: | javadoc | 0m 46s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 21s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 16s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 41s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 239m 8s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3169/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. 
| | | | 323m 23s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3169/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3169 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 8424f1079d4a 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 507a47a17163d5759a5024d4c28719aa199eac14 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results |
[jira] [Work logged] (HDFS-16108) Incorrect log placeholders used in JournalNodeSyncer
[ https://issues.apache.org/jira/browse/HDFS-16108?focusedWorklogId=618135=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-618135 ] ASF GitHub Bot logged work on HDFS-16108: - Author: ASF GitHub Bot Created on: 02/Jul/21 14:36 Start Date: 02/Jul/21 14:36 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3169: URL: https://github.com/apache/hadoop/pull/3169#issuecomment-873046199 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 49s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 13s | | trunk passed | | +1 :green_heart: | compile | 1m 25s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 15s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 0s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 24s | | trunk passed | | +1 :green_heart: | javadoc | 0m 55s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 26s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 16s | | trunk passed | | +1 :green_heart: | shadedclient | 18m 58s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 15s | | the patch passed | | +1 :green_heart: | compile | 1m 18s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 18s | | the patch passed | | +1 :green_heart: | compile | 1m 10s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 10s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 56s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 16s | | the patch passed | | +1 :green_heart: | javadoc | 0m 48s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 20s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 25s | | the patch passed | | +1 :green_heart: | shadedclient | 18m 54s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 337m 18s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3169/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 38s | | The patch does not generate ASF License warnings. 
| | | | 429m 17s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3169/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3169 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 9916232dc298 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / d1c06047d5dece5fb94eceddb4af5bbdee9499c6 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK
[jira] [Work logged] (HDFS-16107) Split RPC configuration to isolate RPC
[ https://issues.apache.org/jira/browse/HDFS-16107?focusedWorklogId=618131=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-618131 ] ASF GitHub Bot logged work on HDFS-16107: - Author: ASF GitHub Bot Created on: 02/Jul/21 14:21 Start Date: 02/Jul/21 14:21 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3170: URL: https://github.com/apache/hadoop/pull/3170#issuecomment-873035866 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 21s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 34m 58s | | trunk passed | | +1 :green_heart: | compile | 26m 43s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 21m 14s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 13s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 45s | | trunk passed | | +1 :green_heart: | javadoc | 1m 11s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 49s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 7s | | trunk passed | | +1 :green_heart: | shadedclient | 18m 22s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 5s | | the patch passed | | +1 :green_heart: | compile | 25m 36s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 25m 36s | | the patch passed | | +1 :green_heart: | compile | 22m 33s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 22m 33s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 13s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3170/3/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 7 new + 300 unchanged - 1 fixed = 307 total (was 301) | | +1 :green_heart: | mvnsite | 1m 53s | | the patch passed | | +1 :green_heart: | javadoc | 1m 14s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 48s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 2s | | the patch passed | | +1 :green_heart: | shadedclient | 18m 39s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 18m 11s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 50s | | The patch does not generate ASF License warnings. 
| | | | 207m 29s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3170/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3170 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux d17d9faa8449 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 74d93ddcd6b4b8c909cbd36f279857f2a8891ba1 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3170/3/testReport/ | | Max. process+thread count | 1285 (vs. ulimit of 5500) | | modules | C:
[jira] [Work logged] (HDFS-16107) Split RPC configuration to isolate RPC
[ https://issues.apache.org/jira/browse/HDFS-16107?focusedWorklogId=618123=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-618123 ] ASF GitHub Bot logged work on HDFS-16107: - Author: ASF GitHub Bot Created on: 02/Jul/21 13:51 Start Date: 02/Jul/21 13:51 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3170: URL: https://github.com/apache/hadoop/pull/3170#issuecomment-873014715 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 33s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 3s | | trunk passed | | +1 :green_heart: | compile | 24m 28s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 20m 36s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 12s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 40s | | trunk passed | | +1 :green_heart: | javadoc | 1m 8s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 46s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 2m 36s | | trunk passed | | +1 :green_heart: | shadedclient | 18m 8s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 59s | | the patch passed | | +1 :green_heart: | compile | 23m 24s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 23m 24s | | the patch passed | | +1 :green_heart: | compile | 20m 39s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 20m 39s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 7s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3170/2/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 8 new + 300 unchanged - 1 fixed = 308 total (was 301) | | +1 :green_heart: | mvnsite | 1m 42s | | the patch passed | | +1 :green_heart: | javadoc | 1m 6s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 41s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 2m 47s | | the patch passed | | +1 :green_heart: | shadedclient | 17m 35s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 17m 25s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 54s | | The patch does not generate ASF License warnings. 
| | | | 194m 47s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3170/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3170 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux e48dcc7b7223 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 7a0c5ffbddb675a07e9aa96a7da420a8f640108c | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3170/2/testReport/ | | Max. process+thread count | 1258 (vs. ulimit of 5500) | | modules | C:
[jira] [Work logged] (HDFS-16107) Split RPC configuration to isolate RPC
[ https://issues.apache.org/jira/browse/HDFS-16107?focusedWorklogId=618051=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-618051 ] ASF GitHub Bot logged work on HDFS-16107: - Author: ASF GitHub Bot Created on: 02/Jul/21 10:58 Start Date: 02/Jul/21 10:58 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3170: URL: https://github.com/apache/hadoop/pull/3170#issuecomment-872911109 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 1s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 13s | | trunk passed | | +1 :green_heart: | compile | 21m 33s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 19m 2s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 8s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 38s | | trunk passed | | +1 :green_heart: | javadoc | 1m 10s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 42s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 2m 36s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 22s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 57s | | the patch passed | | +1 :green_heart: | compile | 21m 43s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 21m 43s | | the patch passed | | +1 :green_heart: | compile | 18m 51s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 18m 51s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 6s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3170/1/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 8 new + 300 unchanged - 1 fixed = 308 total (was 301) | | +1 :green_heart: | mvnsite | 1m 34s | | the patch passed | | +1 :green_heart: | javadoc | 1m 7s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 43s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 2m 35s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 1s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 17m 10s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 58s | | The patch does not generate ASF License warnings. 
| | | | 181m 35s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3170/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3170 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 78fe521fcfa2 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 47c84c38f861ca44fb50c53a3fa4399e8d6f3cf7 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3170/1/testReport/ | | Max. process+thread count | 1267 (vs. ulimit of 5500) | | modules | C:
[jira] [Work logged] (HDFS-16108) Incorrect log placeholders used in JournalNodeSyncer
[ https://issues.apache.org/jira/browse/HDFS-16108?focusedWorklogId=618025&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-618025 ] ASF GitHub Bot logged work on HDFS-16108: - Author: ASF GitHub Bot Created on: 02/Jul/21 09:24 Start Date: 02/Jul/21 09:24 Worklog Time Spent: 10m Work Description: virajjasani commented on a change in pull request #3169: URL: https://github.com/apache/hadoop/pull/3169#discussion_r662873220 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java ## @@ -174,8 +174,8 @@ private void startSyncJournalsDaemon() { } } if (!createEditsSyncDir()) { -LOG.error("Failed to create directory for downloading log " + -"segments: %s. Stopping Journal Node Sync.", +LOG.error("Failed to create directory for downloading log " ++ "segments: {}. Stopping Journal Node Sync.", Review comment: Sure, let me put it back to how it was. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 618025) Time Spent: 40m (was: 0.5h) > Incorrect log placeholders used in JournalNodeSyncer > > > Key: HDFS-16108 > URL: https://issues.apache.org/jira/browse/HDFS-16108 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Minor > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > > The Journal sync thread uses incorrect log placeholders in 2 places: > # When it fails to create the dir for downloading log segments > # When it fails to move the tmp editFile to the current dir > Since these failure logs are important for debugging JN sync issues, we should fix > these incorrect placeholders. 
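The fix being reviewed here rests on SLF4J's placeholder convention: SLF4J substitutes `{}` positionally and leaves `%s` (a `String.format` convention) as literal text, so with `%s` the directory path passed as an argument never appears in the logged message. A minimal stand-in for the substitution rule (this mimics the behavior for illustration; it is not SLF4J's real `MessageFormatter`):

```java
// Minimal illustration of SLF4J-style substitution: each "{}" is replaced by
// the next argument; any other text, including "%s", is emitted literally.
class PlaceholderDemo {
    static String format(String pattern, Object... args) {
        StringBuilder out = new StringBuilder();
        int argIndex = 0;
        int i = 0;
        while (i < pattern.length()) {
            if (argIndex < args.length && pattern.startsWith("{}", i)) {
                out.append(args[argIndex++]);
                i += 2; // skip past "{}"
            } else {
                out.append(pattern.charAt(i++));
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Before the patch: "%s" is not an SLF4J placeholder, so the path is lost.
        System.out.println(format(
            "Failed to create directory for downloading log segments: %s.",
            "/data/jn/edits.sync"));
        // After the patch: "{}" picks up the argument as intended.
        System.out.println(format(
            "Failed to create directory for downloading log segments: {}.",
            "/data/jn/edits.sync"));
    }
}
```

With the real logger the symptom is the same: the `%s` version logs the format string verbatim and silently drops the directory argument, which is why these error lines were useless for debugging JN sync failures.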
[jira] [Work logged] (HDFS-16108) Incorrect log placeholders used in JournalNodeSyncer
[ https://issues.apache.org/jira/browse/HDFS-16108?focusedWorklogId=618014&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-618014 ] ASF GitHub Bot logged work on HDFS-16108: - Author: ASF GitHub Bot Created on: 02/Jul/21 08:59 Start Date: 02/Jul/21 08:59 Worklog Time Spent: 10m Work Description: ferhui commented on a change in pull request #3169: URL: https://github.com/apache/hadoop/pull/3169#discussion_r662857123 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java ## @@ -174,8 +174,8 @@ private void startSyncJournalsDaemon() { } } if (!createEditsSyncDir()) { -LOG.error("Failed to create directory for downloading log " + -"segments: %s. Stopping Journal Node Sync.", +LOG.error("Failed to create directory for downloading log " ++ "segments: {}. Stopping Journal Node Sync.", Review comment: Guess CI will report checkstyle problem. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 618014) Time Spent: 0.5h (was: 20m) > Incorrect log placeholders used in JournalNodeSyncer > > > Key: HDFS-16108 > URL: https://issues.apache.org/jira/browse/HDFS-16108 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Minor > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > The Journal sync thread uses incorrect log placeholders in 2 places: > # When it fails to create the dir for downloading log segments > # When it fails to move the tmp editFile to the current dir > Since these failure logs are important for debugging JN sync issues, we should fix > these incorrect placeholders. 
[jira] [Work logged] (HDFS-16088) Standby NameNode process getLiveDatanodeStorageReport request to reduce Active load
[ https://issues.apache.org/jira/browse/HDFS-16088?focusedWorklogId=618010&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-618010 ] ASF GitHub Bot logged work on HDFS-16088: - Author: ASF GitHub Bot Created on: 02/Jul/21 08:47 Start Date: 02/Jul/21 08:47 Worklog Time Spent: 10m Work Description: tomscut commented on pull request #3140: URL: https://github.com/apache/hadoop/pull/3140#issuecomment-872831435 > @tomscut Thanks for contribution. > I see that getLiveDatanodeStorageReport and getBlocks mostly have the same code. Better to extract them into a new method and it will be more clean. Thanks @ferhui for your review and advice, I will extract a new method. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 618010) Time Spent: 1h 20m (was: 1h 10m) > Standby NameNode process getLiveDatanodeStorageReport request to reduce > Active load > --- > > Key: HDFS-16088 > URL: https://issues.apache.org/jira/browse/HDFS-16088 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Time Spent: 1h 20m > Remaining Estimate: 0h > > As with HDFS-13183, the NameNodeConnector#getLiveDatanodeStorageReport() request can also > be sent to the SNN to reduce the ANN load. > There are two points that need to be mentioned: > 1. FSNamesystem#getLiveDatanodeStorageReport() is > OperationCategory.UNCHECKED, so we can access the SNN directly. > 2. We can share the same UT (testBalancerRequestSBNWithHA) with > NameNodeConnector#getBlocks(). 
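The refactor agreed on in this thread — getBlocks() and getLiveDatanodeStorageReport() "mostly have the same code", so the shared part should move into one method — can be sketched generically. Everything below is hypothetical illustration (names such as callFirstAvailable are invented, and the real NameNodeConnector proxies throw checked IOExceptions rather than RuntimeExceptions):

```java
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch of the requested refactor: both RPCs loop over the
// NameNode proxies (standby first, to offload the active) and fall back on
// failure. Extracting that loop into one generic helper removes the
// duplication between the two call sites.
class NameNodeRequestSketch {
    interface NameNodeProxy {
        boolean canServe();   // e.g. a standby may serve UNCHECKED operations
        String name();
    }

    // Shared helper: apply the RPC `call` to the first proxy that can serve
    // it, instead of repeating this loop in every call site.
    static <T> T callFirstAvailable(List<NameNodeProxy> proxies,
                                    Function<NameNodeProxy, T> call) {
        for (NameNodeProxy proxy : proxies) {
            try {
                if (proxy.canServe()) {
                    return call.apply(proxy);
                }
            } catch (RuntimeException e) {
                // Sketch only: on failure, try the next NameNode. The real
                // connector handles checked IOExceptions and retry policy.
            }
        }
        throw new IllegalStateException("no NameNode could serve the request");
    }
}
```

Each call site then reduces to a one-line lambda (for example `callFirstAvailable(proxies, p -> ...)` per RPC), which is the "more clean" shape the review asks for.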
[jira] [Work logged] (HDFS-16108) Incorrect log placeholders used in JournalNodeSyncer
[ https://issues.apache.org/jira/browse/HDFS-16108?focusedWorklogId=618005&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-618005 ] ASF GitHub Bot logged work on HDFS-16108: - Author: ASF GitHub Bot Created on: 02/Jul/21 08:34 Start Date: 02/Jul/21 08:34 Worklog Time Spent: 10m Work Description: virajjasani commented on pull request #3169: URL: https://github.com/apache/hadoop/pull/3169#issuecomment-872823066 Thanks @tomscut for the review. @aajisaka could you please take a look? Thanks -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 618005) Time Spent: 20m (was: 10m) > Incorrect log placeholders used in JournalNodeSyncer > > > Key: HDFS-16108 > URL: https://issues.apache.org/jira/browse/HDFS-16108 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Minor > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > The Journal sync thread uses incorrect log placeholders in 2 places: > # When it fails to create the dir for downloading log segments > # When it fails to move the tmp editFile to the current dir > Since these failure logs are important for debugging JN sync issues, we should fix > these incorrect placeholders. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16088) Standby NameNode process getLiveDatanodeStorageReport request to reduce Active load
[ https://issues.apache.org/jira/browse/HDFS-16088?focusedWorklogId=618003=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-618003 ] ASF GitHub Bot logged work on HDFS-16088: - Author: ASF GitHub Bot Created on: 02/Jul/21 08:21 Start Date: 02/Jul/21 08:21 Worklog Time Spent: 10m Work Description: ferhui commented on pull request #3140: URL: https://github.com/apache/hadoop/pull/3140#issuecomment-872814498 @tomscut Thanks for contribution. I see that getLiveDatanodeStorageReport and getBlocks mostly have the same code. Better to extract them into a new method and it will be more clean. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 618003) Time Spent: 1h 10m (was: 1h) > Standby NameNode process getLiveDatanodeStorageReport request to reduce > Active load > --- > > Key: HDFS-16088 > URL: https://issues.apache.org/jira/browse/HDFS-16088 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > As with HDFS-13183, NameNodeConnector#getLiveDatanodeStorageReport() can also > request to SNN to reduce the ANN load. > There are two points that need to be mentioned: > 1. FSNamesystem#getLiveDatanodeStorageReport() is > OperationCategory.UNCHECKED, so we can access SNN directly. > 2. We can share the same UT(testBalancerRequestSBNWithHA) with > NameNodeConnector#getBlocks(). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
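The routing idea in the issue description (UNCHECKED operations may be served by the Standby NameNode, with the Active as fallback) can be sketched outside Hadoop. `RequestRouter` and `NameNodeProxy` below are illustrative stand-ins, not the real `NameNodeConnector` API:

```java
// Hypothetical sketch of the HDFS-16088 idea: route read-only
// (OperationCategory.UNCHECKED) calls to the Standby NameNode first,
// and fall back to the Active only when the Standby cannot serve them.
// RequestRouter and NameNodeProxy are illustrative, not Hadoop classes.
public class RequestRouter {
    public enum OperationCategory { READ, WRITE, UNCHECKED }

    public interface NameNodeProxy {
        // Returns the response, or null if this node cannot serve the call.
        String call(String op);
    }

    private final NameNodeProxy active;
    private final NameNodeProxy standby;

    public RequestRouter(NameNodeProxy active, NameNodeProxy standby) {
        this.active = active;
        this.standby = standby;
    }

    // UNCHECKED operations (like getLiveDatanodeStorageReport) may be
    // answered by the Standby; everything else must go to the Active.
    public String invoke(String op, OperationCategory category) {
        if (category == OperationCategory.UNCHECKED) {
            String fromStandby = standby.call(op);
            if (fromStandby != null) {
                return fromStandby;
            }
        }
        return active.call(op);
    }

    public static void main(String[] args) {
        RequestRouter router = new RequestRouter(
            op -> "active:" + op,
            op -> "standby:" + op);
        // The storage report is UNCHECKED, so the Standby answers it,
        // shedding load from the Active.
        System.out.println(router.invoke("getLiveDatanodeStorageReport",
            OperationCategory.UNCHECKED));
        // A write must still reach the Active.
        System.out.println(router.invoke("setReplication",
            OperationCategory.WRITE));
    }
}
```

In the real patch the failover handling lives in the RPC proxy layer; this sketch only shows why the `OperationCategory.UNCHECKED` classification is what makes serving the call from the Standby safe.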
[jira] [Work logged] (HDFS-16107) Split RPC configuration to isolate RPC
[ https://issues.apache.org/jira/browse/HDFS-16107?focusedWorklogId=618000=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-618000 ] ASF GitHub Bot logged work on HDFS-16107: - Author: ASF GitHub Bot Created on: 02/Jul/21 07:55 Start Date: 02/Jul/21 07:55 Worklog Time Spent: 10m Work Description: jianghuazhu opened a new pull request #3170: URL: https://github.com/apache/hadoop/pull/3170 ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 618000) Remaining Estimate: 0h Time Spent: 10m > Split RPC configuration to isolate RPC > -- > > Key: HDFS-16107 > URL: https://issues.apache.org/jira/browse/HDFS-16107 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: JiangHua Zhu >Assignee: JiangHua Zhu >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > For RPC of different ports, there are some common configurations, such as: > ipc.server.read.threadpool.size > ipc.server.read.connection-queue.size > ipc.server.handler.queue.size > Once we configure these values, it will affect all requests (including client > and requests within the cluster). 
> It is necessary to split these configurations so they can be set per port, such as: > ipc.8020.server.read.threadpool.size > ipc.8021.server.read.threadpool.size > ipc.8020.server.read.connection-queue.size > ipc.8021.server.read.connection-queue.size > The advantage is that each RPC server is isolated and can absorb request > pressure from each side independently. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
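The lookup rule the proposal implies (a port-qualified key overrides the shared key, which remains the default) can be sketched with a plain map. `PortScopedConf` is a hypothetical illustration, not Hadoop's `Configuration` class:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the HDFS-16107 proposal: resolve an IPC setting
// by checking the port-qualified key first, then the shared key, then a
// hard default, so servers without a per-port override are unaffected.
public class PortScopedConf {
    private final Map<String, String> props = new HashMap<>();

    public void set(String key, String value) {
        props.put(key, value);
    }

    // Looks up "ipc.<port>.<suffix>" first and falls back to "ipc.<suffix>".
    public String get(int port, String suffix, String dflt) {
        String scoped = props.get("ipc." + port + "." + suffix);
        if (scoped != null) {
            return scoped;
        }
        return props.getOrDefault("ipc." + suffix, dflt);
    }

    public static void main(String[] args) {
        PortScopedConf conf = new PortScopedConf();
        conf.set("ipc.server.read.threadpool.size", "1");       // shared default
        conf.set("ipc.8020.server.read.threadpool.size", "4");  // client-facing port gets more readers
        System.out.println(conf.get(8020, "server.read.threadpool.size", "1")); // 4
        System.out.println(conf.get(8021, "server.read.threadpool.size", "1")); // 1
    }
}
```

The fallback chain is what keeps the change backward compatible: existing deployments that only set `ipc.server.read.threadpool.size` keep today's behavior on every port.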
[jira] [Updated] (HDFS-16107) Split RPC configuration to isolate RPC
[ https://issues.apache.org/jira/browse/HDFS-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDFS-16107: -- Labels: pull-request-available (was: ) > Split RPC configuration to isolate RPC > -- > > Key: HDFS-16107 > URL: https://issues.apache.org/jira/browse/HDFS-16107 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: JiangHua Zhu >Assignee: JiangHua Zhu >Priority: Minor > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > For RPC of different ports, there are some common configurations, such as: > ipc.server.read.threadpool.size > ipc.server.read.connection-queue.size > ipc.server.handler.queue.size > Once we configure these values, it will affect all requests (including client > and requests within the cluster). > It is necessary for us to split these configurations to adapt to different > ports, such as: > ipc.8020.server.read.threadpool.size > ipc.8021.server.read.threadpool.size > ipc.8020.server.read.connection-queue.size > ipc.8021.server.read.connection-queue.size > The advantage of this is to isolate the RPC to deal with the pressure of > requests from all sides. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16088) Standby NameNode process getLiveDatanodeStorageReport request to reduce Active load
[ https://issues.apache.org/jira/browse/HDFS-16088?focusedWorklogId=617999=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617999 ] ASF GitHub Bot logged work on HDFS-16088: - Author: ASF GitHub Bot Created on: 02/Jul/21 07:38 Start Date: 02/Jul/21 07:38 Worklog Time Spent: 10m Work Description: tomscut commented on pull request #3140: URL: https://github.com/apache/hadoop/pull/3140#issuecomment-872788581 Hi @ferhui , could you please do another review? Thanks. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 617999) Time Spent: 1h (was: 50m) > Standby NameNode process getLiveDatanodeStorageReport request to reduce > Active load > --- > > Key: HDFS-16088 > URL: https://issues.apache.org/jira/browse/HDFS-16088 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > As with HDFS-13183, NameNodeConnector#getLiveDatanodeStorageReport() can also > request to SNN to reduce the ANN load. > There are two points that need to be mentioned: > 1. FSNamesystem#getLiveDatanodeStorageReport() is > OperationCategory.UNCHECKED, so we can access SNN directly. > 2. We can share the same UT(testBalancerRequestSBNWithHA) with > NameNodeConnector#getBlocks(). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16106) Fix flaky unit test TestDFSShell
[ https://issues.apache.org/jira/browse/HDFS-16106?focusedWorklogId=617998=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617998 ] ASF GitHub Bot logged work on HDFS-16106: - Author: ASF GitHub Bot Created on: 02/Jul/21 07:35 Start Date: 02/Jul/21 07:35 Worklog Time Spent: 10m Work Description: tomscut commented on pull request #3168: URL: https://github.com/apache/hadoop/pull/3168#issuecomment-872787494 Thanks @aajisaka @ayushtkn @virajjasani for the review. Thanks @ferhui for the merge. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 617998) Time Spent: 1.5h (was: 1h 20m) > Fix flaky unit test TestDFSShell > > > Key: HDFS-16106 > URL: https://issues.apache.org/jira/browse/HDFS-16106 > Project: Hadoop HDFS > Issue Type: Wish >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1.5h > Remaining Estimate: 0h > > This unit test occasionally fails. > The value set for dfs.namenode.accesstime.precision is too low, result in the > execution of the method, accesstime could be set many times, eventually > leading to failed assert. > IMO, dfs.namenode.accesstime.precision should be greater than or equal to the > timeout(120s) of TestDFSShell#testCopyCommandsWithPreserveOption(), or > directly set to 0 to disable this feature. > > {code:java} > [ERROR] Tests run: 52, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: > 106.778 s <<< FAILURE! - in org.apache.hadoop.hdfs.TestDFSShell[ERROR] Tests > run: 52, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 106.778 s <<< > FAILURE! 
- in org.apache.hadoop.hdfs.TestDFSShell [ERROR] > testCopyCommandsWithPreserveOption(org.apache.hadoop.hdfs.TestDFSShell) Time > elapsed: 2.353 s <<< FAILURE! java.lang.AssertionError: > expected:<1625095098319> but was:<1625095099374> at > org.junit.Assert.fail(Assert.java:89) at > org.junit.Assert.failNotEquals(Assert.java:835) at > org.junit.Assert.assertEquals(Assert.java:647) at > org.junit.Assert.assertEquals(Assert.java:633) at > org.apache.hadoop.hdfs.TestDFSShell.testCopyCommandsWithPreserveOption(TestDFSShell.java:2282) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) at > java.lang.Thread.run(Thread.java:748) > [ERROR] > testCopyCommandsWithPreserveOption(org.apache.hadoop.hdfs.TestDFSShell) Time > elapsed: 2.467 s <<< FAILURE! 
java.lang.AssertionError: > expected:<1625095192527> but was:<1625095193950> at > org.junit.Assert.fail(Assert.java:89) at > org.junit.Assert.failNotEquals(Assert.java:835) at > org.junit.Assert.assertEquals(Assert.java:647) at > org.junit.Assert.assertEquals(Assert.java:633) at > org.apache.hadoop.hdfs.TestDFSShell.testCopyCommandsWithPreserveOption(TestDFSShell.java:2323) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at >
[jira] [Resolved] (HDFS-16106) Fix flaky unit test TestDFSShell
[ https://issues.apache.org/jira/browse/HDFS-16106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hui Fei resolved HDFS-16106. Fix Version/s: 3.4.0 Resolution: Fixed > Fix flaky unit test TestDFSShell > > > Key: HDFS-16106 > URL: https://issues.apache.org/jira/browse/HDFS-16106 > Project: Hadoop HDFS > Issue Type: Wish >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1h 20m > Remaining Estimate: 0h > > This unit test occasionally fails. > The value set for dfs.namenode.accesstime.precision is too low, result in the > execution of the method, accesstime could be set many times, eventually > leading to failed assert. > IMO, dfs.namenode.accesstime.precision should be greater than or equal to the > timeout(120s) of TestDFSShell#testCopyCommandsWithPreserveOption(), or > directly set to 0 to disable this feature. > > {code:java} > [ERROR] Tests run: 52, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: > 106.778 s <<< FAILURE! - in org.apache.hadoop.hdfs.TestDFSShell[ERROR] Tests > run: 52, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 106.778 s <<< > FAILURE! - in org.apache.hadoop.hdfs.TestDFSShell [ERROR] > testCopyCommandsWithPreserveOption(org.apache.hadoop.hdfs.TestDFSShell) Time > elapsed: 2.353 s <<< FAILURE! 
java.lang.AssertionError: > expected:<1625095098319> but was:<1625095099374> at > org.junit.Assert.fail(Assert.java:89) at > org.junit.Assert.failNotEquals(Assert.java:835) at > org.junit.Assert.assertEquals(Assert.java:647) at > org.junit.Assert.assertEquals(Assert.java:633) at > org.apache.hadoop.hdfs.TestDFSShell.testCopyCommandsWithPreserveOption(TestDFSShell.java:2282) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) at > java.lang.Thread.run(Thread.java:748) > [ERROR] > testCopyCommandsWithPreserveOption(org.apache.hadoop.hdfs.TestDFSShell) Time > elapsed: 2.467 s <<< FAILURE! 
java.lang.AssertionError: > expected:<1625095192527> but was:<1625095193950> at > org.junit.Assert.fail(Assert.java:89) at > org.junit.Assert.failNotEquals(Assert.java:835) at > org.junit.Assert.assertEquals(Assert.java:647) at > org.junit.Assert.assertEquals(Assert.java:633) at > org.apache.hadoop.hdfs.TestDFSShell.testCopyCommandsWithPreserveOption(TestDFSShell.java:2323) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) at > java.lang.Thread.run(Thread.java:748) > [ERROR] > testCopyCommandsWithPreserveOption(org.apache.hadoop.hdfs.TestDFSShell) Time > elapsed: 2.173 s <<< FAILURE! java.lang.AssertionError: > expected:<1625095196756> but was:<1625095197975> at > org.junit.Assert.fail(Assert.java:89) at > org.junit.Assert.failNotEquals(Assert.java:835) at > org.junit.Assert.assertEquals(Assert.java:647) at > org.junit.Assert.assertEquals(Assert.java:633) at > org.apache.hadoop.hdfs.TestDFSShell.testCopyCommandsWithPreserveOption(TestDFSShell.java:2303) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
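The flakiness mechanism described above can be shown with a simplified model of the NameNode's access-time rule: the stored atime is rewritten only when it is older than one precision interval, and a precision of 0 disables updates entirely. `AccessTimeModel` is an illustrative stand-in, not FSNamesystem code:

```java
// Simplified model of dfs.namenode.accesstime.precision (HDFS-16106):
// a read bumps the stored access time only when the old value is more
// than one precision interval in the past; precision 0 disables updates.
public class AccessTimeModel {
    private final long precisionMs;  // 0 disables access-time updates, as in HDFS
    private long accessTime;

    public AccessTimeModel(long precisionMs, long initialAtime) {
        this.precisionMs = precisionMs;
        this.accessTime = initialAtime;
    }

    public void read(long now) {
        if (precisionMs > 0 && now - accessTime > precisionMs) {
            accessTime = now;  // atime rewritten: a later equality assert fails
        }
    }

    public long getAccessTime() {
        return accessTime;
    }

    public static void main(String[] args) {
        // Low precision: any read during the test window bumps atime,
        // so "cp -p"-style preserve checks see a different value.
        AccessTimeModel flaky = new AccessTimeModel(100, 1_000);
        flaky.read(5_000);
        System.out.println(flaky.getAccessTime());  // 5000, not the preserved 1000

        // Precision 0 (feature disabled): atime is stable for the whole test.
        AccessTimeModel stable = new AccessTimeModel(0, 1_000);
        stable.read(5_000);
        System.out.println(stable.getAccessTime()); // 1000
    }
}
```

This is why either choice in the issue works: a precision longer than the 120s test timeout means no read inside the test can bump atime, and precision 0 removes the update path altogether.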
[jira] [Work logged] (HDFS-16106) Fix flaky unit test TestDFSShell
[ https://issues.apache.org/jira/browse/HDFS-16106?focusedWorklogId=617995=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617995 ] ASF GitHub Bot logged work on HDFS-16106: - Author: ASF GitHub Bot Created on: 02/Jul/21 07:31 Start Date: 02/Jul/21 07:31 Worklog Time Spent: 10m Work Description: ferhui merged pull request #3168: URL: https://github.com/apache/hadoop/pull/3168 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 617995) Time Spent: 1h 20m (was: 1h 10m) > Fix flaky unit test TestDFSShell > > > Key: HDFS-16106 > URL: https://issues.apache.org/jira/browse/HDFS-16106 > Project: Hadoop HDFS > Issue Type: Wish >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Time Spent: 1h 20m > Remaining Estimate: 0h > > This unit test occasionally fails. > The value set for dfs.namenode.accesstime.precision is too low, result in the > execution of the method, accesstime could be set many times, eventually > leading to failed assert. > IMO, dfs.namenode.accesstime.precision should be greater than or equal to the > timeout(120s) of TestDFSShell#testCopyCommandsWithPreserveOption(), or > directly set to 0 to disable this feature. > > {code:java} > [ERROR] Tests run: 52, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: > 106.778 s <<< FAILURE! - in org.apache.hadoop.hdfs.TestDFSShell[ERROR] Tests > run: 52, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 106.778 s <<< > FAILURE! - in org.apache.hadoop.hdfs.TestDFSShell [ERROR] > testCopyCommandsWithPreserveOption(org.apache.hadoop.hdfs.TestDFSShell) Time > elapsed: 2.353 s <<< FAILURE! 
java.lang.AssertionError: > expected:<1625095098319> but was:<1625095099374> at > org.junit.Assert.fail(Assert.java:89) at > org.junit.Assert.failNotEquals(Assert.java:835) at > org.junit.Assert.assertEquals(Assert.java:647) at > org.junit.Assert.assertEquals(Assert.java:633) at > org.apache.hadoop.hdfs.TestDFSShell.testCopyCommandsWithPreserveOption(TestDFSShell.java:2282) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) at > java.lang.Thread.run(Thread.java:748) > [ERROR] > testCopyCommandsWithPreserveOption(org.apache.hadoop.hdfs.TestDFSShell) Time > elapsed: 2.467 s <<< FAILURE! 
java.lang.AssertionError: > expected:<1625095192527> but was:<1625095193950> at > org.junit.Assert.fail(Assert.java:89) at > org.junit.Assert.failNotEquals(Assert.java:835) at > org.junit.Assert.assertEquals(Assert.java:647) at > org.junit.Assert.assertEquals(Assert.java:633) at > org.apache.hadoop.hdfs.TestDFSShell.testCopyCommandsWithPreserveOption(TestDFSShell.java:2323) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) > at >
[jira] [Work logged] (HDFS-16106) Fix flaky unit test TestDFSShell
[ https://issues.apache.org/jira/browse/HDFS-16106?focusedWorklogId=617994=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617994 ] ASF GitHub Bot logged work on HDFS-16106: - Author: ASF GitHub Bot Created on: 02/Jul/21 07:31 Start Date: 02/Jul/21 07:31 Worklog Time Spent: 10m Work Description: ferhui commented on pull request #3168: URL: https://github.com/apache/hadoop/pull/3168#issuecomment-872785067 @tomscut Thanks for contribution, @aajisaka @ayushtkn @virajjasani Thanks for review! Merged to trunk -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 617994) Time Spent: 1h 10m (was: 1h) > Fix flaky unit test TestDFSShell > > > Key: HDFS-16106 > URL: https://issues.apache.org/jira/browse/HDFS-16106 > Project: Hadoop HDFS > Issue Type: Wish >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > This unit test occasionally fails. > The value set for dfs.namenode.accesstime.precision is too low, result in the > execution of the method, accesstime could be set many times, eventually > leading to failed assert. > IMO, dfs.namenode.accesstime.precision should be greater than or equal to the > timeout(120s) of TestDFSShell#testCopyCommandsWithPreserveOption(), or > directly set to 0 to disable this feature. > > {code:java} > [ERROR] Tests run: 52, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: > 106.778 s <<< FAILURE! - in org.apache.hadoop.hdfs.TestDFSShell[ERROR] Tests > run: 52, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 106.778 s <<< > FAILURE! 
- in org.apache.hadoop.hdfs.TestDFSShell [ERROR] > testCopyCommandsWithPreserveOption(org.apache.hadoop.hdfs.TestDFSShell) Time > elapsed: 2.353 s <<< FAILURE! java.lang.AssertionError: > expected:<1625095098319> but was:<1625095099374> at > org.junit.Assert.fail(Assert.java:89) at > org.junit.Assert.failNotEquals(Assert.java:835) at > org.junit.Assert.assertEquals(Assert.java:647) at > org.junit.Assert.assertEquals(Assert.java:633) at > org.apache.hadoop.hdfs.TestDFSShell.testCopyCommandsWithPreserveOption(TestDFSShell.java:2282) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) at > java.lang.Thread.run(Thread.java:748) > [ERROR] > testCopyCommandsWithPreserveOption(org.apache.hadoop.hdfs.TestDFSShell) Time > elapsed: 2.467 s <<< FAILURE! 
java.lang.AssertionError: > expected:<1625095192527> but was:<1625095193950> at > org.junit.Assert.fail(Assert.java:89) at > org.junit.Assert.failNotEquals(Assert.java:835) at > org.junit.Assert.assertEquals(Assert.java:647) at > org.junit.Assert.assertEquals(Assert.java:633) at > org.apache.hadoop.hdfs.TestDFSShell.testCopyCommandsWithPreserveOption(TestDFSShell.java:2323) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at >
[jira] [Updated] (HDFS-16108) Incorrect log placeholders used in JournalNodeSyncer
[ https://issues.apache.org/jira/browse/HDFS-16108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDFS-16108: -- Labels: pull-request-available (was: ) > Incorrect log placeholders used in JournalNodeSyncer > > > Key: HDFS-16108 > URL: https://issues.apache.org/jira/browse/HDFS-16108 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Minor > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > The Journal sync thread uses incorrect log placeholders in 2 places: > # When it fails to create the dir for downloading log segments > # When it fails to move the tmp editFile to the current dir > Since these failure logs are important for debugging JN sync issues, we should > fix these incorrect placeholders. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16108) Incorrect log placeholders used in JournalNodeSyncer
[ https://issues.apache.org/jira/browse/HDFS-16108?focusedWorklogId=617989=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617989 ] ASF GitHub Bot logged work on HDFS-16108: - Author: ASF GitHub Bot Created on: 02/Jul/21 07:25 Start Date: 02/Jul/21 07:25 Worklog Time Spent: 10m Work Description: virajjasani opened a new pull request #3169: URL: https://github.com/apache/hadoop/pull/3169 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 617989) Remaining Estimate: 0h Time Spent: 10m > Incorrect log placeholders used in JournalNodeSyncer > > > Key: HDFS-16108 > URL: https://issues.apache.org/jira/browse/HDFS-16108 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > When Journal sync thread is using incorrect log placeholders at 2 places: > # When it fails to create dir for downloading log segments > # When it fails to move tmp editFile to current dir > Since these failure logs are important to debug JN sync issues, we should fix > these incorrect placeholders. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-16108) Incorrect log placeholders used in JournalNodeSyncer
Viraj Jasani created HDFS-16108: --- Summary: Incorrect log placeholders used in JournalNodeSyncer Key: HDFS-16108 URL: https://issues.apache.org/jira/browse/HDFS-16108 Project: Hadoop HDFS Issue Type: Bug Reporter: Viraj Jasani Assignee: Viraj Jasani The Journal sync thread uses incorrect log placeholders in 2 places: # When it fails to create the dir for downloading log segments # When it fails to move the tmp editFile to the current dir Since these failure logs are important for debugging JN sync issues, we should fix these incorrect placeholders. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
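The Jira does not quote the offending log lines, but the failure mode of a wrong placeholder is easy to illustrate: SLF4J only substitutes the literal `{}` token, so any other pattern (or a mismatched placeholder count) silently drops the argument from the message. The tiny `format` below is a stand-in for SLF4J's substitution, not its real implementation:

```java
// Illustration of why incorrect log placeholders matter (HDFS-16108):
// SLF4J substitutes only "{}" tokens; anything else leaves the argument
// out of the logged message. format() is a minimal stand-in for the
// substitution SLF4J performs, not the actual library code.
public class PlaceholderDemo {
    public static String format(String msg, Object... args) {
        StringBuilder out = new StringBuilder();
        int argIdx = 0;
        int i = 0;
        while (i < msg.length()) {
            if (i + 1 < msg.length() && msg.charAt(i) == '{' && msg.charAt(i + 1) == '}') {
                // Substitute the next argument; leave "{}" if we ran out.
                out.append(argIdx < args.length ? String.valueOf(args[argIdx++]) : "{}");
                i += 2;
            } else {
                out.append(msg.charAt(i++));
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Buggy pattern: "%s" is not an SLF4J placeholder, so the directory
        // name never reaches the log line that operators need for debugging.
        System.out.println(format("Unable to create directory %s", "/tmp/edits"));
        // Fixed pattern: "{}" is substituted as intended.
        System.out.println(format("Unable to create directory {}", "/tmp/edits"));
    }
}
```

The `%s` example is hypothetical; the point is that the argument is only interpolated when the pattern and argument count line up, which is exactly what the JN sync failure logs need.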
[jira] [Commented] (HDFS-16097) Datanode receives ipc requests will throw NPE when datanode quickly restart
[ https://issues.apache.org/jira/browse/HDFS-16097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17373287#comment-17373287 ] lei w commented on HDFS-16097: -- Thanks [~hexiaoqiao] for your comment. I did not look into the consequences for a client that encounters this kind of request. But judging from the code logic, if the DataNode performs block recovery, the block recovery task will fail, and if the client calls the getReplicaVisibleLength() method of ClientDatanodeProtocol, the client should exit directly. > Datanode receives ipc requests will throw NPE when datanode quickly restart > > > Key: HDFS-16097 > URL: https://issues.apache.org/jira/browse/HDFS-16097 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode > Environment: >Reporter: lei w >Assignee: lei w >Priority: Major > Attachments: HDFS-16097.001.patch > > > The DataNode will throw an NPE for incoming IPC requests when it is restarted > quickly. This is because on restart the BlockPool is registered with the > blockPoolManager first, and only then is the fsdataset initialized. If an IPC > request arrives while the BlockPool is registered but the fsdataset is not yet > initialized, the handler throws an NPE because it calls methods provided by > the fsdataset.
The stack exception is as follows: > {code:java} > java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:3468) > at > org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) > at > org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:916) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:862) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
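The startup-ordering hazard in the description can be sketched with a minimal stand-in: an IPC handler that touches a dataset reference which is only assigned after registration. `DataNodeStub` and its names are hypothetical; the real fix would live in `DataNode`'s request handling, but the guard pattern is the same:

```java
// Sketch of the HDFS-16097 race: the DataNode accepts IPC calls as soon
// as the BlockPool is registered, but the dataset reference is assigned
// later, so an early call dereferences null. An explicit readiness check
// turns the bare NPE into a clear, retriable error. DataNodeStub is an
// illustrative stand-in, not the real DataNode class.
public class DataNodeStub {
    private volatile Object data;  // stands in for FsDatasetSpi; null until storage init completes

    public void initStorage() {
        data = new Object();  // in HDFS this happens after BlockPool registration
    }

    // Checks the invariant before any IPC handler touches the dataset.
    private void checkDatasetInitialized() {
        if (data == null) {
            throw new IllegalStateException("Storage not yet initialized, retry later");
        }
    }

    public String initReplicaRecovery(String block) {
        checkDatasetInitialized();
        return "recovering " + block;
    }

    public static void main(String[] args) {
        DataNodeStub dn = new DataNodeStub();
        try {
            // Too early: BlockPool registered, dataset not ready.
            dn.initReplicaRecovery("blk_1");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
        dn.initStorage();
        System.out.println(dn.initReplicaRecovery("blk_1"));  // now succeeds
    }
}
```

A caller (the other DataNode driving block recovery, or a client on ClientDatanodeProtocol) can treat the explicit error as transient and retry, instead of failing on an opaque NullPointerException.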
[jira] [Work logged] (HDFS-16106) Fix flaky unit test TestDFSShell
[ https://issues.apache.org/jira/browse/HDFS-16106?focusedWorklogId=617988=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617988 ] ASF GitHub Bot logged work on HDFS-16106: - Author: ASF GitHub Bot Created on: 02/Jul/21 07:13 Start Date: 02/Jul/21 07:13 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3168: URL: https://github.com/apache/hadoop/pull/3168#issuecomment-872775192 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 45s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 37m 37s | | trunk passed | | +1 :green_heart: | compile | 1m 43s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 31s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 14s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 39s | | trunk passed | | +1 :green_heart: | javadoc | 1m 9s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 34s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 17s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 48s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 16s | | the patch passed | | +1 :green_heart: | compile | 1m 16s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 16s | | the patch passed | | +1 :green_heart: | compile | 1m 6s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 6s | | the patch passed | | +1 :green_heart: | blanks | 0m 1s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 52s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 12s | | the patch passed | | +1 :green_heart: | javadoc | 0m 46s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 19s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 12s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 13s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 250m 47s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 47s | | The patch does not generate ASF License warnings. 
| | | | 344m 9s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3168/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3168 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 7be4e93f394f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / d955ded33d1305881793ac860b6d1c6e8cdc2baa | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3168/1/testReport/ | | Max. process+thread count | 2740 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3168/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
[jira] [Updated] (HDFS-16083) Forbid Observer NameNode trigger active namenode log roll
[ https://issues.apache.org/jira/browse/HDFS-16083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jinglun updated HDFS-16083: --- Status: Open (was: Patch Available) > Forbid Observer NameNode trigger active namenode log roll > -- > > Key: HDFS-16083 > URL: https://issues.apache.org/jira/browse/HDFS-16083 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: lei w >Assignee: lei w >Priority: Minor > Attachments: HDFS-16083.001.patch, HDFS-16083.002.patch, HDFS-16083.003.patch, HDFS-16083.004.patch, HDFS-16083.005.1.patch, HDFS-16083.005.patch, activeRollEdits.png > > > When the Observer NameNode is enabled in the cluster, the Active NameNode receives rollEditLog RPC requests from both the Standby NameNode and the Observer NameNode within a short time. The Observer NameNode's rollEditLog request is redundant, so should we forbid the Observer NameNode from triggering the active NameNode's log roll? We configured 'dfs.ha.log-roll.period' to 300 (5 minutes), and the Active NameNode receives rollEditLog RPCs as shown in activeRollEdits.png.
[jira] [Updated] (HDFS-16083) Forbid Observer NameNode trigger active namenode log roll
[ https://issues.apache.org/jira/browse/HDFS-16083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jinglun updated HDFS-16083: --- Attachment: HDFS-16083.005.1.patch Status: Patch Available (was: Open) > Forbid Observer NameNode trigger active namenode log roll > -- > > Key: HDFS-16083 > URL: https://issues.apache.org/jira/browse/HDFS-16083 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: lei w >Assignee: lei w >Priority: Minor > Attachments: HDFS-16083.001.patch, HDFS-16083.002.patch, HDFS-16083.003.patch, HDFS-16083.004.patch, HDFS-16083.005.1.patch, HDFS-16083.005.patch, activeRollEdits.png > > > When the Observer NameNode is enabled in the cluster, the Active NameNode receives rollEditLog RPC requests from both the Standby NameNode and the Observer NameNode within a short time. The Observer NameNode's rollEditLog request is redundant, so should we forbid the Observer NameNode from triggering the active NameNode's log roll? We configured 'dfs.ha.log-roll.period' to 300 (5 minutes), and the Active NameNode receives rollEditLog RPCs as shown in activeRollEdits.png.
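The behavior proposed in HDFS-16083 can be sketched as a small decision rule, assuming the edit-log tailer consults the node's HA state before asking the Active to roll. The enum and method names below are illustrative; the actual patch would modify the NameNode's EditLogTailer, which is not reproduced here.

```java
// Hypothetical policy: only a Standby's tailer triggers a log roll on the
// Active; an Observer's request would be a redundant repeat, so it is skipped.
enum HAServiceState { ACTIVE, STANDBY, OBSERVER }

class LogRollPolicy {
    // Returns true when this node should send a rollEditLog RPC to the Active.
    static boolean shouldTriggerLogRoll(HAServiceState state,
                                        long secondsSinceLastRoll,
                                        long rollPeriodSecs) {
        if (state == HAServiceState.OBSERVER) {
            return false; // forbid Observer from triggering the active log roll
        }
        // Standby rolls only once the configured period (e.g. 300s) has elapsed.
        return state == HAServiceState.STANDBY
                && secondsSinceLastRoll >= rollPeriodSecs;
    }
}
```

Under this rule, with 'dfs.ha.log-roll.period' at 300 seconds, the Active would see at most one rollEditLog request per period (from the Standby) instead of duplicates from both tailing nodes.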
[jira] [Assigned] (HDFS-16107) Split RPC configuration to isolate RPC
[ https://issues.apache.org/jira/browse/HDFS-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] JiangHua Zhu reassigned HDFS-16107: --- Assignee: JiangHua Zhu > Split RPC configuration to isolate RPC > -- > > Key: HDFS-16107 > URL: https://issues.apache.org/jira/browse/HDFS-16107 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: JiangHua Zhu >Assignee: JiangHua Zhu >Priority: Minor > > For RPC servers on different ports, there are some common configurations, such as: ipc.server.read.threadpool.size, ipc.server.read.connection-queue.size, ipc.server.handler.queue.size. Once we configure these values, they affect all requests (both client requests and requests within the cluster). It is necessary to split these configurations per port, for example: ipc.8020.server.read.threadpool.size, ipc.8021.server.read.threadpool.size, ipc.8020.server.read.connection-queue.size, ipc.8021.server.read.connection-queue.size. The advantage is that each RPC port is isolated, so request pressure on one port does not affect the others.
[jira] [Created] (HDFS-16107) Split RPC configuration to isolate RPC
JiangHua Zhu created HDFS-16107: --- Summary: Split RPC configuration to isolate RPC Key: HDFS-16107 URL: https://issues.apache.org/jira/browse/HDFS-16107 Project: Hadoop HDFS Issue Type: Improvement Reporter: JiangHua Zhu For RPC servers on different ports, there are some common configurations, such as: ipc.server.read.threadpool.size, ipc.server.read.connection-queue.size, ipc.server.handler.queue.size. Once we configure these values, they affect all requests (both client requests and requests within the cluster). It is necessary to split these configurations per port, for example: ipc.8020.server.read.threadpool.size, ipc.8021.server.read.threadpool.size, ipc.8020.server.read.connection-queue.size, ipc.8021.server.read.connection-queue.size. The advantage is that each RPC port is isolated, so request pressure on one port does not affect the others.
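The port-scoped lookup proposed in HDFS-16107 can be sketched as follows, assuming a resolution order of port-specific key first, then the shared key, then a default. A plain Map stands in for Hadoop's Configuration class here; the key layout mirrors the examples in the issue description, but the helper itself is hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical port-scoped configuration lookup: prefer
// "ipc.<port>.server.read.threadpool.size" over the shared
// "ipc.server.read.threadpool.size", so each RPC port can be tuned
// independently while untuned ports keep the cluster-wide value.
class PortScopedConf {
    private final Map<String, Integer> conf = new HashMap<>();

    void set(String key, int value) {
        conf.put(key, value);
    }

    int getInt(String suffix, int port, int defaultValue) {
        String portKey = "ipc." + port + "." + suffix;  // e.g. ipc.8020.server.read.threadpool.size
        String sharedKey = "ipc." + suffix;             // e.g. ipc.server.read.threadpool.size
        Integer v = conf.get(portKey);
        if (v != null) {
            return v;
        }
        return conf.getOrDefault(sharedKey, defaultValue);
    }
}
```

For example, setting ipc.8020.server.read.threadpool.size to a larger value would grow the reader pool only for the client-facing 8020 port, while 8021 (the service port) continues to use the shared ipc.server.read.threadpool.size value.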