[jira] [Created] (HDFS-14465) When the Block expected replications is larger than the number of DataNodes, entering maintenance will never exit.
Yicong Cai created HDFS-14465: - Summary: When the Block expected replications is larger than the number of DataNodes, entering maintenance will never exit. Key: HDFS-14465 URL: https://issues.apache.org/jira/browse/HDFS-14465 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.9.2 Reporter: Yicong Cai Scenario: A small HDFS cluster has 5 DataNodes. One of them needs maintenance, so it is added to the maintenance list, and dfs.namenode.maintenance.replication.min is set to 1. On refreshNodes, the NameNode starts checking whether the blocks on that node require new replications. The replication factor of MapReduce job files is 10 by default, so isNeededReplicationForMaintenance evaluates to false and isSufficientlyReplicated evaluates to false, and the blocks of the job files are scheduled for additional replication. When adding a replica, since the cluster has only 5 DataNodes and every node already holds a replica of the block, chooseTargetInOrder throws a NotEnoughReplicasException. The replica count can therefore never be increased, and the node never leaves the ENTERING_MAINTENANCE state. This issue makes maintenance mode unusable on small standalone clusters. {panel:title=chooseTarget exception log} 2019-05-03 23:42:31,008 [31545331] - WARN [ReplicationMonitor:BlockPlacementPolicyDefault@431] - Failed to place enough replicas, still in need of 1 to reach 5 (unavailableStorages=[], storagePolicy=BlockStoragePolicy\{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology {panel} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
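The failure mode above reduces to the sufficiency check itself. The following is a minimal, self-contained sketch (the method names are illustrative, not the actual BlockManager code) of why a check against the raw replication factor can never pass on a 5-node cluster, and of one obvious fix direction: capping the requirement at the number of live DataNodes.

```java
// Sketch of the replication-sufficiency check described in HDFS-14465.
// Hypothetical helper names; with 5 DataNodes and a replication factor of 10,
// the naive check can never be satisfied, so the node stays in
// ENTERING_MAINTENANCE forever.
public class MaintenanceReplicationSketch {

    // Naive check: demands as many replicas as the file's replication factor.
    static boolean isSufficientlyReplicatedNaive(int liveReplicas, int expectedReplication) {
        return liveReplicas >= expectedReplication;
    }

    // Capped check: no placement policy can ever place more replicas than
    // there are DataNodes, so cap the requirement at the cluster size.
    static boolean isSufficientlyReplicatedCapped(int liveReplicas, int expectedReplication,
                                                  int numLiveDataNodes) {
        return liveReplicas >= Math.min(expectedReplication, numLiveDataNodes);
    }

    public static void main(String[] args) {
        // The reported scenario: 5 live replicas, replication factor 10, 5 DataNodes.
        System.out.println(isSufficientlyReplicatedNaive(5, 10));      // false: stuck forever
        System.out.println(isSufficientlyReplicatedCapped(5, 10, 5));  // true: maintenance can proceed
    }
}
```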
[jira] [Work logged] (HDDS-1491) Ozone KeyInputStream seek() should not read the chunk file
[ https://issues.apache.org/jira/browse/HDDS-1491?focusedWorklogId=237550&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-237550 ]

ASF GitHub Bot logged work on HDDS-1491:

Author: ASF GitHub Bot
Created on: 05/May/19 23:44
Start Date: 05/May/19 23:44
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on issue #795: HDDS-1491. Ozone KeyInputStream seek() should not read the chunk file.
URL: https://github.com/apache/hadoop/pull/795#issuecomment-489473825

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 61 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 470 | trunk passed |
| +1 | compile | 204 | trunk passed |
| +1 | checkstyle | 56 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 880 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 126 | trunk passed |
| 0 | spotbugs | 254 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 452 | trunk passed |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 413 | the patch passed |
| +1 | compile | 218 | the patch passed |
| +1 | javac | 218 | the patch passed |
| +1 | checkstyle | 60 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 1 | The patch has no whitespace issues. |
| +1 | shadedclient | 664 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 126 | the patch passed |
| +1 | findbugs | 439 | the patch passed |
||| _ Other Tests _ |
| -1 | unit | 164 | hadoop-hdds in the patch failed. |
| -1 | unit | 922 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
| | | 5436 | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdds.scm.pipeline.TestRatisPipelineUtils |
| | hadoop.hdds.scm.pipeline.TestNodeFailure |
| | hadoop.ozone.om.TestOMDbCheckpointServlet |
| | hadoop.ozone.TestMiniChaosOzoneCluster |
| | hadoop.ozone.om.TestOmBlockVersioning |
| | hadoop.ozone.om.TestOmInit |
| | hadoop.hdds.scm.container.TestContainerStateManagerIntegration |
| | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
| | hadoop.ozone.scm.pipeline.TestSCMPipelineMetrics |
| | hadoop.ozone.client.rpc.TestCommitWatcher |
| | hadoop.ozone.om.TestOmMetrics |
| | hadoop.ozone.om.TestMultipleContainerReadWrite |
| | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
| | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
| | hadoop.ozone.scm.TestContainerSmallFile |
| | hadoop.hdds.scm.pipeline.TestSCMRestart |
| | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
| | hadoop.ozone.om.TestOzoneManagerHA |
| | hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
| | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
| | hadoop.ozone.client.rpc.TestBCSID |
| | hadoop.ozone.web.client.TestOzoneClient |
| | hadoop.hdds.scm.pipeline.TestPipelineClose |
| | hadoop.ozone.client.rpc.TestOzoneRpcClient |
| | hadoop.ozone.om.TestScmSafeMode |
| | hadoop.ozone.om.TestOzoneManager |
| | hadoop.ozone.client.rpc.TestContainerStateMachine |
| | hadoop.ozone.TestContainerStateMachineIdempotency |
| | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
| | hadoop.hdds.scm.safemode.TestSCMSafeModeWithPipelineRules |
| | hadoop.ozone.scm.TestGetCommittedBlockLengthAndPutKey |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-795/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/795 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux b23a2bee3199 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 1d70c8c |
| Default Java | 1.8.0_191 |
| unit |
[jira] [Updated] (HDDS-1491) Ozone KeyInputStream seek() should not read the chunk file
[ https://issues.apache.org/jira/browse/HDDS-1491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDDS-1491: - Labels: pull-request-available (was: ) > Ozone KeyInputStream seek() should not read the chunk file > -- > > Key: HDDS-1491 > URL: https://issues.apache.org/jira/browse/HDDS-1491 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Labels: pull-request-available > > KeyInputStream#seek() calls BlockInputStream#seek() to adjust the buffer > position to the seeked position. As part of the seek operation, the whole > chunk is read from the container and stored in the buffer so that the buffer > position can be advanced to the seeked position. > We should not read from disk on a seek() operation. Instead, for a read > operation, when the chunk file is read and put in the buffer, at that time, > we can advance the buffer position to the previously seeked position. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDDS-1491) Ozone KeyInputStream seek() should not read the chunk file
[ https://issues.apache.org/jira/browse/HDDS-1491?focusedWorklogId=237545&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-237545 ] ASF GitHub Bot logged work on HDDS-1491: Author: ASF GitHub Bot Created on: 05/May/19 22:12 Start Date: 05/May/19 22:12 Worklog Time Spent: 10m Work Description: hanishakoneru commented on pull request #795: HDDS-1491. Ozone KeyInputStream seek() should not read the chunk file. URL: https://github.com/apache/hadoop/pull/795 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 237545) Time Spent: 10m Remaining Estimate: 0h > Ozone KeyInputStream seek() should not read the chunk file > -- > > Key: HDDS-1491 > URL: https://issues.apache.org/jira/browse/HDDS-1491 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > KeyInputStream#seek() calls BlockInputStream#seek() to adjust the buffer > position to the seeked position. As part of the seek operation, the whole > chunk is read from the container and stored in the buffer so that the buffer > position can be advanced to the seeked position. > We should not read from disk on a seek() operation. Instead, for a read > operation, when the chunk file is read and put in the buffer, at that time, > we can advance the buffer position to the previously seeked position. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDDS-1491) Ozone KeyInputStream seek() should not read the chunk file
Hanisha Koneru created HDDS-1491: Summary: Ozone KeyInputStream seek() should not read the chunk file Key: HDDS-1491 URL: https://issues.apache.org/jira/browse/HDDS-1491 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Hanisha Koneru Assignee: Hanisha Koneru KeyInputStream#seek() calls BlockInputStream#seek() to adjust the buffer position to the seeked position. As part of the seek operation, the whole chunk is read from the container and stored in the buffer so that the buffer position can be advanced to the seeked position. We should not read from disk on a seek() operation. Instead, for a read operation, when the chunk file is read and put in the buffer, at that time, we can advance the buffer position to the previously seeked position. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
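The change described above amounts to making seek() pure bookkeeping. Below is a minimal, self-contained sketch of that "lazy seek" idea; the class and field names are illustrative stand-ins, not the actual KeyInputStream/BlockInputStream code, and the chunk read is simulated with an in-memory array.

```java
import java.util.Arrays;

// Sketch of lazy seek: seek() only records the target position; the chunk is
// "read" (here: copied from an in-memory array) on the next read() call, at
// which point the buffer position is advanced to the recorded position.
public class LazySeekSketch {
    private final byte[] chunk;   // stands in for the on-disk chunk file
    private long seekPos = 0;     // position recorded by seek()
    private byte[] buffer = null; // filled lazily on the first read

    LazySeekSketch(byte[] chunk) { this.chunk = chunk; }

    // seek() is now pure bookkeeping -- no disk I/O happens here.
    void seek(long pos) { this.seekPos = pos; }

    // The chunk is only read here, and the buffer position is advanced to the
    // previously seeked position in the same step.
    int read() {
        if (buffer == null) {
            buffer = Arrays.copyOf(chunk, chunk.length); // simulated chunk read
        }
        if (seekPos >= buffer.length) {
            return -1; // end of chunk
        }
        return buffer[(int) seekPos++] & 0xff;
    }

    public static void main(String[] args) {
        LazySeekSketch in = new LazySeekSketch(new byte[]{10, 20, 30, 40});
        in.seek(2);                    // no chunk read happens here
        System.out.println(in.read()); // 30 -- chunk read + position applied now
    }
}
```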
[jira] [Commented] (HDFS-14245) Class cast error in GetGroups with ObserverReadProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-14245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16833443#comment-16833443 ] Konstantin Shvachko commented on HDFS-14245: Let's go over it one issue at a time. ??IOException rather than RuntimeException.?? I mean here checked exceptions vs unchecked exceptions. {{RuntimeException}} is unchecked, so callers will not even know they need to handle it. A checked exception would make callers think about that possibility. BUT if this is a bug and the caller of the method cannot do anything about it better than {{getProxyAsClientProtocol()}} itself, then we should just assert or LOG an error and crash the client. And I agree that then we should not trigger any retries. > Class cast error in GetGroups with ObserverReadProxyProvider > > > Key: HDFS-14245 > URL: https://issues.apache.org/jira/browse/HDFS-14245 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: HDFS-12943 >Reporter: Shen Yinjie >Assignee: Erik Krogen >Priority: Major > Attachments: HDFS-14245.000.patch, HDFS-14245.001.patch, > HDFS-14245.002.patch, HDFS-14245.003.patch, HDFS-14245.patch > > > Run "hdfs groups" with ObserverReadProxyProvider, Exception throws as : > {code:java} > Exception in thread "main" java.io.IOException: Couldn't create proxy > provider class > org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider > at > org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:261) > at > org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:119) > at > org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:95) > at org.apache.hadoop.hdfs.tools.GetGroups.getUgmProtocol(GetGroups.java:87) > at org.apache.hadoop.tools.GetGroupsBase.run(GetGroupsBase.java:71) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) > at 
org.apache.hadoop.hdfs.tools.GetGroups.main(GetGroups.java:96) > Caused by: java.lang.reflect.InvocationTargetException > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:245) > ... 7 more > Caused by: java.lang.ClassCastException: > org.apache.hadoop.hdfs.server.namenode.ha.NameNodeHAProxyFactory cannot be > cast to org.apache.hadoop.hdfs.server.namenode.ha.ClientHAProxyFactory > at > org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.(ObserverReadProxyProvider.java:123) > at > org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.(ObserverReadProxyProvider.java:112) > ... 12 more > {code} > similar with HDFS-14116, we did a simple fix. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
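The trade-off in the comment above can be made concrete. Below is a small, illustrative sketch (the class and method names are invented, not the HDFS code) contrasting the two options: a checked IOException, which every caller is forced to acknowledge, versus treating the bad cast as an internal bug that should log, fail fast, and never trigger retries.

```java
import java.io.IOException;

// Sketch of the checked-vs-unchecked choice discussed in HDFS-14245.
// Hypothetical names; the real method under discussion is
// getProxyAsClientProtocol() in ObserverReadProxyProvider.
public class ProxyCastSketch {
    interface ClientProtocol {}

    // Option 1: checked exception -- callers must handle or declare it,
    // so the failure mode is visible in every caller's signature.
    static ClientProtocol getProxyChecked(Object proxy) throws IOException {
        if (!(proxy instanceof ClientProtocol)) {
            throw new IOException("Expected a ClientProtocol proxy but got "
                + proxy.getClass().getName());
        }
        return (ClientProtocol) proxy;
    }

    // Option 2: treat the bad cast as an internal bug -- fail fast with an
    // error, so generic retry machinery never kicks in on an unrecoverable
    // condition.
    static ClientProtocol getProxyFailFast(Object proxy) {
        if (!(proxy instanceof ClientProtocol)) {
            throw new AssertionError("Bug: proxy is not a ClientProtocol: "
                + proxy.getClass().getName());
        }
        return (ClientProtocol) proxy;
    }

    public static void main(String[] args) {
        try {
            getProxyChecked(new Object()); // wrong type -> checked failure
        } catch (IOException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```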
[jira] [Commented] (HDDS-1464) Client should have different retry policies for different exceptions
[ https://issues.apache.org/jira/browse/HDDS-1464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16833412#comment-16833412 ] Hudson commented on HDDS-1464: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16505 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/16505/]) HDDS-1464. Client should have different retry policies for different (koneru.hanisha: rev 1d70c8ca0fb08fbf4166a3bfbf589c593042ab69) * (edit) hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientUtils.java * (edit) hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyOutputStream.java > Client should have different retry policies for different exceptions > > > Key: HDDS-1464 > URL: https://issues.apache.org/jira/browse/HDDS-1464 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Hanisha Koneru >Assignee: Siddharth Wagle >Priority: Major > Labels: pull-request-available > Time Spent: 1h 50m > Remaining Estimate: 0h > > Client should have different retry policies for different types of failures. > For example, if a key write fails because of a ContainerNotOpen exception, the > client should wait for a specified interval before retrying. But if the key > write fails because of, let's say, ratis leader election or request timeout, we > want the client to retry immediately. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDDS-1464) Client should have different retry policies for different exceptions
[ https://issues.apache.org/jira/browse/HDDS-1464?focusedWorklogId=237492&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-237492 ] ASF GitHub Bot logged work on HDDS-1464: Author: ASF GitHub Bot Created on: 05/May/19 16:21 Start Date: 05/May/19 16:21 Worklog Time Spent: 10m Work Description: hanishakoneru commented on pull request #785: HDDS-1464. Client should have different retry policies for different exceptions. URL: https://github.com/apache/hadoop/pull/785 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 237492) Time Spent: 1h 50m (was: 1h 40m) > Client should have different retry policies for different exceptions > > > Key: HDDS-1464 > URL: https://issues.apache.org/jira/browse/HDDS-1464 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Hanisha Koneru >Assignee: Siddharth Wagle >Priority: Major > Labels: pull-request-available > Time Spent: 1h 50m > Remaining Estimate: 0h > > Client should have different retry policies for different types of failures. > For example, if a key write fails because of a ContainerNotOpen exception, the > client should wait for a specified interval before retrying. But if the key > write fails because of, let's say, ratis leader election or request timeout, we > want the client to retry immediately. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDDS-1464) Client should have different retry policies for different exceptions
[ https://issues.apache.org/jira/browse/HDDS-1464?focusedWorklogId=237491&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-237491 ] ASF GitHub Bot logged work on HDDS-1464: Author: ASF GitHub Bot Created on: 05/May/19 16:20 Start Date: 05/May/19 16:20 Worklog Time Spent: 10m Work Description: hanishakoneru commented on issue #785: HDDS-1464. Client should have different retry policies for different exceptions. URL: https://github.com/apache/hadoop/pull/785#issuecomment-489440978 The test is flaky. I will merge this PR. Thanks @swagle for working on this. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 237491) Time Spent: 1h 40m (was: 1.5h) > Client should have different retry policies for different exceptions > > > Key: HDDS-1464 > URL: https://issues.apache.org/jira/browse/HDDS-1464 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Hanisha Koneru >Assignee: Siddharth Wagle >Priority: Major > Labels: pull-request-available > Time Spent: 1h 40m > Remaining Estimate: 0h > > Client should have different retry policies for different types of failures. > For example, if a key write fails because of a ContainerNotOpen exception, the > client should wait for a specified interval before retrying. But if the key > write fails because of, let's say, ratis leader election or request timeout, we > want the client to retry immediately. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
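The per-exception retry idea in HDDS-1464 can be sketched as a lookup from exception class to retry delay. The block below is a self-contained illustration, not the Ozone client API (the actual patch edits OzoneClientUtils and KeyOutputStream; Hadoop's own org.apache.hadoop.io.retry.RetryPolicies offers a similar retryByException combinator): a ContainerNotOpen-style failure waits before the next attempt, while anything else (leader election, request timeout) retries immediately.

```java
import java.util.HashMap;
import java.util.Map;

// Self-contained sketch of per-exception retry policies. The exception class
// and method names here are illustrative stand-ins, not the Ozone code.
public class RetryByExceptionSketch {

    static class ContainerNotOpenException extends Exception {}

    // Picks the retry delay for a failure by exact exception class,
    // falling back to a default delay when the class is not mapped.
    static long retryDelayMs(Exception e,
                             Map<Class<? extends Exception>, Long> exceptionToDelayMs,
                             long defaultDelayMs) {
        return exceptionToDelayMs.getOrDefault(e.getClass(), defaultDelayMs);
    }

    public static void main(String[] args) {
        Map<Class<? extends Exception>, Long> policies = new HashMap<>();
        policies.put(ContainerNotOpenException.class, 1000L); // wait 1s before retrying

        // ContainerNotOpen -> mapped delay; generic failure -> default (retry at once).
        System.out.println(retryDelayMs(new ContainerNotOpenException(), policies, 0L)); // 1000
        System.out.println(retryDelayMs(new Exception("request timeout"), policies, 0L)); // 0
    }
}
```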
[jira] [Commented] (HDFS-14372) NPE while DN is shutting down
[ https://issues.apache.org/jira/browse/HDFS-14372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16833329#comment-16833329 ] Hudson commented on HDFS-14372: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16504 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/16504/]) HDFS-14372. NPE while DN is shutting down. Contributed by lujie. (surendralilhore: rev 69b903bbd8e2dafac6b2cb1d748ea666b6f877cf) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeRegister.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java > NPE while DN is shutting down > - > > Key: HDFS-14372 > URL: https://issues.apache.org/jira/browse/HDFS-14372 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: lujie >Assignee: lujie >Priority: Major > Fix For: 3.3.0 > > Attachments: HDFS-14372_0.patch, HDFS-14372_1.patch, > HDFS-14372_2.patch > > > Take the code BPServiceActor#register: > {code:java} > while (shouldRun()) { > try { >// Use returned registration from namenode with updated fields > newBpRegistration = bpNamenode.registerDatanode(newBpRegistration); > newBpRegistration.setNamespaceInfo(nsInfo); > bpRegistration = newBpRegistration; > break; > } catch(EOFException e) { // namenode might have just restarted > > } > LOG.info("Block pool " + this + " successfully registered with NN"); > bpos.registrationSucceeded(this, bpRegistration); > {code} > if DN is shutdown, then above code will skip the loop, and bpRegistration == > null, the null value will be used in DataNode#bpRegistrationSucceeded: > {code:java} > if(!storage.getDatanodeUuid().equals(bpRegistration.getDatanodeUuid())) > {code} > hence NPE happens > {code:java} > java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.datanode.DataNode.bpRegistrationSucceeded(DataNode.java:1583) > at > 
org.apache.hadoop.hdfs.server.datanode.BPOfferService.registrationSucceeded(BPOfferService.java:425) > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.register(BPServiceActor.java:807) > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:294) > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:840) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
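The fix direction implied by the report above is to guard the success path when the retry loop exits without having registered (i.e., during shutdown). The following is a minimal, self-contained sketch of that guard; it mirrors the shape of the quoted BPServiceActor#register snippet but is illustrative, not the committed patch.

```java
// Sketch of guarding the registration success path. When shouldRun is false
// (DataNode shutting down), the loop never assigns a registration, so the
// success path must not dereference the null result.
public class RegisterGuardSketch {

    // Stands in for BPServiceActor#register: returns a registration object,
    // or null when the loop was skipped because of shutdown.
    static Object register(boolean shouldRun) {
        Object bpRegistration = null;
        while (shouldRun) {          // real loop retries on EOFException, etc.
            bpRegistration = new Object();
            break;
        }
        // Guard: if shutdown skipped the loop, do not report success with null,
        // which is what caused the NPE in DataNode#bpRegistrationSucceeded.
        if (bpRegistration == null) {
            return null;             // caller treats this as "shutting down"
        }
        // ... registrationSucceeded(bpRegistration) would run here ...
        return bpRegistration;
    }

    public static void main(String[] args) {
        System.out.println(register(false) == null); // true: shutdown path, no NPE
        System.out.println(register(true) != null);  // true: normal registration
    }
}
```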
[jira] [Updated] (HDFS-14372) NPE while DN is shutting down
[ https://issues.apache.org/jira/browse/HDFS-14372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Surendra Singh Lilhore updated HDFS-14372: -- Resolution: Fixed Fix Version/s: 3.3.0 Status: Resolved (was: Patch Available) Thanks [~xiaoheipangzi] for contribution. Thanks [~ayushtkn] for review.. Committed to trunk. > NPE while DN is shutting down > - > > Key: HDFS-14372 > URL: https://issues.apache.org/jira/browse/HDFS-14372 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: lujie >Assignee: lujie >Priority: Major > Fix For: 3.3.0 > > Attachments: HDFS-14372_0.patch, HDFS-14372_1.patch, > HDFS-14372_2.patch > > > Take the code BPServiceActor#register: > {code:java} > while (shouldRun()) { > try { >// Use returned registration from namenode with updated fields > newBpRegistration = bpNamenode.registerDatanode(newBpRegistration); > newBpRegistration.setNamespaceInfo(nsInfo); > bpRegistration = newBpRegistration; > break; > } catch(EOFException e) { // namenode might have just restarted > > } > LOG.info("Block pool " + this + " successfully registered with NN"); > bpos.registrationSucceeded(this, bpRegistration); > {code} > if DN is shutdown, then above code will skip the loop, and bpRegistration == > null, the null value will be used in DataNode#bpRegistrationSucceeded: > {code:java} > if(!storage.getDatanodeUuid().equals(bpRegistration.getDatanodeUuid())) > {code} > hence NPE happens > {code:java} > java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.datanode.DataNode.bpRegistrationSucceeded(DataNode.java:1583) > at > org.apache.hadoop.hdfs.server.datanode.BPOfferService.registrationSucceeded(BPOfferService.java:425) > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.register(BPServiceActor.java:807) > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:294) > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:840) > at 
java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14353) Erasure Coding: metrics xmitsInProgress become to negative.
[ https://issues.apache.org/jira/browse/HDFS-14353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16833312#comment-16833312 ] Hadoop QA commented on HDFS-14353: --
| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 23s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 34s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}121m 18s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}181m 17s{color} | {color:black} {color} |
\\ \\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
| | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
| | hadoop.hdfs.TestSafeModeWithStripedFile |
| | hadoop.hdfs.TestDFSShell |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14353 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12967854/HDFS-14353.006.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 42c2e8a165c7 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d331a2a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/26746/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/26746/testReport/ |
| Max. process+thread count | 2717 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output |
[jira] [Commented] (HDFS-14372) NPE while DN is shutting down
[ https://issues.apache.org/jira/browse/HDFS-14372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16833310#comment-16833310 ] Hadoop QA commented on HDFS-14372: --
| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 38s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 51s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}121m 38s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 48s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}181m 24s{color} | {color:black} {color} |
\\ \\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeUUID |
| | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
| | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
| | hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14372 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12964354/HDFS-14372_2.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 9866853ac189 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d331a2a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/26745/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/26745/testReport/ |
| Max. process+thread count
[jira] [Commented] (HDFS-14438) Fix typo in OfflineEditsVisitorFactory
[ https://issues.apache.org/jira/browse/HDFS-14438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833284#comment-16833284 ]

Hudson commented on HDFS-14438:
-------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16503 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/16503/])
HDFS-14438. Fix typo in OfflineEditsVisitorFactory. Contributed by (surendralilhore: rev e424392a62418fad401fe80bf6517e375911c08c)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsVisitorFactory.java

> Fix typo in OfflineEditsVisitorFactory
> --------------------------------------
>
> Key: HDFS-14438
> URL: https://issues.apache.org/jira/browse/HDFS-14438
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Affects Versions: 3.1.2
> Reporter: bianqi
> Assignee: bianqi
> Priority: Major
> Labels: newbie
> Fix For: 3.3.0
>
> Attachments: HDFS-14438.1.patch
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsVisitorFactory.java#L68
> proccesor -> processor

--
This message was sent by Atlassian JIRA (v7.6.3#76005)

To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14438) Fix typo in OfflineEditsVisitorFactory
[ https://issues.apache.org/jira/browse/HDFS-14438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Surendra Singh Lilhore updated HDFS-14438:
------------------------------------------
Resolution: Fixed
Fix Version/s: 3.3.0
Status: Resolved (was: Patch Available)

Thanks [~bianqi] for the contribution. Added you to the HDFS contributor list. Committed to trunk. Thanks [~dineshchitlangia] for the review.

> Fix typo in OfflineEditsVisitorFactory
> --------------------------------------
>
> Key: HDFS-14438
> URL: https://issues.apache.org/jira/browse/HDFS-14438
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Affects Versions: 3.1.2
> Reporter: bianqi
> Assignee: bianqi
> Priority: Major
> Labels: newbie
> Fix For: 3.3.0
>
> Attachments: HDFS-14438.1.patch
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsVisitorFactory.java#L68
> proccesor -> processor
[jira] [Updated] (HDFS-14438) Fix typo in OfflineEditsVisitorFactory
[ https://issues.apache.org/jira/browse/HDFS-14438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Surendra Singh Lilhore updated HDFS-14438:
------------------------------------------
Summary: Fix typo in OfflineEditsVisitorFactory (was: Fix typo in HDFS for OfflineEditsVisitorFactory.java)

> Fix typo in OfflineEditsVisitorFactory
> --------------------------------------
>
> Key: HDFS-14438
> URL: https://issues.apache.org/jira/browse/HDFS-14438
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Affects Versions: 3.1.2
> Reporter: bianqi
> Assignee: bianqi
> Priority: Major
> Labels: newbie
> Attachments: HDFS-14438.1.patch
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsVisitorFactory.java#L68
> proccesor -> processor
[jira] [Assigned] (HDFS-14438) Fix typo in HDFS for OfflineEditsVisitorFactory.java
[ https://issues.apache.org/jira/browse/HDFS-14438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Surendra Singh Lilhore reassigned HDFS-14438:
---------------------------------------------
Assignee: bianqi

> Fix typo in HDFS for OfflineEditsVisitorFactory.java
> ----------------------------------------------------
>
> Key: HDFS-14438
> URL: https://issues.apache.org/jira/browse/HDFS-14438
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Affects Versions: 3.1.2
> Reporter: bianqi
> Assignee: bianqi
> Priority: Major
> Labels: newbie
> Attachments: HDFS-14438.1.patch
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsVisitorFactory.java#L68
> proccesor -> processor
[jira] [Assigned] (HDFS-14438) Fix typo in HDFS for OfflineEditsVisitorFactory.java
[ https://issues.apache.org/jira/browse/HDFS-14438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Surendra Singh Lilhore reassigned HDFS-14438:
---------------------------------------------
Assignee: (was: Surendra Singh Lilhore)

> Fix typo in HDFS for OfflineEditsVisitorFactory.java
> ----------------------------------------------------
>
> Key: HDFS-14438
> URL: https://issues.apache.org/jira/browse/HDFS-14438
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Affects Versions: 3.1.2
> Reporter: bianqi
> Priority: Major
> Labels: newbie
> Attachments: HDFS-14438.1.patch
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsVisitorFactory.java#L68
> proccesor -> processor
[jira] [Assigned] (HDFS-14438) Fix typo in HDFS for OfflineEditsVisitorFactory.java
[ https://issues.apache.org/jira/browse/HDFS-14438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Surendra Singh Lilhore reassigned HDFS-14438:
---------------------------------------------
Assignee: Surendra Singh Lilhore

> Fix typo in HDFS for OfflineEditsVisitorFactory.java
> ----------------------------------------------------
>
> Key: HDFS-14438
> URL: https://issues.apache.org/jira/browse/HDFS-14438
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Affects Versions: 3.1.2
> Reporter: bianqi
> Assignee: Surendra Singh Lilhore
> Priority: Major
> Labels: newbie
> Attachments: HDFS-14438.1.patch
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsVisitorFactory.java#L68
> proccesor -> processor
[jira] [Commented] (HDFS-14438) Fix typo in HDFS for OfflineEditsVisitorFactory.java
[ https://issues.apache.org/jira/browse/HDFS-14438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833255#comment-16833255 ]

Surendra Singh Lilhore commented on HDFS-14438:
-----------------------------------------------

+1

> Fix typo in HDFS for OfflineEditsVisitorFactory.java
> ----------------------------------------------------
>
> Key: HDFS-14438
> URL: https://issues.apache.org/jira/browse/HDFS-14438
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Affects Versions: 3.1.2
> Reporter: bianqi
> Priority: Major
> Labels: newbie
> Attachments: HDFS-14438.1.patch
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsVisitorFactory.java#L68
> proccesor -> processor
[jira] [Commented] (HDFS-14353) Erasure Coding: metrics xmitsInProgress become to negative.
[ https://issues.apache.org/jira/browse/HDFS-14353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833254#comment-16833254 ]

maobaolong commented on HDFS-14353:
-----------------------------------

[~elgoiri] Thank you for your advice, I've put a comment on the relevant line.

> Erasure Coding: metrics xmitsInProgress become to negative.
> -----------------------------------------------------------
>
> Key: HDFS-14353
> URL: https://issues.apache.org/jira/browse/HDFS-14353
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: datanode, erasure-coding
> Affects Versions: 3.3.0
> Reporter: maobaolong
> Assignee: maobaolong
> Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14353.001.patch, HDFS-14353.002.patch, HDFS-14353.003.patch, HDFS-14353.004.patch, HDFS-14353.005.patch, HDFS-14353.006.patch, screenshot-1.png
[jira] [Updated] (HDFS-14353) Erasure Coding: metrics xmitsInProgress become to negative.
[ https://issues.apache.org/jira/browse/HDFS-14353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

maobaolong updated HDFS-14353:
------------------------------
Attachment: HDFS-14353.006.patch

> Erasure Coding: metrics xmitsInProgress become to negative.
> -----------------------------------------------------------
>
> Key: HDFS-14353
> URL: https://issues.apache.org/jira/browse/HDFS-14353
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: datanode, erasure-coding
> Affects Versions: 3.3.0
> Reporter: maobaolong
> Assignee: maobaolong
> Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14353.001.patch, HDFS-14353.002.patch, HDFS-14353.003.patch, HDFS-14353.004.patch, HDFS-14353.005.patch, HDFS-14353.006.patch, screenshot-1.png
[jira] [Updated] (HDDS-700) Support rack awared node placement policy based on network topology
[ https://issues.apache.org/jira/browse/HDDS-700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sammi Chen updated HDDS-700:
----------------------------
Summary: Support rack awared node placement policy based on network topology (was: Support Node selection based on network topology)

> Support rack awared node placement policy based on network topology
> -------------------------------------------------------------------
>
> Key: HDDS-700
> URL: https://issues.apache.org/jira/browse/HDDS-700
> Project: Hadoop Distributed Data Store
> Issue Type: Sub-task
> Reporter: Xiaoyu Yao
> Assignee: Sammi Chen
> Priority: Major
[jira] [Created] (HDDS-1490) Support configurable containerPlacement policy
Sammi Chen created HDDS-1490:
-----------------------------
Summary: Support configurable containerPlacement policy
Key: HDDS-1490
URL: https://issues.apache.org/jira/browse/HDDS-1490
Project: Hadoop Distributed Data Store
Issue Type: Sub-task
Reporter: Sammi Chen

Support configurable containerPlacement policy to meet different requirements.