[ https://issues.apache.org/jira/browse/HDDS-4343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Glen Geng updated HDDS-4343:
----------------------------
    Summary: ReplicationManager.handleOverReplicatedContainer does not handle  (was: CLONE - OM client request fails with "failed to commit as key is not found in OpenKey table")

> ReplicationManager.handleOverReplicatedContainer does not handle
> -----------------------------------------------------------------
>
>                 Key: HDDS-4343
>                 URL: https://issues.apache.org/jira/browse/HDDS-4343
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: SCM
>            Reporter: Glen Geng
>            Assignee: Glen Geng
>            Priority: Blocker
>
> {code:java}
> // If there are unhealthy replicas, then we should remove them even if it
> // makes the container violate the placement policy, as excess unhealthy
> // containers are not really useful. It will be corrected later as a
> // mis-replicated container will be seen as under-replicated.
> for (ContainerReplica r : unhealthyReplicas) {
>   if (excess > 0) {
>     sendDeleteCommand(container, r.getDatanodeDetails(), true);
>     excess -= 1;
>   }
>   break;
> }
> // After removing all unhealthy replicas, if the container is still over
> // replicated then we need to check if it is already mis-replicated.
> // If it is, we do no harm by removing excess replicas. However, if it is
> // not mis-replicated, then we can only remove replicas if they don't
> // make the container become mis-replicated.
> {code}
> From the comment, the intent is to remove unhealthy replicas until excess reaches 0. However, the break is unconditional, so the loop exits after the first iteration and at most one unhealthy replica is deleted. It should be:
> {code:java}
> for (ContainerReplica r : unhealthyReplicas) {
>   if (excess > 0) {
>     sendDeleteCommand(container, r.getDatanodeDetails(), true);
>     excess -= 1;
>   } else {
>     break;
>   }
> }
> {code}
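To make the difference concrete, here is a minimal, self-contained sketch. This is not the actual ReplicationManager code: the replica list, the counter, and the delete stand-in are illustrative only. It shows that the unconditional break deletes at most one unhealthy replica, while the proposed loop keeps deleting until excess reaches 0:

{code:java}
import java.util.Arrays;
import java.util.List;

// Simplified stand-in for the over-replication handling loop. The real code
// iterates over ContainerReplica objects and issues SCM delete commands; a
// plain string list and a counter are enough to show the control flow.
public class OverReplicationLoopDemo {

  static int buggyLoop(List<String> unhealthyReplicas, int excess) {
    int deleted = 0;
    for (String r : unhealthyReplicas) {
      if (excess > 0) {
        deleted++;       // stand-in for sendDeleteCommand(...)
        excess -= 1;
      }
      break;             // unconditional: loop body runs at most once
    }
    return deleted;
  }

  static int fixedLoop(List<String> unhealthyReplicas, int excess) {
    int deleted = 0;
    for (String r : unhealthyReplicas) {
      if (excess > 0) {
        deleted++;       // stand-in for sendDeleteCommand(...)
        excess -= 1;
      } else {
        break;           // stop only once excess is exhausted
      }
    }
    return deleted;
  }

  public static void main(String[] args) {
    List<String> unhealthy = Arrays.asList("dn1", "dn2", "dn3");
    // With excess = 2, the buggy loop deletes 1 replica, the fixed loop 2.
    System.out.println("buggy: " + buggyLoop(unhealthy, 2)); // prints 1
    System.out.println("fixed: " + fixedLoop(unhealthy, 2)); // prints 2
  }
}
{code}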