[ https://issues.apache.org/jira/browse/HDFS-9876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jing Zhao updated HDFS-9876:
----------------------------
    Attachment: HDFS-9876.001.patch

Remove unused internalBlock.

> shouldProcessOverReplicated should not count number of pending replicas
> -----------------------------------------------------------------------
>
>                 Key: HDFS-9876
>                 URL: https://issues.apache.org/jira/browse/HDFS-9876
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Takuya Fukudome
>            Assignee: Jing Zhao
>         Attachments: HDFS-9876.000.patch, HDFS-9876.001.patch, HDFS-9876.001.patch
>
>
> Currently, when checking whether we should process an over-replicated block in
> {{addStoredBlock}}, we count both the reported replicas and the pending
> replicas. However, {{processOverReplicatedBlock}} chooses excess replicas
> only among the reported storages of the block. So in a situation where the
> over-replicated replicas/internal blocks reside only in the pending queue, we
> will not be able to choose any extra replica to delete.
> For contiguous blocks, this causes {{chooseExcessReplicasContiguous}} to do
> nothing. But for striped blocks, this may cause an endless loop in
> {{chooseExcessReplicasStriped}} in the following while loop:
> {code}
> while (candidates.size() > 1) {
>   List<DatanodeStorageInfo> replicasToDelete = placementPolicy
>       .chooseReplicasToDelete(nonExcess, candidates, (short) 1,
>           excessTypes, null, null);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
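A minimal standalone sketch of the endless-loop scenario described above. The `Storage` record and the simplified `chooseReplicasToDelete` below are illustrative stand-ins, not the actual HDFS classes: the real placement policy can only pick excess replicas from reported storages, so when every excess copy is still pending, the candidate set never shrinks and the real loop (which has no iteration guard) would spin forever.

```java
import java.util.ArrayList;
import java.util.List;

public class OverReplicatedLoopSketch {
    // Illustrative stand-in for DatanodeStorageInfo: just a name plus
    // whether the replica on it has been reported (vs. only pending).
    record Storage(String name, boolean reported) {}

    // Stand-in for placementPolicy.chooseReplicasToDelete: it may only
    // choose among replicas that were actually reported by a DataNode.
    static List<Storage> chooseReplicasToDelete(List<Storage> candidates) {
        for (Storage s : candidates) {
            if (s.reported()) {
                return List.of(s);
            }
        }
        return List.of(); // nothing deletable among pending-only replicas
    }

    public static void main(String[] args) {
        // Two excess internal blocks, both still pending, none reported.
        List<Storage> candidates = new ArrayList<>(List.of(
                new Storage("dn1", false), new Storage("dn2", false)));

        int iterations = 0;
        // Guard added here for the demo; the loop in
        // chooseExcessReplicasStriped has no such guard.
        while (candidates.size() > 1 && iterations < 10) {
            List<Storage> toDelete = chooseReplicasToDelete(candidates);
            candidates.removeAll(toDelete); // empty list -> no progress
            iterations++;
        }
        System.out.println("iterations=" + iterations
                + " remaining=" + candidates.size());
        // With pending-only candidates, the guard is what stops the loop:
        // prints "iterations=10 remaining=2"
    }
}
```

The patch avoids entering this path at all by not counting pending replicas in `shouldProcessOverReplicated`, so the over-replication check fires only once the extra replicas are actually reported.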