[ https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240101#comment-13240101 ]
Uma Maheswara Rao G commented on HDFS-3119:
-------------------------------------------

Nicholas, thanks a lot for taking a look. I tried this case in my cluster with only one block; in that case the block itself should have priority anyway. As for your question: I don't see any over-replication processing happening from the neededReplications priority queues. We just remove the block from the needed-replication queues. Am I missing something?

{code}
if (numEffectiveReplicas >= requiredReplication) {
  if ( (pendingReplications.getNumReplicas(block) > 0) ||
       (blockHasEnoughRacks(block)) ) {
    neededReplications.remove(block, priority); // remove from neededReplications
    neededReplications.decrementReplicationIndex(priority);
    NameNode.stateChangeLog.info("BLOCK* "
        + "Removing block " + block
        + " from neededReplications as it has enough replicas.");
    continue;
  }
}
{code}

Over-replication processing (processOverReplicatedBlock) happens straight away from the addStoredBlock and setReplication calls. Anyway, let's see what happened in Andreina's cluster.

> Overreplicated block is not deleted even after the replication factor is
> reduced after sync followed by closing that file
> ------------------------------------------------------------------------
>
>                 Key: HDFS-3119
>                 URL: https://issues.apache.org/jira/browse/HDFS-3119
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 0.24.0
>            Reporter: J.Andreina
>            Priority: Minor
>             Fix For: 0.24.0, 0.23.2
>
>
> Cluster setup:
> --------------
> 1 NN, 2 DNs, replication factor 2, block report interval 3 s, block size 256 MB
>
> Step 1: write a file "filewrite.txt" of size 90 bytes with sync (not closed).
> Step 2: change the replication factor to 1 using the command: "./hdfs dfs -setrep 1 /filewrite.txt"
> Step 3: close the file.
> * On the NN side, the log message "Decreasing replication from 2 to 1 for /filewrite.txt" occurred, but the over-replicated block is not deleted even after the block report is sent from the DN.
> * Listing the file in the console using "./hdfs dfs -ls" shows the replication factor for that file as 1.
> * The fsck report for the file shows that it is replicated to 2 datanodes.
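For anyone wanting to reproduce the steps above programmatically rather than via the shell, here is a minimal client-side sketch. This is my own illustration, not code from the report: it assumes a default Configuration pointing at the running cluster, uses FSDataOutputStream.hflush() for the "sync" step, and calls FileSystem.setReplication() in place of the "./hdfs dfs -setrep" shell command.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SetRepAfterSyncRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/filewrite.txt");

    // Step 1: write 90 bytes at replication factor 2 and sync without closing.
    FSDataOutputStream out = fs.create(file, (short) 2);
    out.write(new byte[90]);
    out.hflush(); // flush to the DNs while the file is still under construction

    // Step 2: reduce the replication factor while the file is still open.
    fs.setReplication(file, (short) 1);

    // Step 3: close the file; the second replica should now be over-replicated
    // and, per this report, is never deleted.
    out.close();
  }
}
{code}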