[ https://issues.apache.org/jira/browse/HDFS-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13126136#comment-13126136 ]
dhruba borthakur commented on HDFS-140:
---------------------------------------

I would like to agree with Todd too. Uma: do you have a use-case for why you definitely need this in 0.20?

> When a file is deleted, its blocks remain in the blocksmap till the next
> block report from Datanode
> ---------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-140
>                 URL: https://issues.apache.org/jira/browse/HDFS-140
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 0.20.1
>            Reporter: dhruba borthakur
>            Assignee: Uma Maheswara Rao G
>             Fix For: 0.20.205.0
>
>         Attachments: HDFS-140.20security205.patch
>
>
> When a file is deleted, the namenode sends out block deletion messages to
> the appropriate datanodes. However, the namenode does not delete these blocks
> from the blocksmap. Instead, the processing of the next block report from the
> datanode causes these blocks to get removed from the blocksmap.
> If we desire to make block report processing less frequent, this issue needs
> to be addressed. Also, this introduces nondeterministic behaviour in a few
> unit tests. Another factor to consider is to ensure that duplicate block
> detection is not compromised.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
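The behaviour described in the issue can be illustrated with a toy model. This is a minimal sketch, not the actual HDFS internals: class and method names (`ToyNameNode`, `deleteFile`, `blocksTracked`) are hypothetical; it only shows the shape of the fix, namely removing entries from the blocks map eagerly at delete time instead of waiting for the next block report.

```java
import java.util.*;

// Toy model of HDFS-140 (illustrative names, not real HDFS classes).
// On delete, the namenode queues block invalidations for datanodes;
// the fix sketched here also removes those blocks from the blocks map
// immediately, rather than waiting for the next block report.
class ToyNameNode {
    // block id -> owning file path
    private final Map<Long, String> blocksMap = new HashMap<>();
    // blocks scheduled for deletion on datanodes
    private final List<Long> invalidateQueue = new ArrayList<>();

    void addFile(String path, long... blockIds) {
        for (long b : blockIds) blocksMap.put(b, path);
    }

    void deleteFile(String path) {
        Iterator<Map.Entry<Long, String>> it = blocksMap.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<Long, String> e = it.next();
            if (e.getValue().equals(path)) {
                invalidateQueue.add(e.getKey()); // tell datanodes to delete
                it.remove(); // the eager fix: don't wait for a block report
            }
        }
    }

    int blocksTracked() { return blocksMap.size(); }
    int blocksPendingInvalidation() { return invalidateQueue.size(); }
}

public class Hdfs140Sketch {
    public static void main(String[] args) {
        ToyNameNode nn = new ToyNameNode();
        nn.addFile("/a", 1L, 2L, 3L);
        nn.addFile("/b", 4L);
        nn.deleteFile("/a");
        // Only /b's block remains tracked; /a's three blocks are
        // queued for invalidation and already gone from the map.
        System.out.println(nn.blocksTracked());             // 1
        System.out.println(nn.blocksPendingInvalidation()); // 3
    }
}
```

With the lazy behaviour the issue describes, `blocksTracked()` would still report 4 until the datanode's next block report arrived; eager removal also makes tests deterministic, which is the nondeterminism the description mentions.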