[ https://issues.apache.org/jira/browse/HDFS-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13176461#comment-13176461 ]

dhruba borthakur commented on HDFS-140:
---------------------------------------

hi Uma, your technical point makes sense, but my feeling is that it is too late 
to roll it into the 0.20 release. It is already fixed in newer releases, so new 
users will automatically get this fix. People who are stuck on older 0.20-based 
releases can pull this patch into their code base on their own, can they not? 
                
> When a file is deleted, its blocks remain in the blocksmap till the next 
> block report from Datanode
> ---------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-140
>                 URL: https://issues.apache.org/jira/browse/HDFS-140
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 0.20.1
>            Reporter: dhruba borthakur
>            Assignee: Uma Maheswara Rao G
>         Attachments: HDFS-140.20security205.patch
>
>
> When a file is deleted, the namenode sends out block deletion messages to 
> the appropriate datanodes. However, the namenode does not delete these blocks 
> from the blocksmap. Instead, the processing of the next block report from the 
> datanode causes these blocks to get removed from the blocksmap.
> If we desire to make block report processing less frequent, this issue needs 
> to be addressed. It also introduces nondeterministic behavior in a few 
> unit tests. Another factor to consider is ensuring that duplicate block 
> detection is not compromised.
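
To make the lazy-vs-eager distinction in the description concrete, here is a
minimal, self-contained Java sketch of the two behaviors. All names
(BlocksMapSketch, deleteFileEager, scheduleDeletionOnDatanodes, and so on) are
illustrative assumptions, not the actual NameNode code or the attached patch.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    public class BlocksMapSketch {

        // Maps a block id to the set of datanodes believed to hold a replica.
        static class BlocksMap {
            final Map<Long, Set<String>> blockToNodes = new HashMap<>();

            void addBlock(long blockId, String datanode) {
                blockToNodes.computeIfAbsent(blockId, k -> new HashSet<>())
                            .add(datanode);
            }

            void removeBlock(long blockId) {
                blockToNodes.remove(blockId);
            }
        }

        private final BlocksMap blocksMap = new BlocksMap();

        // Behavior described in the issue: deletions are scheduled on the
        // datanodes, but the blocksmap entries survive until the next
        // block report.
        void deleteFileLazy(long[] blockIds) {
            for (long id : blockIds) {
                scheduleDeletionOnDatanodes(id);
                // blocksmap entry intentionally left behind
            }
        }

        // Requested behavior: remove the entries from the blocksmap at
        // delete time, so correctness no longer depends on how often block
        // reports arrive.
        void deleteFileEager(long[] blockIds) {
            for (long id : blockIds) {
                scheduleDeletionOnDatanodes(id);
                blocksMap.removeBlock(id);
            }
        }

        // The lazy cleanup path: a full block report listing only live
        // blocks causes stale blocksmap entries to be dropped.
        void processBlockReport(Set<Long> reportedBlocks) {
            for (Long id : new HashSet<>(blocksMap.blockToNodes.keySet())) {
                if (!reportedBlocks.contains(id)) {
                    blocksMap.removeBlock(id);
                }
            }
        }

        private void scheduleDeletionOnDatanodes(long blockId) {
            // Placeholder: in HDFS the namenode queues block-invalidation
            // commands for the datanodes; omitted here.
        }
    }

The tradeoff the last quoted sentence hints at: eager removal must not break
duplicate block detection if a datanode later re-reports an already-deleted
block.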
