[ https://issues.apache.org/jira/browse/HADOOP-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tom White updated HADOOP-1135:
------------------------------

       Resolution: Fixed
    Fix Version/s: 0.13.0
           Status: Resolved  (was: Patch Available)

I've just committed this. Thanks Dhruba!

(I've marked it as fixed in 0.13.0, but there is still an open question as to 
whether this merits a 0.12.2 release.)

> A block report processing may incorrectly cause the namenode to delete blocks 
> ------------------------------------------------------------------------------
>
>                 Key: HADOOP-1135
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1135
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>            Reporter: dhruba borthakur
>         Assigned To: dhruba borthakur
>             Fix For: 0.13.0
>
>         Attachments: blockReportInvalidateBlock2.patch
>
>
> When a block report arrives at the namenode, the namenode goes through all 
> the blocks on that datanode. If a block is not valid, it is marked for 
> deletion. The blocks-to-be-deleted are sent to the datanode as a response to 
> the next heartbeat RPC. The namenode sends at most 100 blocks-to-be-deleted 
> at a time; this limit was introduced as part of HADOOP-994. The bug is that 
> if the number of blocks-to-be-deleted exceeds 100, the namenode incorrectly 
> marks all the remaining blocks in the block report for deletion.
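
For readers following along, here is a minimal sketch of the failure mode
described above. It is not the actual FSNamesystem code: the Block type, the
isValid predicate, the method names, and the constant are simplified
stand-ins based on the description in this issue.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Predicate;

    class BlockReportSketch {
        // Per-heartbeat cap on blocks-to-be-deleted (the limit from HADOOP-994).
        static final int MAX_INVALIDATE = 100;

        record Block(long id) {}

        // Buggy pattern: once the cap is reached, the first clause of the
        // condition is true for every block, so valid blocks get marked too.
        static List<Block> collectBuggy(List<Block> report, Predicate<Block> isValid) {
            List<Block> toDelete = new ArrayList<>();
            for (Block b : report) {
                if (toDelete.size() >= MAX_INVALIDATE || !isValid.test(b)) {
                    toDelete.add(b); // wrong: adds valid blocks past the 100th entry
                }
            }
            return toDelete;
        }

        // Corrected pattern: validity alone decides what is marked; the
        // 100-block cap should only throttle how many deletions are sent
        // in each heartbeat reply.
        static List<Block> collectFixed(List<Block> report, Predicate<Block> isValid) {
            List<Block> toDelete = new ArrayList<>();
            for (Block b : report) {
                if (!isValid.test(b)) {
                    toDelete.add(b);
                }
            }
            return toDelete; // caller drains at most MAX_INVALIDATE per heartbeat
        }
    }

Presumably the attached patch decouples these two concerns in the real code;
the sketch only illustrates the shape of the error.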

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.