[
https://issues.apache.org/jira/browse/HADOOP-1497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
dhruba borthakur updated HADOOP-1497:
-------------------------------------
Component/s: dfs
> Possibility of duplicate blockids if dead-datanodes come back up after
> corresponding files were deleted
> -------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-1497
> URL: https://issues.apache.org/jira/browse/HADOOP-1497
> Project: Hadoop
> Issue Type: Bug
> Components: dfs
> Reporter: dhruba borthakur
> Assignee: dhruba borthakur
>
> Suppose a datanode D has a block B that belongs to file F. Suppose the
> datanode D dies and the namenode replicates those blocks to other datanodes.
> Now, suppose the user deletes file F. The namenode removes all the blocks that
> belonged to file F. Next, suppose a new file F1 is created and the namenode
> generates the same blockid B for this new file F1.
> Suppose the old datanode D comes back to life. Datanode D now holds a stale copy of
> block B: the namenode will treat it as a valid replica of F1's block, even though its
> contents belong to the deleted file F.
> The client might detect this case via CRC checks, but does HDFS need to handle
> this scenario better?
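A minimal sketch of the scenario above as a toy simulation. The class and the
blockMap/datanodeD structures are illustrative stand-ins, not actual HDFS APIs:
{code:java}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical toy model of the blockid-reuse race; names are
// illustrative only and do not correspond to real HDFS classes.
public class BlockIdReuseDemo {

    // Namenode's view: blockId -> file that currently owns it.
    static Map<Long, String> blockMap = new HashMap<>();

    // Blocks physically stored on the (temporarily dead) datanode D.
    static Set<Long> datanodeD = new HashSet<>();

    public static void main(String[] args) {
        long B = 42L; // blockId generated for file F

        // 1. Block B of file F is written to datanode D.
        blockMap.put(B, "F");
        datanodeD.add(B);

        // 2. Datanode D dies; the namenode re-replicates B elsewhere,
        //    but D still holds its old copy on disk.

        // 3. The user deletes F; the namenode forgets block B entirely.
        blockMap.remove(B);

        // 4. A new file F1 happens to receive the same blockid B.
        blockMap.put(B, "F1");

        // 5. D comes back to life and sends a block report containing B.
        for (long reported : datanodeD) {
            if (blockMap.containsKey(reported)) {
                // With nothing but the blockid to go on, the stale
                // replica is indistinguishable from a valid replica
                // of F1's block.
                System.out.println("Block " + reported + " from D accepted as a replica of "
                        + blockMap.get(reported) + ", but its bytes belong to the deleted F");
            }
        }
    }
}
{code}
Without some per-replica version attached to each block, step 5 cannot tell D's
stale copy of B apart from a genuine replica of F1's block; only a client-side
CRC mismatch on read would expose it.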