[
https://issues.apache.org/jira/browse/HDFS-15730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17252352#comment-17252352
]
Ayush Saxena commented on HDFS-15730:
-------------------------------------
Do you think it is correct for the excess block to get deleted before
reconstruction? I think we should prevent it from being deleted, in case there
is a corrupt replica, until the block gets reconstructed. If we delete the
excess index, say b0, and then lose the only remaining b0 as well, we might
lose the chance of reconstruction too.
Thoughts?
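To make the risk concrete, here is a minimal, hypothetical sketch (plain Java, not Hadoop code; the class and method names are illustrative). With an RS(6,3) policy, a striped block group stays recoverable only while healthy replicas cover at least 6 distinct internal block indices, so an excess copy of b0 is a safety margin until reconstruction of the corrupt indices finishes:

```java
import java.util.Set;

/**
 * Hypothetical sketch, not Hadoop code: models recoverability of an
 * RS(6,3) striped block group. A group can be rebuilt as long as healthy
 * replicas cover at least DATA_UNITS (6) distinct internal block indices.
 */
public class EcExcessSketch {
    static final int DATA_UNITS = 6;

    static boolean recoverable(Set<Integer> healthyIndices) {
        // Any 6 distinct internal blocks (data or parity) can rebuild the rest.
        return healthyIndices.size() >= DATA_UNITS;
    }

    public static void main(String[] args) {
        // Suppose indices 6..8 are corrupt and awaiting reconstruction, and
        // b0 has an excess healthy copy: healthy indices are still {0..5}.
        Set<Integer> withB0 = Set.of(0, 1, 2, 3, 4, 5);
        System.out.println(recoverable(withB0));        // still recoverable

        // If the excess b0 is deleted first and the last b0 is then lost,
        // only 5 distinct indices remain and the group cannot be rebuilt.
        Set<Integer> afterLosingB0 = Set.of(1, 2, 3, 4, 5);
        System.out.println(recoverable(afterLosingB0)); // unrecoverable
    }
}
```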
> Erasure Coding: Fix unit test bug of
> TestAddOverReplicatedStripedBlocks.testProcessOverReplicatedAndCorruptStripedBlock.
> ------------------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-15730
> URL: https://issues.apache.org/jira/browse/HDFS-15730
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Jinglun
> Assignee: Jinglun
> Priority: Minor
> Attachments: HDFS-15730.001.patch
>
>
> I'm working on EC replication and found a bug in the test case:
> TestAddOverReplicatedStripedBlocks#testProcessOverReplicatedAndCorruptStripedBlock.
> The test case adds 2 redundant blocks and then checks the block indices. It
> states that 'the redundant internal blocks will not be deleted before the
> corrupted block gets reconstructed.'
> But actually the redundant block could be deleted while there is a corrupted
> block. The test only passes because it runs very fast and checks the block
> indices before the redundant block is deleted and the deletion is reported to
> the NameNode.
> The patch is both a fix and an explanation of the bug.
>
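The timing dependence described above is a classic flaky-test pattern: a one-shot assertion observes NameNode state before an asynchronous deletion is reported. A hedged, Hadoop-independent sketch of the safer poll-until-condition idiom (the names here are illustrative and not the actual patch, which is in HDFS-15730.001.patch):

```java
import java.util.function.BooleanSupplier;

/** Illustrative only: poll a condition until it holds or a timeout expires,
 *  instead of asserting immediately against state that changes asynchronously. */
public class WaitForSketch {
    static boolean waitFor(BooleanSupplier condition, long checkEveryMs, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;              // condition reached within the timeout
            }
            Thread.sleep(checkEveryMs);   // back off before re-checking
        }
        return condition.getAsBoolean();  // one final check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Simulated asynchronous event: becomes true ~100 ms after start,
        // like a DataNode block report eventually arriving at the NameNode.
        BooleanSupplier reported = () -> System.currentTimeMillis() - start > 100;
        System.out.println(waitFor(reported, 10, 1000));
    }
}
```

Polling for the expected final state makes the test assert on what the NameNode eventually knows, rather than on a transient snapshot taken before the block report lands.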
--
This message was sent by Atlassian Jira
(v8.3.4#803005)