[ https://issues.apache.org/jira/browse/HDFS-15375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17124096#comment-17124096 ]

Surendra Singh Lilhore commented on HDFS-15375:
-----------------------------------------------

{quote}- neededReconstruction.update(block, repl.liveReplicas() + pendingNum,{quote}
We can't remove {{pendingNum}} from here; if this count doesn't include {{pendingNum}}, it will create extra replication tasks. In your case all the blocks are corrupted, which means the live replica count will be zero. You can add some logic based on a live-replica-zero check.
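
A minimal sketch of that live-replica-zero idea (illustrative only, not the actual patch; the class and method names are hypothetical, while liveReplicas and pendingNum stand for repl.liveReplicas() and pendingReconstruction.getNumReplicas(block) in the quoted code):
{code:java}
final class EffectiveReplicaCount {

  // A block with no live replica has no valid source left to copy from,
  // so it should be treated as corrupt rather than merely low-redundancy.
  static boolean isFullyCorrupt(int liveReplicas) {
    return liveReplicas == 0;
  }

  // Keep pendingNum in the count while a live replica exists (so no extra
  // replication tasks are scheduled), but don't let pending replicas mask
  // a fully corrupt block.
  static int countedReplicas(int liveReplicas, int pendingNum) {
    return isFullyCorrupt(liveReplicas) ? 0 : liveReplicas + pendingNum;
  }

  public static void main(String[] args) {
    // Two replicas already pending when the third replica gets corrupted:
    // live = 0, pending = 2 -> counted 0, block belongs in the corrupt queue.
    System.out.println(countedReplicas(0, 2));
    // Healthy under-replicated block: live = 1, pending = 1 -> counted 2,
    // no extra reconstruction work is generated.
    System.out.println(countedReplicas(1, 1));
  }
}
{code}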

> Reconstruction Work should not happen for Corrupt Block
> -------------------------------------------------------
>
>                 Key: HDFS-15375
>                 URL: https://issues.apache.org/jira/browse/HDFS-15375
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: hemanthboyina
>            Assignee: hemanthboyina
>            Priority: Major
>         Attachments: HDFS-15375-testrepro.patch, HDFS-15375.001.patch
>
>
> In BlockManager#updateNeededReconstructions, while updating 
> neededReconstruction we are adding the pending reconstruction count to the 
> live replicas:
> {code:java}
> int pendingNum = pendingReconstruction.getNumReplicas(block);
> int curExpectedReplicas = getExpectedRedundancyNum(block);
> if (!hasEnoughEffectiveReplicas(block, repl, pendingNum)) {
>   neededReconstruction.update(block, repl.liveReplicas() + pendingNum,{code}
> But if two replicas were in pending reconstruction (due to corruption) and 
> the third replica is also corrupted, the block should be in 
> QUEUE_WITH_CORRUPT_BLOCKS, but because of the above logic it gets added to 
> QUEUE_LOW_REDUNDANCY. This makes the RedundancyMonitor reconstruct a 
> corrupted block, which is wrong.
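
As a side note on the queue mix-up described above, a simplified sketch (not the actual LowRedundancyBlocks logic; QUEUE_SUFFICIENT is a made-up label) shows how the inflated count routes the block to the wrong queue:
{code:java}
final class QueueChoiceSketch {

  // Simplified stand-in for the queue decision: a zero count means there is
  // nothing left to copy from; anything below the expected redundancy is
  // treated as low redundancy and picked up by the RedundancyMonitor.
  static String chooseQueue(int countedReplicas, int expectedReplicas) {
    if (countedReplicas == 0) {
      return "QUEUE_WITH_CORRUPT_BLOCKS";
    }
    return countedReplicas < expectedReplicas
        ? "QUEUE_LOW_REDUNDANCY"
        : "QUEUE_SUFFICIENT"; // hypothetical label, not an HDFS queue
  }

  public static void main(String[] args) {
    // live = 0, pending = 2, expected = 3:
    System.out.println(chooseQueue(0 + 2, 3)); // QUEUE_LOW_REDUNDANCY (the reported bug)
    System.out.println(chooseQueue(0, 3));     // QUEUE_WITH_CORRUPT_BLOCKS (the expected outcome)
  }
}
{code}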


