[ 
https://issues.apache.org/jira/browse/HDFS-17556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

caozhiqiang updated HDFS-17556:
-------------------------------
    Status: Patch Available  (was: In Progress)

> Avoid adding block to neededReconstruction repeatedly in decommission
> ---------------------------------------------------------------------
>
>                 Key: HDFS-17556
>                 URL: https://issues.apache.org/jira/browse/HDFS-17556
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 3.5.0
>            Reporter: caozhiqiang
>            Assignee: caozhiqiang
>            Priority: Major
>
> During the decommission and maintenance process, before a block is added to 
> BlockManager::neededReconstruction, it is checked whether it has already been 
> queued. The check only covers whether the block is in 
> BlockManager::neededReconstruction or in 
> PendingReconstructionBlocks::pendingReconstructions, as shown in the code below. 
> It also needs to check whether the block is in 
> PendingReconstructionBlocks::timedOutItems. Otherwise, 
> DatanodeAdminDefaultMonitor will add the block to 
> BlockManager::neededReconstruction repeatedly whenever it times out in 
> PendingReconstructionBlocks::pendingReconstructions.
>  
> {code:java}
> if (!blockManager.neededReconstruction.contains(block) &&
>     blockManager.pendingReconstruction.getNumReplicas(block) == 0 &&
>     blockManager.isPopulatingReplQueues()) {
>   // Process these blocks only when active NN is out of safe mode.
>   blockManager.neededReconstruction.add(block,
>       liveReplicas, num.readOnlyReplicas(),
>       num.outOfServiceReplicas(),
>       blockManager.getExpectedRedundancyNum(block));
> } {code}
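> A minimal sketch of the extra guard, assuming a hypothetical 
> PendingReconstructionBlocks#isTimedOutBlock(BlockInfo) accessor over 
> timedOutItems (the actual patch may expose this check differently):
> {code:java}
> // Skip blocks that are already queued anywhere in the reconstruction
> // pipeline: neededReconstruction, pendingReconstructions, or the
> // timed-out list that pendingReconstruction moves expired items into.
> if (!blockManager.neededReconstruction.contains(block) &&
>     blockManager.pendingReconstruction.getNumReplicas(block) == 0 &&
>     // hypothetical accessor for PendingReconstructionBlocks::timedOutItems
>     !blockManager.pendingReconstruction.isTimedOutBlock(block) &&
>     blockManager.isPopulatingReplQueues()) {
>   // Process these blocks only when active NN is out of safe mode.
>   blockManager.neededReconstruction.add(block,
>       liveReplicas, num.readOnlyReplicas(),
>       num.outOfServiceReplicas(),
>       blockManager.getExpectedRedundancyNum(block));
> } {code}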



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
