[ https://issues.apache.org/jira/browse/HDFS-11285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Lantao Jin updated HDFS-11285:
------------------------------
    Description: 
We have seen the use case of decommissioning DataNodes that are already dead or unresponsive and are not expected to rejoin the cluster. In a large cluster, we encountered more than 100 nodes that were dead while decommissioning, and their {{Under replicated blocks}} and {{Blocks with no live replicas}} counts were all ZERO. This was actually fixed in [HDFS-7374|https://issues.apache.org/jira/browse/HDFS-7374]; after that fix, running refreshNodes twice eliminates this case. However, it seems the fix was lost in the [HDFS-7411|https://issues.apache.org/jira/browse/HDFS-7411] refactoring. We are using a Hadoop version based on 2.7.1, and only the following operations can transition the status from {{Dead, DECOMMISSION_INPROGRESS}} to {{Dead, DECOMMISSIONED}}:
# Remove the node from hdfs-exclude
# refreshNodes
# Re-add it to hdfs-exclude
# refreshNodes
So, why was this code removed in the refactoring into the new DecommissionManager? (A self-contained sketch of the missing transition follows the quoted issue below.)
{code:java}
if (!node.isAlive) {
  LOG.info("Dead node " + node + " is decommissioned immediately.");
  node.setDecommissioned();
}
{code}


> Dead DataNodes keep a long time in (Dead, DECOMMISSION_INPROGRESS), and never transition to (Dead, DECOMMISSIONED)
> -------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-11285
>                 URL: https://issues.apache.org/jira/browse/HDFS-11285
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.7.1
>            Reporter: Lantao Jin
>         Attachments: DecomStatus.png
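For context, here is a minimal, runnable sketch of the state transition that the quoted check implemented. Everything below (the class, its fields, and the startDecommission method) is a stand-in written for this message, not the real DecommissionManager code; it only models the dead-node short-circuit that HDFS-7374 added and that the HDFS-7411 refactor appears to have dropped.
{code:java}
// Stand-in model of the dead-node short-circuit (not HDFS source code).
public class DeadNodeDecommissionSketch {

  enum AdminState { NORMAL, DECOMMISSION_INPROGRESS, DECOMMISSIONED }

  static class DatanodeInfo {
    final String name;
    final boolean isAlive;
    AdminState adminState = AdminState.NORMAL;

    DatanodeInfo(String name, boolean isAlive) {
      this.name = name;
      this.isAlive = isAlive;
    }

    void setDecommissioned() { adminState = AdminState.DECOMMISSIONED; }

    @Override
    public String toString() { return name; }
  }

  // The essence of the removed check: a dead node has no replicas left to
  // drain, so it can be marked DECOMMISSIONED immediately instead of sitting
  // in DECOMMISSION_INPROGRESS forever.
  static void startDecommission(DatanodeInfo node) {
    if (!node.isAlive) {
      System.out.println("Dead node " + node + " is decommissioned immediately.");
      node.setDecommissioned();
      return;
    }
    node.adminState = AdminState.DECOMMISSION_INPROGRESS;
    // A live node would stay in this state until all of its blocks are
    // re-replicated elsewhere.
  }

  public static void main(String[] args) {
    DatanodeInfo dead = new DatanodeInfo("dn-042", false);
    startDecommission(dead);
    System.out.println(dead + " -> " + dead.adminState); // DECOMMISSIONED
  }
}
{code}
On a patched 2.7 line, the equivalent check would presumably live in DecommissionManager#startDecommission (or be re-evaluated by its Monitor), so that a node that is already dead when decommissioning begins transitions straight to DECOMMISSIONED.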