[ 
https://issues.apache.org/jira/browse/HDFS-15945?focusedWorklogId=577532&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-577532
 ]

ASF GitHub Bot logged work on HDFS-15945:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 06/Apr/21 12:31
            Start Date: 06/Apr/21 12:31
    Worklog Time Spent: 10m 
      Work Description: tasanuma commented on pull request #2854:
URL: https://github.com/apache/hadoop/pull/2854#issuecomment-814082007


   On second thought, the last commit has a problem. Just after a NameNode 
restart, the NameNode hasn't received any block reports yet, so it sees 
every DataNode as having zero blocks. Therefore, if the NameNode is 
restarted while a DataNode is being decommissioned, that DataNode becomes 
decommissioned immediately, before its blocks have been replicated. 
`TestDecommission#testDecommissionWithNamenodeRestart()` covers this case, 
and it fails at 0aa3649.
   
   In the end, I think we need to consider whether the DataNode has zero 
capacity. If the capacity is zero, the DataNode has a problem with its 
storage and cannot hold any replicas, so we can decommission it safely.
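   
   As a rough, self-contained sketch of that idea (not the actual Hadoop 
code), the decision could look like the following. `DatanodeStats` and the 
method names here are hypothetical stand-ins for the NameNode's real 
per-DataNode state; the real change would go into the decommission health 
check (`isNodeHealthyForDecommissionOrMaintenance` in the log above):

```java
// Hypothetical sketch of the proposed decommission check; DatanodeStats
// stands in for the NameNode's per-DataNode bookkeeping.
public class DecommissionCheckSketch {

  static final class DatanodeStats {
    final long capacityBytes; // total configured capacity
    final int numBlocks;      // blocks the NameNode knows about

    DatanodeStats(long capacityBytes, int numBlocks) {
      this.capacityBytes = capacityBytes;
      this.numBlocks = numBlocks;
    }
  }

  // Zero blocks alone is NOT a safe signal: right after a NameNode restart,
  // every DataNode looks like it has zero blocks until block reports arrive.
  // Zero capacity, however, means the storage is broken and the node cannot
  // hold any replicas, so it can be decommissioned immediately.
  static boolean canDecommissionImmediately(DatanodeStats node) {
    return node.capacityBytes == 0 && node.numBlocks == 0;
  }

  public static void main(String[] args) {
    // DataNode with broken storage: zero capacity, zero blocks.
    DatanodeStats broken = new DatanodeStats(0L, 0);
    // Healthy DataNode just after a NameNode restart: capacity is known
    // from heartbeats, but no block report has been processed yet.
    DatanodeStats freshAfterRestart = new DatanodeStats(1L << 40, 0);

    System.out.println(canDecommissionImmediately(broken));            // true
    System.out.println(canDecommissionImmediately(freshAfterRestart)); // false
  }
}
```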


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 577532)
    Time Spent: 1.5h  (was: 1h 20m)

> DataNodes with zero capacity and zero blocks should be decommissioned 
> immediately
> ---------------------------------------------------------------------------------
>
>                 Key: HDFS-15945
>                 URL: https://issues.apache.org/jira/browse/HDFS-15945
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Takanobu Asanuma
>            Assignee: Takanobu Asanuma
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When there is a storage problem, for example, a DataNode's capacity and 
> block count can both become zero.
>  When we tried to decommission such DataNodes, we ran into an issue where 
> the decommission never completed because the NameNode had not received 
> their first block reports.
> {noformat}
> INFO  blockmanagement.DatanodeAdminManager 
> (DatanodeAdminManager.java:startDecommission(183)) - Starting decommission of 
> 127.0.0.1:58343 
> [DISK]DS-a29de094-2b19-4834-8318-76cda3bd86bf:NORMAL:127.0.0.1:58343 with 0 
> blocks
> INFO  blockmanagement.BlockManager 
> (BlockManager.java:isNodeHealthyForDecommissionOrMaintenance(4587)) - Node 
> 127.0.0.1:58343 hasn't sent its first block report.
> INFO  blockmanagement.DatanodeAdminDefaultMonitor 
> (DatanodeAdminDefaultMonitor.java:check(258)) - Node 127.0.0.1:58343 isn't 
> healthy. It needs to replicate 0 more blocks. Decommission In Progress is 
> still in progress.
> {noformat}
> To make matters worse, even after we stopped these DataNodes, they 
> remained in a dead & decommissioning state until the NameNode was 
> restarted.
> I think such DataNodes should be decommissioned immediately, even if the 
> NameNode hasn't received their first block reports.




