[ https://issues.apache.org/jira/browse/HDFS-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919628#comment-16919628 ]

Stephen O'Donnell commented on HDFS-13157:
------------------------------------------

This is a very interesting observation. The DatanodeAdminManager writes the 
blocks to be replicated into the replication queue, which is then processed in 
order as the work is allocated to the DataNodes. So it seems you are correct: 
the blocks will be replicated disk by disk.
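
For illustration, here is a minimal, self-contained sketch of that behaviour. 
This is not the actual DatanodeAdminManager code (the classes and numbers are 
made up), but it shows why enqueueing storage by storage and then draining the 
queue in order concentrates the transfer load on one disk at a time:

{code:java}
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

/**
 * Simplified model (not the real Hadoop code) of the first decommission
 * pass: the node's storages (volumes) are walked one at a time, so the
 * replication queue ends up ordered volume by volume, and the transfer
 * threads drain it in exactly that order.
 */
public class SequentialDecommissionSketch {

  record Block(String id, int volume) {}

  public static void main(String[] args) {
    // Two volumes with three blocks each, as reported by the DataNode.
    List<List<Block>> volumes = List.of(
        List.of(new Block("b1", 0), new Block("b2", 0), new Block("b3", 0)),
        List.of(new Block("b4", 1), new Block("b5", 1), new Block("b6", 1)));

    Queue<Block> replicationQueue = new ArrayDeque<>();
    for (List<Block> storage : volumes) {   // outer loop: storage by storage
      for (Block b : storage) {             // inner loop: every block on it
        replicationQueue.add(b);
      }
    }

    // Work is handed out in queue order, so volume 0 is drained
    // completely before volume 1 sees any transfer load.
    for (Block b : replicationQueue) {
      System.out.println("schedule " + b.id() + " from volume " + b.volume());
    }
  }
}
{code}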

{quote}

With that said, perhaps the decommissioning node should be exempted from 
replication altogether (for all blocks with replication >1) so that the load is 
spread randomly throughout all the DataNodes in the cluster and does not get 
concentrated on the decommissioning node.

{quote}

It is also worth keeping in mind that a node in the decommissioning state will 
not receive any new writes, and reads are only directed to it as a last resort 
(i.e. when no other replica is available). Therefore the load on a 
decommissioning node would normally be lower, so it should be able to replicate 
more. Also, each DN can only have a set number of in-flight transfers at once, 
so if the decommissioning node was not able to keep up, I think more of its 
blocks would get scheduled on other nodes as they clear their work queues more 
quickly, but I am not certain of that.
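
To make that capping behaviour concrete, here is a toy scheduling loop. It is 
illustrative only: in real HDFS the per-node cap comes from 
dfs.namenode.replication.max-streams (default 2) and the allocation logic lives 
in BlockManager, but the sketch shows how a source that clears its transfers 
slowly ends up with later blocks assigned to other nodes:

{code:java}
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Toy model of per-DataNode in-flight transfer caps (names made up). */
public class TransferCapSketch {

  // Stand-in for dfs.namenode.replication.max-streams (default 2).
  static final int MAX_STREAMS = 2;

  public static void main(String[] args) {
    // In-flight transfer counts per candidate source node.
    Map<String, Integer> inFlight = new LinkedHashMap<>();
    inFlight.put("decommissioningDN", 0);
    inFlight.put("dn2", 0);
    inFlight.put("dn3", 0);

    Deque<String> pending = new ArrayDeque<>(
        List.of("b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8"));

    int round = 0;
    while (!pending.isEmpty()) {
      round++;
      // A node is only given more work while it is under its cap.
      for (Map.Entry<String, Integer> e : inFlight.entrySet()) {
        while (e.getValue() < MAX_STREAMS && !pending.isEmpty()) {
          System.out.println("round " + round + ": " + e.getKey()
              + " <- " + pending.poll());
          e.setValue(e.getValue() + 1);
        }
      }
      // Pretend the fast nodes finish everything each round, while the
      // slow decommissioning node only finishes one transfer, so it
      // stays near its cap and later blocks flow to the other nodes.
      inFlight.replaceAll((dn, n) ->
          dn.equals("decommissioningDN") ? Math.max(0, n - 1) : 0);
    }
  }
}
{code}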

One observation I had when looking at decommissioning is that the NN write 
lock is held for the entire time it takes to process the blocks on a given DN 
on the first pass. This can be several seconds on a node with a lot of 
blocks. HDFS-10477 observed the same behaviour for recommission and made a 
change to drop the write lock after processing each storage rather than holding 
it for all storages. I wonder if shuffling the iterators would make a change 
like that for starting the decommission impossible?
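
For reference, the locking pattern that change introduced looks roughly like 
the following sketch. The names here are illustrative, not the actual 
FSNamesystem/DatanodeAdminManager API:

{code:java}
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Sketch of the HDFS-10477 style lock chunking: take the namesystem
 * write lock once per storage rather than across the whole first pass,
 * so other NameNode operations can interleave between storages.
 */
public class PerStorageLockSketch {

  private final ReentrantReadWriteLock nsLock = new ReentrantReadWriteLock();

  void startDecommission(List<List<String>> storages) {
    for (List<String> storage : storages) {
      nsLock.writeLock().lock();
      try {
        for (String block : storage) {
          schedule(block); // enqueue this block for replication elsewhere
        }
      } finally {
        nsLock.writeLock().unlock(); // let other NN operations run
      }
    }
  }

  private void schedule(String block) {
    // Placeholder for the real scheduling logic.
  }
}
{code}

My reading of the tension: if the per-storage iterators were shuffled together 
into one interleaved stream, the natural per-storage boundary for releasing the 
lock would disappear, and the lock would presumably have to be dropped every N 
blocks instead, though I may be missing something.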

> Do Not Remove Blocks Sequentially During Decommission 
> ------------------------------------------------------
>
>                 Key: HDFS-13157
>                 URL: https://issues.apache.org/jira/browse/HDFS-13157
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode, namenode
>    Affects Versions: 3.0.0
>            Reporter: David Mollitor
>            Assignee: David Mollitor
>            Priority: Major
>
> From what I understand of [DataNode 
> decommissioning|https://github.com/apache/hadoop/blob/42a1c98597e6dba2e371510a6b2b6b1fb94e4090/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminManager.java],
>  it appears that all the blocks are scheduled for removal _in order_. I'm 
> not 100% sure what the ordering is exactly, but I think it loops through each 
> data volume and schedules each block to be replicated elsewhere. The net 
> effect is that during a decommission, all of the DataNode transfer threads 
> slam on a single volume until it is cleaned out, at which point they all 
> slam on the next volume, and so on.
> Please randomize the block list so that there is a more even distribution 
> across all volumes when decommissioning a node.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)
