[ 
https://issues.apache.org/jira/browse/HDFS-16663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

caozhiqiang updated HDFS-16663:
-------------------------------
    Status: Patch Available  (was: In Progress)

> Allow block reconstruction pending timeout refreshable to increase 
> decommission performance
> -------------------------------------------------------------------------------------------
>
>                 Key: HDFS-16663
>                 URL: https://issues.apache.org/jira/browse/HDFS-16663
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: ec, namenode
>    Affects Versions: 3.4.0
>            Reporter: caozhiqiang
>            Assignee: caozhiqiang
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> In [HDFS-16613|https://issues.apache.org/jira/browse/HDFS-16613], increasing 
> the value of dfs.namenode.replication.max-streams-hard-limit maximizes the IO 
> performance of a decommissioning DN that holds a lot of EC blocks. Besides 
> this, we also need to decrease the value of 
> dfs.namenode.reconstruction.pending.timeout-sec (default 5 minutes) to 
> shorten the interval at which pendingReconstructions is checked. Otherwise 
> the decommissioning node would sit idle waiting for copy tasks for much of 
> this 5-minute window (see the tuning sketch below).
> During the decommission process, we may need to reconfigure these two 
> parameters several times. Since 
> [HDFS-14560|https://issues.apache.org/jira/browse/HDFS-14560], 
> dfs.namenode.replication.max-streams-hard-limit can already be reconfigured 
> dynamically without a namenode restart. The 
> dfs.namenode.reconstruction.pending.timeout-sec parameter also needs to be 
> reconfigurable dynamically (see the dfsadmin sketch below). 
>  
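As a rough illustration of the tuning described above (not part of the patch), the sketch below sets the two properties through the Hadoop Configuration API. The values 128 and 60 are arbitrary examples; in practice these would go into the NameNode's hdfs-site.xml before triggering a reconfiguration.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class DecommissionTuningSketch {
  public static void main(String[] args) {
    Configuration conf = new HdfsConfiguration();

    // Raise the per-DN hard limit on concurrent reconstruction streams so a
    // decommissioning DN with many EC blocks can keep its IO busy (HDFS-16613).
    conf.setInt("dfs.namenode.replication.max-streams-hard-limit", 128); // example value

    // Shorten the pending-reconstruction timeout (in seconds) so timed-out
    // copy tasks are rechecked sooner than the 5-minute default.
    conf.setInt("dfs.namenode.reconstruction.pending.timeout-sec", 60); // example value

    System.out.println(conf.get("dfs.namenode.replication.max-streams-hard-limit"));
    System.out.println(conf.get("dfs.namenode.reconstruction.pending.timeout-sec"));
  }
}
{code}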

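Once a property is refreshable, the usual flow is to edit hdfs-site.xml on the NameNode host and then trigger "hdfs dfsadmin -reconfig namenode <host:ipc_port> start", which is how the HDFS-14560 key is already refreshed. Below is a minimal sketch driving that command programmatically; nn-host:8020 is a placeholder for the NameNode RPC address.

{code:java}
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.tools.DFSAdmin;
import org.apache.hadoop.util.ToolRunner;

public class ReconfigSketch {
  public static void main(String[] args) throws Exception {
    // Equivalent to: hdfs dfsadmin -reconfig namenode nn-host:8020 start
    int rc = ToolRunner.run(new HdfsConfiguration(), new DFSAdmin(),
        new String[] {"-reconfig", "namenode", "nn-host:8020", "start"});

    // Progress can then be polled with the same command using "status" instead
    // of "start"; "properties" lists the keys the NameNode accepts.
    System.exit(rc);
  }
}
{code}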


--
This message was sent by Atlassian Jira
(v8.20.10#820010)
