[ 
https://issues.apache.org/jira/browse/HDFS-14560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16862478#comment-16862478
 ] 

Stephen O'Donnell commented on HDFS-14560:
------------------------------------------

When committing this, it would be great if we could apply it to the 3.1 and 3.2 
branches too. The change cherry-picks back to 3.2 cleanly. For 3.1 there is a 
conflict. I have uploaded a 3.1 patch with the conflict resolved.

> Allow block replication parameters to be refreshable
> ----------------------------------------------------
>
>                 Key: HDFS-14560
>                 URL: https://issues.apache.org/jira/browse/HDFS-14560
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>    Affects Versions: 3.3.0
>            Reporter: Stephen O'Donnell
>            Assignee: Stephen O'Donnell
>            Priority: Major
>         Attachments: HDFS-14560.001.patch, HDFS-14560.002.patch, 
> HDFS-14560.003.patch, HDFS-14560.004.patch, HDFS-14560.005.patch
>
>
> There are 3 key parameters that control the speed of block replication across 
> the cluster:
> {code}
> dfs.namenode.replication.max-streams
> dfs.namenode.replication.max-streams-hard-limit
> dfs.namenode.replication.work.multiplier.per.iteration
> {code}
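> For context, these are currently static settings in hdfs-site.xml and are only picked 
> up when the namenode restarts. A typical snippet (the values below are purely 
> illustrative, not recommendations) looks like:
> {code}
> <property>
>   <name>dfs.namenode.replication.max-streams</name>
>   <value>2</value>
> </property>
> <property>
>   <name>dfs.namenode.replication.max-streams-hard-limit</name>
>   <value>4</value>
> </property>
> <property>
>   <name>dfs.namenode.replication.work.multiplier.per.iteration</name>
>   <value>2</value>
> </property>
> {code}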
> These are used when decommissioning nodes and when under-replicated blocks are 
> being recovered across the cluster. There are times when it is desirable to 
> increase the speed of replication and then reduce it again (e.g. during off-peak 
> hours) without restarting the namenode.
> This Jira is to allow these parameters to be reconfigured / refreshed at runtime, 
> without a namenode restart.
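> Assuming this is exposed through the existing namenode reconfiguration framework 
> (one possible approach, not settled by this description), a refresh would look 
> roughly like:
> {code}
> # update the values in hdfs-site.xml on the namenode, then trigger a re-read
> hdfs dfsadmin -reconfig namenode <namenode_host>:<ipc_port> start
> # check whether the reconfiguration task has completed
> hdfs dfsadmin -reconfig namenode <namenode_host>:<ipc_port> status
> # list the properties the namenode currently supports reconfiguring at runtime
> hdfs dfsadmin -reconfig namenode <namenode_host>:<ipc_port> properties
> {code}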



