[ 
https://issues.apache.org/jira/browse/HDFS-15253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17072267#comment-17072267
 ] 

Dinesh Chitlangia commented on HDFS-15253:
------------------------------------------

[~kpalanisamy] - Thanks for filing this jira. Yes, restricting the bandwidth 
to 50 MB/s makes sense. When I work with customers who run HDFS at scale, 
that is the first thing I recommend to them.
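
For reference, that throttle is a single property in hdfs-site.xml. A 
minimal sketch (the value is in bytes per second, so 50 MB/s = 52428800):

<property>
  <name>dfs.image.transfer.bandwidthPerSec</name>
  <!-- 50 MB/s, in bytes per second; the default 0 means unthrottled -->
  <value>52428800</value>
</property>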

 

Regarding dfs.image.compress, I have fairly little experience with it and 
have not seen much benefit beyond the reduced file size.
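
For anyone who does want to experiment with it, a sketch of the relevant 
properties (dfs.image.compression.codec is only consulted when 
dfs.image.compress is true):

<property>
  <name>dfs.image.compress</name>
  <!-- compress the fsimage as it is written; trades CPU for file size -->
  <value>true</value>
</property>
<property>
  <name>dfs.image.compression.codec</name>
  <!-- codec used for the compressed image; DefaultCodec is the stock default -->
  <value>org.apache.hadoop.io.compress.DefaultCodec</value>
</property>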

dfs.namenode.checkpoint.txns can vary based on cluster usage, so no matter 
what value is set as the default, a large set of users would still have to 
tune it for their workload. I would recommend we leave it at the default 
of 1M.
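
For users who do need to tune it, a sketch of both checkpoint triggers in 
hdfs-site.xml (a checkpoint fires when either threshold is reached first; 
the values shown are the stock Apache defaults, and some distributions ship 
a longer period):

<property>
  <name>dfs.namenode.checkpoint.txns</name>
  <!-- checkpoint after this many uncheckpointed transactions -->
  <value>1000000</value>
</property>
<property>
  <name>dfs.namenode.checkpoint.period</name>
  <!-- or after this many seconds since the last checkpoint -->
  <value>3600</value>
</property>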

> Set default throttle value on dfs.image.transfer.bandwidthPerSec
> ----------------------------------------------------------------
>
>                 Key: HDFS-15253
>                 URL: https://issues.apache.org/jira/browse/HDFS-15253
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Karthik Palanisamy
>            Assignee: Karthik Palanisamy
>            Priority: Major
>
> The default value of dfs.image.transfer.bandwidthPerSec is 0, so fsimage 
> transfers during checkpointing can use the maximum available bandwidth. I 
> think we should throttle this. Many users have experienced namenode 
> failover when transferring a large image (e.g. >25 GB) while the fsimage 
> is also replicated across dfs.namenode.name.dir.
> Proposed settings:
> dfs.image.transfer.bandwidthPerSec=52428800 (50 MB/s)
> dfs.namenode.checkpoint.txns=2000000 (default is 1M; raising it helps 
> avoid frequent checkpoints, though the time-based trigger still runs a 
> checkpoint once every 6 hours)
>  


