[ https://issues.apache.org/jira/browse/HDFS-15253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17081210#comment-17081210 ]

Karthik Palanisamy commented on HDFS-15253:
-------------------------------------------

Thank you [~dineshchitlangia]. I think 1M txns is even low for a normal 
workload. Most of the time, I see regular checkpoints every 1-2 hours. I am ok 
with keeping the default setting. [~weichiu], what do you think?

If we set dfs.image.compress to true, then the user can't use the 
dfs.image.parallel.load option.

Right now, I will update dfs.image.transfer.bandwidthPerSec and leave the 
other settings as they are.
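For reference, the change described here would look like the following in hdfs-site.xml. This is a sketch using the value proposed in this issue (52428800 bytes/s = 50 MB/s), not a shipped default:

```xml
<!-- Throttle fsimage transfer during checkpoint to 50 MB/s.
     The current default of 0 means unthrottled (use all available bandwidth). -->
<property>
  <name>dfs.image.transfer.bandwidthPerSec</name>
  <value>52428800</value>
</property>
```

dfs.namenode.checkpoint.txns would stay at its default of 1000000, per the discussion above.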

> Set default throttle value on dfs.image.transfer.bandwidthPerSec
> ----------------------------------------------------------------
>
>                 Key: HDFS-15253
>                 URL: https://issues.apache.org/jira/browse/HDFS-15253
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Karthik Palanisamy
>            Assignee: Karthik Palanisamy
>            Priority: Major
>
> The default value of dfs.image.transfer.bandwidthPerSec is 0, so fsimage 
> transfers during checkpoint can use the maximum available bandwidth. I 
> think we should throttle this. Many users have experienced namenode 
> failover when transferring a large image (e.g. >25 GB) along with fsimage 
> replication on dfs.namenode.name.dir.
> Proposed settings:
> dfs.image.transfer.bandwidthPerSec=52428800 (50 MB/s)
> dfs.namenode.checkpoint.txns=2000000 (Default is 1M; good to avoid frequent 
> checkpoints. However, the default checkpoint runs once every 6 hours)
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
