[ https://issues.apache.org/jira/browse/HDFS-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12927265#action_12927265 ]

Hairong Kuang commented on HDFS-1457:
-------------------------------------

Right, support for compressed images was already checked in with HDFS-1435. So 
I expect a user with a big image to run the NN with image compression enabled, 
which will also reduce the cost of saving the image to local or remote disks 
at the NN.

Then this jira simply adds image transfer throttling. Does this make sense? 
I am glad to hear that Baidu is running Hadoop well. You guys have done a good 
job with Hadoop.
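
For illustration only, below is a minimal sketch of what byte-budget throttling
on the image transfer path could look like. The class, method, and field names
here are hypothetical and are not taken from the attached patch; the idea is
simply to account for bytes sent per short period and sleep once the budget for
that period is spent.

/**
 * Hypothetical throttler sketch: limits average transfer rate by
 * sleeping whenever the byte budget for the current period is used up.
 * Not the actual HDFS-1457 implementation.
 */
public class ImageTransferThrottler {
  private static final long PERIOD_MILLIS = 500;   // length of one accounting period
  private final long bytesPerPeriod;               // byte budget per period
  private long periodStart = System.currentTimeMillis();
  private long bytesThisPeriod = 0;

  public ImageTransferThrottler(long bytesPerSecond) {
    this.bytesPerPeriod = bytesPerSecond * PERIOD_MILLIS / 1000;
  }

  /** Account for numBytes just sent; block if the budget is exhausted. */
  public synchronized void throttle(long numBytes) {
    bytesThisPeriod += numBytes;
    while (bytesThisPeriod >= bytesPerPeriod) {
      long elapsed = System.currentTimeMillis() - periodStart;
      if (elapsed >= PERIOD_MILLIS) {
        // Start a new period, carrying any overshoot into it.
        periodStart += PERIOD_MILLIS;
        bytesThisPeriod -= bytesPerPeriod;
      } else {
        try {
          Thread.sleep(PERIOD_MILLIS - elapsed);
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          return;
        }
      }
    }
  }
}

The image-serving loop would then call throttle(bytesWritten) after each buffer
it writes, keeping the average rate near the configured limit, while image
compression (HDFS-1435) reduces the total number of bytes that need to be
transferred in the first place.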

> Limit transmission rate when transferring image between primary and secondary 
> NNs
> --------------------------------------------------------------------------------
>
>                 Key: HDFS-1457
>                 URL: https://issues.apache.org/jira/browse/HDFS-1457
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>    Affects Versions: 0.22.0
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>             Fix For: 0.22.0
>
>         Attachments: checkpoint-limitandcompress.patch, 
> trunkThrottleImage.patch
>
>
> If the fsimage is very big, the network becomes saturated for a short time 
> when the SecondaryNamenode does a checkpoint, causing the Jobtracker's requests 
> to the Namenode for file data to fail during the job initialization phase. So 
> we limit the transmission speed and compress the transmission to resolve the 
> problem. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.