[ https://issues.apache.org/jira/browse/HDFS-1268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12883176#action_12883176 ]

jinglong.liujl commented on HDFS-1268:
--------------------------------------

In my case, deleting 600 blocks means waiting 6 heartbeat periods. During that 
time the disk may reach its capacity, and then the slow removal of invalidated 
blocks causes write failures.
In the general case the default value (100) works well, but in this extreme 
case it is not enough. The limit is currently derived from heartbeatInterval, 
but in the case above, "slower heartbeat + more blocks per heartbeat" does not 
remove more blocks in the same period, because the limit only grows linearly 
with the interval, so the overall removal rate stays the same.
Why not make this parameter configurable?
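
For concreteness, here is a rough sketch (plain Java with assumed values, not 
the actual NameNode code) of the per-heartbeat cap quoted in the issue 
description and of the 600-block arithmetic above:

    // Sketch only: illustrates the limit formula quoted in this issue,
    // not the real FSNamesystem code paths.
    public class InvalidateLimitSketch {
      public static void main(String[] args) {
        long heartbeatIntervalMs = 3 * 1000; // 3 s is the usual default heartbeat
        int blockInvalidateLimit = 100;      // hard default referenced in the issue

        // Per-heartbeat cap, as quoted in the issue description:
        int perHeartbeat = Math.max(blockInvalidateLimit,
            20 * (int) (heartbeatIntervalMs / 1000));   // max(100, 60) = 100

        // With 600 blocks queued in recentInvalidateSets, a DataNode needs
        // 600 / 100 = 6 heartbeats (about 18 s) before all are scheduled.
        int queued = 600;
        int heartbeatsNeeded = (queued + perHeartbeat - 1) / perHeartbeat;
        System.out.println("per heartbeat = " + perHeartbeat
            + ", heartbeats needed = " + heartbeatsNeeded);

        // Once the interval exceeds 5 s the cap grows as 20 * interval, so the
        // removal rate is pinned near 20 blocks/s: a slower heartbeat does not
        // remove more blocks per unit of time.
      }
    }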





> Extract blockInvalidateLimit as a separate configuration
> ---------------------------------------------------------
>
>                 Key: HDFS-1268
>                 URL: https://issues.apache.org/jira/browse/HDFS-1268
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>    Affects Versions: 0.22.0
>            Reporter: jinglong.liujl
>         Attachments: patch.diff
>
>
>       If many blocks pile up in recentInvalidateSets, at most 
> Math.max(blockInvalidateLimit, 
> 20*(int)(heartbeatInterval/1000)) invalid blocks can be carried in a 
> heartbeat (100 by default). Under high write stress, removal of invalidated 
> blocks cannot keep up with the rate of writes. 
>     We extract blockInvalidateLimit into a separate config parameter so that 
> users can choose the right value for their cluster. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
