[ https://issues.apache.org/jira/browse/HDFS-611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12766851#action_12766851 ]

Zheng Shao commented on HDFS-611:
---------------------------------

+1 on the idea.

Encountered this bug on 0.19 while running an HDFS stress test: the DataNode 
was not able to send its received-block list, so the client got a "block not 
replicated" exception.

I think we can start with 1 thread per volume, and in case it cannot keep up, 
we can make that "1" configurable.
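
To make that concrete, here is a minimal sketch of the idea in Java (class and 
method names are made up, not the actual patch): the heartbeat path only 
enqueues block files, and one daemon thread per volume drains the queue and 
does the actual disk I/O.

import java.io.File;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class VolumeBlockDeleter extends Thread {
  private final BlockingQueue<File> pending = new LinkedBlockingQueue<File>();

  VolumeBlockDeleter(String volumeName) {
    super("BlockDeleter-" + volumeName);
    setDaemon(true);  // do not keep the datanode alive on shutdown
  }

  /** Called from the heartbeat path; returns immediately. */
  void scheduleDelete(File blockFile) {
    pending.add(blockFile);
  }

  public void run() {
    while (true) {
      try {
        File f = pending.take();  // block until work arrives
        if (!f.delete()) {
          // deletion failed; a real implementation would log and retry
        }
      } catch (InterruptedException ie) {
        return;  // interrupted: shut down with the datanode
      }
    }
  }
}

Making the "1" configurable would then just mean replacing the single thread 
per volume with a small pool draining the same queue.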


> Heartbeat times from Datanodes increase when there are plenty of blocks to 
> delete
> ----------------------------------------------------------------------------------
>
>                 Key: HDFS-611
>                 URL: https://issues.apache.org/jira/browse/HDFS-611
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>
> I am seeing that when we delete a large directory that has plenty of blocks, 
> the heartbeat times from datanodes increase significantly, from the normal 
> value of 3 seconds to as large as 50 seconds or so. The heartbeat thread in 
> the Datanode deletes a bunch of blocks sequentially, which causes the 
> heartbeat times to increase.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
