[ https://issues.apache.org/jira/browse/HDFS-611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12771181#action_12771181 ]
Zheng Shao commented on HDFS-611:
---------------------------------
There are 3 approaches:
A1. Decrement the dfs usage when we schedule the task.
A2. Pass a handle to FSDataset into the BlockFileDeleter ctor, so we can do
"synchronized(fsdataset) { ... }" in the BlockFileDeleteTask.
A3. Add a method in FSDataset for decrementing the dfs usage for a volume.
A1 is not good because the dfs usage won't be accurate.
A3 changes the interface - but I think the method to decrement dfs usage
shouldn't be exposed from FSDataset.
I prefer A2.
Will that work?
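A minimal sketch of what A2 could look like, assuming the delete task is a
Runnable. Only FSDataset, BlockFileDeleteTask, and the
"synchronized(fsdataset)" pattern come from the comment above; the Volume
interface, decDfsUsed, and the field names are illustrative assumptions,
not the attached patches.
{code:java}
import java.io.File;

// Hypothetical stand-in for the real per-volume accounting in FSDataset.
interface Volume {
  void decDfsUsed(long bytes);
}

class BlockFileDeleteTask implements Runnable {
  private final Object fsdataset; // FSDataset handle passed via the ctor (A2)
  private final Volume volume;
  private final File blockFile;
  private final File metaFile;

  BlockFileDeleteTask(Object fsdataset, Volume volume,
                      File blockFile, File metaFile) {
    this.fsdataset = fsdataset;
    this.volume = volume;
    this.blockFile = blockFile;
    this.metaFile = metaFile;
  }

  @Override
  public void run() {
    // Measure before deleting, then do the disk I/O outside any lock.
    long bytesFreed = blockFile.length() + metaFile.length();
    blockFile.delete();
    metaFile.delete();
    // Hold the dataset lock only for the bookkeeping, so slow deletes
    // never stall threads that synchronize on the dataset.
    synchronized (fsdataset) {
      volume.decDfsUsed(bytesFreed);
    }
  }
}
{code}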
> Heartbeat times from Datanodes increase when there are plenty of blocks to
> delete
> ----------------------------------------------------------------------------------
>
> Key: HDFS-611
> URL: https://issues.apache.org/jira/browse/HDFS-611
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: data-node
> Affects Versions: 0.20.1, 0.21.0, 0.22.0
> Reporter: dhruba borthakur
> Assignee: Zheng Shao
> Fix For: 0.20.2, 0.21.0, 0.22.0
>
> Attachments: HDFS-611.branch-19.patch, HDFS-611.branch-20.patch,
> HDFS-611.trunk.patch
>
>
> I am seeing that when we delete a large directory that has plenty of blocks,
> the heartbeat times from datanodes increase significantly, from the normal
> value of 3 seconds to as large as 50 seconds or so. The heartbeat thread in
> the Datanode deletes a bunch of blocks sequentially, which causes the
> heartbeat times to increase.
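The description above pins the latency on sequential deletes done by the
heartbeat thread itself, so the natural fix direction is to hand the file
deletes to a background thread. An illustrative sketch under that
assumption - the class and method names here are hypothetical, not what
the attached patches actually do:
{code:java}
import java.io.File;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical helper: the heartbeat path only enqueues deletions and
// returns immediately; a background thread performs the disk I/O.
class AsyncBlockDeleter {
  private final ExecutorService pool = Executors.newSingleThreadExecutor();

  void deleteAsync(final File blockFile, final File metaFile) {
    pool.execute(new Runnable() {
      @Override
      public void run() {
        blockFile.delete();
        metaFile.delete();
      }
    });
  }

  void shutdown() {
    pool.shutdown();
  }
}
{code}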