[
https://issues.apache.org/jira/browse/HDFS-611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12771191#action_12771191
]
Zheng Shao commented on HDFS-611:
---------------------------------
A4. Do "synchronized(fsdataset)" in "void decDfsUsed(...)". That function is now
called only from "BlockFileDeleteTask".
A5. Do "synchronized void decDfsUsed(...)". This moves the lock down to the
volume, which should make it faster.
Although A5 might help us avoid some contention, I don't see any code that
locks a volume; I guess we always lock the FSDataset. I prefer A4.
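
For clarity, here is a minimal sketch of how the two options differ, assuming a
simplified FSVolume that keeps a dfsUsed counter and holds a reference to its
enclosing FSDataset. The field and method names are illustrative, not taken
from the attached patches.

class FSVolume {
  private final Object fsdataset; // the enclosing FSDataset, used as the lock in A4
  private long dfsUsed;           // bytes used on this volume

  FSVolume(Object fsdataset) {
    this.fsdataset = fsdataset;
  }

  // A4: lock the whole FSDataset inside the method. All volumes contend on one
  // lock, but this matches the rest of the code, which always locks the
  // FSDataset, and the method is only called from the block-deletion task.
  void decDfsUsedA4(long value) {
    synchronized (fsdataset) {
      dfsUsed -= value;
    }
  }

  // A5: "synchronized void decDfsUsed(...)" locks only this volume. It is
  // finer-grained, but only pays off if other code paths also lock per volume.
  synchronized void decDfsUsedA5(long value) {
    dfsUsed -= value;
  }
}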
> Heartbeat times from Datanodes increase when there are plenty of blocks to
> delete
> ----------------------------------------------------------------------------------
>
> Key: HDFS-611
> URL: https://issues.apache.org/jira/browse/HDFS-611
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: data-node
> Affects Versions: 0.20.1, 0.21.0, 0.22.0
> Reporter: dhruba borthakur
> Assignee: Zheng Shao
> Fix For: 0.20.2, 0.21.0, 0.22.0
>
> Attachments: HDFS-611.branch-19.patch, HDFS-611.branch-20.patch,
> HDFS-611.trunk.patch
>
>
> I am seeing that when we delete a large directory that has plenty of blocks,
> the heartbeat times from datanodes increase significantly from the normal
> value of 3 seconds to as large as 50 seconds or so. The heartbeat thread in
> the Datanode deletes a bunch of blocks sequentially, which causes the
> heartbeat times to increase.
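
As a hedged illustration of the general fix direction (not the attached patch
itself): the heartbeat path can hand the file deletions to a background worker
so it only enqueues work, along the lines of the "BlockFileDeleteTask"
mentioned above. The class and executor names below are assumptions for the
sketch.

import java.io.File;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class AsyncBlockDeleter {
  // Single background thread that performs the actual disk deletions.
  private final ExecutorService deletionExecutor =
      Executors.newSingleThreadExecutor();

  // Called from the heartbeat path: enqueue and return immediately instead of
  // doing the disk I/O inline.
  void deleteAsync(File blockFile, File metaFile) {
    deletionExecutor.execute(new BlockFileDeleteTask(blockFile, metaFile));
  }

  private static class BlockFileDeleteTask implements Runnable {
    private final File blockFile;
    private final File metaFile;

    BlockFileDeleteTask(File blockFile, File metaFile) {
      this.blockFile = blockFile;
      this.metaFile = metaFile;
    }

    @Override
    public void run() {
      // The deletions now happen off the heartbeat thread.
      blockFile.delete();
      metaFile.delete();
    }
  }
}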