[ 
https://issues.apache.org/jira/browse/HDFS-15177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17039723#comment-17039723
 ] 

zhuqi commented on HDFS-15177:
------------------------------

Hi [~sodonnell]

Thanks for your reply.

Next, I will switch our cluster to the fair lock, which the 3.x branch already 
uses, and then see whether the blocked-thread problem improves.
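For reference, a minimal sketch of the fair vs. non-fair difference using plain 
java.util.concurrent primitives; the class and field names are placeholders and 
this is not the actual FsDatasetImpl code:

{code:java}
import java.util.concurrent.locks.ReentrantLock;

public class FairLockSketch {
  // A non-fair lock (the default) lets newly arriving threads "barge" ahead of
  // threads already queued, which can starve long-waiting heartbeat or
  // deletion threads under heavy contention.
  private final ReentrantLock nonFairLock = new ReentrantLock();

  // Passing true requests fairness: the longest-waiting thread acquires the
  // lock next, trading some raw throughput for bounded wait times.
  private final ReentrantLock fairLock = new ReentrantLock(true);

  public void runUnderFairLock(Runnable criticalSection) {
    fairLock.lock();
    try {
      criticalSection.run();
    } finally {
      fairLock.unlock();
    }
  }
}
{code}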

I agree with you that FoldedTreeSet should be improved for better performance, 
and I will capture the namenode stack the next time the datanode becomes slow, 
to see whether the FoldedTreeSet problem occurs.

I am also excited to see 
[HDFS-15150|https://issues.apache.org/jira/browse/HDFS-15150] and 
[HDFS-15160|https://issues.apache.org/jira/browse/HDFS-15160]; they are good 
news for improving the concurrency and throughput of the lock, and a good start 
toward a lock-per-block-pool proposal.
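As a rough illustration of the read/write-lock direction those JIRAs take, the 
sketch below shows read-only dataset operations sharing a read lock while 
mutations take the exclusive write lock; the names are placeholders and this is 
only a conceptual sketch, not the actual patch (a further step could be keeping 
one such lock per block pool, keyed by block pool id):

{code:java}
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Supplier;

public class DatasetReadWriteLockSketch {
  // Fair read-write lock: many read-only dataset operations can run
  // concurrently, while mutating operations take the exclusive write lock.
  private final ReadWriteLock datasetLock = new ReentrantReadWriteLock(true);

  /** Read-only path, e.g. looking up replica metadata. */
  public <T> T readOperation(Supplier<T> op) {
    datasetLock.readLock().lock();
    try {
      return op.get();
    } finally {
      datasetLock.readLock().unlock();
    }
  }

  /** Mutating path, e.g. adding or invalidating a replica. */
  public void writeOperation(Runnable op) {
    datasetLock.writeLock().lock();
    try {
      op.run();
    } finally {
      datasetLock.writeLock().unlock();
    }
  }
}
{code}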

 

> Split datanode invalid block deletion to avoid holding the FsDatasetImpl lock 
> for too long.
> --------------------------------------------------------------------------------------
>
>                 Key: HDFS-15177
>                 URL: https://issues.apache.org/jira/browse/HDFS-15177
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>            Reporter: zhuqi
>            Assignee: zhuqi
>            Priority: Major
>         Attachments: image-2020-02-18-22-39-00-642.png, 
> image-2020-02-18-22-51-28-624.png, image-2020-02-18-22-52-59-202.png, 
> image-2020-02-18-22-55-38-661.png
>
>
> In our cluster, a datanode receives delete commands containing very many 
> blocks when many block pools share the same datanode and the datanode has 
> about 30 storage dirs; this causes the FsDatasetImpl lock to be held for too 
> long.
>  
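The quoted description above proposes splitting the invalidation work so the 
dataset lock is not held for the whole deletion. One way such a split could 
look is sketched below, assuming a plain ReentrantLock and placeholder method 
names (DELETE_BATCH_SIZE, removeReplicaFromMemory, deleteBlockFilesAsync are 
hypothetical, not part of the actual patch):

{code:java}
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

public class BatchedInvalidateSketch {
  private static final int DELETE_BATCH_SIZE = 100; // hypothetical tuning knob

  private final ReentrantLock datasetLock = new ReentrantLock(true);

  /**
   * Deletes the given block ids in small batches, releasing the dataset lock
   * between batches so heartbeats and readers are not blocked for the whole
   * invalidation.
   */
  public void invalidateInBatches(List<Long> blockIds) {
    for (int start = 0; start < blockIds.size(); start += DELETE_BATCH_SIZE) {
      int end = Math.min(start + DELETE_BATCH_SIZE, blockIds.size());
      List<Long> batch = blockIds.subList(start, end);
      datasetLock.lock();
      try {
        for (long blockId : batch) {
          removeReplicaFromMemory(blockId); // in-memory bookkeeping only
        }
      } finally {
        datasetLock.unlock();
      }
      // The on-disk file deletion can happen outside the lock, e.g. handed
      // off to an async deletion thread.
      deleteBlockFilesAsync(batch);
    }
  }

  private void removeReplicaFromMemory(long blockId) { /* placeholder */ }

  private void deleteBlockFilesAsync(List<Long> batch) { /* placeholder */ }
}
{code}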



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
