[ https://issues.apache.org/jira/browse/HDFS-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17055798#comment-17055798 ]

Stephen O'Donnell commented on HDFS-15180:
------------------------------------------

[~zhuqi] Thanks for sharing this; it looks promising. It is also good to see 
the patch running in a real-world cluster without any issues.

Looking at your chart, do the orange and blue lines show nodes running the old 
code until around the 5th/6th of March, when you switched to the new fair RW 
lock plus HDFS-15160 and the blocked thread count dropped to almost zero? And 
has the green line been running HDFS-15160 since at least the 4th of March?

I am not sure which metrics we should track to prove this change is good, but 
blocked thread count seems like a good one for now, and your chart looks 
promising. It would also be good to see a line on the chart for 2 or 3 nodes 
where HDFS-15160 is NOT applied, so we can clearly compare the nodes with 
HDFS-15160 against the nodes without it over time.
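
As a rough way to sample that number, a jstack dump or ThreadMXBean gives it 
directly. This is a plain-JVM sketch (not a Hadoop metrics API) that counts 
threads in Thread.State.BLOCKED, i.e. threads waiting to enter a monitor:

{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Counts threads currently blocked on a synchronized section; the
// figure the charts above appear to be tracking.
public class BlockedThreadCount {
  public static void main(String[] args) {
    ThreadMXBean bean = ManagementFactory.getThreadMXBean();
    long blocked = 0;
    for (ThreadInfo info : bean.getThreadInfo(bean.getAllThreadIds())) {
      // Entries can be null for threads that died during the snapshot.
      if (info != null && info.getThreadState() == Thread.State.BLOCKED) {
        blocked++;
      }
    }
    System.out.println("Blocked threads: " + blocked);
  }
}
{code}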

>  DataNode FsDatasetImpl Fine-Grained Locking via BlockPool.
> -----------------------------------------------------------
>
>                 Key: HDFS-15180
>                 URL: https://issues.apache.org/jira/browse/HDFS-15180
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>    Affects Versions: 3.2.0
>            Reporter: zhuqi
>            Assignee: zhuqi
>            Priority: Major
>         Attachments: image-2020-03-10-17-22-57-391.png, 
> image-2020-03-10-17-31-58-830.png, image-2020-03-10-17-34-26-368.png
>
>
> Now the FsDatasetImpl datasetLock is heavy when there are many namespaces in 
> a big cluster. If we can split the FsDatasetImpl datasetLock via block pool, 
> the contention between namespaces should be greatly reduced.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
