[jira] [Commented] (HDFS-15382) Split DataNode FsDatasetImpl lock to blockpool volume lock

2020-06-02 Thread Aiphago (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17123379#comment-17123379
 ] 

Aiphago commented on HDFS-15382:


After improving our du, the in-cache copy time is very low. We also improved it so that only the replicas of each BlockPoolSlice are copied.

 
{code:java}
2020-06-02 12:44:16,586 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed: Copy replica infos, blockPoolId: BP-xxx, replicas size: 665, duration: 0ms
2020-06-02 12:44:16,586 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed: Refresh dfs used, bpid: BP-xxx, replicas size: 665, dfsUsed: 15925882188 on volume: DS-4f1f820a-460f-4fa9-89be-49caed604a52, duration: 0ms , isopen hardlink false
2020-06-02 12:44:16,586 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed: Copy replica infos, blockPoolId: BP-xxx, replicas size: 699, duration: 1ms
2020-06-02 12:44:16,586 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed: Copy replica infos, blockPoolId: BP-xxx, replicas size: 698, duration: 1ms
2020-06-02 12:44:16,587 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed: Copy replica infos, blockPoolId: BP-xxx, replicas size: 638, duration: 0ms
2020-06-02 12:44:16,587 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed: Refresh dfs used, bpid: BP-xxx, replicas size: 638, dfsUsed: 16519661992 on volume: DS-b2eb6423-d0bd-493e-a102-d317e55815ce, duration: 0ms , isopen hardlink false
2020-06-02 12:44:16,588 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed: Copy replica infos, blockPoolId: BP-xxx, replicas size: 644, duration: 0ms
2020-06-02 12:44:16,588 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed: Refresh dfs used, bpid: BP-xxx, replicas size: 644, dfsUsed: 16636348641 on volume: DS-83a2deeb-2389-4036-9f13-df61fc6b35f6, duration: 0ms , isopen hardlink false
2020-06-02 12:44:16,588 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed: Copy replica infos, BP-xxx, replicas size: 663, duration: 0ms{code}
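
The per-BlockPoolSlice copy described above can be sketched roughly as follows. This is a minimal illustration, not the actual ReplicaCachingGetSpaceUsed code; the class and method names (PerPoolDfsUsed, refreshDfsUsed) and the flat replica-length map are hypothetical stand-ins. The point it shows is that the critical section only copies one pool's replica lengths, so the lock is held for almost no time and other pools are untouched.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: snapshot one block pool's replica lengths under that
// pool's own lock, then sum the bytes outside the lock.
class PerPoolDfsUsed {
    // replica length per block id, keyed by block pool id (stand-in for the
    // DataNode's real replica map)
    private final Map<String, Map<Long, Long>> replicasByPool = new ConcurrentHashMap<>();
    private final Map<String, ReentrantLock> poolLocks = new ConcurrentHashMap<>();

    private ReentrantLock lockFor(String bpid) {
        return poolLocks.computeIfAbsent(bpid, k -> new ReentrantLock());
    }

    public void addReplica(String bpid, long blockId, long numBytes) {
        ReentrantLock lock = lockFor(bpid);
        lock.lock();
        try {
            replicasByPool.computeIfAbsent(bpid, k -> new HashMap<>())
                          .put(blockId, numBytes);
        } finally {
            lock.unlock();
        }
    }

    public long refreshDfsUsed(String bpid) {
        List<Long> copy;
        ReentrantLock lock = lockFor(bpid);
        lock.lock();
        try {
            // short critical section: copy this pool's replica lengths only
            copy = new ArrayList<>(replicasByPool.getOrDefault(bpid, Map.of()).values());
        } finally {
            lock.unlock();
        }
        long used = 0;  // summation happens outside the lock
        for (long bytes : copy) {
            used += bytes;
        }
        return used;
    }
}
```

Because only the ArrayList copy is inside the lock, the 0-1 ms "Copy replica infos" durations in the log above are what one would expect even with several hundred replicas per pool.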
 

> Split DataNode FsDatasetImpl lock to blockpool volume lock 
> ---
>
> Key: HDFS-15382
> URL: https://issues.apache.org/jira/browse/HDFS-15382
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Aiphago
>Assignee: Aiphago
>Priority: Major
> Fix For: 3.2.1
>
> Attachments: image-2020-06-02-1.png
>
>
> In HDFS-15180 we split the lock to block pool granularity. But when one volume 
> is under heavy load, it blocks other requests in the same block pool on 
> different volumes. So we split the lock into two levels to avoid this and to 
> improve DataNode performance.
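
The two-level scheme described in the issue could be sketched roughly like this; the class name TwoLevelDatasetLock, the lock/unlock API, and the key format are illustrative assumptions, not the patch's actual code. The idea it demonstrates: a shared read lock per block pool plus an exclusive lock per (block pool, volume), so a slow disk only stalls requests for its own volume.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of a two-level DataNode dataset lock: block-pool level
// (shared) plus volume level (exclusive).
class TwoLevelDatasetLock {
    private final Map<String, ReentrantReadWriteLock> poolLocks = new ConcurrentHashMap<>();
    private final Map<String, ReentrantLock> volumeLocks = new ConcurrentHashMap<>();

    public void lock(String bpid, String storageId) {
        // shared across volumes: concurrent requests to other volumes in the
        // same block pool are not blocked
        poolLocks.computeIfAbsent(bpid, k -> new ReentrantReadWriteLock())
                 .readLock().lock();
        // exclusive per (block pool, volume): serializes work on this disk
        volumeLocks.computeIfAbsent(bpid + "/" + storageId, k -> new ReentrantLock())
                   .lock();
    }

    public void unlock(String bpid, String storageId) {
        volumeLocks.get(bpid + "/" + storageId).unlock();
        poolLocks.get(bpid).readLock().unlock();
    }
}
```

Under this layout, two requests to different volumes of the same block pool both take the pool's read lock (which they can share) and disjoint volume locks, so neither waits on the other.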



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15382) Split DataNode FsDatasetImpl lock to blockpool volume lock

2020-06-02 Thread Aiphago (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17123372#comment-17123372
 ] 

Aiphago commented on HDFS-15382:


We have now been running this patch in our production cluster for some days. Here 
are the metrics from some randomly chosen DataNodes; we added the metric before 
upgrading with this patch. The unit is ms.

!image-2020-06-02-1.png|width=923,height=233!

> Split DataNode FsDatasetImpl lock to blockpool volume lock 
> ---
>
> Key: HDFS-15382
> URL: https://issues.apache.org/jira/browse/HDFS-15382
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Aiphago
>Assignee: Aiphago
>Priority: Major
> Fix For: 3.2.1
>
> Attachments: image-2020-06-02-1.png
>
>
> In HDFS-15180 we split the lock to block pool granularity. But when one volume 
> is under heavy load, it blocks other requests in the same block pool on 
> different volumes. So we split the lock into two levels to avoid this and to 
> improve DataNode performance.


