[ 
https://issues.apache.org/jira/browse/HDFS-15382?focusedWorklogId=730333&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-730333
 ]

ASF GitHub Bot logged work on HDFS-15382:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 21/Feb/22 12:26
            Start Date: 21/Feb/22 12:26
    Worklog Time Spent: 10m 
      Work Description: Hexiaoqiao commented on a change in pull request #3941:
URL: https://github.com/apache/hadoop/pull/3941#discussion_r811060744



##########
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
##########
@@ -504,15 +503,13 @@ private void createWorkPlan(NodePlan plan) throws 
DiskBalancerException {
     Map<String, String> storageIDToVolBasePathMap = new HashMap<>();
     FsDatasetSpi.FsVolumeReferences references;
     try {
-      try(AutoCloseableLock lock = this.dataset.acquireDatasetReadLock()) {

Review comment:
       I suggest keeping it and switching it to #dataSetLock, as in the other 
improvements. If it is no longer necessary, we should open a separate JIRA to 
remove it.
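
A minimal sketch of the pattern suggested here, i.e. keep the guarded section 
but route it through the dataset-level lock; DataSetReadGuard and the 
dataSetLock field below are illustrative stand-ins, not the actual classes 
touched by this patch:

    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Stand-in for the dataset-level lock the comment calls #dataSetLock.
    class DataSetReadGuard implements AutoCloseable {
      private final ReadWriteLock lock;

      DataSetReadGuard(ReadWriteLock lock) {
        this.lock = lock;
        this.lock.readLock().lock();   // taken when the guard is created
      }

      @Override
      public void close() {
        lock.readLock().unlock();      // released when the try block exits
      }
    }

    class CreateWorkPlanSketch {
      private final ReadWriteLock dataSetLock = new ReentrantReadWriteLock();

      void createWorkPlan() {
        // Keep the guarded section around the volume-reference scan rather
        // than dropping the lock acquisition entirely.
        try (DataSetReadGuard guard = new DataSetReadGuard(dataSetLock)) {
          // ... populate storageIDToVolBasePathMap under the read lock ...
        }
      }
    }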

##########
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
##########
@@ -3654,18 +3661,21 @@ public int getVolumeCount() {
   }
 
   void stopAllDataxceiverThreads(FsVolumeImpl volume) {
-    try (AutoCloseableLock lock = datasetWriteLock.acquire()) {

Review comment:
       Same as comment #1.

##########
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
##########
@@ -232,118 +232,6 @@ public void setUp() throws IOException {
     assertEquals(0, dataset.getNumFailedVolumes());
   }
 
-  @Test(timeout=10000)

Review comment:
       I do not understand why this unit test was deleted here.

##########
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
##########
@@ -464,42 +427,40 @@ public AutoCloseableLock acquireDatasetReadLock() {
    * Activate a volume to serve requests.
    * @throws IOException if the storage UUID already exists.
    */
-  private void activateVolume(
+  private synchronized void activateVolume(

Review comment:
       I suggest adding a comment to describe that we use #synchronized to 
protect the volume instance and datasetLock to protect the block pool instance.
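
Roughly the split of responsibilities such a comment would document; the class, 
fields, and exception below are illustrative stand-ins rather than the 
FsDatasetImpl code:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    class VolumeActivationSketch {
      // Volume bookkeeping, guarded by "synchronized" on this object.
      private final Map<String, String> storageUuidToVolume = new HashMap<>();
      // Per-block-pool locks, standing in for the dataset/block pool lock.
      private final Map<String, ReentrantReadWriteLock> blockPoolLocks =
          new ConcurrentHashMap<>();

      /**
       * Activate a volume to serve requests.
       *
       * synchronized guards the volume bookkeeping above; the per-block-pool
       * lock guards the block pool state updated while the volume is
       * registered with each block pool.
       */
      synchronized void activateVolume(String storageUuid, String volumePath,
          Iterable<String> blockPoolIds) {
        if (storageUuidToVolume.containsKey(storageUuid)) {
          throw new IllegalStateException(
              "Storage UUID already exists: " + storageUuid);
        }
        storageUuidToVolume.put(storageUuid, volumePath);
        for (String bpid : blockPoolIds) {
          ReentrantReadWriteLock bpLock = blockPoolLocks
              .computeIfAbsent(bpid, k -> new ReentrantReadWriteLock());
          bpLock.writeLock().lock();
          try {
            // ... add this volume's replica map entries for the block pool ...
          } finally {
            bpLock.writeLock().unlock();
          }
        }
      }
    }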




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.



Issue Time Tracking
-------------------

    Worklog Id:     (was: 730333)
    Time Spent: 1h  (was: 50m)

> Split FsDatasetImpl from blockpool lock to blockpool volume lock 
> -----------------------------------------------------------------
>
>                 Key: HDFS-15382
>                 URL: https://issues.apache.org/jira/browse/HDFS-15382
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Mingxiang Li
>            Assignee: Mingxiang Li
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HDFS-15382-sample.patch, image-2020-06-02-1.png, 
> image-2020-06-03-1.png
>
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> In HDFS-15180 we split the lock down to block pool granularity. However, when 
> one volume is under heavy load, it blocks other requests that target the same 
> block pool but a different volume. So we split the lock into two levels to 
> avoid this and to improve DataNode performance.
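
A minimal sketch of the two-level (block pool, then volume) locking described 
above, assuming a simple lock-per-key map; the class and method names are 
illustrative, not the patch's actual implementation:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Two lock levels: a block-pool lock for pool-wide operations, plus one
    // lock per (block pool, volume) pair so heavy load on one volume no longer
    // blocks requests for other volumes in the same block pool.
    class TwoLevelLockSketch {
      private final Map<String, ReentrantReadWriteLock> poolLocks =
          new ConcurrentHashMap<>();
      private final Map<String, ReentrantReadWriteLock> volumeLocks =
          new ConcurrentHashMap<>();

      // Per-volume operation: hold the pool lock in read mode (excluding
      // pool-wide writers) and the volume lock exclusively for this volume.
      void runOnVolume(String bpid, String storageUuid, Runnable op) {
        ReentrantReadWriteLock poolLock =
            poolLocks.computeIfAbsent(bpid, k -> new ReentrantReadWriteLock());
        ReentrantReadWriteLock volLock = volumeLocks.computeIfAbsent(
            bpid + "/" + storageUuid, k -> new ReentrantReadWriteLock());
        poolLock.readLock().lock();
        volLock.writeLock().lock();
        try {
          op.run();
        } finally {
          volLock.writeLock().unlock();
          poolLock.readLock().unlock();
        }
      }
    }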


