[ https://issues.apache.org/jira/browse/HDFS-16819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17625449#comment-17625449 ]

ASF GitHub Bot commented on HDFS-16819:
---------------------------------------

tomscut commented on code in PR #5074:
URL: https://github.com/apache/hadoop/pull/5074#discussion_r1007672138


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java:
##########
@@ -1912,9 +1907,8 @@ public ReplicaHandler createTemporary(StorageType 
storageType,
       return new ReplicaHandler(newReplicaInfo, ref);
     } finally {
       if (dataNodeMetrics != null) {
-        // Create temporary operation hold write lock twice.
-        long createTemporaryOpMs = Time.monotonicNow() - startHoldLockTimeMs
-            + holdLockTimeMs;
+        // Create temporary operation hold write lock once.

Review Comment:
   This comment can also be removed. The other changes look good to me.
   





>  Remove the redundant write lock in FsDatasetImpl#createTemporary 
> ------------------------------------------------------------------
>
>                 Key: HDFS-16819
>                 URL: https://issues.apache.org/jira/browse/HDFS-16819
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Haiyang Hu
>            Assignee: Haiyang Hu
>            Priority: Major
>              Labels: pull-request-available
>
>  In FsDatasetImpl#createTemporary (line 1840), the writeLock seems unnecessary: 
> the readLock is already held inside volumeMap.get(). From the code logic, the 
> writeLock here can probably be removed.
> {code:java}
> public ReplicaHandler createTemporary(StorageType storageType,
>     String storageId, ExtendedBlock b, boolean isTransfer)
>     throws IOException {
>   long startTimeMs = Time.monotonicNow();
>   long writerStopTimeoutMs = datanode.getDnConf().getXceiverStopTimeout();
>   ReplicaInfo lastFoundReplicaInfo = null;
>   boolean isInPipeline = false;
>   do {
>    try (AutoCloseableLock lock = lockManager.writeLock(LockLevel.BLOCK_POOl,
>        b.getBlockPoolId())) { // the writeLock here may be removable
>      ReplicaInfo currentReplicaInfo =
>          volumeMap.get(b.getBlockPoolId(), b.getBlockId());
>      if (currentReplicaInfo == lastFoundReplicaInfo) {
>        break;
>      } else {
>        isInPipeline = currentReplicaInfo.getState() == ReplicaState.TEMPORARY
>            || currentReplicaInfo.getState() == ReplicaState.RBW;
>        /*
>         * If the current block is not PROVIDED and old, reject.
>         * else If transfer request, then accept it.
>         * else if state is not RBW/Temporary, then reject
>         * If current block is PROVIDED, ignore the replica.
>         */
>        if (((currentReplicaInfo.getGenerationStamp() >= b
>            .getGenerationStamp()) || (!isTransfer && !isInPipeline))
>            && !isReplicaProvided(currentReplicaInfo)) {
>          throw new ReplicaAlreadyExistsException("Block " + b
>              + " already exists in state " + currentReplicaInfo.getState()
>              + " and thus cannot be created.");
>        }
>        lastFoundReplicaInfo = currentReplicaInfo;
>      }
>     }
> {code}
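The principle behind the proposed change can be illustrated with a self-contained sketch (using plain `java.util.concurrent` locks rather than Hadoop's actual block-pool lock manager, so class and method names below are illustrative, not the real FsDatasetImpl API): a lookup that only reads the replica map needs a read lock, not a write lock, so concurrent readers are not serialized against each other.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical stand-in for the replica-map access pattern discussed above.
class ReplicaMapSketch {
  private final Map<Long, String> replicaMap = new HashMap<>();
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  // Read-only lookup: a read lock is sufficient. Taking the write lock here
  // would needlessly block other readers, which is the kind of redundancy
  // HDFS-16819 targets in createTemporary.
  String lookup(long blockId) {
    lock.readLock().lock();
    try {
      return replicaMap.get(blockId);
    } finally {
      lock.readLock().unlock();
    }
  }

  // Mutation still requires the write lock for exclusive access.
  void add(long blockId, String replica) {
    lock.writeLock().lock();
    try {
      replicaMap.put(blockId, replica);
    } finally {
      lock.writeLock().unlock();
    }
  }
}
```

This is only a sketch of the locking pattern; the actual change is in PR #5074.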



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
