[ https://issues.apache.org/jira/browse/HDFS-16774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17609969#comment-17609969 ]
ASF GitHub Bot commented on HDFS-16774:
---------------------------------------

ZanderXu commented on code in PR #4903:
URL: https://github.com/apache/hadoop/pull/4903#discussion_r981060797


##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java:
##########
@@ -1891,23 +1894,22 @@ public void delayDeleteReplica() {
       // If this replica is deleted from memory, the client would got an ReplicaNotFoundException.
       assertNotNull(ds.getStoredBlock(bpid, extendedBlock.getBlockId()));

-      // Make it resume the removeReplicaFromMem method
+      // Make it resume the removeReplicaFromMem method.
       semaphore.release(1);

       // Sleep for 1 second so that datanode can complete invalidate.

Review Comment:
   How about change this comment to `Waiting for the async deletion task finish`?

##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java:
##########
@@ -1891,23 +1894,22 @@ public void delayDeleteReplica() {
-      GenericTestUtils.waitFor(new com.google.common.base.Supplier<Boolean>() {
-        @Override public Boolean get() {
-          return ds.asyncDiskService.countPendingDeletions() == 0;
-        }
-      }, 100, 1000);
+      GenericTestUtils.waitFor(() -> ds.asyncDiskService.countPendingDeletions() == 0,
+          100, 1000);

Review Comment:
   single line.
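The gate-and-wait pattern under discussion (a fault-injector semaphore pausing the async delete, then a lambda-based `waitFor` replacing the anonymous `Supplier`) can be sketched with plain JDK classes. Everything below (`gate`, `pendingDeletions`, the local `waitFor`) is an illustrative stand-in for the Hadoop test utilities, not the real APIs:

```java
import java.util.concurrent.*;

public class DelayedDeleteSketch {
    // Stand-in for the fault-injector gate: the async task blocks here until released.
    static final Semaphore gate = new Semaphore(0);
    // Stand-in for asyncDiskService's pending-deletion bookkeeping.
    static final ConcurrentLinkedQueue<Long> pendingDeletions = new ConcurrentLinkedQueue<>();

    // Simplified analogue of GenericTestUtils.waitFor(supplier, intervalMs, timeoutMs).
    static void waitFor(java.util.function.BooleanSupplier check,
                        long intervalMs, long timeoutMs) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!check.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new TimeoutException("condition not met in time");
            }
            Thread.sleep(intervalMs);
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService asyncDiskService = Executors.newSingleThreadExecutor();
        pendingDeletions.add(1024L);
        asyncDiskService.submit(() -> {
            try {
                gate.acquire();          // paused until the "test" releases the gate
                pendingDeletions.poll(); // the actual deletion work
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // While the task is gated, the deletion is still pending.
        System.out.println("pending=" + pendingDeletions.size());

        gate.release(1); // resume, as semaphore.release(1) does in the test
        // Lambda form of the wait, as the review suggests (no anonymous Supplier class).
        waitFor(() -> pendingDeletions.isEmpty(), 100, 1000);
        System.out.println("pending=" + pendingDeletions.size());
        asyncDiskService.shutdown();
    }
}
```

The lambda form also removes the dependency on the shaded `com.google.common.base.Supplier`, since `waitFor` only needs a boolean-returning callback.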
##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java:
##########
@@ -2400,6 +2400,93 @@ public void invalidate(String bpid, ReplicaInfo block) {
         block.getStorageUuid());
   }

+  /**
+   * Remove Replica from ReplicaMap
+   *
+   * @param block
+   * @param volume
+   * @return
+   */
+  public boolean removeReplicaFromMem(final ExtendedBlock block, final FsVolumeImpl volume) {
+    final String blockPoolId = block.getBlockPoolId();
+    final Block localBlock = block.getLocalBlock();
+    final long blockId = localBlock.getBlockId();
+    try (AutoCloseableLock lock = lockManager.writeLock(
+        LockLevel.BLOCK_POOl, blockPoolId)) {
+      final ReplicaInfo info = volumeMap.get(blockPoolId, localBlock);
+      if (info == null) {
+        ReplicaInfo infoByBlockId =
+            volumeMap.get(blockPoolId, blockId);
+        if (infoByBlockId == null) {
+          // It is okay if the block is not found

> Improve async delete replica on datanode
> ----------------------------------------
>
>                 Key: HDFS-16774
>                 URL: https://issues.apache.org/jira/browse/HDFS-16774
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Haiyang Hu
>            Assignee: Haiyang Hu
>            Priority: Major
>              Labels: pull-request-available
>
> In our online cluster, a large number of ReplicaNotFoundExceptions occur when
> clients read data.
> After tracing the root cause, we found that the asynchronous replica-deletion
> operation accumulates many stacked pending deletions, and this backlog causes
> the ReplicaNotFoundExceptions.
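The lookup logic in the quoted `removeReplicaFromMem` hunk (take the block-pool write lock, look up the replica, and treat an absent block as a non-error) can be modeled with a self-contained sketch. The map, lock, and `Replica` record below are simplified stand-ins for `volumeMap`, `lockManager`, and `ReplicaInfo`, and the genstamp check compresses the real two-step lookup; this is not the Hadoop API:

```java
import java.util.*;
import java.util.concurrent.locks.*;

public class ReplicaLookupSketch {
    // Illustrative stand-in for ReplicaInfo: just an id and a generation stamp.
    record Replica(long blockId, long genStamp) {}

    // Stand-in for volumeMap: blockPoolId -> (blockId -> Replica).
    static final Map<String, Map<Long, Replica>> volumeMap = new HashMap<>();
    // Models lockManager.writeLock(LockLevel.BLOCK_POOl, blockPoolId).
    static final ReadWriteLock bpLock = new ReentrantReadWriteLock();

    /** Simplified analogue of the removal: absent replica is okay,
     *  a replica with a mismatched generation stamp is not removed. */
    static boolean removeReplicaFromMem(String bpid, long blockId, long genStamp) {
        bpLock.writeLock().lock();
        try {
            Map<Long, Replica> bp = volumeMap.getOrDefault(bpid, Map.of());
            Replica info = bp.get(blockId);
            if (info == null) {
                return true; // it is okay if the block is not found
            }
            if (info.genStamp() != genStamp) {
                // Same id but a different genstamp exists; refuse to remove it.
                return false;
            }
            bp.remove(blockId);
            return true;
        } finally {
            bpLock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        volumeMap.put("BP-1", new HashMap<>(Map.of(7L, new Replica(7L, 1001L))));
        System.out.println("first=" + removeReplicaFromMem("BP-1", 7L, 1001L));   // first=true
        System.out.println("again=" + removeReplicaFromMem("BP-1", 7L, 1001L));   // again=true
        volumeMap.get("BP-1").put(8L, new Replica(8L, 2000L));
        System.out.println("wrongGs=" + removeReplicaFromMem("BP-1", 8L, 9999L)); // wrongGs=false
    }
}
```

Holding the write lock for the whole lookup-and-remove mirrors why the real method guards the map with the block-pool lock: the check and the removal must be atomic with respect to concurrent readers.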
> The current asynchronous replica-deletion flow is:
> 1. Remove the replica from the ReplicaMap.
> 2. Delete the replica file on the disk [can be blocked in the thread pool].
> 3. Notify the NameNode through an IBR [can be blocked in the thread pool].
>
> To avoid similar problems as much as possible, the execution flow should be
> optimized so that removing the replica from the ReplicaMap, deleting the
> replica file from disk, and notifying the NameNode through an IBR are all
> processed in the same asynchronous thread.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
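The single-thread flow proposed in the issue description can be sketched as one async task that performs all three steps in order, so the replica is never removed from memory while its disk deletion still sits in a backlog. The names below (`replicaMap`, `ibrQueue`, `asyncDelete`) are illustrative stand-ins, not the DataNode's actual APIs:

```java
import java.util.concurrent.*;

public class CombinedAsyncDeleteSketch {
    // Stand-ins for the DataNode structures (illustrative, not Hadoop APIs).
    static final ConcurrentHashMap<Long, String> replicaMap = new ConcurrentHashMap<>();
    static final ConcurrentLinkedQueue<Long> ibrQueue = new ConcurrentLinkedQueue<>();

    // Proposed flow: all three steps run inside ONE async task, instead of
    // removing from memory eagerly and queueing only the disk delete and IBR.
    static Future<?> asyncDelete(ExecutorService pool, long blockId) {
        return pool.submit(() -> {
            replicaMap.remove(blockId); // 1. remove the replica from the ReplicaMap
            deleteFileOnDisk(blockId);  // 2. delete the replica file on disk
            ibrQueue.add(blockId);      // 3. queue the IBR for the NameNode
        });
    }

    static void deleteFileOnDisk(long blockId) {
        // placeholder for the real file deletion
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        replicaMap.put(42L, "/data/blk_42");
        asyncDelete(pool, 42L).get();
        System.out.println("inMemory=" + replicaMap.containsKey(42L));
        System.out.println("ibrQueued=" + ibrQueue.contains(42L));
        pool.shutdown();
    }
}
```

Until the task runs, a reader that consults `replicaMap` still sees the replica, which is exactly the property that avoids the ReplicaNotFoundExceptions described above when deletions stack up.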