showuon commented on code in PR #15951:
URL: https://github.com/apache/kafka/pull/15951#discussion_r1609228927


##########
core/src/main/scala/kafka/server/ReplicaManager.scala:
##########
@@ -2114,16 +2114,12 @@ class ReplicaManager(val config: KafkaConfig,
         partition.log.foreach { _ =>
           val leader = BrokerEndPoint(config.brokerId, "localhost", -1)
 
-          // Add future replica log to partition's map
-          partition.createLogIfNotExists(
-            isNew = false,
-            isFutureReplica = true,
-            offsetCheckpoints,
-            topicIds(partition.topic))
-
-          // pause cleaning for partitions that are being moved and start ReplicaAlterDirThread to move
-          // replica from source dir to destination dir
-          logManager.abortAndPauseCleaning(topicPartition)
+          // Add future replica log to partition's map if it doesn't exist yet

Review Comment:
   > However, adding the alter thread (`replicaAlterLogDirsManager.addFetcherForPartitions(futureReplicasAndInitialOffset)`) is not inside this check. Is it possible that the alter thread, which is invoked by another thread, removes the future log just before this thread adds the topic partition to `replicaAlterLogDirsManager`? It seems to me that the alter thread will then fail, as the future log of the partition is gone.
   
   That's possible. But I think that's fine, because the removal of the future log can only happen for one of two reasons:
   1. The alter logDir operation completed. In this case, a new LeaderAndIsr request or topic partition update will be applied, and this fetcher will then be removed in `ReplicaManager#makeLeader` or `makeFollower`.
   2. Another log failure happened. In this case, `createLogIfNotExists` will fail, too.
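   For readers following the thread, here is a minimal, self-contained Scala sketch of the shape of this interaction. It is not the actual `ReplicaManager` code: `FutureLogMoveSketch`, `createFutureLog`, `pauseCleaning`, and `addFetcher` are hypothetical stand-ins for `Partition.createLogIfNotExists`, `LogManager.abortAndPauseCleaning`, and `ReplicaAlterLogDirsManager.addFetcherForPartitions`; it only illustrates why the fetcher registration sitting outside the existence check is considered benign.
   
   ```scala
   // Minimal sketch (hypothetical names, not Kafka's ReplicaManager) of the
   // pattern discussed above: future-log creation and the cleaner pause are
   // guarded by an existence check; the fetcher registration is not.
   object FutureLogMoveSketch {
     @volatile private var futureLogExists = false
   
     private def createFutureLog(): Unit = { futureLogExists = true }
     private def pauseCleaning(): Unit = println("cleaning paused")
   
     private def addFetcher(): Unit = {
       // If a concurrent alter thread removed the future log in the meantime
       // (either the move finished, or a log-dir failure occurred), the
       // fetcher finds no future log and fails fast; per the reply above,
       // both cases are cleaned up elsewhere.
       if (!futureLogExists) println("future log gone, fetcher will fail fast")
       else println("fetcher registered")
     }
   
     def maybeStartMove(): Unit = {
       if (!futureLogExists) { // the "if it doesn't exist yet" guard from the diff
         createFutureLog()     // add the future replica log to the partition's map
         pauseCleaning()       // pause cleaning while the move is in flight
       }
       addFetcher()            // intentionally outside the guard, as in the PR
     }
   
     def main(args: Array[String]): Unit = maybeStartMove()
   }
   ```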
   
   


