splett2 commented on code in PR #14053:
URL: https://github.com/apache/kafka/pull/14053#discussion_r1322065839
##########
core/src/main/scala/kafka/cluster/Partition.scala:
##########
@@ -858,7 +859,7 @@ class Partition(val topicPartition: TopicPartition,
       // No need to calculate low watermark if there is no delayed DeleteRecordsRequest
       val oldLeaderLW = if (delayedOperations.numDelayedDelete > 0) lowWatermarkIfLeader else -1L
       val prevFollowerEndOffset = replica.stateSnapshot.logEndOffset
-      replica.updateFetchState(
+      replica.updateFetchStateOrThrow(

Review Comment:
   I don't think calling `updateFetchState` without holding the `leaderIsrUpdate` read lock is safe. One case I am thinking of is that in `makeLeader` we call:
   ```
   remoteReplicas.foreach { replica =>
     replica.resetReplicaState(
       currentTimeMs = currentTimeMs,
       leaderEndOffset = leaderEpochStartOffset,
       isNewLeader = isNewLeader,
       isFollowerInSync = partitionState.isr.contains(replica.brokerId)
     )
   }
   ```
   which means that this fetch state update outside of the lock can conflict with the leader epoch bump resetting the replica state.

   I think acquiring the read lock (and re-validating the leader epoch) potentially simplifies the logic in the PR as well.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
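The locking pattern the reviewer is suggesting can be sketched in isolation. This is a minimal, hypothetical illustration, not the actual Kafka code: the class, field, and method names (`PartitionSketch`, `makeLeaderSketch`, `updateFetchStateOrThrowSketch`) are invented for the example. It shows the shape of the fix, with the epoch bump and replica-state reset taken under the write lock, while the fetch-state update takes the read lock and re-validates the leader epoch so it cannot interleave with `makeLeader`:

```scala
// Hypothetical sketch of the suggested locking pattern; all names are
// illustrative, not taken from kafka.cluster.Partition.
import java.util.concurrent.locks.ReentrantReadWriteLock

class PartitionSketch {
  private val leaderIsrUpdateLock = new ReentrantReadWriteLock()
  private var leaderEpoch: Int = 0
  private var followerEndOffset: Long = -1L

  // Writer side: the leader epoch bump and the replica-state reset
  // (analogous to resetReplicaState in makeLeader) happen atomically
  // under the write lock.
  def makeLeaderSketch(newEpoch: Int): Unit = {
    leaderIsrUpdateLock.writeLock().lock()
    try {
      leaderEpoch = newEpoch
      followerEndOffset = -1L
    } finally leaderIsrUpdateLock.writeLock().unlock()
  }

  // Reader side: the fetch-state update takes the read lock and
  // re-validates the epoch the fetch was issued for, so a stale
  // update cannot clobber state reset by a concurrent epoch bump.
  def updateFetchStateOrThrowSketch(expectedEpoch: Int, endOffset: Long): Unit = {
    leaderIsrUpdateLock.readLock().lock()
    try {
      if (expectedEpoch != leaderEpoch)
        throw new IllegalStateException(
          s"Fetch state update for stale epoch $expectedEpoch (current: $leaderEpoch)")
      followerEndOffset = endOffset
    } finally leaderIsrUpdateLock.readLock().unlock()
  }
}
```

Because `ReentrantReadWriteLock` allows many concurrent readers, fetch-state updates for different replicas still proceed in parallel; only the epoch bump in `makeLeaderSketch` excludes them, which is the point of the reviewer's comment.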