chb2ab commented on code in PR #14444:
URL: https://github.com/apache/kafka/pull/14444#discussion_r1361075604


##########
core/src/main/scala/kafka/server/KafkaApis.scala:
##########
@@ -559,6 +560,26 @@ class KafkaApis(val requestChannel: RequestChannel,
     }
   }
 
+  case class LeaderNode(leaderId: Int, leaderEpoch: Int, node: Node)
+
+  private def getCurrentLeader(tp: TopicPartition): LeaderNode = {
+    val partitionInfoOrError = replicaManager.getPartitionOrError(tp)
+    val (leaderId, leaderEpoch) = partitionInfoOrError match {
+      case Right(x) =>
+        (x.leaderReplicaIdOpt.getOrElse(-1), x.getLeaderEpoch)
+      case Left(x) =>
+        debug(s"Unable to retrieve local leaderId and Epoch with error $x, 
falling back to metadata cache")
+        metadataCache.getPartitionInfo(tp.topic, tp.partition) match {
+          case Some(pinfo) => (pinfo.leader(), pinfo.leaderEpoch())
+          case None => (-1, -1)
+        }
+    }
+    val leaderNode: Node = metadataCache.getAliveBrokerNode(leaderId, config.interBrokerListenerName).getOrElse({

Review Comment:
   I looked into this some more. The replica manager looks up the partition in a Pool object, while the metadata cache looks it up in the current image and allocates a new UpdateMetadataPartitionState to return. Going through the replica manager avoids that allocation. Also, since the fetch/produce paths will have recently read through the replica manager, it is more likely to give an in-memory cache hit than the metadata path. So it still seems better to me to try the replica manager first and only fall back to the metadata cache on error. What do you all think?


