splett2 commented on code in PR #14444:
URL: https://github.com/apache/kafka/pull/14444#discussion_r1362252576


##########
core/src/main/scala/kafka/server/KafkaApis.scala:
##########
@@ -559,6 +560,26 @@ class KafkaApis(val requestChannel: RequestChannel,
     }
   }
 
+  case class LeaderNode(leaderId: Int, leaderEpoch: Int, node: Node)
+
+  private def getCurrentLeader(tp: TopicPartition): LeaderNode = {
+    val partitionInfoOrError = replicaManager.getPartitionOrError(tp)
+    val (leaderId, leaderEpoch) = partitionInfoOrError match {
+      case Right(x) =>
+        (x.leaderReplicaIdOpt.getOrElse(-1), x.getLeaderEpoch)
+      case Left(x) =>
+        debug(s"Unable to retrieve local leaderId and Epoch with error $x, falling back to metadata cache")
+        metadataCache.getPartitionInfo(tp.topic, tp.partition) match {
+          case Some(pinfo) => (pinfo.leader(), pinfo.leaderEpoch())
+          case None => (-1, -1)
+        }
+    }
+    val leaderNode: Node = metadataCache.getAliveBrokerNode(leaderId, config.interBrokerListenerName).getOrElse({

Review Comment:
   The extra object allocation is not a big issue, since the new leader and new
   leader epoch lookup are not done in the common case, only in error cases.
   
   Populating the new leader state from the `Partition` also doesn't work when
   the partition gets deleted from the leader, for instance during
   reassignments. Populating from the metadata cache is therefore both more
   likely to have up-to-date information (in KRaft mode, which we should assume
   to be the default) and handles NotLeader in more cases.
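   
   A minimal sketch of that suggestion, reusing the names that appear in the
   diff above (`getPartitionInfo`, `getAliveBrokerNode`, `LeaderNode`) and
   assuming the surrounding `KafkaApis` context; the helper name
   `getCurrentLeaderFromMetadataCache` and the `Node.noNode()` fallback are
   illustrative assumptions, not code from the PR:
   
   ```scala
   // Sketch: resolve the current leader purely from the metadata cache, which
   // stays populated even after the local Partition has been deleted from this
   // broker (e.g. during a reassignment).
   private def getCurrentLeaderFromMetadataCache(tp: TopicPartition): LeaderNode = {
     val (leaderId, leaderEpoch) = metadataCache.getPartitionInfo(tp.topic, tp.partition) match {
       case Some(pinfo) => (pinfo.leader(), pinfo.leaderEpoch())
       case None        => (-1, -1)
     }
     // If the leader is unknown or not alive, fall back to a placeholder node.
     val node = metadataCache
       .getAliveBrokerNode(leaderId, config.interBrokerListenerName)
       .getOrElse(Node.noNode())
     LeaderNode(leaderId, leaderEpoch, node)
   }
   ```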


