junrao commented on code in PR #17437:
URL: https://github.com/apache/kafka/pull/17437#discussion_r1799892871
##########
core/src/main/scala/kafka/server/ReplicaManager.scala:
##########
@@ -950,6 +983,7 @@ class ReplicaManager(val config: KafkaConfig,
delayedProducePurgatory.checkAndComplete(requestKey)
delayedFetchPurgatory.checkAndComplete(requestKey)
delayedDeleteRecordsPurgatory.checkAndComplete(requestKey)
+ // TODO: We should do a checkAndComplete on delayedShareFetchPurgatory once we start using topicId in ProduceRequest RPC.
Review Comment:
The caller calls `appendToLocalLog`, which has access to `Partition`. We could restructure the code a bit so that we get the topicId from `Partition` and pass it in here.
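For example, a minimal sketch of the idea (written in Java for consistency with the other examples here, even though `ReplicaManager` is Scala; the share-fetch key type and the exact method shape are my assumptions, not the current signatures):

```java
// Sketch only: assumes the share-fetch purgatory can be keyed by
// topicId + partition via a hypothetical DelayedShareFetchPartitionKey.
private void completeDelayedOperations(TopicPartition topicPartition, Uuid topicId) {
    TopicPartitionOperationKey requestKey =
        new TopicPartitionOperationKey(topicPartition.topic(), topicPartition.partition());
    delayedProducePurgatory.checkAndComplete(requestKey);
    delayedFetchPurgatory.checkAndComplete(requestKey);
    delayedDeleteRecordsPurgatory.checkAndComplete(requestKey);
    // appendToLocalLog already resolves the Partition, so it can read the
    // topicId there and pass it down instead of waiting for ProduceRequest
    // to carry a topicId.
    delayedShareFetchPurgatory.checkAndComplete(
        new DelayedShareFetchPartitionKey(topicId, topicPartition.partition()));
}
```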
##########
core/src/main/java/kafka/server/share/SharePartitionManager.java:
##########
@@ -339,9 +315,9 @@ public CompletableFuture<Map<TopicIdPartition, ShareAcknowledgeResponseData.Part
}
void addPurgatoryCheckAndCompleteDelayedActionToActionQueue(Set<TopicIdPartition> topicIdPartitions, String groupId) {
- delayedActionsQueue.add(() -> {
+ replicaManager.addToActionQueue(() -> {
Review Comment:
Could we get rid of this method and have `DelayedShareFetch` call
`replicaManager.addToActionQueue` directly?
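Something along these lines inside `DelayedShareFetch` might be enough — a sketch only, assuming the operation holds references to `replicaManager` and the purgatory and knows its `groupId`, and that `DelayedShareFetchGroupKey` takes groupId/topicId/partition (the helper name below is made up):

```java
// Sketch: DelayedShareFetch enqueues the purgatory re-check itself, so the
// pass-through helper in SharePartitionManager can be deleted.
private void addCheckAndCompleteToActionQueue(Set<TopicIdPartition> topicIdPartitions) {
    replicaManager.addToActionQueue(() -> topicIdPartitions.forEach(topicIdPartition ->
        delayedShareFetchPurgatory.checkAndComplete(
            new DelayedShareFetchGroupKey(groupId, topicIdPartition.topicId(),
                topicIdPartition.partition()))));
}
```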
##########
core/src/test/java/kafka/server/share/SharePartitionManagerTest.java:
##########
@@ -2289,6 +2291,21 @@ static Seq<Tuple2<TopicIdPartition, LogReadResult>> buildLogReadResult(Set<Topic
return CollectionConverters.asScala(logReadResults).toSeq();
}
+ static void mockReplicaManagerDelayedShareFetch(ReplicaManager replicaManager,
+                                                 DelayedOperationPurgatory<DelayedShareFetch> delayedShareFetchPurgatory) {
+ doAnswer(invocationOnMock -> {
+ Object[] args = invocationOnMock.getArguments();
+ delayedShareFetchPurgatory.checkAndComplete((DelayedShareFetchKey) args[0]);
Review Comment:
Is the cast to `DelayedShareFetchKey` redundant?
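If `checkAndComplete` takes the key as `Any` (i.e. `Object` from Java), the helper should compile without it, e.g. — a sketch; the stubbed method in the `when(...)` clause isn't visible in this hunk, so `completeDelayedShareFetchRequest` below is a placeholder:

```java
// Sketch: Mockito's getArgument(int) returns the argument without an
// explicit cast, and checkAndComplete accepts it as a plain Object.
// completeDelayedShareFetchRequest is a placeholder for the stubbed method.
static void mockReplicaManagerDelayedShareFetch(ReplicaManager replicaManager,
        DelayedOperationPurgatory<DelayedShareFetch> delayedShareFetchPurgatory) {
    doAnswer(invocationOnMock -> {
        delayedShareFetchPurgatory.checkAndComplete(invocationOnMock.getArgument(0));
        return null;
    }).when(replicaManager).completeDelayedShareFetchRequest(any());
}
```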