kamalcph commented on code in PR #15825:
URL: https://github.com/apache/kafka/pull/15825#discussion_r1598848735
##########
core/src/main/scala/kafka/log/LocalLog.scala:
##########

@@ -370,11 +370,12 @@ class LocalLog(@volatile private var _dir: File,
         throw new OffsetOutOfRangeException(s"Received request for offset $startOffset for partition $topicPartition, " +
           s"but we only have log segments upto $endOffset.")

-      if (startOffset == maxOffsetMetadata.messageOffset)
+      if (startOffset == maxOffsetMetadata.messageOffset) {
         emptyFetchDataInfo(maxOffsetMetadata, includeAbortedTxns)
-      else if (startOffset > maxOffsetMetadata.messageOffset)
-        emptyFetchDataInfo(convertToOffsetMetadataOrThrow(startOffset), includeAbortedTxns)
-      else {
+      } else if (startOffset > maxOffsetMetadata.messageOffset) {
+        // Instead of converting the `startOffset` to metadata, returning message-only metadata to avoid potential loop
+        emptyFetchDataInfo(new LogOffsetMetadata(startOffset), includeAbortedTxns)

Review Comment:
   Agree, this patch is getting tricky. We want to validate all the scenarios, especially when there is no data to read from the server: the fetch-request rate from the clients should stay almost the same. To avoid/reduce such cases, can we always resolve the `maxOffsetMetadata` to complete metadata?

   https://github.com/apache/kafka/pull/15825#discussion_r1598841362

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
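For context on the discussion above, a minimal, self-contained sketch of what "message-only" offset metadata means versus fully resolved ("complete") metadata. This is a simplified illustrative model, not Kafka's actual `LogOffsetMetadata` class: the class name, sentinel value, and method names below are assumptions chosen for the sketch. The point it demonstrates is that `new LogOffsetMetadata(startOffset)` in the diff carries only the logical offset, with no segment/physical position resolved, which is why the reviewer asks whether `maxOffsetMetadata` could always be resolved to complete metadata instead.

```java
// Simplified model (hypothetical, for illustration only) of offset metadata:
// a "message-only" instance knows the logical offset but not where it lives
// on disk; a "complete" instance has been resolved to a segment and position.
final class OffsetMetadataSketch {
    static final long UNKNOWN = -1L;

    final long messageOffset;            // logical offset in the log
    final long segmentBaseOffset;        // base offset of the containing segment, or UNKNOWN
    final int relativePositionInSegment; // byte position within the segment, or UNKNOWN

    // Complete metadata: offset resolved to a physical position in a segment.
    OffsetMetadataSketch(long messageOffset, long segmentBaseOffset, int relativePositionInSegment) {
        this.messageOffset = messageOffset;
        this.segmentBaseOffset = segmentBaseOffset;
        this.relativePositionInSegment = relativePositionInSegment;
    }

    // Message-only metadata, analogous to `new LogOffsetMetadata(startOffset)`
    // in the diff: only the logical offset is known.
    OffsetMetadataSketch(long messageOffset) {
        this(messageOffset, UNKNOWN, (int) UNKNOWN);
    }

    boolean messageOffsetOnly() {
        return segmentBaseOffset == UNKNOWN;
    }

    public static void main(String[] args) {
        OffsetMetadataSketch messageOnly = new OffsetMetadataSketch(42L);
        OffsetMetadataSketch complete = new OffsetMetadataSketch(42L, 0L, 4096);
        System.out.println(messageOnly.messageOffsetOnly()); // true
        System.out.println(complete.messageOffsetOnly());    // false
    }
}
```

A message-only instance cannot be compared by physical position, so consumers of such metadata may need to re-resolve it later; the review thread is weighing that re-resolution against the risk of the conversion loop the diff's comment mentions.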