satishd commented on a change in pull request #11390:
URL: https://github.com/apache/kafka/pull/11390#discussion_r791322611
##########
File path: core/src/main/scala/kafka/server/AbstractFetcherThread.scala
##########
@@ -715,6 +727,58 @@ abstract class AbstractFetcherThread(name: String,
}
}
+  /**
+   * Handle a partition whose offset is out of range and return a new fetch offset.
+   */
+  protected def fetchOffsetAndTruncate(topicPartition: TopicPartition, topicId: Option[Uuid], currentLeaderEpoch: Int): PartitionFetchState = {
+    fetchOffsetAndApplyFun(topicPartition, topicId, currentLeaderEpoch,
+      (epoch, leaderLogStartOffset) => truncateFullyAndStartAt(topicPartition, leaderLogStartOffset))
Review comment:
Sure, as we discussed offline. I kept the earlier behaviour for OffsetOutOfRange, which is to fall back to the leader's log-start-offset; that retried fetch may in turn get an OffsetMovedToTieredStorage response if the log-start-offset has already moved to tiered storage.
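For readers following along, here is a minimal, self-contained sketch of the fallback path described above. It is not the PR's code: the types, helper names, and offset values (`FetchResult`, `leaderFetch`, `leaderLocalLogStartOffset`, etc.) are hypothetical stand-ins for the follower fetcher / leader interaction.

```scala
// Hedged sketch only (not AbstractFetcherThread code): a follower whose fetch
// offset is out of range falls back to the leader's log-start-offset, truncates
// fully to it, and retries; the retry may then see OffsetMovedToTieredStorage.
object OutOfRangeFallbackSketch {

  sealed trait FetchResult
  case object Ok extends FetchResult
  case object OffsetOutOfRange extends FetchResult
  case object OffsetMovedToTieredStorage extends FetchResult

  // Hypothetical leader state: segments below offset 120 were uploaded to
  // tiered storage, and the overall log starts at offset 100.
  val leaderLogStartOffset = 100L
  val leaderLocalLogStartOffset = 120L
  val leaderLogEndOffset = 150L

  // Simulated leader response for a fetch at the given offset.
  def leaderFetch(offset: Long): FetchResult =
    if (offset < leaderLogStartOffset || offset > leaderLogEndOffset) OffsetOutOfRange
    else if (offset < leaderLocalLogStartOffset) OffsetMovedToTieredStorage
    else Ok

  // Fallback kept by the PR per the comment: on OffsetOutOfRange, move the
  // fetch offset to the leader's log-start-offset and retry. The retry may
  // return OffsetMovedToTieredStorage, which is handled by the separate
  // tiered-storage path (building remote log aux state) added in this PR.
  def handleOutOfRange(currentFetchOffset: Long): FetchResult = {
    val newFetchOffset = leaderLogStartOffset
    println(s"OffsetOutOfRange at $currentFetchOffset; truncating fully and restarting at $newFetchOffset")
    leaderFetch(newFetchOffset)
  }

  def main(args: Array[String]): Unit = {
    // A follower fetching at offset 50 (below log-start-offset) hits
    // OffsetOutOfRange, falls back, and then sees OffsetMovedToTieredStorage.
    println(handleOutOfRange(50L))
  }
}
```

The point of the sketch is only the ordering: OffsetOutOfRange is resolved the same way as before (log-start-offset fallback), and tiered storage is surfaced as a second, distinct error on the subsequent fetch rather than being folded into the out-of-range handling.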
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]