junrao commented on code in PR #15618:
URL: https://github.com/apache/kafka/pull/15618#discussion_r1543560539


##########
core/src/main/scala/kafka/log/UnifiedLog.scala:
##########
@@ -1320,10 +1320,8 @@ class UnifiedLog(@volatile var logStartOffset: Long,
        // constant time access while being safe to use with concurrent collections unlike `toArray`.
         val segmentsCopy = logSegments.toBuffer
         val latestTimestampSegment = segmentsCopy.maxBy(_.maxTimestampSoFar)
-        val latestTimestampAndOffset = latestTimestampSegment.maxTimestampAndOffsetSoFar
-
-        Some(new TimestampAndOffset(latestTimestampAndOffset.timestamp,
-          latestTimestampAndOffset.offset,
+        val batch = latestTimestampSegment.log.batches().asScala.maxBy(_.maxTimestamp())

Review Comment:
   Yes. Suppose you have a 1GB segment and the record with the max timestamp is in the last batch: `latestTimestampSegment.log.batches()` would have to read the full 1GB from disk. Using the offsetIndex, we only need to read the index itself plus `index.interval` (default 4KB) worth of log data.
   
   > Is the impl of lookup like this?
   
   Yes, that's what I was thinking.
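   
   For concreteness, here is a rough Scala sketch of the lookup I was thinking of (a hypothetical helper, not the final implementation; member names such as `offsetIndex`, `lookup`, and `batchesFrom` are approximate for the current revision):
   
   ```scala
   import java.util.Optional
   import scala.jdk.CollectionConverters._
   import org.apache.kafka.common.record.FileRecords.TimestampAndOffset
   import org.apache.kafka.storage.internals.log.LogSegment
   
   def lookupMaxTimestampAndOffset(segment: LogSegment): Option[TimestampAndOffset] = {
     // Offset of the batch holding the segment's max timestamp, tracked on
     // append (as in the code this PR removes).
     val targetOffset = segment.maxTimestampAndOffsetSoFar.offset
     // The offset index gives the file position of the greatest indexed
     // offset <= targetOffset, so at most index.interval (default 4KB) of
     // log data sits between that position and the batch we want.
     val startPosition = segment.offsetIndex.lookup(targetOffset).position
     segment.log.batchesFrom(startPosition).asScala
       .find(_.lastOffset >= targetOffset) // first batch covering the target offset
       .flatMap { batch =>
         // Scan just this one batch for the record carrying its max timestamp.
         batch.asScala.find(_.timestamp == batch.maxTimestamp)
           .map(r => new TimestampAndOffset(r.timestamp, r.offset, Optional.empty[Integer]()))
       }
   }
   ```
   
   That costs one index lookup plus a single batch read, independent of segment size.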


