divijvaidya commented on code in PR #16237:
URL: https://github.com/apache/kafka/pull/16237#discussion_r1960063268


##########
core/src/main/java/kafka/log/remote/RemoteLogManager.java:
##########
@@ -972,7 +972,9 @@ private boolean isSegmentBreachedByRetentionSize(RemoteLogSegmentMetadata metada
                     }
                 }
                 if (shouldDeleteSegment) {
-                    logStartOffset = OptionalLong.of(metadata.endOffset() + 1);
+                    if (!logStartOffset.isPresent() || logStartOffset.getAsLong() < metadata.endOffset() + 1) {

Review Comment:
   I came across this PR just now, and this line change also fixes another problem: it ensures that the log start offset never moves backward.
   
   For example, prior to this line change, the following scenario has a problem:
   
   1. We have an "expired" segment with offset range [0-100].
   2. The user calls the delete-records API and sets the new log start offset to 110.
   3. A fetch call arrives for offset 105 and is correctly rejected because 105 < 110.
   4. The expired segment is deleted by this code, and the log start offset is moved back to 100 + 1 = 101.
   5. A fetch call arrives for offset 105 and is served this time (since 105 >= 101), which silently undoes the effect of the delete-records API call.
   
   Taking the maximum of `metadata.endOffset() + 1` and the current `logStartOffset` fixes this.
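   The max-of-both behavior can be sketched in isolation. This is a minimal, hypothetical helper (not the actual `RemoteLogManager` code) that mirrors the condition in the diff: the start offset only advances, and a user-set offset from delete-records is never reset backward by segment expiration.
   
   ```java
   import java.util.OptionalLong;
   
   public class LogStartOffsetAdvance {
       // Hypothetical helper mirroring the fixed condition: move the log start
       // offset to segmentEndOffset + 1 only if that is strictly larger than the
       // current value (or no value has been computed yet).
       static OptionalLong advance(OptionalLong current, long segmentEndOffset) {
           long candidate = segmentEndOffset + 1;
           if (!current.isPresent() || current.getAsLong() < candidate) {
               return OptionalLong.of(candidate);
           }
           // Keep the larger, already-established start offset.
           return current;
       }
   
       public static void main(String[] args) {
           // Scenario from the review: delete-records set the start offset to 110,
           // then the expired segment [0-100] is deleted.
           OptionalLong afterDeleteRecords = OptionalLong.of(110);
           OptionalLong result = advance(afterDeleteRecords, 100);
           System.out.println(result.getAsLong()); // stays 110, not reset to 101
       }
   }
   ```
   
   With the pre-fix behavior (unconditionally assigning `endOffset + 1`), the same call would have printed 101, re-exposing offsets the user had deleted.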
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
