[ https://issues.apache.org/jira/browse/KAFKA-6056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201451#comment-16201451 ]

Jan Filipiak commented on KAFKA-6056:
-------------------------------------

Although it is not mentioned in the comment, the code actually checks this 
condition since KAFKA-4451 (https://issues.apache.org/jira/browse/KAFKA-4451).
So it can only affect users upgrading from versions before 0.10.2 that already 
have a LogSegment where the relative offset range is exceeded within a single 
segment.
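
For reference, the guard from KAFKA-4451 is roughly of the following shape. 
This is a minimal, self-contained Scala sketch with a simplified Segment 
model; the names and structure are illustrative, not Kafka's actual classes:

    // Simplified model of the KAFKA-4451 grouping guard (illustrative,
    // not Kafka's actual classes): a group of segments cleaned into one
    // segment must never span more than Int.MaxValue offsets, so that
    // offsets relative to the group's base offset still fit into 4 bytes.
    final case class Segment(baseOffset: Long, lastOffset: Long, sizeBytes: Int)

    def groupSegmentsBySize(segments: List[Segment],
                            maxGroupBytes: Long): List[List[Segment]] = {
      var remaining = segments
      var groups    = List.empty[List[Segment]]
      while (remaining.nonEmpty) {
        var group = List(remaining.head) // group.last stays the base segment
        var bytes = remaining.head.sizeBytes.toLong
        remaining = remaining.tail
        while (remaining.nonEmpty &&
               bytes + remaining.head.sizeBytes <= maxGroupBytes &&
               // KAFKA-4451-style check: the merged group's offset span
               // must stay within Int range
               remaining.head.lastOffset - group.last.baseOffset <= Int.MaxValue) {
          group = remaining.head :: group
          bytes += remaining.head.sizeBytes
          remaining = remaining.tail
        }
        groups = group.reverse :: groups
      }
      groups.reverse
    }

With that check in place the cleaner starts a new group before the relative 
offset range of the segment it cleans into could overflow.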



> LogCleaner always cleaning into 1 segment per size group might exceed 
> relative offset range
> ----------------------------------------------------------------------------------------
>
>                 Key: KAFKA-6056
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6056
>             Project: Kafka
>          Issue Type: Bug
>          Components: core, log
>    Affects Versions: 0.11.0.0
>            Reporter: Jan Filipiak
>            Priority: Minor
>
> After an incident in which compaction had stopped for some time, it can 
> become a problem that the LogCleaner always cleans into a single segment per 
> size group. Usually the Log enforces a maximum distance between the min and 
> max offset in a LogSegment: if that distance would be exceeded, maybeRoll() 
> rolls a new log segment. I assume this is because relative offsets are 
> stored as integers. The LogCleaner, on the other hand, never rolls a new 
> LogSegment, as it only ever uses one segment to clean into (a minimal sketch 
> of the relative-offset constraint follows below).
> A lengthy discussion about this can be found in the Slack community:
> https://confluentcommunity.slack.com/archives/C49R61XMM/p1506914441000005
> The observed stack trace is as follows:
> https://gist.github.com/brettrann/ce52343692696a45d5b9f4df723bcd14
> I could also imagine enforcing Integer.MAX_VALUE as the offset distance in 
> groupSegmentsBySize in the LogCleaner, to make sure a segment doesn't exceed 
> this limit.
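>
> To make the relative-offset constraint concrete, here is a minimal sketch of 
> the roll decision described above; the names (offsetFitsInSegment, 
> shouldRoll) are illustrative and not Kafka's exact API:
>
>     // Offsets inside a segment are stored relative to the segment's base
>     // offset as 4-byte ints, so an append has to roll a new segment once
>     // the relative offset would no longer fit (illustrative names, not
>     // the exact Kafka API).
>     def offsetFitsInSegment(baseOffset: Long, nextOffset: Long): Boolean =
>       nextOffset - baseOffset <= Int.MaxValue
>
>     // The roll decision maybeRoll() makes on the append path: roll when
>     // the distance between the segment's base offset and the next offset
>     // would exceed the relative-offset range.
>     def shouldRoll(segmentBaseOffset: Long, nextOffset: Long): Boolean =
>       !offsetFitsInSegment(segmentBaseOffset, nextOffset)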



