[ https://issues.apache.org/jira/browse/KAFKA-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904804#comment-16904804 ]

Richard Yu commented on KAFKA-8522:
-----------------------------------

Alright, so I have found the logic that manages the checkpoint files and the 
classes it lives in (LogCleaner and LogManager).

The current problem I have is working out where the cleaning offsets are 
committed to the checkpoint files. Still looking into it at the moment.
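
For reference, here is a minimal sketch (Java) of what one of those checkpoint 
files holds, i.e. the last cleaned offset per partition. The on-disk layout 
assumed here (a version line, an entry-count line, then one "topic partition 
offset" row per entry) and the path used in main() are assumptions for 
illustration only, not a stable public format.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Sketch: read a cleaner-offset-checkpoint file into a map of
 * "topic-partition" -> last cleaned offset.
 */
public class CleanerCheckpointReader {

    public static Map<String, Long> read(Path checkpointFile) throws IOException {
        List<String> lines = Files.readAllLines(checkpointFile);

        // Assumed layout: first line is a version number, second is the entry count.
        int version = Integer.parseInt(lines.get(0).trim());
        if (version != 0) {
            throw new IOException("Unexpected checkpoint version: " + version);
        }
        int expectedEntries = Integer.parseInt(lines.get(1).trim());

        Map<String, Long> cleanedOffsets = new HashMap<>();
        for (int i = 2; i < 2 + expectedEntries && i < lines.size(); i++) {
            // Each remaining row is assumed to be: <topic> <partition> <offset>
            String[] parts = lines.get(i).trim().split("\\s+");
            cleanedOffsets.put(parts[0] + "-" + parts[1], Long.parseLong(parts[2]));
        }
        return cleanedOffsets;
    }

    public static void main(String[] args) throws IOException {
        // Path is illustrative; there is one checkpoint file per broker log directory.
        read(Path.of("/var/lib/kafka/data/cleaner-offset-checkpoint"))
            .forEach((tp, offset) -> System.out.println(tp + " cleaned up to offset " + offset));
    }
}
{code}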

> Tombstones can survive forever
> ------------------------------
>
>                 Key: KAFKA-8522
>                 URL: https://issues.apache.org/jira/browse/KAFKA-8522
>             Project: Kafka
>          Issue Type: Bug
>          Components: log cleaner
>            Reporter: Evelyn Bayes
>            Priority: Minor
>
> This is a bit of a grey area as to whether it's a "bug", but it is certainly 
> unintended behaviour.
>  
> Under specific conditions tombstones effectively survive forever:
>  * Small amount of throughput;
>  * min.cleanable.dirty.ratio near or at 0; and
>  * Other parameters at default.
> What happens is that all the data continuously gets cycled into the oldest 
> segment. Old records get compacted away, but the new records continuously 
> update the timestamp of the oldest segment, resetting the countdown for 
> deleting tombstones.
> So tombstones build up in the oldest segment forever.
>  
> While you could "fix" this by reducing the segment size, this can be 
> undesirable as a sudden change in throughput could cause a dangerous number 
> of segments to be created.
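
To make the reported scenario concrete, here is a hedged sketch (Java AdminClient) 
of the topic configuration involved, plus a time-based segment roll (segment.ms) 
as one possible workaround instead of shrinking segment.bytes. The topic name, 
bootstrap server, and concrete values are made up for illustration.

{code:java}
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

/**
 * Sketch: set the topic configs involved in the tombstone-retention scenario.
 */
public class TombstoneRetentionExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic =
                new ConfigResource(ConfigResource.Type.TOPIC, "compacted-topic");

            Collection<AlterConfigOp> ops = List.of(
                // The conditions from the report: compaction with an aggressive dirty ratio.
                new AlterConfigOp(new ConfigEntry("cleanup.policy", "compact"),
                                  AlterConfigOp.OpType.SET),
                new AlterConfigOp(new ConfigEntry("min.cleanable.dirty.ratio", "0.01"),
                                  AlterConfigOp.OpType.SET),
                // delete.retention.ms is measured against the segment's timestamp, which
                // (as described above) keeps being refreshed, so tombstones never age out.
                new AlterConfigOp(new ConfigEntry("delete.retention.ms", "86400000"),
                                  AlterConfigOp.OpType.SET),
                // Possible mitigation: roll segments on time rather than shrinking
                // segment.bytes, so low-throughput topics still age their tombstones out.
                new AlterConfigOp(new ConfigEntry("segment.ms",
                                  String.valueOf(7 * 24 * 60 * 60 * 1000L)),
                                  AlterConfigOp.OpType.SET)
            );

            admin.incrementalAlterConfigs(Map.of(topic, ops)).all().get();
        }
    }
}
{code}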



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
