[ https://issues.apache.org/jira/browse/KAFKA-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878247#comment-16878247 ]
Jun Rao commented on KAFKA-8522:
--------------------------------

Since we are returning the tombstone to the consumer, it may not be ideal to change it. Another option is to write out the cleaning offset and the corresponding cleaning time to a checkpoint file in the partition dir. The cleaner can use that checkpoint to determine the cleaning time for each tombstone. Once all tombstones before an offset are removed, the corresponding entry in the checkpoint file can be removed.

> Tombstones can survive forever
> ------------------------------
>
>                 Key: KAFKA-8522
>                 URL: https://issues.apache.org/jira/browse/KAFKA-8522
>             Project: Kafka
>          Issue Type: Bug
>          Components: log cleaner
>            Reporter: Evelyn Bayes
>            Priority: Minor
>
> This is a bit of a grey zone as to whether it's a "bug", but it is certainly unintended behaviour.
>
> Under specific conditions, tombstones effectively survive forever:
>  * Small amount of throughput;
>  * min.cleanable.dirty.ratio near or at 0; and
>  * Other parameters at default.
>
> What happens is that all the data continuously gets cycled into the oldest segment. Old records get compacted away, but the new records continuously update the timestamp of the oldest segment, resetting the countdown for deleting tombstones. So tombstones build up in the oldest segment forever.
>
> While you could "fix" this by reducing the segment size, this can be undesirable, as a sudden change in throughput could cause a dangerous number of segments to be created.
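For concreteness, the conditions listed in the report correspond to a topic set up roughly like the sketch below, which uses Kafka's AdminClient. The topic name and broker address are placeholders; only min.cleanable.dirty.ratio is pinned to 0, with everything else left at the broker defaults.

    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    // Creates a compacted topic matching the conditions in the report:
    // min.cleanable.dirty.ratio at 0, all other settings at default.
    // With low throughput on such a topic, tombstones can accumulate in
    // the oldest segment as described above.
    public class TombstoneRepro {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            try (Admin admin = Admin.create(props)) {
                NewTopic topic = new NewTopic("tombstone-repro", 1, (short) 1)
                        .configs(Map.of(
                                "cleanup.policy", "compact",
                                "min.cleanable.dirty.ratio", "0"));
                admin.createTopics(Collections.singleton(topic)).all().get();
            }
        }
    }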
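The checkpoint approach suggested in the comment at the top of this message could look roughly like the sketch below. All names here (the CleanerTimeCheckpoint class, the cleaner-time-checkpoint file name, the plain-text on-disk format) are hypothetical illustrations, not Kafka's actual classes or files: one entry is recorded per cleaning pass, a tombstone's deletion clock is keyed to the first pass that covered its offset, and entries are dropped once every tombstone at or below their offset is gone.

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of a per-partition cleaner checkpoint: each entry
    // records that a cleaning pass ran at some wall-clock time and cleaned
    // the log up to some offset. Entries are appended in offset order.
    public class CleanerTimeCheckpoint {

        public record Entry(long cleanedThroughOffset, long cleanedAtMs) {}

        private final Path file; // lives in the partition directory
        private final List<Entry> entries = new ArrayList<>();

        public CleanerTimeCheckpoint(Path partitionDir) {
            this.file = partitionDir.resolve("cleaner-time-checkpoint");
        }

        // Record a cleaning pass that cleaned everything up to `offset`.
        public void append(long offset, long cleanedAtMs) throws IOException {
            entries.add(new Entry(offset, cleanedAtMs));
            flush();
        }

        // A tombstone is eligible for deletion once the first cleaning pass
        // that covered its offset happened more than deleteRetentionMs ago.
        // Later writes into the same segment no longer matter.
        public boolean tombstoneExpired(long tombstoneOffset, long deleteRetentionMs, long nowMs) {
            for (Entry e : entries) {
                if (e.cleanedThroughOffset() >= tombstoneOffset) {
                    return nowMs - e.cleanedAtMs() >= deleteRetentionMs;
                }
            }
            return false; // no cleaning pass has covered this tombstone yet
        }

        // Once all tombstones at or below `offset` have been removed, the
        // corresponding checkpoint entries are no longer needed.
        public void truncateThrough(long offset) throws IOException {
            entries.removeIf(e -> e.cleanedThroughOffset() <= offset);
            flush();
        }

        // Rewrite the checkpoint file: one "offset cleanedAtMs" pair per line.
        private void flush() throws IOException {
            StringBuilder sb = new StringBuilder();
            for (Entry e : entries) {
                sb.append(e.cleanedThroughOffset()).append(' ')
                  .append(e.cleanedAtMs()).append('\n');
            }
            Files.write(file, sb.toString().getBytes(StandardCharsets.UTF_8));
        }
    }

Keying expiry to the recorded cleaning time, rather than to the segment's largest timestamp, is what would break the loop described in the report: later writes cycling into the oldest segment no longer reset the tombstones' deletion countdown.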