[ https://issues.apache.org/jira/browse/KAFKA-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909866#comment-16909866 ]

Richard Yu edited comment on KAFKA-8522 at 8/18/19 3:08 AM:
------------------------------------------------------------

[~junrao] Just want some clarification on something. When 
{{LogCleanerManager}} is created, I noticed that a {{logDirs}} parameter is 
given a sequence of files the program can write into. These files' paths are 
given by the user in the {{logDirs}} property located in {{KafkaConfig}}.

It was here that I got a little confused. First, I assumed that the number of 
these files is limited by the user to one per disk (since KafkaConfig's values 
are controlled by the user). This is where I think there could be a problem: 
in a real-world situation, the number of partitions typically exceeds the 
number of disks available, so if we paired off one checkpoint file per 
partition, some partitions would have no checkpoint file to write to.
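
Looking at one of these checkpoint files on disk, each file appears to hold 
entries for many partitions at once (a version line, an entry count, then one 
{{topic partition offset}} triple per line; the offsets below are made up), 
which is partly why I'm unsure:

{noformat}
0
3
topic-a 0 4567
topic-a 1 8901
topic-b 0 1234
{noformat}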

Is my understanding of the situation correct? I am not completely sure about 
this, but this was how it appeared to me.


> Tombstones can survive forever
> ------------------------------
>
>                 Key: KAFKA-8522
>                 URL: https://issues.apache.org/jira/browse/KAFKA-8522
>             Project: Kafka
>          Issue Type: Improvement
>          Components: log cleaner
>            Reporter: Evelyn Bayes
>            Priority: Minor
>
> This is a bit of a grey area as to whether it's a "bug", but it is certainly 
> unintended behaviour.
>  
> Under specific conditions tombstones effectively survive forever:
>  * Small amount of throughput;
>  * min.cleanable.dirty.ratio near or at 0; and
>  * Other parameters at default.
> What happens is that all the data continuously gets cycled into the oldest 
> segment. Old records get compacted away, but the new records continuously 
> update the timestamp of the oldest segment, resetting the countdown for 
> deleting tombstones.
> So tombstones build up in the oldest segment forever.
>  
> While you could "fix" this by reducing the segment size, this can be 
> undesirable as a sudden change in throughput could cause a dangerous number 
> of segments to be created.
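
For concreteness, the scenario described above corresponds roughly to a 
compacted topic configured along these lines (the config names are standard 
topic-level settings; the values are only illustrative):

{noformat}
cleanup.policy=compact
min.cleanable.dirty.ratio=0.01   # near 0, so the cleaner runs almost constantly
segment.bytes=1073741824         # default 1 GiB; low throughput rarely rolls the segment
delete.retention.ms=86400000     # default 24 h countdown that keeps being reset
{noformat}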


