Dear Kafka users,

I've got a problem on one of my Kafka clusters that I use with Kafka
Streams applications. Some of these streams apps use a Kafka state store,
so for each of those stores a changelog topic is created with cleanup
policy "compact". One of these topics has been growing for some time now
and seems to grow indefinitely. When I check the log file of its first
segment, it still contains a lot of data that should have been compacted
already.
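
For context, here is a minimal sketch of how such a store is declared in
one of the apps; the topic name, store name, and types are placeholders
for the real ones, and I pass no per-store changelog configs:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

public class StoreSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // "input-topic" and "my-store" are placeholders. Since no
        // changelog configs are passed here, the changelog topic
        // "<application.id>-my-store-changelog" is created with
        // cleanup.policy=compact and otherwise broker defaults.
        KTable<String, Long> counts = builder
            .stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()))
            .groupByKey()
            .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("my-store"));
    }
}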

So I guess I did not configure everything correctly for log compaction to
work as expected. Which config parameters influence log compaction? And
how should I set them if I want data older than 4 hours to be compacted?
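
From the docs I suspect the relevant topic configs are segment.ms /
segment.bytes (since the active segment is never compacted),
min.cleanable.dirty.ratio, min.compaction.lag.ms / max.compaction.lag.ms,
and delete.retention.ms for tombstones; I assume log.cleaner.enable is
still at its default of true on the brokers. Is that the right set? And
would something like the sketch below be the correct way to apply them to
the existing changelog topic? The topic name, bootstrap server, and the
actual values are guesses on my part.

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class AdjustCompaction {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            // Placeholder for the real "<application.id>-<store>-changelog" name
            ConfigResource topic = new ConfigResource(
                    ConfigResource.Type.TOPIC, "my-app-my-store-changelog");

            Map<ConfigResource, Collection<AlterConfigOp>> updates = Map.of(topic, List.of(
                    // Roll segments hourly; only rolled segments can be compacted
                    new AlterConfigOp(new ConfigEntry("segment.ms", "3600000"),
                            AlterConfigOp.OpType.SET),
                    // Let the cleaner run with less accumulated dirty data (default 0.5)
                    new AlterConfigOp(new ConfigEntry("min.cleanable.dirty.ratio", "0.1"),
                            AlterConfigOp.OpType.SET),
                    // Make records eligible for compaction within 4 hours (Kafka 2.3+)
                    new AlterConfigOp(new ConfigEntry("max.compaction.lag.ms", "14400000"),
                            AlterConfigOp.OpType.SET)));

            // incrementalAlterConfigs also requires brokers on 2.3 or newer
            admin.incrementalAlterConfigs(updates).all().get();
        }
    }
}

My thinking is that segment.ms=3600000 would make segments roll hourly so
the cleaner can actually see them, and max.compaction.lag.ms=14400000
should then force compaction within 4 hours. Does that reasoning hold?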

Thanks in advance.

Best,
Claudia
