Looking at some recent JIRAs, I noticed KAFKA-6568, which came in after the
0.11.0 release.

Would that possibly be related to what you observed?

Cheers

On Mon, Jul 23, 2018 at 6:23 PM Mitch Seymour <mitchseym...@gmail.com>
wrote:

> Hi all,
>
> We're using version 0.11.0 of Kafka (broker and client), and our Kafka
> Streams app uses a compacted topic for storing its state. Here's the
> output of kafka-topics.sh --describe:
>
> Topic:mytopic
> PartitionCount:32
> ReplicationFactor:2
> Configs:retention.ms=432000000,cleanup.policy=compact
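>
> To double-check the effective topic-level overrides (and confirm that
> delete.retention.ms isn't overridden anywhere), something like the
> following should work (the zookeeper address is a placeholder for ours):
>
> kafka-configs.sh --zookeeper <zk-host>:2181 --describe \
>   --entity-type topics --entity-name mytopic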
>
> The app will write tombstones to this topic when it's finished with a
> certain key. I can see the tombstones using kafkacat:
>
> kafkacat -q -b ... -t mytopic -o beginning -c 2 -f "Time: %T, Key: %k,
> Message: %s\n" -Z
>
> Output:
>
> Time: 1530559667357, Key: key1, Message: NULL
> Time: 1530559667466, Key: key2, Message: NULL
>
> Note: the -Z flag in kafkacat prints null values as NULL to make it easier
> to see the tombstones. Anyways, the timestamps on these topics are from
> GMT: Monday, July 2, 2018, so I'm not sure why the tombstones still exist
> in this topic.
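>
> For what it's worth, test tombstones can also be produced from the command
> line with kafkacat; if -Z is supported in producer mode on your build, an
> empty value should be sent as a NULL message (broker list elided as above):
>
> echo "key1:" | kafkacat -P -b ... -t mytopic -K: -Z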
>
> Furthermore, it looks like compaction is being triggered because I'm seeing
> this in the logs:
>
> discarding tombstones prior to Thu Jul 19 14:15:17 GMT 2018.
>
> However, I still see tombstones in this topic from Jul 2, so it doesn't add
> up. Another side note: I'm not explicitly setting delete.retention.ms, and
> since the default value is 86400000, or 1 day, I'm not too sure why the
> tombstones are sticking around.
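>
> To rule out a config issue, one thing I may try is setting
> delete.retention.ms explicitly on the topic, along these lines (again, the
> zookeeper address is a placeholder):
>
> kafka-configs.sh --zookeeper <zk-host>:2181 --alter \
>   --entity-type topics --entity-name mytopic \
>   --add-config delete.retention.ms=86400000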
>
> Anyways, has anyone experienced this before? I'm not sure if this is a
> known bug, or if there's something peculiar with our own setup. Thanks,
>
> Mitch
>
