Hi,
We are using aggregation by key on a kstream to create a ktable.
As I read from
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Streams%3A+Internal+Data+Management
it creates an internal changelog topic.

However, as the streaming application runs over time, the message size
increases and it starts throwing a max.message.bytes exception.

Is there a way to control the retention.ms time for internal changelog
topics, so that messages are purged before they exceed this size?

If not, is there a way to control or avoid such an error?
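For context, here is a minimal sketch of what I have tried so far, based on my understanding that Kafka Streams forwards any config key prefixed with "topic." to the internal topics it creates (the key names and the actual purging behavior are what I am unsure about, since changelog topics may be compacted rather than time-retained):

```java
import java.util.Properties;

public class StreamsTopicConfig {

    // Build a Streams config that passes topic-level settings
    // through to the internal changelog/repartition topics.
    public static Properties build() {
        Properties props = new Properties();
        props.put("application.id", "my-aggregation-app"); // hypothetical app id
        props.put("bootstrap.servers", "localhost:9092");

        // Keys prefixed with "topic." should be applied to internal
        // topics when Kafka Streams creates them (assumption to verify).
        props.put("topic.retention.ms", "3600000");      // purge after 1 hour?
        props.put("topic.max.message.bytes", "2097152"); // raise the cap to 2 MB?
        return props;
    }

    public static void main(String[] args) {
        Properties props = build();
        System.out.println(props.getProperty("topic.retention.ms"));
        System.out.println(props.getProperty("topic.max.message.bytes"));
    }
}
```

Would this be the right way to pass retention.ms down to the changelog topic, or does compaction on changelog topics make retention.ms a no-op here?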

Thanks
Sachin
