Hi,

I have a Kafka Streams app that currently takes three topics and
aggregates them into a KTable. The app resides inside a microservice
that has been allocated 512 MB of memory to work with. Since
implementing this, I've noticed that the Docker container running the
microservice eventually runs out of memory, and I've been trying to
debug the cause.
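
For context, the topology is roughly shaped like the sketch below. The
topic names, key/value types and the aggregation logic are placeholders
rather than my actual code:

    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Materialized;

    // Rough sketch of the topology (names, types and the aggregation
    // itself are placeholders).
    StreamsBuilder builder = new StreamsBuilder();

    KStream<String, String> merged = builder.<String, String>stream("topic-a")
            .merge(builder.stream("topic-b"))
            .merge(builder.stream("topic-c"));

    // The aggregate is materialized as a KTable, which is backed by a
    // RocksDB state store by default.
    KTable<String, String> table = merged
            .groupByKey()
            .aggregate(
                    () -> "",                          // initializer
                    (key, value, agg) -> agg + value,  // placeholder aggregation
                    Materialized.as("aggregated-store"));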

My current theory (after reading the sizing guide at
https://docs.confluent.io/current/streams/sizing.html) is that, over
time, the growing number of records stored in the KTable, and by
extension in the underlying RocksDB store, is causing the OOM in the
microservice. Does Kafka Streams provide any way to find out how much
memory the default RocksDB implementation is using?
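
As a possible workaround, I was also considering bounding RocksDB's
memory via the rocksdb.config.setter config rather than measuring it.
A rough sketch of what I had in mind is below; the cache and buffer
sizes are just placeholders, and I'm not sure this is the right
approach:

    import java.util.Map;
    import org.apache.kafka.streams.state.RocksDBConfigSetter;
    import org.rocksdb.BlockBasedTableConfig;
    import org.rocksdb.Options;

    // Rough sketch: bound the per-store RocksDB memory instead of
    // measuring it. The sizes below are placeholders, not recommendations.
    public class BoundedMemoryRocksDBConfig implements RocksDBConfigSetter {

        @Override
        public void setConfig(final String storeName,
                              final Options options,
                              final Map<String, Object> configs) {
            final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
            tableConfig.setBlockCacheSize(16 * 1024 * 1024L);  // block cache per store
            options.setTableFormatConfig(tableConfig);

            options.setWriteBufferSize(8 * 1024 * 1024L);      // memtable size
            options.setMaxWriteBufferNumber(2);                // memtables per store
        }
    }

    // Registered via:
    // props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG,
    //           BoundedMemoryRocksDBConfig.class);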
