Hi Patrik,

As of 2.3 you will be able to use the RocksDBConfigSetter to effectively
bound the total memory used by RocksDB for a single app instance. You
should already be able to limit the memory used per RocksDB store, though
as you mention there can be a lot of them. I'm not sure you can monitor
the memory usage if you are not limiting it, though.
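For reference, a minimal sketch of what such a bounded-memory config setter could look like: it shares a single LRUCache and WriteBufferManager across all stores of the app instance, so both block cache and memtable memory are counted against one global budget. The size constants and class name here are illustrative, not prescriptive, and the exact 2.3 API surface (e.g. the close() callback) should be checked against the released docs:

```java
import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Cache;
import org.rocksdb.LRUCache;
import org.rocksdb.Options;
import org.rocksdb.WriteBufferManager;

public class BoundedMemoryRocksDBConfig implements RocksDBConfigSetter {

    // Shared across every store in this app instance; sizes are illustrative.
    private static final long TOTAL_OFF_HEAP_BYTES = 256 * 1024 * 1024L;
    private static final long TOTAL_MEMTABLE_BYTES = 64 * 1024 * 1024L;

    private static final Cache CACHE = new LRUCache(TOTAL_OFF_HEAP_BYTES);
    private static final WriteBufferManager WRITE_BUFFER_MANAGER =
            new WriteBufferManager(TOTAL_MEMTABLE_BYTES, CACHE);

    @Override
    public void setConfig(final String storeName,
                          final Options options,
                          final Map<String, Object> configs) {
        final BlockBasedTableConfig tableConfig =
                (BlockBasedTableConfig) options.tableFormatConfig();
        // Point every store at the one shared block cache.
        tableConfig.setBlockCache(CACHE);
        // Charge memtable usage against the same shared budget.
        options.setWriteBufferManager(WRITE_BUFFER_MANAGER);
        options.setTableFormatConfig(tableConfig);
    }

    @Override
    public void close(final String storeName, final Options options) {
        // Intentionally do NOT close the shared cache/write buffer manager here;
        // they outlive any single store.
    }
}
```

It would then be registered via the usual streams config, along the lines of `props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, BoundedMemoryRocksDBConfig.class);`.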

On Fri, Jun 7, 2019 at 2:06 AM Patrik Kleindl <pklei...@gmail.com> wrote:

> Hi
> Thanks Bruno for the KIP, this is a very good idea.
>
> I have one question, are there metrics available for the memory consumption
> of RocksDB?
> As RocksDB instances run outside the JVM heap, we have run into issues
> where they consumed all the remaining memory on the machine.
> And with multiple streams applications on the same machine, each with
> several KTables and 10+ partitions per topic the number of stores can get
> out of hand pretty easily.
> Or did I miss something obvious how those can be monitored better?
>
> best regards
>
> Patrik
>
> On Fri, 17 May 2019 at 23:54, Bruno Cadonna <br...@confluent.io> wrote:
>
> > Hi all,
> >
> > this KIP describes the extension of the Kafka Streams' metrics to include
> > RocksDB's internal statistics.
> >
> > Please have a look at it and let me know what you think. Since I am not a
> > RocksDB expert, I am thankful for any additional pair of eyes that
> > evaluates this KIP.
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-471:+Expose+RocksDB+Metrics+in+Kafka+Streams
> >
> > Best regards,
> > Bruno
> >
>
