Hm. Just to be absolutely sure, could you try throwing an exception or
something in your RocksDBConfigSetter?
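
Something like this would make it unmissable (a minimal sketch; the class name
is made up, and you'd register it via ROCKSDB_CONFIG_SETTER_CLASS_CONFIG as in
your earlier mail):

import java.util
import org.apache.kafka.streams.state.RocksDBConfigSetter
import org.rocksdb.Options

class FailFastRocksDBConfig extends RocksDBConfigSetter {
  override def setConfig(storeName: String, options: Options,
                         configs: util.Map[String, AnyRef]): Unit = {
    // setConfig is only invoked when a RocksDB store is actually opened,
    // so an exception here proves RocksDB is in use somewhere
    throw new IllegalStateException(s"RocksDB store created: $storeName")
  }
}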

On Wed, Jul 17, 2019 at 10:43 AM Muhammed Ashik <ashi...@gmail.com> wrote:

> I can confirm that /tmp/kafka-streams doesn't contain any RocksDB-related
> data.
>
> On Wed, Jul 17, 2019 at 11:11 PM Sophie Blee-Goldman <sop...@confluent.io>
> wrote:
>
> > You can describe your topology to see if there are any state stores in it
> > that you aren't aware of. Alternatively, you could check the state
> > directory (/tmp/kafka-streams by default) and see if there is any data in
> > there.
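> >
> > For example (a sketch, assuming builder is your StreamsBuilder):
> >
> >   val topology = builder.build()
> >   println(topology.describe())  // lists every processor and state store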
> >
> > On Wed, Jul 17, 2019 at 10:36 AM Muhammed Ashik <ashi...@gmail.com>
> > wrote:
> >
> > > Thanks. How can I verify whether any data is actually going to RocksDB?
> > > I tried printing the statistics, with no success:
> > >
> > > import java.util
> > >
> > > import org.apache.kafka.streams.state.RocksDBConfigSetter
> > > import org.rocksdb.{InfoLogLevel, Options, Statistics, StatsLevel}
> > >
> > > class CustomRocksDBConfig extends RocksDBConfigSetter {
> > >   override def setConfig(storeName: String, options: Options,
> > >                          configs: util.Map[String, AnyRef]): Unit = {
> > >     // Collect all statistics and dump them to the info log every 600s
> > >     val stats = new Statistics
> > >     stats.setStatsLevel(StatsLevel.ALL)
> > >     options.setStatistics(stats)
> > >       .setStatsDumpPeriodSec(600)
> > >     // Redirect the RocksDB info log (with the stats dumps) to /tmp/dump
> > >     options.setInfoLogLevel(InfoLogLevel.INFO_LEVEL)
> > >     options.setDbLogDir("/tmp/dump")
> > >   }
> > > }
> > >
> > >
> > > and included it in the streams config:
> > >
> > > settings.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG,
> > > classOf[CustomRocksDBConfig])
> > >
> > >
> > > Regards
> > > Ashik
> > >
> > > On Wed, Jul 17, 2019 at 10:52 PM Sophie Blee-Goldman <sop...@confluent.io>
> > > wrote:
> > >
> > > > Sorry, didn't see the "off-heap" part of the email. Are you using any
> > > > stateful DSL operators? The default stores are persistent, so you may
> > > > have a RocksDB store in your topology without explicitly using one.
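> > > >
> > > > For example, a count() is backed by a persistent RocksDB store by
> > > > default; you would have to materialize it in memory explicitly. A
> > > > sketch (the store name "counts-store" is made up):
> > > >
> > > >   import org.apache.kafka.streams.kstream.Materialized
> > > >   import org.apache.kafka.streams.state.Stores
> > > >
> > > >   // default: persistent (RocksDB) store
> > > >   stream.groupByKey().count()
> > > >
> > > >   // explicitly in-memory instead
> > > >   stream.groupByKey().count(
> > > >     Materialized.as(Stores.inMemoryKeyValueStore("counts-store")))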
> > > >
> > > > On Wed, Jul 17, 2019 at 10:12 AM Sophie Blee-Goldman <sop...@confluent.io>
> > > > wrote:
> > > >
> > > > > If you are using inMemoryKeyValueStore, the records are by definition
> > > > > stored in memory; RocksDB is not used at all. This store will continue
> > > > > to grow proportionally to your keyspace. If you do not have sufficient
> > > > > memory to hold your entire dataset in memory, consider adding another
> > > > > instance or switching to the RocksDB store.
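> > > > >
> > > > > For example, a persistent store supplier (a sketch; the store name
> > > > > and serdes are made up):
> > > > >
> > > > >   import org.apache.kafka.common.serialization.Serdes
> > > > >   import org.apache.kafka.streams.state.Stores
> > > > >
> > > > >   // RocksDB-backed store: spills to disk rather than holding the
> > > > >   // whole keyspace in memory
> > > > >   val storeBuilder = Stores.keyValueStoreBuilder(
> > > > >     Stores.persistentKeyValueStore("my-store"),
> > > > >     Serdes.String(), Serdes.String())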
> > > > >
> > > > > On Wed, Jul 17, 2019 at 6:22 AM Muhammed Ashik <ashi...@gmail.com>
> > > > > wrote:
> > > > >
> > > > >> Kafka Streams version - 2.0.0
> > > > >>
> > > > >> Hi, in our streaming instance we are observing steady growth in the
> > > > >> off-heap memory (of the 2 GB of allocated memory, 1.3 GB is reserved
> > > > >> for the heap; the remaining ~700 MB is used up over ~6 hrs, and the
> > > > >> process is eventually OOM-killed).
> > > > >>
> > > > >> We are using only the inMemoryKeyValueStore and not doing any
> > > > >> persistence. As suggested, the iterators are closed wherever they
> > > > >> are used (each is used only once).
> > > > >>
> > > > >> Some forums relate such issues to RocksDB, but we are not specifying
> > > > >> RocksDB explicitly in the config. I was not sure whether Kafka
> > > > >> Streams uses it as a default store under the hood.
> > > > >>
> > > > >> Regards
> > > > >> Ashik
> > > > >>
> > > > >
> > > >
> > >
> >
>
