Keeping the RocksDB iterator open wouldn't cause a memory leak in the heap; the
leak would be in native memory. That is why I asked.
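
For reference, a minimal sketch of the pattern under discussion. The store name
"event-store", the String key/value types and the 60 s punctuate interval are
placeholders, not taken from Pierre's code: each event is buffered with put() in
process(), and punctuate() drains the store with all() inside a
try-with-resources so the RocksDB-backed iterator is always closed.

import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;

public class BufferingProcessor implements Processor<String, String> {

    private KeyValueStore<String, String> store;

    @Override
    @SuppressWarnings("unchecked")
    public void init(ProcessorContext context) {
        // "event-store" is a placeholder for whatever name the store is registered under
        store = (KeyValueStore<String, String>) context.getStateStore("event-store");
        context.schedule(60_000L); // request punctuate() roughly every minute
    }

    @Override
    public void process(String key, String value) {
        store.put(key, value); // buffer each incoming event
    }

    @Override
    public void punctuate(long timestamp) {
        // A RocksDB-backed iterator holds native (off-heap) resources, so it must be
        // closed; try-with-resources guarantees that even if processing throws.
        try (KeyValueIterator<String, String> it = store.all()) {
            while (it.hasNext()) {
                KeyValue<String, String> entry = it.next();
                // ... process the entry ...
                store.delete(entry.key);
            }
        }
    }

    @Override
    public void close() {
    }
}

If the iterator were obtained outside a try-with-resources and never closed, the
lost memory would be native RocksDB memory rather than Java heap, which is
exactly the kind of leak VisualVM and jmap cannot see.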

On Sat, 4 Feb 2017 at 16:36 Pierre Coquentin <pierre.coquen...@gmail.com>
wrote:

> The iterator is inside a try-with-resources. And if the memory leak were
> inside our code, we would see it with VisualVM or jmap, which is not the
> case. This is not a memory leak in the heap. That's why my guess points
> directly at RocksDB.
>
> On Sat, Feb 4, 2017 at 5:31 PM, Damian Guy <damian....@gmail.com> wrote:
>
> > Hi Pierre,
> >
> > When you are iterating over the entries, do you close the iterator once
> > you are finished? If you don't, that will cause a memory leak.
> >
> > Thanks,
> > Damian
> >
> > On Sat, 4 Feb 2017 at 16:18 Pierre Coquentin <pierre.coquen...@gmail.com>
> > wrote:
> >
> > > Hi,
> > >
> > > We ran a few tests with Apache Kafka 0.10.1.1.
> > > We use a Topology with only one processor and a KVStore configured as
> > > persistent, backed by RocksDB 4.9.0. Each event received is stored using
> > > the method put(key, value), and in the punctuate method we iterate over
> > > all entries with all(), processing them and deleting them with delete(key).
> > > Then, after a few days, the JVM crashed due to lack of memory. Neither
> > > VisualVM nor jmap shows anything, so my guess was that there is a memory
> > > leak in RocksDB. We configured the KVStore to be in memory and, as you can
> > > see in picture 2, the memory is stable.
> > > I haven't run the same test with a newer version of RocksDB yet.
> > > Any thoughts?
> > > Regards,
> > >
> > > Pierre
> > >
> >
>
