Hello!

An 800MB entry is far larger than any entry size we ever expected to see.
Even holding such entries on heap briefly will cause problems for you, as
will sending them over the communication layer.

I would recommend splitting entries into chunks. That's basically what IGFS
did; we decided to axe IGFS, but you can still use that approach.
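To illustrate the idea, here is a minimal sketch of chunked storage. A plain
HashMap stands in for an IgniteCache, and the key scheme ("key#i") and 1MB
chunk size are illustrative assumptions, not Ignite APIs:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Sketch: store one large byte[] value as fixed-size chunks under derived
// keys, so no single cache entry (or on-heap copy of it) is ever huge.
// The Map stands in for an IgniteCache<String, byte[]> (assumption).
public class ChunkedStore {
    static final int CHUNK_SIZE = 1 << 20; // 1 MB per chunk (illustrative)

    // Split 'value' into chunks, put each under "<key>#<index>", and store
    // the chunk count under "<key>#count" so reads know when to stop.
    static void putChunked(Map<String, byte[]> cache, String key, byte[] value) {
        int chunks = (value.length + CHUNK_SIZE - 1) / CHUNK_SIZE;
        for (int i = 0; i < chunks; i++) {
            int from = i * CHUNK_SIZE;
            int to = Math.min(from + CHUNK_SIZE, value.length);
            cache.put(key + "#" + i, Arrays.copyOfRange(value, from, to));
        }
        cache.put(key + "#count", ByteBuffer.allocate(4).putInt(chunks).array());
    }

    // Reassemble the value chunk by chunk.
    static byte[] getChunked(Map<String, byte[]> cache, String key) {
        int chunks = ByteBuffer.wrap(cache.get(key + "#count")).getInt();
        int total = 0;
        for (int i = 0; i < chunks; i++) {
            total += cache.get(key + "#" + i).length;
        }
        byte[] out = new byte[total];
        int pos = 0;
        for (int i = 0; i < chunks; i++) {
            byte[] part = cache.get(key + "#" + i);
            System.arraycopy(part, 0, out, pos, part.length);
            pos += part.length;
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, byte[]> cache = new HashMap<>();
        byte[] big = new byte[3 * CHUNK_SIZE + 123]; // ~3 MB test payload
        for (int i = 0; i < big.length; i++) big[i] = (byte) i;
        putChunked(cache, "doc", big);
        System.out.println(Arrays.equals(big, getChunked(cache, "doc"))); // prints true
    }
}
```

With real Ignite you would do the same puts/gets against the cache, ideally
streaming chunks rather than building the whole array in memory at once.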

Having said that, if you can check the heap dump to see where these
lingering byte arrays are referenced from, I can take a look.

Regards,
-- 
Ilya Kasnacheev


Tue, 3 Nov 2020 at 11:48, Kalin Katev <ka...@reasonance.de>:

> Hi,
>
> not too long ago I tested Apache Ignite for my use case on OpenJDK 11.
> The use case consists of writing cache entries with values of up to
> 800MB in size, the data itself being a simple string. After writing 5
> cache entries, 800MB each, I noticed my heap space exploding up to 11GB,
> even though the entries themselves were written off-heap. The histogram
> of the heap dump shows 5 tuples of byte[] arrays of sizes 800MB and
> 1000MB that are left dangling on heap. I am very curious whether I did
> something wrong or whether there is indeed an issue in Ignite. All
> details can be seen at
> https://stackoverflow.com/questions/64550479/possible-memory-leak-in-apache-ignite
>
> Should I create a jira ticket for this issue?
>
> Best regards,
> Kalin Katev
>
> Resonance GmbH
>
