I'm using Ignite v2.7.5 with the C# client.

I'm hitting an error where Ignite throws an out-of-memory exception, like this:

2020-03-03 12:02:58,036 [287] ERR [MutableCacheComputeServer] JVM will be
halted immediately due to the failure: [failureCtx=FailureContext
[type=CRITICAL_ERROR, err=class o.a.i.i.mem.IgniteOutOfMemoryException: Out
of memory in data region [name=TAGFileBufferQueue, initSize=64.0 MiB,
maxSize=64.0 MiB, persistenceEnabled=true] Try the following:
  ^-- Increase maximum off-heap memory size
(DataRegionConfiguration.maxSize)
  ^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
  ^-- Enable eviction or expiration policies]]

I don't have an eviction policy set (is that even a valid recommendation when
persistence is enabled?).

Increasing the off-heap memory size for the data region does prevent this
error, but I want to minimise the in-memory size of this buffer as it is
essentially just a queue.

The suggestion to enable persistence is strange, as this data region already
has persistence enabled.

My assumption is that Ignite manages the memory in this cache by saving
values to disk and loading them back as required.
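
For reference, the region is configured along these lines on the C# side.
This is only a sketch: the region name and sizes match the log above, but the
cache name, key/value types and the activation step are placeholders rather
than my exact code.

using Apache.Ignite.Core;
using Apache.Ignite.Core.Cache.Configuration;
using Apache.Ignite.Core.Configuration;

// Sketch of the configuration in play. The region name and sizes are taken
// from the log; the cache name and value type are placeholders.
var cfg = new IgniteConfiguration
{
    DataStorageConfiguration = new DataStorageConfiguration
    {
        DataRegionConfigurations = new[]
        {
            new DataRegionConfiguration
            {
                Name = "TAGFileBufferQueue",
                InitialSize = 64L * 1024 * 1024,  // 64 MiB
                MaxSize = 64L * 1024 * 1024,      // 64 MiB
                PersistenceEnabled = true
            }
        }
    }
};

using (var ignite = Ignition.Start(cfg))
{
    // With persistence enabled the cluster starts inactive and must be activated.
    ignite.GetCluster().SetActive(true);

    var queueCache = ignite.GetOrCreateCache<long, byte[]>(new CacheConfiguration
    {
        Name = "TAGFileBuffer",                // placeholder cache name
        DataRegionName = "TAGFileBufferQueue"  // bind the cache to the small region
    });
}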

The test workflow that triggers this failure adds ~14,500 objects totalling
~440 MB (average object size ~30 KB) to the cache, which are then drained by
a processor using a continuous query. Elements are removed from the cache as
the processor completes them.
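
In simplified form, the drain side is a continuous query whose listener
processes each new entry and then removes it. The listener class, key/value
types and Process method below are stand-ins for the real processor, not the
actual code:

using System.Collections.Generic;
using Apache.Ignite.Core.Cache;
using Apache.Ignite.Core.Cache.Event;
using Apache.Ignite.Core.Cache.Query.Continuous;

// Simplified stand-in for the processor that drains the queue cache.
public class QueueDrainListener : ICacheEntryEventListener<long, byte[]>
{
    private readonly ICache<long, byte[]> _cache;

    public QueueDrainListener(ICache<long, byte[]> cache)
    {
        _cache = cache;
    }

    public void OnEvent(IEnumerable<ICacheEntryEvent<long, byte[]>> events)
    {
        foreach (var evt in events)
        {
            // Only react to newly added queue entries, not to our own removals.
            if (evt.EventType != CacheEntryEventType.Created)
                continue;

            Process(evt.Value);      // placeholder for the real work
            _cache.Remove(evt.Key);  // the entry is removed once processed
        }
    }

    private static void Process(byte[] payload)
    {
        // Real processing happens here.
    }
}

public static class QueueDrain
{
    // Subscribe the listener to the cache bound to the TAGFileBufferQueue region.
    public static IContinuousQueryHandle Start(ICache<long, byte[]> cache)
    {
        return cache.QueryContinuous(
            new ContinuousQuery<long, byte[]>(new QueueDrainListener(cache)));
    }
}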

Is this kind of out-of-memory error supposed to be possible when using
persistent data regions?

Thanks,
Raymond.
