Thanks Alex and Denis

We have configured 100 GB of off-heap memory, and we are running a 10-node
Ignite cluster. However, when we run a Spark job we see the following error
in the Ignite logs. While the Spark job is running, heap utilization on most
of the Ignite nodes increases significantly, even though we are using
off-heap storage. We have set the JVM heap size on each Ignite node to
50 GB. Please suggest.
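For reference, here is roughly how we size the off-heap data region (a sketch in Ignite 2.x Spring XML; the exact bean nesting is from the standard Ignite configuration classes, but the region size shown is just the 100 GB figure mentioned above, and the rest of our config is omitted):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <!-- 100 GB off-heap cap for the default data region -->
                    <property name="maxSize" value="#{100L * 1024 * 1024 * 1024}"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```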

java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.util.Arrays.copyOf(Arrays.java:3332)
        at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)


On Fri, Aug 3, 2018 at 4:16 AM, Alex Plehanov <plehanov.a...@gmail.com>
wrote:

> The "Non-heap memory ..." metrics in visor have nothing to do with the
> off-heap memory allocated for data regions. The "Non-heap memory" reported
> by visor is the JVM-managed memory other than the heap, used for internal
> JVM purposes (JIT compiler, etc.; see [1]). Memory allocated off-heap by
> Ignite for data regions (via "unsafe") is not included in these metrics.
> Some data-region-related metrics in visor were implemented in Ignite 2.4.
>
> [1] https://docs.oracle.com/javase/8/docs/api/java/lang/management/MemoryMXBean.html
>
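To see concretely what visor's "Non-heap memory" corresponds to, you can query the same MemoryMXBean from [1] yourself (a minimal, self-contained sketch; the class name is mine, and note that Unsafe-allocated memory, which Ignite uses for data regions, will not appear in either figure):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class NonHeapDemo {
    public static void main(String[] args) {
        MemoryMXBean bean = ManagementFactory.getMemoryMXBean();

        // Heap: where ordinary Java objects live; this is what GC pressure
        // and "GC overhead limit exceeded" relate to.
        MemoryUsage heap = bean.getHeapMemoryUsage();

        // Non-heap: JVM-internal regions (Metaspace, code cache, etc.).
        // It does NOT cover memory allocated directly via sun.misc.Unsafe,
        // which is how Ignite allocates its data regions.
        MemoryUsage nonHeap = bean.getNonHeapMemoryUsage();

        System.out.println("Heap used (bytes):     " + heap.getUsed());
        System.out.println("Non-heap used (bytes): " + nonHeap.getUsed());
    }
}
```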
