So, Amol,
Did you look at the heap dump?
Denis
Mon, Aug 6, 2018 at 18:46, Amol Zambare :
Hi Alex,
Here is the full stack trace
[INFO][tcp-disco-sock-reader-#130][TcpDiscoverySpi] Finished serving remote
node connection
[INFO][tcp-disco-sock-reader-#653][TcpDiscoverySpi] Started serving remote
node connection
[SEVERE][tcp-disco-sock-reader-#130][TcpDiscoverySpi] Runtime error caught
Amol,
Data is pulled onto the heap every time you use it.
So, if your Spark jobs operate over a large amount of data, heap memory
utilization will be high.
Take a heap dump next time you encounter OutOfMemoryError.
You can make Java take a heap dump every time it fails with OOME:
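The flags themselves appear to have been cut off after the colon; on HotSpot
JVMs the standard options are the following (the dump path is just an example):

```shell
# Dump the heap automatically whenever the JVM throws OutOfMemoryError.
# -XX:HeapDumpPath is optional; default is the working directory.
java -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/var/tmp/ignite-dumps \
     -cp ... org.apache.ignite.startup.cmdline.CommandLineStartup config.xml
```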
Offheap and heap memory regions serve different purposes and can't
replace each other. You can't get rid of an OOME on the heap by increasing
offheap memory.
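To illustrate the separation: in Ignite 2.x the offheap data region is sized
through DataStorageConfiguration, while the heap is sized separately via JVM
options. A minimal sketch (the 100 GB figure matches the cluster in this
thread; this is not the poster's actual config, which wasn't included):

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class OffheapConfigSketch {
    public static void main(String[] args) {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();

        // Offheap cap for the default data region: 100 GB.
        storageCfg.getDefaultDataRegionConfiguration()
                  .setMaxSize(100L * 1024 * 1024 * 1024);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storageCfg);

        // Heap is still controlled by JVM options such as -Xmx;
        // raising offheap maxSize does not prevent heap OutOfMemoryError.
        Ignition.start(cfg);
    }
}
```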
Can you provide full exception stack trace?
2018-08-03 20:55 GMT+03:00 Amol Zambare :
Thanks Alex and Denis
We have configured offheap memory to 100GB on a 10-node Ignite cluster.
However, when we run the Spark job, we see the following error in the Ignite
logs. While the Spark job is running, heap utilization on most of the Ignite
nodes increases significantly, though we are
The "Non-heap memory ..." metrics in visor have nothing to do with the offheap
memory allocated for data regions. The "Non-heap memory" returned by visor
refers to JVM-managed memory regions other than the heap, used for internal
JVM purposes (JIT compiler, etc.; see [1]). Memory allocated in offheap by Ignite for
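These JVM-managed non-heap pools can be inspected with nothing but the JDK's
own management API, independent of Ignite, which shows they are unrelated to
data-region offheap:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class NonHeapProbe {
    public static void main(String[] args) {
        // Non-heap here = Metaspace, code cache, etc. -- the same pools
        // visor reports as "Non-heap memory", not Ignite's data regions.
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        MemoryUsage nonHeap = ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage();

        System.out.println("heap used (mb):     " + heap.getUsed() / (1024 * 1024));
        System.out.println("non-heap used (mb): " + nonHeap.getUsed() / (1024 * 1024));
    }
}
```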
Hi Denis.
I am using the node command in visor and referring to the "Non-heap memory
maximum" metric:
| Non-heap memory initialized | 2mb  |
| Non-heap memory used        | 64mb |
| Non-heap memory committed   | 66mb |
| Non-heap memory maximum     | 1gb  |
Thanks,
Amol
On Tue, Jul 31, 2018 at 12:21 PM,
Amol,
The configuration looks correct, at least the piece that you provided. Do
you start the server nodes with this config?
Which visor metric do you use to verify the off-heap size?
Denis
On Fri, Jul 27, 2018, 21:53 Amol Zambare wrote:
Hi,
We are using Ignite to share in-memory data across the Spark jobs.
I am using the configuration below to set Ignite offheap memory. I would like
to set it to 100 GB.
However, when I print the node statistics using visor, it shows the offheap
max memory as 1 GB.
Please suggest.
Apache Ignite