The node is going down because the heap dump triggers a full GC; you can avoid
that with [1], but then you obtain ALL objects, not only live ones.
Additionally, you can try to attach async-profiler [2], or perhaps VisualVM.
[1]
https://stackoverflow.com/questions/23393480/can-heap-dump-be-created-for-analyzing-memory-leak-with
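For reference, a minimal sketch of triggering a heap dump programmatically via
the HotSpot diagnostic MXBean (the same mechanism jmap uses). The boolean
"live" argument is the trade-off mentioned above: true keeps only live objects
but triggers a full GC first, false avoids the GC but dumps garbage too. The
output path is illustrative.

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class HeapDump {
        public static void main(String[] args) throws Exception {
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
            // live=false: dump ALL objects without forcing a full GC first
            bean.dumpHeap("/tmp/ignite-heap.hprof", false);
        }
    }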
Heap dump generation does not seem to be working.
Whenever I try to generate a heap dump, the node goes down, which is a bit
strange. What else could we analyze?
On Tue, Oct 12, 2021 at 7:35 PM Zhenya Stanilovsky wrote:
Hi, the problem is highly likely in your code: CPU usage grows synchronously
with the heap increasing between 00:00 and 12:00.
You need to analyze a heap dump; no additional settings will help here.
> On the same subject, we have made the changes as suggested
>
> nodes are running on 8 CORE and 128 GB M
Oh, sorry about that; 128 is in our configuration file.
On 2021/09/29 15:47:27, Stephen Darlington wrote:
Correct me if I’m wrong, but I think they set the size of the threadpool to 128
in their configuration file.
> On 29 Sep 2021, at 16:33, Zhenya Stanilovsky wrote:
OK, I still can't understand what the source of the 128 value is.
Can you check what Runtime.getRuntime().availableProcessors() returns on your
side?
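For convenience, the requested check as a standalone snippet:

    public class CpuCheck {
        public static void main(String[] args) {
            // Prints the CPU count the JVM sees; Ignite sizes its default
            // thread pools from this value.
            System.out.println(Runtime.getRuntime().availableProcessors());
        }
    }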
Hi Naveen,
my first change was to the JVM parameters; at first the issue seemed to be
resolved, but changing the JVM parameters only delayed the problem. Before
that, heap problems occurred 14-16 hours after the start, but with the JVM
changes it took up to 36 hours.
While keeping the JVM changes I updated
Good to hear from you. I have had the same issue for quite a long time and
am still looking for a fix.
What do you think exactly resolved the heap starvation issue: the GC-related
configuration or the thread pool configuration?
The default thread pool size is the number of the cores of the ser
Yes, I was able to get 128 with the following configuration.
Here is a sample log:
[2021-09-27T00:00:19,359][INFO ][grid-timeout-worker-#198][IgniteKernal]
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=f0025abe, uptime=2 days, 21:40:21.138]
^-- Cluster [h
systemThreadPoolSize and the other pools are sized by default from
Runtime.getRuntime().availableProcessors(); if you somehow obtain 128, please
file a ticket with all the environment info.
Thanks!
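If it helps track down where the 128 comes from, here is a hedged sketch of an
explicit override (the setters are standard IgniteConfiguration properties;
applying 128 to these three pools is an assumption for illustration):

    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class StartNode {
        public static void main(String[] args) {
            // Without these setters the defaults come from
            // Runtime.getRuntime().availableProcessors().
            IgniteConfiguration cfg = new IgniteConfiguration()
                .setSystemThreadPoolSize(128)
                .setPublicThreadPoolSize(128)
                .setQueryThreadPoolSize(128);
            Ignition.start(cfg);
        }
    }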
After many configuration changes and optimizations, I think I've solved the
heap problem.
Here are the changes that I applied to the system:
JVM changes ->
https://medium.com/@hoan.nguyen.it/how-did-g1gc-tuning-flags-affect-our-back-end-web-app-c121d38dfe56
helped a lot
nodes are running on 12
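For context, the kind of G1 tuning flags the linked article discusses usually
look like the following (illustrative values only, not necessarily the
settings that were applied here):

    JVM_OPTS="$JVM_OPTS -XX:+UseG1GC -XX:MaxGCPauseMillis=200 \
      -XX:InitiatingHeapOccupancyPercent=40 -XX:+ParallelRefProcEnabled \
      -XX:G1NewSizePercent=20 -XX:G1MaxNewSizePercent=40"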
Actually, the Query interface doesn't define a close() method, but QueryCursor
does.
In your snippets you're using the try-with-resources construction for SELECT
queries, which is good, but when you run a MERGE INTO query you would also
get a QueryCursor as a result of
igniteCacheService.getCache(ID, Ignite
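To make the point concrete, a minimal sketch under assumed names (the
"personCache" cache, the Person table, and the surrounding method are
hypothetical):

    import java.util.List;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.cache.query.FieldsQueryCursor;
    import org.apache.ignite.cache.query.SqlFieldsQuery;

    public class MergeExample {
        static void mergePerson(Ignite ignite, long id, String name) {
            IgniteCache<Long, Object> cache = ignite.cache("personCache");
            // A MERGE INTO also returns a cursor; close it (here via
            // try-with-resources) just like a SELECT cursor.
            try (FieldsQueryCursor<List<?>> cur = cache.query(
                    new SqlFieldsQuery("MERGE INTO Person (id, name) VALUES (?, ?)")
                        .setArgs(id, name))) {
                cur.getAll(); // a DML statement yields a single update-count row
            }
        }
    }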
I can’t say whether it’s the problem but I can say that’s really not going to
help. The default for most of these thread pools is going to be 12 (the number
of cores). Each thread is going to have its own working set, stack, etc. If all
the threads are thrashing, it’s not going to be able to gar
Out of curiosity, how do these pool sizes affect heap usage? We are using
physical machines with a total of 12 cores. We did have these sizes before the
upgrade from 2.7.6 and had no heap issues at all. We do have the same data size an
I’m not sure if it’s possible to diagnose this without knowing what’s on the
heap.
I don’t think there are any known issues around heap usage. Certainly a MERGE
is going to use a lot of heap space, but it should be recovered once the
statement is closed. Similarly, your select could use a lot o
Hi igniters,
the situation is getting frustrating: really huge GCs keep occurring. Here is
the last GC report:
https://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMjEvMDkvMjAvLS1nYy5sb2cuMC5jdXJyZW50IDIuemlwLS02LTMxLTg=&channel=WEB
This is a production environment; we cannot get a heap dump since the procedure
Just to add to what Ibrahim mentioned, I also have a similar issue, but I am
using 2.8.1 and we do have a good number of INSERT/MERGE statements getting
executed.
We do get warnings for some of the MERGE statements, like "*The search row
by explicit key isn't supported. The primary key is always used t
Hi Ilya,
since this is a production environment I could not risk taking a heap dump for
now, but I will try to convince my superiors to get one and analyze it.
Queries are heavily used in our system, but aren't they AutoCloseable objects?
Do we have to close them anyway?
Here are some usage example
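Since the actual usage examples were cut off above, here is the general
pattern in question as a hedged sketch (names are hypothetical): the cursor is
AutoCloseable, but that only releases its resources if it is actually closed,
e.g. via try-with-resources.

    import java.util.List;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.cache.query.FieldsQueryCursor;
    import org.apache.ignite.cache.query.SqlFieldsQuery;

    public class SelectExample {
        static void printNames(Ignite ignite) {
            IgniteCache<Long, Object> cache = ignite.cache("personCache");
            // try-with-resources guarantees the cursor (and the heap it
            // holds) is released even if iteration throws.
            try (FieldsQueryCursor<List<?>> cur = cache.query(
                    new SqlFieldsQuery("SELECT name FROM Person"))) {
                for (List<?> row : cur)
                    System.out.println(row.get(0));
            }
        }
    }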
Hi, Ibrahim!
Have you analyzed the heap dump of the server node JVMs?
In case your application executes queries, are their cursors closed?
Fri, 10 Sep 2021 at 11:54, Ibrahim Altun:
Igniters, any comment on this issue? We are facing huge GC problems in the
production environment; please advise.
On 2021/09/07 14:11:09, Ibrahim Altun wrote:
Hi,
totally 400 - 600K reads/writes/updates
12core
64GB RAM
no iowait
10 nodes
On 2021/09/07 12:51:28, Piotr Jagielski wrote:
Hi,
Can you provide some information on how you use the cluster? How many
reads/writes/updates per second? Also CPU / RAM spec of cluster nodes?
We observed full GC / CPU load / the OOM killer when loading a big amount of
data (15 mln records, data streamer + allowOverwrite=true). We've seen 200-400k
After upgrading from version 2.7.1 to version 2.10.0, Ignite nodes face huge
full GC operations 24-36 hours after node start.
We tried to increase the heap size, but no luck. Here is the start
configuration for the nodes:
JVM_OPTS="$JVM_OPTS -Xms12g -Xmx12g -server
-javaagent:/etc/prometheus/jmx_prom