Hi Stan,

Thanks for your analysis.
We have increased the on-heap cache size to 500,000 entries and added an expiry
policy of 30 minutes.
The expiry policy is expiring entries as expected, so the cache never reaches
its maximum size.
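
For reference, the cache is configured roughly like the sketch below. The cache name and value types are placeholders, and the exact wiring on our side may differ slightly; the Ignite and JCache API calls themselves are standard:

```java
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory;
import org.apache.ignite.configuration.CacheConfiguration;

// "myCache" and the key/value types are illustrative placeholders.
CacheConfiguration<Long, byte[]> cfg = new CacheConfiguration<>("myCache");

// Keep a copy of hot entries on the Java heap (in addition to off-heap storage).
cfg.setOnheapCacheEnabled(true);

// Cap the on-heap cache at 500,000 entries via LRU eviction.
cfg.setEvictionPolicyFactory(new LruEvictionPolicyFactory<>(500_000));

// Expire entries 30 minutes after creation.
cfg.setExpiryPolicyFactory(
        CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 30)));
```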

But now we see high heap usage, and because of that GCs are happening
frequently. A full GC has happened only once in two days; since then only
minor GCs run, but they run frequently.
Heap usage stays above 59% on all nodes and climbs to 94% within 40 to 60
minutes; once a GC runs, it drops back to around 60%.


Here is an excerpt from the GC logs:
Desired survivor size 10485760 bytes, new threshold 1 (max 15)
 [PSYoungGen: 2086889K->10213K(2086912K)] 5665084K->3680259K(6281216K),
0.0704050 secs] [Times: user=0.54 sys=0.00, real=0.07 secs]
2018-06-22T09:53:38.873-0400: 374604.772: Total time for which application
threads were stopped: 0.0794010 seconds
2018-06-22T09:55:00.332-0400: 374686.231: Total time for which application
threads were stopped: 0.0084890 seconds
2018-06-22T09:55:00.340-0400: 374686.239: Total time for which application
threads were stopped: 0.0075450 seconds
2018-06-22T09:55:00.348-0400: 374686.247: Total time for which application
threads were stopped: 0.0078560 seconds
2018-06-22T09:55:26.847-0400: 374712.746: Total time for which application
threads were stopped: 0.0090060 seconds
2018-06-22T10:00:26.857-0400: 375012.756: Total time for which application
threads were stopped: 0.0105490 seconds
2018-06-22T10:02:48.740-0400: 375154.639: Total time for which application
threads were stopped: 0.0093160 seconds
2018-06-22T10:02:48.748-0400: 375154.647: Total time for which application
threads were stopped: 0.0077770 seconds
2018-06-22T10:02:48.757-0400: 375154.656: Total time for which application
threads were stopped: 0.0092110 seconds
2018-06-22T10:05:26.867-0400: 375312.766: Total time for which application
threads were stopped: 0.0098100 seconds
2018-06-22T10:05:52.775-0400: 375338.674: Total time for which application
threads were stopped: 0.0083580 seconds
2018-06-22T10:05:52.783-0400: 375338.682: Total time for which application
threads were stopped: 0.0074860 seconds
2018-06-22T10:05:52.790-0400: 375338.689: Total time for which application
threads were stopped: 0.0073980 seconds
2018-06-22T10:06:48.756-0400: 375394.655: Total time for which application
threads were stopped: 0.0086660 seconds
2018-06-22T10:06:48.764-0400: 375394.662: Total time for which application
threads were stopped: 0.0076080 seconds
2018-06-22T10:06:48.771-0400: 375394.670: Total time for which application
threads were stopped: 0.0076890 seconds
2018-06-22T10:07:05.603-0400: 375411.501: Total time for which application
threads were stopped: 0.0077390 seconds
2018-06-22T10:07:05.610-0400: 375411.509: Total time for which application
threads were stopped: 0.0074570 seconds
2018-06-22T10:07:05.617-0400: 375411.516: Total time for which application
threads were stopped: 0.0073410 seconds
2018-06-22T10:07:05.626-0400: 375411.525: Total time for which application
threads were stopped: 0.0072380 seconds
2018-06-22T10:07:05.633-0400: 375411.532: Total time for which application
threads were stopped: 0.0073070 seconds
2018-06-22T10:10:26.876-0400: 375612.775: Total time for which application
threads were stopped: 0.0091690 seconds
2018-06-22T10:15:26.887-0400: 375912.786: Total time for which application
threads were stopped: 0.0111650 seconds
2018-06-22T10:20:26.897-0400: 376212.796: Total time for which application
threads were stopped: 0.0099680 seconds
2018-06-22T10:22:30.917-0400: 376336.816: Total time for which application
threads were stopped: 0.0085330 seconds
2018-06-22T10:25:26.907-0400: 376512.806: Total time for which application
threads were stopped: 0.0094760 seconds
2018-06-22T10:26:04.247-0400: 376550.145: Total time for which application
threads were stopped: 0.0077120 seconds
2018-06-22T10:26:04.254-0400: 376550.153: Total time for which application
threads were stopped: 0.0075380 seconds
2018-06-22T10:26:04.262-0400: 376550.161: Total time for which application
threads were stopped: 0.0073460 seconds
2018-06-22T10:30:26.918-0400: 376812.817: Total time for which application
threads were stopped: 0.0107140 seconds
2018-06-22T10:35:26.929-0400: 377112.827: Total time for which application
threads were stopped: 0.0102250 seconds
2018-06-22T10:40:26.939-0400: 377412.838: Total time for which application
threads were stopped: 0.0096620 seconds
2018-06-22T10:41:06.178-0400: 377452.077: Total time for which application
threads were stopped: 0.0085630 seconds
2018-06-22T10:41:06.186-0400: 377452.085: Total time for which application
threads were stopped: 0.0079250 seconds
2018-06-22T10:41:06.194-0400: 377452.092: Total time for which application
threads were stopped: 0.0074940 seconds
2018-06-22T10:42:57.088-0400: 377562.987: Total time for which application
threads were stopped: 0.0090560 seconds
2018-06-22T10:42:57.096-0400: 377562.995: Total time for which application
threads were stopped: 0.0073790 seconds
2018-06-22T10:42:57.104-0400: 377563.002: Total time for which application
threads were stopped: 0.0076010 seconds
2018-06-22T10:45:26.949-0400: 377712.848: Total time for which application
threads were stopped: 0.0103300 seconds
2018-06-22T10:46:05.406-0400: 377751.305: [GC
Desired survivor size 10485760 bytes, new threshold 1 (max 15)
 [PSYoungGen: 2086885K->10230K(2086912K)] 5756931K->3741858K(6281216K),
0.0516130 secs] [Times: user=0.40 sys=0.00, real=0.06 secs]
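
As a quick sanity check on those numbers (my own arithmetic, not output from any GC tool): the two young-GC entries above report the total heap dropping to about 3.6-3.7 GB out of roughly 6 GB after collection, i.e. close to 59% of the heap is retained in the old generation even immediately after a GC, which matches the ~60% floor described above:

```java
// Post-GC heap occupancy from the two log entries above (values in KiB,
// taken from the "before->after(total)" figures in each record).
public class HeapUsage {
    static double pctAfter(long afterKb, long totalKb) {
        return 100.0 * afterKb / totalKb;
    }

    public static void main(String[] args) {
        // First entry:  5665084K->3680259K(6281216K)
        System.out.printf("post-GC occupancy #1: %.1f%%%n",
                pctAfter(3680259L, 6281216L)); // ~58.6%
        // Second entry: 5756931K->3741858K(6281216K)
        System.out.printf("post-GC occupancy #2: %.1f%%%n",
                pctAfter(3741858L, 6281216L)); // ~59.6%
    }
}
```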


After observing this level of heap usage, I am hesitant to put data for any
other caches on these servers.
Can you suggest how to reduce the heap usage, or recommend an ideal way to
configure the cache?


Thanks,
Praveen


