Hello,
Here is the path to GC roots for a single RequestInfo object. However,
it does not tell me much: at least I do not see any sign that my
application code is holding on to these resources.
Do you see anything relevant here?
Class Name | Shallow Heap | Retained Heap
-------------------------------------------------------------------------------------------------------
org.apache.coyote.RequestInfo @ 0xf4a2ce18 | 88 | 105 504
|- [955] java.lang.Object[1234] @ 0xf35b5988 | 4 952 | 4 952
|  '- elementData java.util.ArrayList @ 0xf73df080 | 24 | 4 976
|     '- processors org.apache.coyote.RequestGroupInfo @ 0xf72eaa30 | 56 | 5 032
|        '- global org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler @ 0xf73de248 | 32 | 800
|           |- handler org.apache.tomcat.util.net.NioEndpoint @ 0xf72eab48 | 288 | 60 232
|           |  |- this$0 org.apache.tomcat.util.net.NioEndpoint$Acceptor @ 0xf850f0e0 | 24 | 24
|           |  |  '- <Java Local>, target java.lang.Thread @ 0xf850ef68 http-nio-443-Acceptor-0 Thread | 120 | 400
|           |  |- this$0 org.apache.tomcat.util.net.NioEndpoint$Poller @ 0xf850f320 | 48 | 632
|           |  |  |- <Java Local>, target java.lang.Thread @ 0xf850f0f8 http-nio-443-ClientPoller-0 Thread | 120 | 4 752
|           |  |  |- poller org.apache.tomcat.util.net.NioEndpoint$KeyAttachment @ 0xf6506640 | 112 | 57 400
|           |  |  |  '- attachment sun.nio.ch.SelectionKeyImpl @ 0xf6506788 | 40 | 57 976
|           |  |  |     |- value java.util.HashMap$Node @ 0xf65067b0 | 32 | 32
|           |  |  |     |  '- [101] java.util.HashMap$Node[256] @ 0xf56a40e0 | 1 040 | 1 136
|           |  |  |     |     '- table java.util.HashMap @ 0xf8519708 | 48 | 1 184
|           |  |  |     |        '- fdToKey sun.nio.ch.EPollSelectorImpl @ 0xf850f2b8 | 72 | 1 512
|           |  |  |     |           |- <Java Local> java.lang.Thread @ 0xf850f0f8 http-nio-443-ClientPoller-0 Thread | 120 | 4 752
|           |  |  |     |           |- this$0 java.nio.channels.spi.AbstractSelector$1 @ 0xf8518540 | 16 | 16
|           |  |  |     |           |  '- blocker java.lang.Thread @ 0xf850f0f8 http-nio-443-ClientPoller-0 Thread | 120 | 4 752
|           |  |  |     |           '- Total: 2 entries | |
|           |  |  |     |- key java.util.HashMap$Node @ 0xf65067d0 | 32 | 32
|           |  |  |     |  '- [105] java.util.HashMap$Node[256] @ 0xf56a44f0 | 1 040 | 1 136
|           |  |  |     |     '- table java.util.HashMap @ 0xf8526d88 | 48 | 1 200
|           |  |  |     |        '- map java.util.HashSet @ 0xf85196f8 | 16 | 1 216
|           |  |  |     |           '- c java.util.Collections$UnmodifiableSet @ 0xf850f310 | 16 | 16
|           |  |  |     |              '- <Java Local> java.lang.Thread @ 0xf850f0f8 http-nio-443-ClientPoller-0 Thread | 120 | 4 752
|           |  |  |     '- Total: 2 entries | |
|           |  |  '- Total: 2 entries | |
|           |  '- Total: 2 entries | |
|           |- cHandler org.apache.coyote.http11.Http11NioProtocol @ 0xf72ea530 | 128 | 200
|           '- Total: 2 entries | |
|- resource org.apache.tomcat.util.modeler.BaseModelMBean @ 0xf4a2fba0 | 40 | 40
'- Total: 2 entries | |
-------------------------------------------------------------------------------------------------------
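In case it is useful for watching this between heap dumps: below is a minimal JMX sketch of mine (not something from MAT or from this thread) that counts the RequestProcessor MBeans Tomcat registers, one per RequestInfo, grouped by connector. It assumes the default "Catalina" JMX domain and that it runs inside the Tomcat JVM (for example from a small diagnostic servlet); otherwise it would need to be adapted to a remote JMX connection.

import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Minimal sketch: count the RequestProcessor MBeans Tomcat registers
// (one per RequestInfo, grouped by connector/"worker") so growth can be
// tracked over time without taking another heap dump.
// Assumes the default "Catalina" JMX domain and that this runs inside the
// Tomcat JVM; adapt it to a remote JMX connection otherwise.
public class RequestInfoCount {
    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        // Property-list pattern: matches every RequestProcessor MBean.
        Set<ObjectName> names =
                mbs.queryNames(new ObjectName("Catalina:type=RequestProcessor,*"), null);
        System.out.println("RequestProcessor MBeans registered: " + names.size());
        for (ObjectName name : names) {
            System.out.println("  worker=" + name.getKeyProperty("worker")
                    + ", name=" + name.getKeyProperty("name"));
        }
    }
}

If that count keeps climbing toward maxConnections and never drops, it would match the growth of RequestInfo seen in the histogram below.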
15.03.2018 11:53, Suvendu Sekhar Mondal wrote:
On Wed, Mar 14, 2018 at 2:19 AM, Industrious <industrious.3...@gmail.com> wrote:
Hello, Mark,
Thanks for your attention.
Could you take a look at the class histogram from today's OOME heap dump?
Maybe it could provide some details.
I see a spike in CPU usage at the approximate time the dump was
generated but that might be caused by the garbage collector's futile
attempt to free up memory.
That's correct. Aggressive GC cycles cause that.
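If you want to confirm it, here is a small sketch of mine using the standard GarbageCollectorMXBean counters; sampling them twice inside the affected JVM (or reading the same attributes over JMX) shows how much time goes into collections during such a spike.

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Sketch: sample the GC counters twice to see how much wall-clock time the
// JVM spends collecting. A large GC time inside the window, together with
// the CPU spike, supports the "futile GC before OOME" explanation.
// Run inside the affected JVM, or read the same MXBean attributes over JMX.
public class GcPressure {
    public static void main(String[] args) throws InterruptedException {
        long countBefore = 0, timeBefore = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            countBefore += gc.getCollectionCount();
            timeBefore += gc.getCollectionTime();
        }
        Thread.sleep(60_000); // one-minute sample window
        long countAfter = 0, timeAfter = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            countAfter += gc.getCollectionCount();
            timeAfter += gc.getCollectionTime();
        }
        System.out.println("GC runs in window: " + (countAfter - countBefore));
        System.out.println("GC time in window: " + (timeAfter - timeBefore) + " ms");
    }
}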
Class Name | Objects | Shallow Heap | Retained Heap
---------------------------------------------------------------------------------------------------
org.apache.coyote.RequestInfo | 1 118 | 98 384 | >= 169 690 688
org.apache.coyote.Request | 1 118 | 187 824 | >= 169 564 168
byte[] | 70 337 | 120 141 712 | >= 120 141 712
From your problem description it seems that you have a slow leak which
crashes your Tomcat after a few days. Are those RequestInfo objects all
roughly the same size, or are some of them large while the rest are
small? The same question goes for the byte arrays. From the output
format it looks like you are using MAT. I would suggest checking
"path to GC roots" for those objects first; that will tell you who is
keeping them alive. Also, please check the thread(s) associated with
those byte arrays and RequestInfo objects; that will tell you whether
any application thread is involved or not.
org.apache.tomcat.util.net.SecureNioChannel | 985 | 47 280 | >= 59 649 592
char[] | 128 612 | 55 092 504 | >= 55 092 504
org.apache.tomcat.util.buf.CharChunk | 89 026 | 4 273 248 | >= 41 134 168
java.nio.HeapByteBuffer | 4 256 | 204 288 | >= 33 834 864
org.apache.tomcat.util.net.NioEndpoint$NioBufferHandler | 985 | 23 640 | >= 33 482 120
org.apache.catalina.connector.Request | 1 118 | 187 824 | >= 33 452 000
org.apache.catalina.connector.Response | 1 118 | 71 552 | >= 28 233 912
org.apache.catalina.connector.OutputBuffer | 1 118 | 89 440 | >= 27 898 496
org.apache.catalina.connector.InputBuffer | 1 118 | 80 496 | >= 27 270 448
sun.security.ssl.SSLEngineImpl | 985 | 133 960 | >= 25 596 024
sun.security.ssl.EngineInputRecord | 985 | 63 040 | >= 20 648 288
org.apache.tomcat.util.buf.ByteChunk | 99 093 | 4 756 464 | >= 15 422 384
java.lang.String | 108 196 | 2 596 704 | >= 14 737 456
org.apache.tomcat.util.buf.MessageBytes | 84 554 | 4 058 592 | >= 12 960 440
java.util.HashMap$Node[] | 9 139 | 1 156 352 | >= 12 864 216
java.util.HashMap | 10 997 | 527 856 | >= 12 817 352
java.util.HashMap$Node | 56 583 | 1 810 656 | >= 11 484 248
org.apache.catalina.loader.WebappClassLoader | 2 | 272 | >= 10 199 128
org.apache.coyote.http11.InternalNioOutputBuffer | 1 118 | 89 440 | >= 9 811 568
java.util.concurrent.ConcurrentHashMap | 3 823 | 244 672 | >= 9 646 384
java.util.concurrent.ConcurrentHashMap$Node[] | 1 295 | 260 664 | >= 9 404 616
java.lang.Class | 9 901 | 85 176 | >= 9 233 664
java.util.concurrent.ConcurrentHashMap$Node | 15 554 | 497 728 | >= 9 111 176
org.apache.tomcat.util.http.MimeHeaders | 2 236 | 53 664 | >= 7 119 880
org.apache.tomcat.util.http.MimeHeaderField[] | 2 236 | 141 248 | >= 7 066 208
org.apache.tomcat.util.http.MimeHeaderField | 20 201 | 484 824 | >= 6 924 960
org.apache.catalina.loader.ResourceEntry | 5 133 | 246 384 | >= 5 414 616
java.lang.Object[] | 17 925 | 1 249 176 | >= 5 046 960
java.io.ByteArrayOutputStream | 3 857 | 92 568 | >= 4 603 096
org.apache.catalina.webresources.JarResourceSet | 46 | 3 312 | >= 4 576 680
java.text.SimpleDateFormat | 3 413 | 218 432 | >= 4 236 656
java.text.SimpleDateFormat[] | 1 126 | 36 032 | >= 4 227 008
sun.security.ssl.HandshakeHash | 985 | 39 400 | >= 3 895 560
Total: 36 of 9 892 entries; 9 856 more | 1 227 954 | 219 995 336 |
---------------------------------------------------------------------------------------------------
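To follow the suggestion above about checking which threads are tied to those byte arrays and RequestInfo objects, here is a small sketch of mine (not part of MAT) that lists the live connector threads inside the running JVM, so their names and stacks can be matched against the <Java Local> entries MAT reports. The "http-nio-443" prefix is taken from the thread names visible in the dump; adjust it to your connector, and note that worker threads may carry your Executor's own name prefix instead.

import java.util.Map;

// Sketch: list the live threads belonging to the connector seen in the dump
// (http-nio-443-Acceptor-*, http-nio-443-ClientPoller-*, ...) so their names
// and stacks can be compared with the <Java Local> references shown by MAT.
// Run inside the Tomcat JVM, e.g. from a diagnostic servlet.
public class ConnectorThreads {
    public static void main(String[] args) {
        Map<Thread, StackTraceElement[]> all = Thread.getAllStackTraces();
        for (Map.Entry<Thread, StackTraceElement[]> entry : all.entrySet()) {
            Thread t = entry.getKey();
            if (t.getName().startsWith("http-nio-443")) { // connector name from the dump
                System.out.println(t.getName() + " (state=" + t.getState() + ")");
                for (StackTraceElement frame : entry.getValue()) {
                    System.out.println("    at " + frame);
                }
            }
        }
    }
}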
2018-03-05 16:59 GMT+02:00 Mark Thomas <ma...@apache.org>:
That strikes me as odd. The RequestInfo objects are fairly small. They
are usually part of a much larger group of objects, but they should
appear separately in any analysis.
The number of RequestInfo objects is limited by maxConnections.
Do you think I should try setting maxConnections to a lower limit
instead of the default 10000?
I have configured the executor with maxThreads="20" but that did not help.
Thank you.
Could you tell me what these objects are for?
Take a look at the Javadoc for the version you are using.
Shouldn't they be recycled or disposed of?
No. They are re-used across multiple requests allocated to the same
Processor.
Mark
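On the maxConnections question above: before lowering it from the default 10000, it may be worth checking how many connections the connector actually holds open at peak. A rough JMX sketch of mine follows; the ThreadPool MBean name and the "connectionCount"/"maxConnections" attribute names are assumptions based on the NIO endpoint's getters, so verify them in jconsole against your Tomcat version first.

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Rough sketch: read how many connections the NIO connector currently holds
// against its maxConnections limit. The MBean name and attribute names
// ("connectionCount", "maxConnections") are assumptions based on the
// endpoint's getters -- verify them in jconsole for your Tomcat version.
public class ConnectionHeadroom {
    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName pool = new ObjectName("Catalina:type=ThreadPool,name=\"http-nio-443\"");
        Object current = mbs.getAttribute(pool, "connectionCount");
        Object max = mbs.getAttribute(pool, "maxConnections");
        System.out.println("Connections in use: " + current + " of max " + max);
    }
}

If the connector rarely approaches the limit, lowering maxConnections would mostly cap how many RequestInfo objects can accumulate rather than address why their retained memory grows.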
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org