Thanks.  I have done more heap analysis and think I have tracked it closer to 
the source.

--
I started looking at the heap a different way.  The random values I looked at 
before (of the 80,000) may not have been as representative as I thought.

Examining the retained sizes in the heap, I am finding the following:
There are two instances of AbstractProtocol$ConnectionHandler.

One of these AbstractProtocol$ConnectionHandler instances has a 
ConcurrentHashMap called "connections".
This map has 32 elements in its "table".  Most of these are null, but some of 
the non-null ones are huge.
The map as a whole retains 112 MB.

Two of these ConcurrentHashMap$Node elements take up around 50 MB apiece.
Looking at the "val" field of a node, I see an UpgradeProcessorInternal.
Inside it is a field called internalHttpUpgradeHandler (of type 
Http2UpgradeHandler).
The one I am looking at now retains 16 MB of memory.
(Oddly, once I get this far, the retained sizes of its internal objects don't 
really add up.)
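
In case it helps anyone reproduce this from their own dump, Eclipse MAT OQL 
along these lines should surface the same objects.  (I mostly used the 
dominator tree in the UI, so treat these queries as a sketch; the 10 MB 
cutoff is arbitrary.)

    SELECT c, c.@retainedHeapSize
      FROM "org\.apache\.coyote\.AbstractProtocol\$ConnectionHandler" c

    SELECT n.val, n.@retainedHeapSize
      FROM "java\.util\.concurrent\.ConcurrentHashMap\$Node" n
     WHERE n.@retainedHeapSize > 10485760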

Any ideas on how to work around this?  Or is this already fixed in a later 
version of Tomcat?
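
As a possible stopgap (untested on our side), I am wondering whether bounding 
idle HTTP/2 connections in server.xml would at least limit the growth, along 
the lines below.  The attribute values are illustrative, not tuned, and the 
TLS/certificate configuration is omitted:

    <Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
               SSLEnabled="true" scheme="https" secure="true">
        <!-- keepAliveTimeout (ms) closes idle HTTP/2 connections, which in
             principle should let their entries be released from the
             ConnectionHandler "connections" map; maxConcurrentStreams caps
             the per-connection stream state. -->
        <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol"
                         keepAliveTimeout="20000"
                         maxConcurrentStreams="100" />
    </Connector>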

Thanks,

Mark Claassen
Senior Software Engineer

Donnell Systems, Inc.
130 South Main Street
Leighton Plaza Suite 375
South Bend, IN  46601
E-mail: mclaas...@ocie.net
Voice: (574)232-3784
Fax: (574)232-4014

Disclaimer:
The opinions provided herein do not necessarily state or reflect 
those of Donnell Systems, Inc.(DSI). DSI makes no warranty for and 
assumes no legal liability or responsibility for the posting. 




-----Original Message-----
From: Rob Sargent <rsarg...@xmission.com> 
Sent: Thursday, July 8, 2021 6:50 PM
To: users@tomcat.apache.org
Subject: Re: [Possible Spam] Re: HTTP/2 Memory Leak
Importance: Low



On 7/8/21 3:17 PM, Mark A. Claassen wrote:
> Ok.  That didn’t seem to work.  I will investigate further and try to find a 
> way to send that information.
>
> It is not that busy a server, but the memory use increases very quickly.  
> Doing a class_histogram shows MessageBytes growing by the thousands every 30 
> minutes.
>
> (We have a temporary monitor script in place that does a GC and then prints a 
> class_histogram every half hour to help us pinpoint what is happening.)
>
> Thanks,
> Mark
>
Perhaps you've done this already, but

    grep -R 'static HashMap' src

might locate some potential culprits.
