Right, I have taken a memory snapshot using YourKit of the system running Tomcat-7.0.10 after about 1 hour, when the Old Gen memory was beginning to reach its maximum.
I am not completely sure what information is useful for you to know, as I have not used YourKit before, so I am working from the demos and support docs that YourKit provides; hopefully they're the right things to be looking at. If there's anything missing or this isn't helpful information, I apologise in advance as I don't mean to waste anyone's time.

So... ordering the "Class List" by retained size, in bytes, gives the top entries as:

  java.util.concurrent.ConcurrentHashMap$Segment    (retained size: 755,981,384)
  java.util.concurrent.ConcurrentHashMap$Segment[]  (retained size: 753,281,704)
  java.util.concurrent.ConcurrentHashMap$HashEntry  (retained size: 735,577,824)
  ...

The top entries when restricted to match "org.apache" are:

  org.apache.jasper.servlet.JspServlet          (retained size: 732,632,608)
  org.apache.jasper.compiler.JspRuntimeContext  (retained size: 732,613,072)
  org.apache.jasper.servlet.JspServletWrapper   (retained size: 714,078,800)
  ...

Looking at the "Biggest objects (dominators)" section shows a top entry of:

  org.apache.jasper.servlet.JspServlet (retained size: 732,615,184)

which is considerably bigger than the next in the list, at 12,235,872 bytes. Expanding this top entry in "Biggest objects (dominators)" shows:

  + org.apache.jasper.servlet.JspServlet (732,615,184)
    + org.apache.jasper.compiler.JspRuntimeContext (732,596,248)
      + java.util.concurrent.ConcurrentHashMap (732,561,080)
        + java.util.concurrent.ConcurrentHashMap$Segment[16] (732,561,032)
          + java.util.concurrent.ConcurrentHashMap$Segment (55,193,864)
          + java.util.concurrent.ConcurrentHashMap$Segment (49,947,008)
          + ... (the rest of the 16 segments, of similar size)

So I looked into the "Selected Objects" of the java.util.concurrent.ConcurrentHashMap$Segment[16]. This contains...
  + java.util.concurrent.ConcurrentHashMap$Segment[16] (732,561,032)
    + [14] java.util.concurrent.ConcurrentHashMap$Segment (55,193,864)
      + table java.util.concurrent.ConcurrentHashMap$HashEntry[512] (55,193,792)
        + [16] java.util.concurrent.ConcurrentHashMap$HashEntry (496,712)
          + value org.apache.jasper.servlet.JspServletWrapper (496,448)
          + key   java.lang.String "/directory-name/file-name.jsp" (32)
          + hash = int -457695728 0xE4B81E10
        + [1] java.util.concurrent.ConcurrentHashMap$HashEntry (96,912)
        + [13] java.util.concurrent.ConcurrentHashMap$HashEntry (27,488)
        + ... (the rest of the 512 hash entries)
      + sync java.util.concurrent.locks.ReentrantLock$NonfairSync (32)
      + count = int 248 0x000000F8
      + loadFactor = float 0.75
      + modCount = int 248 0x000000F8
      + threshold = int 384 0x00000180
    + [9] java.util.concurrent.ConcurrentHashMap$Segment (49,947,008)
    + [13] java.util.concurrent.ConcurrentHashMap$Segment (49,717,568)
    + ... (the rest of the 16 segments)

I don't want this to become any more unreadable than it might already be, so maybe this is a reasonable point to stop at to see if I'm even going down the right route.

From what I can make out, there is a large ConcurrentHashMap (~700MB retained) inside the JspRuntimeContext, holding references to JspServletWrappers keyed by JSP page URI, on which garbage collection has little impact. I have looked further into the JspServletWrapper; would that information be useful to anyone? It did not reveal much to my limited knowledge! Or is this enough to show that it is very much looking like something within our JSPs is causing the issue, rather than a difference between the versions of Tomcat?
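To illustrate the mechanism I think the dominator tree is showing (this is a sketch, not Tomcat's actual source; the class and field names are my own invention): if wrappers are cached in a map keyed by JSP path and nothing ever removes them, every distinct URI pins its wrapper, and everything the wrapper references, for the life of the JVM, so the map's retained size can only grow:

```java
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: a registry mapping JSP paths to wrapper objects
// that never evicts. Each distinct path keeps its wrapper strongly
// reachable forever, so GC cannot reclaim any of it.
public class JspWrapperRegistry {
    static class Wrapper {
        final String jspUri;
        // Stand-in for whatever per-page state the real wrapper retains
        final byte[] simulatedState = new byte[1024];
        Wrapper(String jspUri) { this.jspUri = jspUri; }
    }

    private final ConcurrentHashMap<String, Wrapper> jsps = new ConcurrentHashMap<>();

    Wrapper getWrapper(String jspUri) {
        // Cache on first use; nothing in this class ever calls remove(),
        // so the map grows monotonically with the number of distinct URIs.
        return jsps.computeIfAbsent(jspUri, Wrapper::new);
    }

    int size() { return jsps.size(); }

    public static void main(String[] args) {
        JspWrapperRegistry registry = new JspWrapperRegistry();
        for (int i = 0; i < 10_000; i++) {
            registry.getWrapper("/directory/page-" + i + ".jsp");
        }
        System.out.println(registry.size()); // prints 10000
    }
}
```

If something reachable from each wrapper is unexpectedly large (e.g. request-scope data that a page holds onto), the whole map balloons exactly as in the snapshot above, even though the entry count (248 per segment here) is modest.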
Ian

On 26 July 2011 10:52, Mark Thomas <ma...@apache.org> wrote:
> On 26/07/2011 10:43, Ian Marsh wrote:
>> Unfortunately the conf changes to fork the compilation of JSPs and the
>> increased 'modificationTestInterval' value made no real improvement, so
>> I am continuing to work towards replacing the language-text request
>> scope variables with property file references.
>>
>> However, if this is a problem, it has made me concerned that all
>> request scope variables aren't being released, not just the ones we
>> use for language text!
>
> Again, use a profiler and follow the GC roots. Until you provide some
> evidence that Tomcat is doing something wrong, the assumption is going
> to be that the problem is with the application.
>
> Mark
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org
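P.S. For anyone following along, the Jasper settings referred to above ('fork' and 'modificationTestInterval') are init-params of the JSP servlet in Tomcat's conf/web.xml; the shape is roughly this (the values shown are illustrative, not a recommendation):

```xml
<servlet>
    <servlet-name>jsp</servlet-name>
    <servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class>
    <init-param>
        <param-name>fork</param-name>
        <param-value>true</param-value>
    </init-param>
    <init-param>
        <!-- seconds between checks for JSP source modification -->
        <param-name>modificationTestInterval</param-name>
        <param-value>60</param-value>
    </init-param>
    <load-on-startup>3</load-on-startup>
</servlet>
```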