A colleague and I have been investigating what seems to be a "memory leak" in our Tapestry application for about a month, ever since we upgraded to T4.1.2. I won't bore you with the saga of the past month, but I would like to present the data I've gathered and ask the list for a proposed solution. I was reading a recent thread in which Jesse said (08/09/2007):
"There is a map that grows as large as the system using it internally to javassist of various cached reflection info - but it doesn't leak in any way."

This is precisely what I've found while profiling our application, and it *appears* to be this map that eventually causes our applications to run out of memory. The YourKit profiler shows that, as time goes on, an instance of HiveMindClassPool grows and grows as class instances are created. This class extends javassist.ClassPool and holds the map Jesse is talking about in the quote above. And he's right: I wouldn't say the class pool "leaks" either, because it looks like it is designed to retain that memory until the class pool itself is no longer needed. Take this quote from the javassist.ClassPool javadocs:

"Memory consumption memo: ClassPool objects hold all the CtClasses that have been created so that the consistency among modified classes can be guaranteed. Thus if a large number of CtClasses are processed, the ClassPool will consume a huge amount of memory. To avoid this, a ClassPool object should be recreated, for example, every hundred classes processed. Note that getDefault() is a singleton factory. Otherwise, detach() in CtClass should be used to avoid huge memory consumption."

This huge memory consumption by the ClassPool is exactly what I was seeing. In particular, it is the ClassPool held onto by OgnlRuntime. Inspecting this object in the profiler showed that it has a map containing about 45,000 classes. All of the keys in this map were names like "ASTTest_11494aca9af" and "ASTAnd_11494ace4fb", and the values were instances of javassist.CtNewClass. Each entry in the map retains about 1,900 bytes, for a grand total of about 90 MB of memory. These numbers came from my staging deployment, where I had the profiler attached; using some reflection tricks, I was able to inspect a production site and found that it had about 240,000 items in its class pool,
approximately 450 MB of memory.

So I guess the questions in my mind are: Why are there so many classes in the pool? Why does the number only ever go up? Do those classes really need to stay in the pool forever?
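To make the failure mode concrete, here is a minimal, stdlib-only sketch of what the profiler data suggests is happening. The key names, per-entry size (~1,900 bytes), and entry counts are taken from the numbers above; the HashMap is just a stand-in for the real CtClass cache inside javassist.ClassPool (this code does not use javassist at all), and the `compile` method is a hypothetical simplification of OGNL's expression compilation:

```java
import java.util.HashMap;
import java.util.Map;

public class ClassPoolGrowth {
    // Stand-in for the CtClass cache inside javassist.ClassPool. Each
    // compiled expression node appears to get a freshly generated class
    // name (AST node type plus a unique hex suffix), so no key is ever
    // reused and the map can only grow.
    private static final Map<String, Object> pool = new HashMap<>();
    private static long suffix = 0x11494aca9afL; // suffix style seen in the profiler

    static void compile(String astNodeType) {
        // A brand-new key on every call -- nothing is ever evicted.
        pool.put(astNodeType + "_" + Long.toHexString(suffix++), new Object());
    }

    static int size() {
        return pool.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 45_000; i++) {
            compile(i % 2 == 0 ? "ASTTest" : "ASTAnd");
        }
        // ~1,900 bytes retained per entry, per the profiler figures above
        System.out.println(pool.size() + " entries -> ~"
                + pool.size() * 1900L / 1_000_000 + " MB");
        System.out.println("240000 entries -> ~"
                + 240_000L * 1900 / 1_000_000 + " MB"); // the production count
    }
}
```

Because every put uses a fresh key, nothing is ever looked up again or evicted, which would explain why the count only ever goes up -- and is presumably why the javadocs recommend calling CtClass.detach() or periodically recreating the pool.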