Increase the MaxPermSize in Java
Hi, where can I see the default value of MaxPermSize in Java? I am using Tomcat 5.5 and Java 1.6, and the operating system is Windows 2008 Server. How can I increase the MaxPermSize in Java on Windows?

rujin
Re: Increase the MaxPermSize in Java
http://tinyurl.com/34hlbxl :-)

regards
Leon

On Sat, Nov 13, 2010 at 11:39 AM, rujin raj rujin...@gmail.com wrote:
> Hi, where can I see the default value of MaxPermSize in Java? I am using Tomcat 5.5 and Java 1.6, and the operating system is Windows 2008 Server. How can I increase the MaxPermSize in Java on Windows? rujin

- To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org For additional commands, e-mail: users-h...@tomcat.apache.org
Re: Tomcat 6.0.29 using more and more RAM until it collapses?
On 12/11/2010 20:35, Brian wrote:
> Ok, I will do that now! I have taken another snapshot of the JVM a few minutes ago. Now I also see that 160MB are being used by org.apache.jasper.runtime.BodyContentImpl. This contains images of my DYNAMIC pages!

That is sort of what I'd expect. A little background: tag bodies have to be buffered. Jasper (Tomcat's JSP engine) uses a pool of buffers. Tag bodies are expected to be small. The buffer grows (but does not shrink) if the body is large. If you have a lot of tags that have large bodies then you can see an increase in memory usage in this area. To control this, see org.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER on http://tomcat.apache.org/tomcat-6.0-doc/config/systemprops.html

HTH,
Mark
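[Editor's note: a minimal sketch of how the system property Mark mentions can be set. The `setenv.sh` location is one common option, not something stated in the thread:]

```shell
# A sketch: make Jasper shrink oversized JSP tag-body buffers back to their
# default size after use. Tomcat 6 reads this system property at startup;
# add it to CATALINA_OPTS, e.g. in CATALINA_BASE/bin/setenv.sh
# (create that file if it does not exist).
CATALINA_OPTS="$CATALINA_OPTS -Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true"
export CATALINA_OPTS
```

The trade-off is extra garbage-collection work (buffers are reallocated instead of reused at full size) in exchange for a bounded memory footprint.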
Re: Tomcat 6.0.29 using more and more RAM until it collapses?
On 12/11/2010 20:44, Christopher Schultz wrote:
> In fact, that's a good test: search your memory snapshot for instances of WebappClassLoader. There's a boolean in each one (active I think)

It is started.

> that tells you if it's active. Force a couple of full GCs, then see how many of them are still around. If you have more than 1+webapp count (I think you get one for free plus one for each webapp deployed) then the old, undeployed webapps are still sitting around in memory.

Nope, there should be one and only one for each deployed webapp.

Mark
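[Editor's note: the check above can be scripted against a jmap class histogram. This is an illustrative sketch, not from the thread; the file path and the sample histogram content are fabricated assumptions:]

```shell
# Sum the #instances column for WebappClassLoader entries in a jmap histogram.
# Produce the histogram (after forcing a couple of full GCs, e.g. via JConsole)
# with:  jmap -histo:live <pid> > /tmp/histo.txt
loader_instances() {
  awk '/org\.apache\.catalina\.loader\.WebappClassLoader/ { sum += $2 }
       END { print sum + 0 }' "$1"
}

# Fabricated histogram fragment, for illustration only:
cat > /tmp/histo.txt <<'EOF'
 num     #instances         #bytes  class name
----------------------------------------------
   1:             3            360  org.apache.catalina.loader.WebappClassLoader
   2:         12000         480000  java.lang.String
EOF
loader_instances /tmp/histo.txt   # prints 3
```

Per Mark's rule of thumb, a count higher than the number of deployed webapps after full GCs suggests undeployed applications are still pinned in memory.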
Re: RequestDumperFilter improvement
On 12/11/2010 20:51, Christopher Schultz wrote:
> All, I would like to propose a new feature for the RequestDumperFilter that is similar to the AccessLogValve's condition setting, except that it is logically opposite: it would be nice to allow a Filter ahead of the RequestDumperFilter in the chain to flag the request as dump-able, instead of having to clear out a flag. Presumably, the act of conditionally dumping a request is rare, so it seems reasonable for the semantics to work opposite to AccessLogValve's condition. So, I can submit a patch that does this, but the question is what to call the configuration attribute. I have several options:
> 1. condition. This may cause confusion as it acts the opposite way to AccessLogValve's condition.

Don't like the name condition.

> 2. ifSet or even if

isSet seems clearer than if.

> 3. Give the user a choice: ifSet /and/ ifUnset (or both!)

This seems the best option.

Mark
Re: 7.0.4 problem
On 13/11/2010 00:30, Anthony J. Biacco wrote:
> CentOS 5.5 Linux x64, MySQL Connector/J 5.1.13, Tomcat 7.0.4 w/ APR, AJP, MySQL Cluster 7.1.3, JDK 1.6.0_21 x64. Anybody aware of any problems with this combination? Using JMeter to load test my servlet, I see MySQL threads held up indefinitely until I get a 'Too many connections' error from MySQL. AJP threads all go back to Waiting according to VisualVM. I have had no such problems with 7.0.2.

I think you have hit this: https://issues.apache.org/bugzilla/show_bug.cgi?id=50159

Technically, 7.0.4 is sticking to the letter of the J2EE spec, but it isn't what applications expect. 7.0.5 will revert to the previous behaviour, but with a configurable option for those folks who want their resource factories to behave differently.

Mark
Re: Increase the MaxPermSize in Java
On 13/11/2010 10:39, rujin raj wrote:
> Hi, where can I see the default value of MaxPermSize in Java? I am using Tomcat 5.5 and Java 1.6, and the operating system is Windows 2008 Server.

The default can depend on additional factors such as memory and number of CPUs. It varies from operating system to operating system, and can vary between point releases of the JDK. Check the docs for the version you are using.

Mark
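[Editor's note: a sketch of one way to raise the limit, assuming Tomcat is started with catalina.bat. The 256m value is only an example; if Tomcat runs as a Windows service, set the option through the service configuration tool (e.g. tomcat5w.exe) instead:]

```bat
rem CATALINA_BASE\bin\setenv.bat -- create this file if it does not exist;
rem catalina.bat reads it at startup. The size below is an example value.
set CATALINA_OPTS=-XX:MaxPermSize=256m
```

On recent Java 6 updates, `java -XX:+PrintFlagsFinal -version` can be used to inspect the JVM's effective default for MaxPermSize on a given machine.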
RE: Tomcat 6.0.29 using more and more RAM until it collapses?
From: Brian [mailto:bbprefix-m...@yahoo.com]
Subject: RE: Tomcat 6.0.29 using more and more RAM until it collapses?

> the Eden Space is barely used (10MB right now). The survivor space is even less used (1MB right now).

An object is normally created in Eden and will migrate to survivor if still reachable during the next minor GC. In most applications, the vast majority of objects become unreachable very quickly, and never make it to a survivor space.

> But the Tenured Gen space has 120MB right now! In fact, when my JVM starts to eat hundreds of MB, most of that goes to the Tenured Gen.

Once the survivor space fills up, long-lived objects migrate to tenured, where they stay until the application discards them. Very large objects may be initially allocated in tenured if they won't fit in Eden. Anything you see in tenured has either been around for quite some time or exceeds the Eden allocation threshold. The dead objects in tenured space won't be cleaned out until a major GC occurs; the JVM tries to minimize the number of those, since they take quite a bit more time than a minor GC. You can force a major GC with the JConsole button.

> The perm gen is using 22MB right now, which is not a lot so I guess it is normal.

A one-time look at the size of the PermGen isn't interesting; you need to see whether it increases over time, especially after restarting a webapp. If it does increase after a restart, that means the old instance of the webapp is still hanging around.

- Chuck

THIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY MATERIAL and is thus for use only by the intended recipient. If you received this in error, please contact the sender and delete the e-mail and its attachments from all computers.
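[Editor's note: the generation-by-generation growth Chuck describes can be watched live with jstat, which ships with the JDK. A diagnostic sketch; replace `<pid>` with the Tomcat JVM's process id:]

```
# Print generation occupancy (% used) every 5 seconds:
#   S0/S1 = survivor spaces, E = Eden, O = old (tenured), P = perm gen
jstat -gcutil <pid> 5000
```

An O column that keeps climbing between full GCs, or a P column that rises after each redeploy, matches the symptoms discussed in this thread.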
Re: Tomcat 6.0.29 using more and more RAM until it collapses?
On 12/11/2010 21:27, Leon Rosenberg wrote:
> P.S. I have a small tool that creates a diff of two subsequent histograms, i can share it if you need it.

Post it to the wiki, perhaps?

p
Re: Tomcat 6.0.29 using more and more RAM until it collapses?
On 13/11/2010 01:13, Brian wrote:
> I had noticed some warnings about that indeed.

What might those have been? Posting the information about what you found will help others understand the process you've been through, whether their problem is similar to yours, and if a similar solution applies.

p
Re: Tomcat 6.0.29 using more and more RAM until it collapses?
On Sat, Nov 13, 2010 at 10:19 PM, Pid p...@pidster.com wrote:
> On 12/11/2010 21:27, Leon Rosenberg wrote:
>> P.S. I have a small tool that creates a diff of two subsequent histograms, i can share it if you need it.
> Post it to the wiki, perhaps?

Which page would fit best?

Leon
RE: Tomcat 6.0.29 using more and more RAM until it collapses?
Hi Leon,

Thanks for your suggestion, and sorry for responding so late. I had to take a break from this issue, at least for a day. I had been working on it for 3 days in a row and barely sleeping at night because of the crashes; as soon as I saw my installation not crashing, I needed a rest. I'm using www.YourKit.com for now. It is not free, and I guess I will not buy it if I totally solve my problem, but while in the trial period I'm using it and it is great! As soon as the trial ends, I will definitely try jmap/jhat as you suggested.

-----Original Message-----
From: Leon Rosenberg [mailto:rosenberg.l...@gmail.com]
Sent: Friday, November 12, 2010 04:27 PM
To: Tomcat Users List
Subject: Re: Tomcat 6.0.29 using more and more RAM until it collapses?

> Hello Brian, maybe I missed half of the communication, but from the other half I got the feeling that you are shooting in the dark. Heap dumps are hard to decipher, especially if the internals seem to be unknown ;-) When hunting a memory leak I set up a cron job that performs the same task once an hour: jmap -heap:live pid > file-with-timestamp.heap and jmap -histo:live pid > file-with-timestamp.histo. The jmap histogram contains all objects in your VM and their cumulated space. By comparing two of them taken 30 or 60 minutes apart, you can determine which objects are actually increasing in number or size. With that knowledge, analyzing heap dumps can be performed much faster and easier. Keep in mind that analyzing memory leaks after the OOME has occurred is twice as hard as shortly before.
>
> regards Leon
>
> P.S. I have a small tool that creates a diff of two subsequent histograms, i can share it if you need it.
> P.P.S. jmap is a standard Java tool. Another standard Java tool, jhat, can theoretically analyze a heap dump based on a baseline heap dump taken previously.
>
> On Fri, Nov 12, 2010 at 9:44 PM, Caldarale, Charles R chuck.caldar...@unisys.com wrote:
>> From: Brian [mailto:bbprefix-m...@yahoo.com]
>> Subject: RE: Tomcat 6.0.29 using more and more RAM until it collapses?
>>> Now I also see that 160MB are being used by org.apache.jasper.runtime.BodyContentImpl.
>> There are a couple of system properties you can set to control this: org.apache.jasper.runtime.JspFactoryImpl.USE_POOL and org.apache.jasper.runtime.JspFactoryImpl.POOL_SIZE. Look here for the doc: http://tomcat.apache.org/tomcat-6.0-doc/config/systemprops.html
>> - Chuck
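[Editor's note: Leon's histogram-diff idea can be sketched in a few lines of awk. This is not his tool, just an illustrative reconstruction assuming the standard `jmap -histo` column layout (rank, #instances, #bytes, class name); the sample snapshots are fabricated:]

```shell
# Print classes whose #bytes count grew between two jmap histogram snapshots.
histo_diff() {
  awk '$1 !~ /^[0-9]+:$/ { next }               # skip header/separator lines
       NR == FNR { old[$4] = $3; next }         # first snapshot: bytes per class
       $3 > old[$4] + 0 { print $4, $3 - old[$4] }' "$1" "$2"
}

# Fabricated snapshots, for illustration only:
cat > /tmp/h1 <<'EOF'
 num     #instances         #bytes  class name
----------------------------------------------
   1:            10           1000  java.lang.String
   2:             5            500  [C
EOF
cat > /tmp/h2 <<'EOF'
   1:            20           4000  java.lang.String
   2:             5            500  [C
EOF
histo_diff /tmp/h1 /tmp/h2   # prints: java.lang.String 3000
```

Classes that appear in every diff, growing monotonically, are the prime leak suspects; the heap dump can then be searched for just those classes.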
Share session between apps
Hi guys, any suggestions on how to share session state between web apps deployed on the same Tomcat instance? A kind of SSO for the container.

Sent from my Windows Phone
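[Editor's note: Tomcat does not share HttpSession attributes across webapps out of the box, but it does ship a single sign-on valve that shares authentication across all webapps of a virtual host, which may cover the "kind of SSO" use case. A minimal sketch; it goes inside the Host element of server.xml:]

```
<!-- server.xml: share the authenticated user across webapps on this Host -->
<Host name="localhost" appBase="webapps">
  <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
</Host>
```

For sharing actual application state rather than authentication, setting crossContext="true" on a Context lets one webapp obtain another's ServletContext via getContext() and exchange data through context attributes explicitly.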
RE: Tomcat 6.0.29 using more and more RAM until it collapses?
Hi Mark,

This is interesting. I got a snapshot of the memory. I found just about 91 instances of this class (org.apache.jasper.runtime.BodyContentImpl). Just 91, so I guess the number of buffers in use at the same time is small; maybe they belong to a pool and the maximum quantity of objects in the pool is 100? I'm just guessing.

Well, one of them was as big as 18MB! The second one was almost the same, and the first 10 of them were HUGE. The rest were more reasonable, between 1 and 50KB. How can a page on my site be as big as 18MB? It definitely can't happen. But for some reason, the buffer was that big at a certain point in time. I inspected their contents, and they were not huge inside: just a few KB of HTML code, and the rest was spaces or some invisible character. I mean, these instances contained a huge array of chars, but only the first hundreds/thousands of them contained HTML code; yet given that the bufferSize variable held a big value (18 million), the object thought it had a huge, full buffer containing real values.

I guess something went wrong, maybe an exception occurred or I undeployed my app at that very moment (or something else went wrong), and these huge buffers got wrongly filled with dummy characters and then stayed in limbo until the GC would delete them? Or, now that I think more about it: they stayed in memory to be reused again, being objects in a pool; their nextChar variable (a pointer) was reset to a small value, but the hugely increased internal array of chars was going to stay as big as the biggest it had ever been... like you said.

I will set the LIMIT_BUFFER value now; I guess that will solve it, as you said. When the buffer gets cleared after every use and LIMIT_BUFFER=true, the buffer will shrink to 512 bytes again, huh? That will be nice.
:-)

-----Original Message-----
From: Mark Thomas [mailto:ma...@apache.org]
Sent: Saturday, November 13, 2010 07:21 AM
To: Tomcat Users List
Subject: Re: Tomcat 6.0.29 using more and more RAM until it collapses?

> On 12/11/2010 20:35, Brian wrote:
>> Ok, I will do that now! I have taken another snapshot of the JVM a few minutes ago. Now I also see that 160MB are being used by org.apache.jasper.runtime.BodyContentImpl. This contains images of my DYNAMIC pages!
>
> That is sort of what I'd expect. A little background: tag bodies have to be buffered. Jasper (Tomcat's JSP engine) uses a pool of buffers. Tag bodies are expected to be small. The buffer grows (but does not shrink) if the body is large. If you have a lot of tags that have large bodies then you can see an increase in memory usage in this area. To control this, see org.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER on http://tomcat.apache.org/tomcat-6.0-doc/config/systemprops.html
>
> HTH,
> Mark
RE: Tomcat 6.0.29 using more and more RAM until it collapses?
Hi Chuck,

Thanks for your explanations. My PermGen seems to be normal, low and steady. It seems that I solved my problems! So far, these are my conclusions:

- Very often, when I restart/redeploy my app, some garbage is left in memory. I don't know why, given that my code closes everything (connections to the database, etc.), unloads the JDBC drivers, etc. Now I'm restarting Tomcat very often, instead of just restarting/redeploying my app.
- The tag bodies made some buffers grow into huge objects. I have told Tomcat to shrink those buffers if they get bigger than the standard size (512 bytes), and now the problem seems to be gone! I wonder why that LIMIT_BUFFER=true directive isn't the default. I bet millions of sites are having the same problem, and they don't even imagine what a memory profiler is or how this can be happening. This problem was swallowing hundreds of MB!
- I configured the Context so it won't use a cache for the static pages. For now, I did this for all the contexts. Maybe I will restore this capability for just my Java app, which doesn't have more than a few static resources, and keep it disabled for the 20 WARs full of static pages. This problem was also swallowing hundreds of MB!

Brian

-----Original Message-----
From: Caldarale, Charles R [mailto:chuck.caldar...@unisys.com]
Sent: Saturday, November 13, 2010 11:03 AM
To: Tomcat Users List
Subject: RE: Tomcat 6.0.29 using more and more RAM until it collapses?

>> the Eden Space is barely used (10MB right now). The survivor space is even less used (1MB right now).
> An object is normally created in Eden and will migrate to survivor if still reachable during the next minor GC. In most applications, the vast majority of objects become unreachable very quickly, and never make it to a survivor space.
>> But the Tenured Gen space has 120MB right now! In fact, when my JVM starts to eat hundreds of MB, most of that goes to the Tenured Gen.
> Once the survivor space fills up, long-lived objects migrate to tenured, where they stay until the application discards them. Very large objects may be initially allocated in tenured if they won't fit in Eden. Anything you see in tenured has either been around for quite some time or exceeds the Eden allocation threshold. The dead objects in tenured space won't be cleaned out until a major GC occurs; the JVM tries to minimize the number of those, since they take quite a bit more time than a minor GC. You can force a major GC with the JConsole button.
>> The perm gen is using 22MB right now, which is not a lot so I guess it is normal.
> A one-time look at the size of the PermGen isn't interesting; you need to see whether it increases over time, especially after restarting a webapp. If it does increase after a restart, that means the old instance of the webapp is still hanging around.
> - Chuck
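[Editor's note: Brian's third point, disabling the static resource cache, is a per-Context setting in Tomcat 6. A sketch; the file location shown is one of several valid options:]

```
<!-- META-INF/context.xml inside the webapp:
     turn off Tomcat's in-memory cache of static resources -->
<Context cachingAllowed="false" />
```

Instead of disabling the cache outright, the cacheMaxSize attribute (in KB) can cap its total footprint; both attributes are documented on the Tomcat 6 Context container reference page.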