Re: Tomcat dies suddenly (was JVM goes away)
Yes, Slackware, version 13, 64 bit. I had done this manually (looked through each log for any evidence of a failure) but had not done it your automated way. I just did it your automated way and it found nothing (I included all the messages logs)... bummer.

The server I brought up Tuesday is using the same Slackware, Tomcat and JDK. That server is a Dell T105 (it was destined to be used in a smaller setting), which has an AMD processor instead of the Xeon. It is a little slower (the users don't notice) and has yet to have any problem. Of course, the T110 ran for a week before it had a problem.

Thanks,

Carl

- Original Message - From: "Pid" To: Sent: Thursday, January 14, 2010 9:55 AM Subject: Re: Tomcat dies suddenly (was JVM goes away)

On 14/01/2010 14:36, Carl wrote:
> David, I am such a doofus... didn't even notice it cycled after it finished a test. After almost 24 hours, showing no failures. Time to call Dell.

If there's no memory hardware issue, then we're back to software. You were on linux, right? Did you search the OS logs for evidence of an OOM kill?

cat /var/log/messages | grep --ignore-case "killed process"

p

[...]

- To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org For additional commands, e-mail: users-h...@tomcat.apache.org
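[Editor's aside: Pid's one-liner can be broadened a little, since the kernel logs OOM kills with slightly different phrasings depending on version. A minimal sketch, run here against hypothetical sample log lines rather than a real /var/log/messages:]

```shell
# Sample kernel-log lines standing in for /var/log/messages content
# (illustrative only -- real OOM entries vary by kernel version):
sample_log='Jan 14 03:12:01 host kernel: java invoked oom-killer: gfp_mask=0x201da, order=0
Jan 14 03:12:01 host kernel: Out of memory: Killed process 4321 (java)'

# Broadened version of the suggested search: match either the
# "oom-killer" invocation line or the "Killed process" verdict line,
# case-insensitively, and count the hits.
printf '%s\n' "$sample_log" | grep -icE 'oom-killer|killed process'   # prints 2
```

On a live box the same pattern can be pointed at the rotated logs too, e.g. `grep -icE 'oom-killer|killed process' /var/log/messages*`; `dmesg` additionally holds the kernel ring buffer since boot, which survives syslog rotation.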
Re: Tomcat dies suddenly (was JVM goes away)
On 14/01/2010 14:36, Carl wrote:
> David, I am such a doofus... didn't even notice it cycled after it finished a test. After almost 24 hours, showing no failures. Time to call Dell.

If there's no memory hardware issue, then we're back to software. You were on linux, right? Did you search the OS logs for evidence of an OOM kill?

cat /var/log/messages | grep --ignore-case "killed process"

p

[...]
Re: Tomcat dies suddenly (was JVM goes away)
David,

I am such a doofus... didn't even notice it cycled after it finished a test. After almost 24 hours, it is showing no failures. Time to call Dell.

Thanks,

Carl

- Original Message - From: "David kerber" To: "Tomcat Users List" Sent: Thursday, January 14, 2010 8:48 AM Subject: Re: Tomcat dies suddenly (was JVM goes away)

Memtest86, which I believe is the same one Peter suggested (or at least a variation of it). It just loops continuously until stopped.

[...]
Re: Tomcat dies suddenly (was JVM goes away)
2010/1/14 David kerber :
> Memtest86, which I believe is the same one Peter suggested (or at least a variation of it). It just loops continuously until stopped.

I suggested memtest86+ (http://www.memtest.org/). Memtest86 (http://www.memtest86.com/) is also available; I moved to the + version when Chris Brady stopped development of the original for a period. The core tests are very similar, doing things like looking for stuck bits (always 1 or always 0) or bits whose state can be influenced by their neighbours'.

- Peter
Re: Tomcat dies suddenly (was JVM goes away)
Memtest86, which I believe is the same one Peter suggested (or at least a variation of it). It just loops continuously until stopped.

Carl wrote:
> David, What do you use for your mem testing? I am using the memTest suggested by Peter... after six tests, it still shows all memory is OK. Probably call Dell this morning. TIA, Carl

[...]
Re: Tomcat dies suddenly (was JVM goes away)
David,

What do you use for your mem testing? I am using the memTest suggested by Peter... after six tests, it still shows all memory is OK. I will probably call Dell this morning.

TIA,

Carl

- Original Message - From: "David Kerber" To: "Tomcat Users List" Sent: Wednesday, January 13, 2010 6:26 PM Subject: Re: Tomcat dies suddenly (was JVM goes away)

[...]
Re: Tomcat dies suddenly (was JVM goes away)
Peter Crowther wrote:
> 2010/1/13 David kerber :
>> Make sure you let it run for quite a while. I've had memory failures show up as late as 11 passes into a test run.
>
> That's dedication - I usually end up stopping it after a couple of runs. Thanks David, I've learned something!
>
> - Peter

I just start it and let it go for a day or four, until I get around to checking it again. I try to get at least 24 hours of memtest testing on new machines, and 48 hours on used/older ones.

D
Re: Tomcat dies suddenly (was JVM goes away)
2010/1/13 David kerber :
> Make sure you let it run for quite a while. I've had memory failures show up as late as 11 passes into a test run.

That's dedication - I usually end up stopping it after a couple of runs. Thanks David, I've learned something!

- Peter
Re: Tomcat dies suddenly (was JVM goes away)
David,

Will do... thanks for the heads-up.

Carl

- Original Message - From: "David kerber" To: "Tomcat Users List" Sent: Wednesday, January 13, 2010 1:17 PM Subject: Re: Tomcat dies suddenly (was JVM goes away)

Carl wrote:
> Peter, The memTest is still running but clean so far.

Make sure you let it run for quite a while. I've had memory failures show up as late as 11 passes into a test run.

D
Re: Tomcat dies suddenly (was JVM goes away)
Carl wrote:
> Peter, The memTest is still running but clean so far.

Make sure you let it run for quite a while. I've had memory failures show up as late as 11 passes into a test run.

D
Re: Tomcat dies suddenly (was JVM goes away)
Peter,

The memTest is still running but clean so far.

Thanks,

Carl

- Original Message - From: "Peter Crowther" To: "Tomcat Users List" Sent: Wednesday, January 13, 2010 12:00 PM Subject: Re: Tomcat dies suddenly (was JVM goes away)

[...]
Re: Tomcat dies suddenly (was JVM goes away)
Chris,

> Carl: when the JVM "dies" and you use top to see free memory, does it say that 2.4GB of memory is in use by a particular process,

It shows the 2.4GB as 'used' but does not show it attached to any process (remember that the Tomcat process has disappeared... ps aux | grep tomcat yields nothing.) My observation is that the server has 500MB 'used' when it starts and moves to 2.4GB after Tomcat is started. However, the server does not appear to reclaim the memory after the process dies, as the 'used' stays right at 2.4GB. VisualVM continues to report that the now-dead Tomcat instance is still holding onto the memory, but I am not certain whether this reflects some variable(s) set in VisualVM or memory something is actually still holding onto.

> or does it just appear that the memory is not "available"?

The 2.4GB is just shown as 'used' by top.

> If it's by a particular process, which one?

No process, but I expected that, as the Tomcat process (ps aux | grep tomcat) no longer exists (after the 'crash'.)

> The JVM process ("/usr/bin/java" or whatever) either does or does not exist, and if it does not exist, is it retaining memory?

I don't know how I could tell whether the Tomcat java process/JVM was holding onto the memory if the process no longer exists.

> If the Tomcat connectors have shut down (thereby releasing the TCP/IP ports), but not the java process, then there should be some indication in catalina.out that the connectors have been shut down explicitly.

No indication at all... it just comes to a stop. (I had a problem a while ago with not properly releasing database connections, and I still have a good deal of stuff going to catalina.out because I have been too busy to comment out the debugging messages.)

> The whole thing sounds weird. :(

That has been a good deal of my frustration... I thought it would leave some tracks somewhere. All thoughts and ideas are appreciated.

Thanks,

Carl

- Original Message - From: "Christopher Schultz" To: "Tomcat Users List" Sent: Wednesday, January 13, 2010 11:50 AM Subject: Re: Tomcat dies suddenly (was JVM goes away)

[...]
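[Editor's aside: one detail that may explain the stuck 'used' figure above. On Linux, top and free fold kernel buffers and page cache into 'used', so a high number after a process dies does not by itself mean the memory is leaked. A sketch of how to separate the two and see which processes actually hold resident memory (standard procps commands; exact output layout varies by version):]

```shell
# "used" in top/free includes kernel buffers and page cache; the
# "-/+ buffers/cache" line (or the "available" column on newer
# procps) shows what is genuinely claimed by processes.
free -m

# Rank processes by resident set size (RSS); if no java process
# appears near the top, nothing is actually holding the 2.4GB.
ps aux --sort=-rss | head -n 6
```

If the java process is truly gone from the ps listing and 'used' stays high, the remainder is almost certainly cache the kernel will reclaim on demand, not retained JVM memory.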
Re: Tomcat dies suddenly (was JVM goes away)
2010/1/13 Christopher Schultz :
> On 1/13/2010 8:49 AM, Peter Crowther wrote:
>> Very difficult to know what the problem is. One thing you can now do (as you've switched to another production server) is to run a memory test across the "bad" server.
>
> Usually, I would agree that physical memory problems are likely to be a problem, but every time I've had a physical memory problem (much more common than I'd like to admit!), the JVM has crashed in a more classic way: that is, with an hs_log file and almost always with a SIGSEGV, rather than this phantom thing described by Carl.
>
> The Linux OOM killer might be a suspect, except that the process is apparently not dying, which is very strange.
> [...]
> The whole thing sounds weird. :(

Oh, I agree entirely - usually something will turn a reference bad and you'll get a memory access somewhere off in hyperspace during a GC. But it's an easy thing to check, and there is an (admittedly small) possibility of seeing these symptoms. Heck, with hardware errors there's a small probability of seeing pretty much *any* symptoms.

- Peter
Re: Tomcat dies suddenly (was JVM goes away)
Peter,

On 1/13/2010 8:49 AM, Peter Crowther wrote:
> Very difficult to know what the problem is. One thing you can now do (as you've switched to another production server) is to run a memory test across the "bad" server.

Usually, I would agree that physical memory problems are likely to be a problem, but every time I've had a physical memory problem (much more common than I'd like to admit!), the JVM has crashed in a more classic way: that is, with an hs_log file and almost always with a SIGSEGV, rather than this phantom thing described by Carl.

The Linux OOM killer might be a suspect, except that the process is apparently not dying, which is very strange.

Carl: when the JVM "dies" and you use top to see free memory, does it say that 2.4GB of memory is in use by a particular process, or does it just appear that the memory is not "available"? If it's by a particular process, which one? The JVM process ("/usr/bin/java" or whatever) either does or does not exist, and if it does not exist, is it retaining memory? If the Tomcat connectors have shut down (thereby releasing the TCP/IP ports), but not the java process, then there should be some indication in catalina.out that the connectors have been shut down explicitly.

The whole thing sounds weird. :(

-chris
Re: Tomcat dies suddenly (was JVM goes away)
Done. Thanks for the suggestion. I plan to place this machine back on the firing line after running the memory test suggested by Peter.

Thanks,

Carl

- Original Message - From: "Paolo Santarsiero" To: "Tomcat Users List" Sent: Wednesday, January 13, 2010 8:58 AM Subject: Re: Tomcat dies suddenly (was JVM goes away)

In order to monitor java memory at crash time, you can add these directives to JAVA_OPTS: -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/your/tomcat/folder/memorydump.hprof In this way, if Tomcat runs out of memory, you have an image of the heap (memorydump.hprof) that you can analyze with an external application like Memory Analyzer [ http://www.eclipse.org/mat/ ].

2010/1/13 Carl

> From the original posting: [...]
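[Editor's aside: the heap-dump flags suggested above are typically wired into Tomcat via a setenv script that catalina.sh sources at startup. A sketch assuming a stock Tomcat layout; the $CATALINA_HOME location and dump path are illustrative, not from the thread:]

```shell
# $CATALINA_HOME/bin/setenv.sh -- sourced by catalina.sh on startup.
# If the JVM ever dies with an OutOfMemoryError, it writes the heap
# to the .hprof file first, so evidence survives the dead process.
JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError"
JAVA_OPTS="$JAVA_OPTS -XX:HeapDumpPath=/your/tomcat/folder/memorydump.hprof"
export JAVA_OPTS
```

The resulting memorydump.hprof can then be opened in Eclipse Memory Analyzer as Paolo describes. Note this only fires on an actual OutOfMemoryError inside the JVM; a process killed from outside (e.g. by the kernel's OOM killer) leaves no dump.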
Re: Tomcat dies suddenly (was JVM goes away)
In process... thanks for the suggestion.

Carl

- Original Message - From: "Peter Crowther" To: "Tomcat Users List" Sent: Wednesday, January 13, 2010 8:49 AM Subject: Re: Tomcat dies suddenly (was JVM goes away)

Very difficult to know what the problem is. One thing you can now do (as you've switched to another production server) is to run a memory test across the "bad" server. A T110 doesn't use error-correcting memory, as I recall, so a dodgy bit could cause problems. Give it a couple of hours with memtest86+ and you'll at least know whether you've been chasing phantoms due to a hardware error. (I'm perhaps biased - I've had memory errors on three low-end servers now.)

- Peter

2010/1/13 Carl :

> From the original posting: [...]
Re: Tomcat dies suddenly (was JVM goes away)
In order to monitor java memory at crash time, you can add these directives to JAVA_OPTS: -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/your/tomcat/folder/memorydump.hprof In this way, if Tomcat runs out of memory, you have an image of the heap (memorydump.hprof) that you can analyze with an external application like Memory Analyzer [ http://www.eclipse.org/mat/ ].

2010/1/13 Carl

> From the original posting: [...]
Re: Tomcat dies suddenly (was JVM goes away)
Very difficult to know what the problem is. One thing you can do now (as you've switched to another production server) is to run a memory test across the "bad" server. A T110 doesn't use error-correcting memory, as I recall, so a dodgy bit could cause problems. Give it a couple of hours with memtest86+ and you'll at least know whether you've been chasing phantoms due to a hardware error. (I'm perhaps biased - I've had memory errors on three low-end servers now)

- Peter

2010/1/13 Carl :
> From the original posting:
>
> This is a new server, a Dell T110 with a Xeon 3440 processor and 4GB memory. I have turned off both the turbo mode and hyperthreading.
[snip]

- To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org For additional commands, e-mail: users-h...@tomcat.apache.org
Tomcat dies suddenly (was JVM goes away)
From the original posting:

This is a new server, a Dell T110 with a Xeon 3440 processor and 4GB memory. I have turned off both the turbo mode and hyperthreading.

The environment:

64 bit Slackware Linux

java version "1.6.0_17"
Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode)

Tomcat: apache-tomcat-6.0.20

These are the current JAVA_OPTS="-Xms1024m -Xmx1024m -XX:PermSize=368m -XX:MaxPermSize=368m"

In the previous posting, I noted that I have observed the memory usage and general performance with Java VisualVM and have seen nothing strange. GC seems to be performing well and the memory rarely gets anywhere near the max. New information: I thought I was seeing GC as memory usage was going up and down, but in fact it was mostly people coming onto the system and leaving it. After several hours, the memory settles to a baseline of about 375MB. Forced GC never takes it below that value, and the ups and downs from people coming onto and leaving the system also return it to pretty much that value. The maximum memory used was never above 700MB for the entire day.

The server runs well, idling along at 2-5% load, except for a quick spike during GC, serving jsp's, etc. at a reasonable speed. Without warning and with no tracks in any log (Tomcat or system) or to the console, the JVM will just go away, disappear. New information: the JVM does not just go away; somehow Tomcat shuts down, as the ports used by Tomcat are closed (pointed out by Konstantin.) Sometimes the system will run for a week, sometimes for only several hours. Initially, I thought the problem was the turbo or hyperthreading but, no, the problem persists.

When Tomcat shuts down, the memory that it held is still being held (as seen from top), but it is nowhere near the machine physical memory.

The application has been running on an older server (Dell 600SC, 32 bit Slackware, 2GB memory) for several years and, while the application will throw exceptions now and then, it never crashed. This led me to believe the problem had something to do with the 64 bit JVM but, without seeing errors anywhere, I can't be certain and don't know what I can do about it except go back to 32 bit.

New information:

Last evening, I observed the heap and permGen memory usage with Java VisualVM. It was running around 600MB before I forced a GC and 375MB afterward. Speed was good. Memory usage from top was 2.4GB. Five minutes later, Tomcat stopped, leaving no tracks that I could find. The memory usage from top was around 2.4GB. The memory usage from VisualVM was still showing 400MB+ although the Tomcat process was gone. I restarted Tomcat (did not reboot), so Tomcat had been shut down gracefully enough to close the ports (8080, 8443, 443.) Tomcat stayed up for less than an hour (under light load) and stopped again. The memory used according to top was less than 3GB, but I didn't get the exact number. I restarted it again (no server reboot) and it ran for the rest of the night (light load); top was showing 3.3GB for memory this morning.

I brought up a new server last night and have switched to that server for production (same Linux, JDK, server.xml, JAVA_OPTS, etc.). It would seem that if the problem is with my application or the JVM, the problem will follow me to the new server.

Anyone have any ideas how I might track this problem down?

Thanks,

Carl
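One low-overhead way to keep a record of heap and permgen right up to the moment the process disappears is to log `jstat` output continuously. A sketch, assuming Tomcat was started via the standard Bootstrap class and that the log path exists (both are assumptions about this setup):

```shell
# Append a GC/heap utilisation sample every 10 seconds; after a crash,
# the tail of the log shows the JVM's state just before it died.
PID=$(pgrep -f 'org.apache.catalina.startup.Bootstrap' || true)
if [ -n "$PID" ]; then
    jstat -gcutil "$PID" 10s >> /usr/local/tomcat/logs/jstat.log
else
    echo "no Tomcat JVM found"
fi
```

Unlike VisualVM, this leaves a file behind even when the JVM vanishes, so there is something to inspect after the fact.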
Re: JVM goes away
One more reason why we use this much memory: we run 2-4 contexts most of the time. This was originally done to separate certain customer data while keeping the code base the same, and to allow us to have a test environment exactly like the production environment. If each context requires 50-60MB for perm gen, then the Tomcat perm gen is upwards of 256MB. Also, if each context requires 200-300MB of heap, then we could require upwards of 500-700MB of heap (allowing for GC, timing, etc.) I have probably allocated more heap memory than needed (due to my faulty understanding that the heap constrained perm gen, etc.)

Thanks,

Carl

- Original Message - From: "Pid" To: "Carl" Sent: Tuesday, January 12, 2010 4:41 AM Subject: Re: JVM goes away

On 12/01/2010 01:30, Carl wrote:
[snip]

Does your app actually need all that memory?

p
[snip]
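The footprint question is easy to sanity-check with quick arithmetic, since heap and permgen are separate pools. Figures below are the -Xmx2400m/-XX:MaxPermSize=512m settings discussed in this thread; the native overhead allowance (thread stacks, JIT code cache, NIO buffers) is a rough assumption:

```shell
heap=2400      # -Xmx, in MB
permgen=512    # -XX:MaxPermSize, in MB
native=300     # rough allowance for stacks, code cache, direct buffers
phys=4096      # physical RAM on the T110, in MB
total=$((heap + permgen + native))
echo "worst-case JVM footprint: ${total}MB of ${phys}MB physical"
```

That leaves well under 1GB for the OS and everything else, which is consistent with the OOM-killer suspicion raised elsewhere in the thread.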
Re: JVM goes away
Perhaps not the best design but basically, yes. We have stored a fair amount of information in memory (session) to try to improve the speed of the application (so we don't go back to the disk as often.) At some point (it will probably never happen) I would like to take a look at the balance between storing information in session and the penalty for disk access but, like most people, I am just too busy right now. So, I have tried to cover this up with (relatively) cheap memory.

Thanks for your thoughts.

Carl

- Original Message - From: "Pid" To: "Carl" Sent: Tuesday, January 12, 2010 4:41 AM Subject: Re: JVM goes away

On 12/01/2010 01:30, Carl wrote:
[snip]

Does your app actually need all that memory?

p
[snip]
Re: JVM goes away
Aha, for some reason I thought perm gen was included in the general heap, so the maximum for the two combined was constrained by the 2400m I had defined for the heap. Somewhere around 2:00AM (I am US east coast), I can restart the server with the new settings.

I have taken several heap dumps (using Java VisualVM) and nothing looked odd. Also, I can see (from VisualVM) that the GC runs reasonably frequently when the heap grows (from users working), but the total heap is generally under 1GB. I wonder if the sneaky little bugger, under load, just pushes to OOM and I am running so close to the edge that I don't see it. Odd, though: I have forced OOM issues in the past and they always showed up in catalina.out.

Thanks for your thoughts and help.

Carl

- Original Message - From: "Pid" To: Sent: Monday, January 11, 2010 8:07 PM Subject: Re: JVM goes away
[snip]
Re: JVM goes away
On 11/01/2010 23:06, Peter Crowther wrote:

2010/1/11 Carl:

> This is a new server, a Dell T110 with a Xeon 3440 processor and 4GB memory. I have turned off both the turbo mode and hyperthreading.
>
> The environment:
>
> 64 bit Slackware Linux
>
> java version "1.6.0_17"
> Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
> Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode)
>
> Tomcat: apache-tomcat-6.0.20
>
> JAVA_OPTS="-Xms2400m -Xmx2400m -XX:PermSize=512m -XX:MaxPermSize=512m"
>
> I have observed the memory usage and general performance with Java VisualVM and have seen nothing strange. GC seems to be performing well and the memory rarely gets anywhere near the max.
>
> The server runs well, idling along at 2-5% load, serving jsp's, etc. at a reasonable speed. Without warning and with no tracks in any log (Tomcat or system) or to the console, the JVM will just go away, disappear. Sometimes the system will run for a week, sometimes for only several hours. Initially, I thought the problem was the turbo or hyperthreading but, no, the problem persists.
>
> When the JVM goes away, the memory that it held is still being held (as seen from top) but it is nowhere near the machine physical memory.
>
> The application has been running on an older server (Dell 600SC, 32 bit Slackware, 2GB memory) for several years and, while the application will throw exceptions now and then, it never crashed the JVM. This leads me to believe the problem has something to do with the 64 bit JVM but, with no errors, I can't be certain and don't know what I can do about it except go back to 32 bit.
>
> I plan to reinstall Java tonight but, it would seem that if the JVM were corrupted, it simply would not run.
>
> Any ideas are welcome.

I'm with Andy: the Linux OOM killer would show those symptoms. With those settings, you're not leaving a lot of memory for the OS. How much swap do you have, and does the same thing happen if you reduce the Java heap and permgen space?

- Peter

Despite later posts, I'm leaning towards agreeing with the above, based on the information provided.

N.B. Maximum heap size does not equal the maximum memory a JVM can/will use. The Perm generation is in addition to the heap, so you're effectively saying that the memory you want to use is 2400 + 512 (+ other stuff falling into the non-heap category). So you may be using more than 3GB; jmap -heap will provide more information, and you could regularly dump the output to file to see what's happening with the JVM. http://java.sun.com/javase/6/docs/technotes/guides/management/jconsole.html (Confession: I'm not sure I've got my head round it yet)

An OOM should leave a trace somewhere on your system; it might be a single log entry saying that a given process id has been terminated. Google for specific info for your OS.

p
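The suggestion to dump `jmap -heap` output regularly could be scripted along these lines (a sketch; the pid discovery, log directory, and 5-minute interval are all assumptions about this setup):

```shell
# Capture a heap summary every 5 minutes while the JVM is alive, so the
# most recent file records the state shortly before any crash.
PID=$(pgrep -f 'org.apache.catalina.startup.Bootstrap' || true)
while [ -n "$PID" ] && kill -0 "$PID" 2>/dev/null; do
    jmap -heap "$PID" > "/usr/local/tomcat/logs/heap-$(date +%Y%m%d-%H%M%S).txt"
    sleep 300
done
```

The loop exits on its own once the JVM disappears, leaving the last capture as a record of heap and permgen just before the failure.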
Re: JVM goes away
Konstantin,

Yes, it was started using startup.sh in tomcat/bin and used the same ports (8080, 8443, 443) as the Tomcat that died. The fact that the OS did not recover the memory implied to me (could be wrong, even very wrong) that the JVM just died. However, as you point out, how did the ports get freed? So, now it looks more like I am somehow killing Tomcat, because that is the only way those ports could be freed. Shouldn't I see some tracks in one of Tomcat's logs then?

Thanks,

Carl

- Original Message - From: "Konstantin Kolinko" To: "Tomcat Users List" Sent: Monday, January 11, 2010 7:31 PM Subject: Re: JVM goes away
[snip]
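Since the freed ports are the key clue, a quick check like the following distinguishes "process gone" from "process hung but still listening" (ports are the ones from this thread; `netstat` output format varies a little by distribution):

```shell
# Report whether each Tomcat port still has a listener bound.
for p in 8080 8443 443; do
    if netstat -tln 2>/dev/null | grep -q ":$p "; then
        echo "port $p: listening"
    else
        echo "port $p: free"
    fi
done
```

Run from cron, the output timestamps when the ports were released, which can then be lined up against the Tomcat and system logs.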
Re: JVM goes away
2010/1/12 Carl :
> Peter and Andy,
>
> Thanks for your quick responses.
>
> Memory: Physical - $GB
> Used - 2.4GB to 3.0 GB (according to top... have never seen it above 3GB)
> Swap - 19GB, none ever used (or, at least I have never seen any used.)
>
> The above are all from top.
>
> The 2.4GB is after the JVM just crashed (after running less than an hour after having run for five days with nary a blip) and I just restarted Tomcat (customers are running right now) so it is a little higher than normal because it has perhaps .5GB+ unrecovered from the point at which the JVM crashed.

You started the new Tomcat instance using the same port numbers that were used by the old one? If so, the old one has really died, but how come the memory was not freed?

> I checked dmsg but saw nothing that looked out of the ordinary.

dmesg, they say.

In an OOM-killer scenario, it can sometimes be caused by some maintenance task scheduled with cron. But an OOM killer that does not free memory would be very much useless.

Best regards,
Konstantin Kolinko
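The kernel does log every OOM kill, so it is worth sweeping both the ring buffer and syslog. A sketch (the exact message wording varies by kernel version, so these patterns are a best guess; no output means no recorded kill):

```shell
# Look for OOM-killer traces in the kernel ring buffer and in syslog.
dmesg | grep -iE 'out of memory|oom-killer|killed process' || true
grep -ihE 'out of memory|oom-killer|killed process' /var/log/messages* 2>/dev/null || true
```

The ring buffer is finite, so on a busy box an old OOM message can scroll out of `dmesg` while still being present in the rotated `/var/log/messages*` files.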
Re: JVM goes away
Andy,

Yes, that is 4GB... just a little stressed.

I did a 'find' for all 'hs_err*.pid' files and turned up nothing. I am using catalina.sh to start Tomcat and I had always assumed that the java JVM was started in that process somewhere. I apologize for my ignorance here, but I don't see any java processes other than Tomcat and VisualVM (using ps -ef.) I do have a little java listener also running that serves some data to client applets. Neither the little listener nor VisualVM went down when Tomcat stopped.

Tomcat is in /usr/local/tomcat (bin, conf, etc.) Java is in /usr/local/java. Is the 'cwd of the java process' the directory where the application (Tomcat) is running or the bin directory of java? (I don't see anything in any of those areas that looks odd or informative.) How can I tell if the JVM is or is not running as a daemon?

TIA,

Carl

- Original Message - From: "Andy Wang" To: Sent: Monday, January 11, 2010 6:42 PM Subject: Re: JVM goes away
[snip]
Re: JVM goes away
I assume $GB means 4GB :) With that kind of memory use it doesn't sound entirely like the OOM killer. Have you looked around the filesystem for hs_err[pid].pid files? This usually is written to the cwd of the java process. That might give you ideas if it's a native crash. If so, it'll have the java stack, and other native information that might shed some light. Otherwise, if the Tomcat JVM isn't running as a daemon, is it nohup'ed? Andy On 01/11/2010 05:33 PM, Carl wrote: > Peter and Andy, > > Thanks for your quick responses. > > Memory: Physical - $GB >Used - 2.4GB to 3.0 GB (according to top... have never > seen it above 3GB) >Swap - 19GB, none ever used (or, at least I have never > seen any used.) > > The above are all from top. > > The 2.4GB is after the JVM just crashed (after running less than an > hour after having run for five days with nary a blip) and I just > restarted Tomcat (customers are running right now) so it is a little > higher than normal because it has perhaps .5GB+ unrecovered from the > point at which the JVM crashed. > > I checked dmsg but saw nothing that looked out of the ordinary. > > I will cut back on the heap and permgen tonight (gonna be a long one.) > > Any ideas are welcome. > > Thanks, > > Carl > > > - Original Message ----- From: "Peter Crowther" > > To: "Tomcat Users List" > Sent: Monday, January 11, 2010 6:06 PM > Subject: Re: JVM goes away > > > 2010/1/11 Carl : >> This is a new server, a Dell T110 with a Xeon 3440 processor and 4GB >> memory. I have turned off both the turbo mode and hyperthreading. 
>> >> The environment: >> >> 64 bit Slackware Linux >> >> java version "1.6.0_17" >> Java(TM) SE Runtime Environment (build 1.6.0_17-b04) >> Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode) >> >> Tomcat: apache-tomcat-6.0.20 >> >> JAVA_OPTS="-Xms2400m -Xmx2400m -XX:PermSize=512m -XX:MaxPermSize=512m" >> >> I have watched the memory usage and general performance with >> Java VisualVM and have seen nothing strange. GC seems to be >> performing well and the memory rarely gets anywhere near the max. >> >> The server runs well, idling along at 2-5% load, serving jsp's, etc. >> at a reasonable speed. Without warning and with no tracks in any log >> (Tomcat or system) or to the console, the JVM will just go away, >> disappear. Sometimes, the system will run for a week, sometimes for >> only several hours. Initially, I thought the problem was the turbo or >> hyperthreading but, no, the problem persists. >> >> When the JVM goes away, the memory that it held is still being held >> (as seen from top) but it is nowhere near the machine physical memory. >> >> The application has been running on an older server (Dell 600SC, 32 >> bit Slackware, 2GB memory) for several years and, while the >> application will throw exceptions now and then, it never crashed the >> JVM. This leads me to believe the problem has something to do with >> the 64 bit JVM but, with no errors, I can't be certain and don't know >> what I can do about it except go back to 32 bit. >> >> I plan to reinstall Java tonight but, it would seem if the JVM were >> corrupted, it simply would not run. >> >> Any ideas are welcome. > > I'm with Andy: the Linux OOM killer would show those symptoms. With > those settings, you're not leaving a lot of memory for the OS. How > much swap do you have, and does the same thing happen if you reduce > the Java heap and permgen space? 
> > - Peter
Re: JVM goes away
Peter and Andy, Thanks for your quick responses. Memory: Physical - $GB Used - 2.4GB to 3.0 GB (according to top... have never seen it above 3GB) Swap - 19GB, none ever used (or, at least I have never seen any used.) The above are all from top. The 2.4GB is after the JVM just crashed (after running less than an hour after having run for five days with nary a blip) and I just restarted Tomcat (customers are running right now) so it is a little higher than normal because it has perhaps .5GB+ unrecovered from the point at which the JVM crashed. I checked dmesg but saw nothing that looked out of the ordinary. I will cut back on the heap and permgen tonight (gonna be a long one.) Any ideas are welcome. Thanks, Carl - Original Message - From: "Peter Crowther" To: "Tomcat Users List" Sent: Monday, January 11, 2010 6:06 PM Subject: Re: JVM goes away 2010/1/11 Carl : This is a new server, a Dell T110 with a Xeon 3440 processor and 4GB memory. I have turned off both the turbo mode and hyperthreading. The environment: 64 bit Slackware Linux java version "1.6.0_17" Java(TM) SE Runtime Environment (build 1.6.0_17-b04) Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode) Tomcat: apache-tomcat-6.0.20 JAVA_OPTS="-Xms2400m -Xmx2400m -XX:PermSize=512m -XX:MaxPermSize=512m" I have watched the memory usage and general performance with Java VisualVM and have seen nothing strange. GC seems to be performing well and the memory rarely gets anywhere near the max. The server runs well, idling along at 2-5% load, serving jsp's, etc. at a reasonable speed. Without warning and with no tracks in any log (Tomcat or system) or to the console, the JVM will just go away, disappear. Sometimes, the system will run for a week, sometimes for only several hours. Initially, I thought the problem was the turbo or hyperthreading but, no, the problem persists. 
When the JVM goes away, the memory that it held is still being held (as seen from top) but it is nowhere near the machine physical memory. The application has been running on an older server (Dell 600SC, 32 bit Slackware, 2GB memory) for several years and, while the application will throw exceptions now and then, it never crashed the JVM. This leads me to believe the problem has something to do with the 64 bit JVM but, with no errors, I can't be certain and don't know what I can do about it except go back to 32 bit. I plan to reinstall Java tonight but, it would seem if the JVM were corrupted, it simply would not run. Any ideas are welcome. I'm with Andy: the Linux OOM killer would show those symptoms. With those settings, you're not leaving a lot of memory for the OS. How much swap do you have, and does the same thing happen if you reduce the Java heap and permgen space? - Peter
Re: JVM goes away
2010/1/11 Carl : > This is a new server, a Dell T110 with a Xeon 3440 processor and 4GB memory. > I have turned off both the turbo mode and hyperthreading. > > The environment: > > 64 bit Slackware Linux > > java version "1.6.0_17" > Java(TM) SE Runtime Environment (build 1.6.0_17-b04) > Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode) > > Tomcat: apache-tomcat-6.0.20 > > JAVA_OPTS="-Xms2400m -Xmx2400m -XX:PermSize=512m -XX:MaxPermSize=512m" > > I have watched the memory usage and general performance with Java > VisualVM and have seen nothing strange. GC seems to be performing well and > the memory rarely gets anywhere near the max. > > The server runs well, idling along at 2-5% load, serving jsp's, etc. at a > reasonable speed. Without warning and with no tracks in any log (Tomcat or > system) or to the console, the JVM will just go away, disappear. Sometimes, > the system will run for a week, sometimes for only several hours. Initially, > I thought the problem was the turbo or hyperthreading but, no, the problem > persists. > > When the JVM goes away, the memory that it held is still being held (as seen > from top) but it is nowhere near the machine physical memory. > > The application has been running on an older server (Dell 600SC, 32 bit > Slackware, 2GB memory) for several years and, while the application will > throw exceptions now and then, it never crashed the JVM. This leads me to > believe the problem has something to do with the 64 bit JVM but, with no errors, > I can't be certain and don't know what I can do about it except go back to 32 > bit. > > I plan to reinstall Java tonight but, it would seem if the JVM were > corrupted, it simply would not run. > > Any ideas are welcome. I'm with Andy: the Linux OOM killer would show those symptoms. With those settings, you're not leaving a lot of memory for the OS. How much swap do you have, and does the same thing happen if you reduce the Java heap and permgen space? 
- Peter
Re: JVM goes away
Check dmesg to see if the Linux out-of-memory killer struck you :) Andy On 01/11/2010 04:37 PM, Carl wrote: > This is a new server, a Dell T110 with a Xeon 3440 processor and 4GB memory. > I have turned off both the turbo mode and hyperthreading. > > The environment: > > 64 bit Slackware Linux > > java version "1.6.0_17" > Java(TM) SE Runtime Environment (build 1.6.0_17-b04) > Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode) > > Tomcat: apache-tomcat-6.0.20 > > JAVA_OPTS="-Xms2400m -Xmx2400m -XX:PermSize=512m -XX:MaxPermSize=512m" > > I have watched the memory usage and general performance with Java > VisualVM and have seen nothing strange. GC seems to be performing well and > the memory rarely gets anywhere near the max. > > The server runs well, idling along at 2-5% load, serving jsp's, etc. at a > reasonable speed. Without warning and with no tracks in any log (Tomcat or > system) or to the console, the JVM will just go away, disappear. Sometimes, > the system will run for a week, sometimes for only several hours. Initially, > I thought the problem was the turbo or hyperthreading but, no, the problem > persists. > > When the JVM goes away, the memory that it held is still being held (as seen > from top) but it is nowhere near the machine physical memory. > > The application has been running on an older server (Dell 600SC, 32 bit > Slackware, 2GB memory) for several years and, while the application will > throw exceptions now and then, it never crashed the JVM. This leads me to > believe the problem has something to do with the 64 bit JVM but, with no errors, > I can't be certain and don't know what I can do about it except go back to 32 > bit. > > I plan to reinstall Java tonight but, it would seem if the JVM were > corrupted, it simply would not run. > > Any ideas are welcome. > > TIA, > > Carl > >
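To act on the dmesg suggestion without reading the whole buffer, a grep along these lines narrows it down; the exact message wording varies by kernel version, so the patterns below are a best guess rather than exhaustive:

```shell
# Scan the kernel ring buffer for OOM-killer activity.
# Wording varies across kernels; these patterns cover common forms.
dmesg | grep -iE 'out of memory|oom-killer|killed process' || true

# Older entries may have rotated out of the ring buffer; check syslog too.
grep -i 'killed process' /var/log/messages* 2>/dev/null || true
```

A hit typically names the victim process (java) and its memory footprint at kill time, which is exactly the evidence needed to confirm or rule out the OOM killer.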
JVM goes away
This is a new server, a Dell T110 with a Xeon 3440 processor and 4GB memory. I have turned off both the turbo mode and hyperthreading. The environment: 64 bit Slackware Linux java version "1.6.0_17" Java(TM) SE Runtime Environment (build 1.6.0_17-b04) Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode) Tomcat: apache-tomcat-6.0.20 JAVA_OPTS="-Xms2400m -Xmx2400m -XX:PermSize=512m -XX:MaxPermSize=512m" I have watched the memory usage and general performance with Java VisualVM and have seen nothing strange. GC seems to be performing well and the memory rarely gets anywhere near the max. The server runs well, idling along at 2-5% load, serving jsp's, etc. at a reasonable speed. Without warning and with no tracks in any log (Tomcat or system) or to the console, the JVM will just go away, disappear. Sometimes, the system will run for a week, sometimes for only several hours. Initially, I thought the problem was the turbo or hyperthreading but, no, the problem persists. When the JVM goes away, the memory that it held is still being held (as seen from top) but it is nowhere near the machine physical memory. The application has been running on an older server (Dell 600SC, 32 bit Slackware, 2GB memory) for several years and, while the application will throw exceptions now and then, it never crashed the JVM. This leads me to believe the problem has something to do with the 64 bit JVM but, with no errors, I can't be certain and don't know what I can do about it except go back to 32 bit. I plan to reinstall Java tonight but, it would seem if the JVM were corrupted, it simply would not run. Any ideas are welcome. TIA, Carl
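Following the suggestion in the thread to reduce heap and permgen, a trimmed JAVA_OPTS for a 4GB box might look like this; the numbers are illustrative guesses for the experiment, not a tested recommendation:

```shell
# Illustrative only: smaller heap and permgen leave the OS more headroom.
# The JVM's total footprint is heap + permgen + thread stacks + native
# overhead, so 1600m heap + 256m permgen keeps well clear of 4GB physical.
JAVA_OPTS="-Xms1600m -Xmx1600m -XX:PermSize=256m -XX:MaxPermSize=256m"
export JAVA_OPTS
```

If the crashes stop at the smaller sizes, that points strongly at memory pressure (OOM killer or native allocation failure) rather than a JVM bug.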