David Wall wrote:
> We are running Tomcat 5.5.27 on Linux 2.6.18-53.1.4.el5xen (Red Hat
> 4.1.2-14) with Java 1.6.0_05 (32 bit) in a Xen virtualization
> environment (not my server, so unsure what version that is).  It has 3
> webapps running, two of ours and Tomcat's manager.
> 
> Normally, when we run 'top', Java and its related PG 8.3.3 database
> that drives the Tomcat webapps show very low CPU utilization (0-1%) and
> even drop out of the 'top' listing.  When there is higher user activity,
> we see Java spike to 4-20% utilization, but it tends to return to the
> low level shortly after the burst.  We run
> Java with options: -server -Xms2200m -Xmx2200m -XX:MaxPermSize=128m.
> 
> But every so often, Java "goes crazy" and reaches 95-99% CPU
> utilization, and it sticks there, even though there is little Tomcat
> activity.

To me this sounds like the garbage collector kicking in (and not
finding much to throw out).
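One way to confirm that theory is to turn on GC logging. On a 1.6
JVM, adding something like the following to your Java options should
work (the log file path is just an example):

  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/var/log/tomcat/gc.log

If the 95-99% CPU periods line up with back-to-back full collections
in that log, the heap settings are the place to look.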

You should be able to see thread-level cumulative CPU time with
"ps -fLp <tomcat_process_id>". By running this command a few times
in a row and comparing the results, you should be able to see
whether a single thread is taking up all or most of the CPU. The
next task is then to match those thread ids to the information in
a thread dump.
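(If you want to automate the repeated sampling, a simple shell loop
will do; here <tomcat_pid> is a placeholder for the real process id:

$ for i in 1 2 3 4 5; do ps -fLp <tomcat_pid>; sleep 10; done

An LWP whose TIME column grows noticeably between samples is the one
to look at.)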

F.ex., from my toy machine:
$ ps -fLp 7044
UID        PID  PPID   LWP  C NLWP STIME TTY          TIME CMD
tomcat    7044     1  7044  0   40  2008 ?        00:00:12 /usr/lib/jvm/java/bin
tomcat    7044     1  7118  0   40  2008 ?        00:02:58 /usr/lib/jvm/java/bin
tomcat    7044     1  7119  0   40  2008 ?        00:00:00 /usr/lib/jvm/java/bin
...
tomcat    7044     1  7892  0   40  2008 ?        00:36:26 /usr/lib/jvm/java/bin
... (there are a number of other threads)

Then, in the thread dump (at least on my JVM) the native thread ids
(the "nid" field) are shown in hexadecimal, so it's best to convert
the "interesting" thread ids to hex; from the above,
7044=0x1b84
7118=0x1bce
7119=0x1bcf
7892=0x1ed4
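The shell can do the conversion for you, f.ex.:

$ printf '0x%x\n' 7892
0x1ed4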

... and the thread info from the dump matching the above:
"main" prio=1 tid=0x0805cfb8 nid=0x1b84 runnable [0xbfffc000..0xbfffd708]
"VM Thread" prio=1 tid=0x08099f38 nid=0x1bce runnable
"Reference Handler" daemon prio=1 tid=0x0809cae0 nid=0x1bcf in Object.wait() 
[0xb2519000..0xb2519f20]
"ContainerBackgroundProcessor[StandardEngine[Catalina]]" daemon prio=1 
tid=0x083444e0 nid=0x1ed4 waiting on condition [0xb135c000..0xb135cfa0]
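(In case you need it: a dump like the above can be produced by
sending the JVM a SIGQUIT, i.e. "kill -3 <tomcat_process_id>"; the
output goes to Tomcat's stdout, typically catalina.out. On Java 1.6
you can also use "jstack <tomcat_process_id>".)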

In the dump there is also the call stack for each of the threads,
so if it's an application thread that's using the CPU, you'll get
at least some glimpse of where in the code the time is spent.

The dump may also report threads that are deadlocked with each
other (but that wouldn't cause high CPU usage; rather, it would
lead to a standstill with hardly any CPU being used).

I hope this is of some help,
-- 
..Juha
