Hi Kirill,

As Mark, Bill, and Jeff said, those threads are normal request-processing threads. I have included a script below that might help with isolating high-CPU issues in Tomcat.
Also, I think it might be helpful to see how the Java heap is performing. Please bring up JConsole, let it run over the week, and inspect the graphs for memory, CPU, and threads. Since you say that high CPU occurs intermittently several times during the week and clears itself, I wonder if it is somehow related to the garbage collection options you are using for the server. Or it may be a code-related problem. Things to look at:

(1) Are high-CPU periods accompanied by Java heap reductions at the same time? ==> GC possibly needs tuning
(2) Are high-CPU periods accompanied by an increase in thread usage? ==> possible livelock in looping code?
(3) How many network connections come into the Tomcat server during high-CPU periods? ==> possibly overload-related?

Here is the script. I made a couple of small changes, e.g., changing the username, but didn't test it after the change. During high-CPU times, invoke the script a few times, say 30 seconds apart, and then compare the thread dumps. I like to use TDA for analyzing Tomcat thread dumps.

Mark, et al., please feel free to help me refine this script. I would like to have a script to catch STUCK threads too :-) Let me know if anyone has a script already. Thanks.
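For point (1), you don't have to sit in front of the JConsole screen: you can also log GC activity with jstat and line it up against your CPU graphs afterwards. A minimal sketch follows; the gc.log heredoc is made-up sample data in the `jstat -gcutil <pid> 1000` format, and on a live box you would instead run `jstat -gcutil "$(pgrep -f preview)" 1000 > gc.log` during a spike:

```shell
# Made-up sample in the format of `jstat -gcutil <pid> 1000` output; on a live
# server, replace the heredoc with a real capture taken during a CPU spike.
cat > gc.log <<'EOF'
  S0     S1     E      O      P     YGC     YGCT    FGC    FGCT     GCT
  0.00  99.99  67.12  95.21  59.50    842    9.311    210  180.400  189.711
  0.00  99.99  12.40  97.80  59.50    851    9.402    214  186.900  196.302
EOF
# GCT (the last column) is cumulative GC seconds since JVM start; the delta
# between the first and last samples is the GC time spent inside the window.
# A large delta during a CPU spike points at garbage collection rather than
# application code.
awk 'NR==2 {start=$10} NR>2 {end=$10} \
     END {printf "GC seconds in window: %.1f\n", end-start}' gc.log
```

If the GC time tracks the CPU spikes, look at your heap sizing and collector options before suspecting the application.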
--------------high_cpu_diagnostics.pl:--------------
#!/usr/bin/perl
#
use Cwd;

# Make a dated directory for capturing current diagnostics
my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime time;
$year += 1900;
$mon  += 1;
my $pwd = cwd();
my $preview_diag_dir = "/tmp/Preview_Diag.$year-$mon-$mday-$hour:$min:$sec";
print "$preview_diag_dir\n";
mkdir $preview_diag_dir, 0755;
chdir($preview_diag_dir) or die "Can't chdir into $preview_diag_dir: $!\n";

# Find the Preview Tomcat PID and request a thread dump (SIGQUIT)
my $process_pattern = "preview";
my $preview_pid = `/usr/bin/pgrep -f $process_pattern`;
chomp $preview_pid;
my $login = getpwuid($<);
if (kill 0, $preview_pid) {
    # Possible to send a signal to the Preview Tomcat - either "webinf" or "root"
    my $count = kill 3, $preview_pid;
} else {
    # Not possible to send a signal to the VCM - use "sudo"
    system("/usr/bin/sudo /bin/kill -3 $preview_pid");
}

# Capture a Preview heap dump
system("/usr/bin/jmap -dump:format=b,file=$preview_diag_dir/preview_heapdump.hprof $preview_pid");

# Gather the top threads; keep around for reference on what other threads are running
my @top_cmd = ("/usr/bin/top", "-H", "-n1", "-b");
system("@top_cmd 1> top_all_threads.log");

# Get your Tomcat user's threads, i.e. threads of user "webinf",
# sorted by CPU usage (column 9)
system('/usr/bin/tail -n+6 top_all_threads.log | /bin/grep webinf | /bin/sort -r -n -k 9,9 1> top_user_webinf_threads.log');

# Get the thread dump
my @output = `/usr/bin/jstack -l ${preview_pid}`;
open(my $file, '>', 'preview_threaddump.txt') or die "Could not open file: $!";
print $file @output;
close $file;

open LOG, "top_user_webinf_threads.log" or die $!;
open(STDOUT, "| tee -ai top_cpu_preview_threads.log");
print "PID\tCPU\tMem\tJStack Info\n";
while (my $l = <LOG>) {
    chomp $l;
    $l =~ s/^\s+//;    # strip leading blanks so the split fields line up
    my $pid = $l;
    $pid =~ s/webinf.*//g;
    $pid =~ s/ *//g;
    # The hex form of the PID is what appears as "nid=" in the HotSpot stack trace
    my $hex_pid = sprintf("%#x", $pid);
    my @values = split(/\s+/, $l);
    my $pct = $values[8];
    my $mem = $values[9];
    # Debugger breakpoint: $DB::single = 1;
    # Find the Java thread that corresponds to the thread id from the top output
    for my $j (@output) {
        chomp $j;
        ($j =~ m/nid=$hex_pid/) && print "$hex_pid\t$pct\t$mem\t$j\n";
    }
}
close(STDOUT);
close LOG;
------------------end of script --------------------

Thanks.
-Shanti

On Thu, Sep 27, 2012 at 2:11 PM, Bill Miller <millebi.subscripti...@gmail.com> wrote:

> I agree; we have reproducible instances where PermGen is not set to our
> requirements in the Tomcat startup parameters and it will cause a "lockup"
> every time. Do some JMX monitoring and you may discover a memory spike
> that's killing Tomcat.
>
> Bill
>
> -----Original Message-----
> From: Jeff MAURY [mailto:jeffma...@gmail.com]
> Sent: September-27-2012 2:01 PM
> To: Tomcat Users List
> Subject: Re: high CPU usage on tomcat 7
>
> This is probably due to out of memory; I have the same problem on my Ubuntu
> CI machine. Did you monitor your Tomcat with JMX?
>
> Jeff
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org
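P.S. For a quick manual check without the script: top -H reports thread ids in decimal, while a HotSpot thread dump reports the same id in hex as "nid=0x...", so one printf is all the glue you need. The thread id 12345 below is just an example value:

```shell
tid=12345                          # example decimal thread id from `top -H`
nid=$(printf '0x%x' "$tid")        # hex "nid" form used in the JVM thread dump
echo "$nid"                        # prints: 0x3039
# grep "nid=$nid" preview_threaddump.txt   # then locate that thread in the jstack output
```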