high CPU usage on tomcat 7

2012-09-27 Thread Kirill Kireyev

Hi!

I'm periodically getting unduly high (100%) CPU usage by the Tomcat 
process on my server. The problem happens intermittently, several 
times a week. When the server goes into this high-CPU state it does not 
come back (and becomes unresponsive to new requests), and the only 
recourse is to restart the Tomcat process.


I'm using Tomcat 7.0.30, with APR and the Apache web server, on an Ubuntu 
11.10 server with 32 GB of RAM / 8 CPUs.


I've done several jstack stack traces when this occurs, and what I 
consistently see are the connector threads in the RUNNABLE state every 
time, i.e.:


"ajp-apr-8009-Acceptor-0" daemon prio=10 tid=0x010a1000 nid=0x539 runnable [0x7f9364f8e000]
   java.lang.Thread.State: RUNNABLE
    at org.apache.tomcat.jni.Socket.accept(Native Method)
    at org.apache.tomcat.util.net.AprEndpoint$Acceptor.run(AprEndpoint.java:1013)
    at java.lang.Thread.run(Thread.java:722)

"http-apr-8443-Acceptor-0" daemon prio=10 tid=0x0109b800 nid=0x535 runnable [0x7f936551]
   java.lang.Thread.State: RUNNABLE
    at org.apache.tomcat.jni.Socket.accept(Native Method)
    at org.apache.tomcat.util.net.AprEndpoint$Acceptor.run(AprEndpoint.java:1013)
    at java.lang.Thread.run(Thread.java:722)

"http-apr-8080-Acceptor-0" daemon prio=10 tid=0x015ab000 nid=0x531 runnable [0x7f9365a92000]
   java.lang.Thread.State: RUNNABLE
    at org.apache.tomcat.jni.Socket.accept(Native Method)
    at org.apache.tomcat.util.net.AprEndpoint$Acceptor.run(AprEndpoint.java:1013)
    at java.lang.Thread.run(Thread.java:722)

Other threads are in RUNNABLE too in different cases, but these are the 
ones that are always there when the high CPU occurs. That's why I'm 
starting to think it has something to do with Tomcat.


Can anyone shed some light on this? My current Connector configurations 
in server.xml are:
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11AprProtocol"
           connectionTimeout="2"
           maxThreads="500" minSpareThreads="10" maxSpareThreads="20"
           redirectPort="8443"
           pollTime="10" />
...
<Connector port="8443"
           protocol="org.apache.coyote.http11.Http11AprProtocol"
           maxThreads="200" scheme="https" secure="true"
           SSLEnabled="true"
           SSLCACertificateFile="***"
           SSLCertificateKeyFile="***"
           SSLCertificateFile="***"
           enableLookups="false" clientAuth="false" sslProtocol="TLS"
           pollTime="10" />
...
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
           acceptCount="100" connectionTimeout="5000"
           keepAliveTimeout="2"
           disableUploadTimeout="true" enableLookups="false"
           maxHttpHeaderSize="8192"
           maxSpareThreads="75" maxThreads="150"
           minSpareThreads="25"
           executor="default" />

Thanks a lot!
-Kirill

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: high CPU usage on tomcat 7

2012-09-27 Thread Mark Thomas


Kirill Kireyev kir...@instagrok.com wrote:

I'm periodically getting unduly high (100%) CPU usage by the tomcat
process on my server. [...] I've done several jstack stack traces when
this occurs, and what I consistently see are the connector threads in
the RUNNABLE state every time [stack traces snipped]

Those threads look OK to me. As acceptor threads, that is what I would expect.

Can anyone shed some light on this?

With the information you have provided? Very unlikely.

What you need to do is use ps to look at CPU usage per thread (not per process) 
and then match the offending thread ID to the thread ID in the thread dump.
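A minimal sketch of that procedure on Linux (assuming the procps ps; the LWP value 1337 is just an example, chosen because it is 0x539 in hex, the nid of the AJP acceptor thread quoted above):

```shell
# Show per-thread CPU: list every LWP (thread) with its %CPU, busiest first
ps -eLo pid,lwp,pcpu,comm --sort=-pcpu | head -15

# jstack reports each thread's native ID in hex as nid=0x...; convert the
# hot LWP from ps (decimal) to hex to find it in the thread dump
printf 'nid=0x%x\n' 1337   # prints nid=0x539
```

Then search the jstack output for that nid= string; the matching stack trace shows what the hot thread is actually doing.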

Mark




Re: high CPU usage on tomcat 7

2012-09-27 Thread Jeff MAURY
This is probably due to an out-of-memory condition; I have the same problem
on my Ubuntu CI machine.
Did you monitor your Tomcat with JMX?

Jeff
On 27 Sept 2012 17:39, Kirill Kireyev kir...@instagrok.com wrote:

 [original message quoted in full; snipped]




RE: high CPU usage on tomcat 7

2012-09-27 Thread Bill Miller
I agree; we have reproducible instances where PermGen is not set to our
requirements in the Tomcat startup parameters and it will cause a "lockup"
every time. Do some JMX monitoring and you may discover a memory spike
that's killing Tomcat.
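As a sketch, startup options along these lines would raise PermGen and expose the JVM to JMX monitoring (a hypothetical CATALINA_BASE/bin/setenv.sh fragment; the sizes and port are placeholders, and the unauthenticated, unencrypted JMX settings are only reasonable on a trusted network):

```shell
# Hypothetical setenv.sh sketch (Java 6/7 era, where PermGen still exists)
export CATALINA_OPTS="-XX:PermSize=128m -XX:MaxPermSize=256m \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9010 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"
```

With that in place, JConsole or VisualVM can attach on port 9010 and graph heap, PermGen and thread counts over time.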

Bill
-Original Message-
From: Jeff MAURY [mailto:jeffma...@gmail.com] 
Sent: September-27-2012 2:01 PM
To: Tomcat Users List
Subject: Re: high CPU usage on tomcat 7

This is probably due to an out-of-memory condition; I have the same problem
on my Ubuntu CI machine. Did you monitor your Tomcat with JMX?

Jeff
On 27 Sept 2012 17:39, Kirill Kireyev kir...@instagrok.com wrote:

 [original message quoted in full; snipped]






Re: high CPU usage on tomcat 7

2012-09-27 Thread Shanti Suresh
Hi Kirill,

Like Mark, Bill and Jeff said, those threads are normal acceptor
threads.  I have included a script that might help with isolating high-CPU
issues with Tomcat.

Also, I think it might be helpful to see how the Java heap is performing as
well.
Please bring up JConsole and let it run over the week.  Inspect the graphs
for memory, CPU and threads.  Since you say that high CPU occurs
intermittently several times during the week and clears itself, I wonder if
it is somehow related to the garbage-collection options you are using for
the server.  Or it may be a code-related problem.

Things to look at may include:

(1) Are high-CPU times related to Java heap reductions happening at the
same time?  ==> GC possibly needs tuning
(2) Are high-CPU times related to an increase in thread usage?  ==> possible
livelock in looping code?
(3) How many network connections come into the Tomcat server during
high-CPU times?  ==> possibly overload-related?
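A rough sketch of how one might sample those three signals on Linux (the pid variable, the 8080 port, and the use of /proc are assumptions; jstat, jstack and ss are the standard JDK/iproute2 tools):

```shell
# Assumption: TOMCAT_PID holds the Tomcat process id (falls back to the
# current shell's pid here only so the sketch runs as-is)
TOMCAT_PID=${TOMCAT_PID:-$$}

# (1) GC activity: sample heap/GC utilisation every 5 seconds
#     jstat -gcutil "$TOMCAT_PID" 5000

# (2) Thread growth: count the process's native threads via /proc
THREADS=$(ls "/proc/$TOMCAT_PID/task" | wc -l)
echo "threads: $THREADS"

# (3) Inbound connections during the spike (HTTP connector port assumed 8080)
#     ss -tn state established '( sport = :8080 )' | wc -l
```

Comparing these numbers between a quiet period and a high-CPU period should show which of the three explanations fits.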

Here is the script.  I made a couple of small changes, e.g. changing
the username, but didn't test it after the change.  During high-CPU times,
invoke the script a few times, say 30 seconds apart, and then compare the
thread dumps.  I like to use TDA for thread-dump analysis of Tomcat
thread dumps.

Mark, et al., please feel free to help me refine this script.  I would like
to have a script to catch STUCK threads too :-)  Let me know if anyone has
a script already.  Thanks.

-------- high_cpu_diagnostics.pl --------
#!/usr/bin/perl
#

use Cwd;

# Make a dated directory for capturing current diagnostics
my ($sec,$min,$hour,$mday,$mon,$year,
  $wday,$yday,$isdst) = localtime time;
$year += 1900;
$mon += 1;
my $pwd = cwd();
my $preview_diag_dir = "/tmp/Preview_Diag.$year-$mon-$mday-$hour:$min:$sec";
print "$preview_diag_dir\n";
mkdir $preview_diag_dir, 0755;
chdir($preview_diag_dir) or die "Can't chdir into $preview_diag_dir $!\n";

# Capture Preview thread dump
my $process_pattern = "preview";
my $preview_pid = `/usr/bin/pgrep -f $process_pattern`;
my $login = getpwuid($<) ;
if (kill 0, $preview_pid){
    # Possible to send a signal to the Preview Tomcat - either "webinf" or "root"
    my $count = kill 3, $preview_pid;
} else {
    # Not possible to send a signal to the VCM - use "sudo"
    system ("/usr/bin/sudo /bin/kill -3 $preview_pid");
}

# Capture Preview heap dump
system ("/usr/bin/jmap -dump:format=b,file=$preview_diag_dir/preview_heapdump.hprof $preview_pid");

# Gather the top threads; keep around for reference on what other threads are running
@top_cmd = ("/usr/bin/top", "-H", "-n1", "-b");
@sort_cmd = ("/bin/sort", "-r", "-n", "-k", "9,9");
@sed_cmd = ("/bin/sed", "-n", "'8,\$p'");
system("@top_cmd 1> top_all_threads.log");

# Get your tomcat user's threads, i.e. threads of user "webinf"
system('/usr/bin/tail -n+6 top_all_threads.log | /bin/sort -r -n -k 9,9 |
/bin/grep webinf 1> top_user_webinf_threads.log');

# Get the thread dump
my @output=`/usr/bin/jstack -l ${preview_pid}`;
open (my $file, '>', 'preview_threaddump.txt') or die "Could not open file: $!";
print $file @output;
close $file;

open LOG, "top_user_webinf_threads.log" or die $!;
open (STDOUT, "| tee -ai top_cpu_preview_threads.log");
print "PID\tCPU\tMem\tJStack Info\n";
while ($l = <LOG>) {
    chop $l;
    $pid = $l;
    $pid =~ s/webinf.*//g;
    $pid =~ s/ *//g;
    ## Hex PID is available in the Sun HotSpot stack trace
    $hex_pid = sprintf("%#x", $pid);
    @values = split(/\s+/, $l);
    $pct = $values[8];
    $mem = $values[9];
    # Debugger breakpoint:
    $DB::single = 1;

    # Find the Java thread that corresponds to the thread-id from the top output
    for my $j (@output) {
        chop $j;
        ($j =~ m/nid=$hex_pid/) && print $hex_pid . "\t" . $pct . "\t" . $mem . "\t" . $j . "\n";
    }
}

close (STDOUT);

close LOG;

-------- end of script --------

Thanks.

  -Shanti


On Thu, Sep 27, 2012 at 2:11 PM, Bill Miller 
millebi.subscripti...@gmail.com wrote:

 [quoted message snipped]




Re: high CPU usage on tomcat 7

2012-09-27 Thread Shanti Suresh
Hi Kirill,

I mistook that the CPU issue clears itself.  Sorry.  It may or may not be
related to Garbage-collection settings then.

   -Shanti

On Thu, Sep 27, 2012 at 2:17 PM, Shanti Suresh sha...@umich.edu wrote:

 [quoted message snipped]

Re: high CPU usage on tomcat 7

2012-09-27 Thread Kirill Kireyev

  
  
Thanks for all the advice, everyone! There is a possibility that the CPU
is caused by an app thread - I am looking into that possibility. Will let
you know when I find out more.

Thanks,
Kirill

On 9/27/12 12:17 PM, Shanti Suresh wrote:


 [quoted message snipped]


maxHTTPHeaderSize, and specific header lengths

2012-09-27 Thread Andrew Todd
I have a question about maxHttpHeaderSize [0]. In Apache httpd, there
are two different directives that affect the maximum size of an HTTP
header, LimitRequestFieldSize and LimitRequestLine. [1] These
configuration values allow about 8 kilobytes per _line_ of the
incoming request. However, in Tomcat, maxHttpHeaderSize seems to
specify the maximum length of the entire incoming header, also at
around 8 kilobytes. So httpd will, by default, accept a much bigger
header than Tomcat will.

Is that an accurate understanding of the configuration? If I want to
expand the maximum URL and header lengths that I can accept in Tomcat,
should I change the value of maxHttpHeaderSize? Thanks.
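For reference, a hedged sketch of the Tomcat side (the attribute name is from the HTTP connector documentation; the 16 KB value and other attributes are just illustrative):

```xml
<!-- server.xml sketch: raise the limit on the request line plus all
     headers from the ~8 KB default to 16 KB -->
<Connector port="8080" protocol="HTTP/1.1"
           maxHttpHeaderSize="16384"
           connectionTimeout="20000"
           redirectPort="8443" />
```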


[0] https://tomcat.apache.org/tomcat-7.0-doc/config/http.html
[1] https://httpd.apache.org/docs/2.2/mod/core.html#limitrequestfieldsize
