Quick overview of our setup: HTTP requests flow from our load balancers
to Squid proxies, then to Apache, and finally to our Tomcat servers.  We
migrated to this setup from Oracle Application Server.

Apache: 2.2.3
Tomcat: 7.0.11.0
JVM: 1.6.0_22-b04
Linux: 2.6.18-194.17.1.el5

Our production environment has max threads set to 200; the number of
threads usually hovers around 150.  About twice a day, at seemingly
unrelated times, we get a sudden spike in the number of open AJP
threads.  Eventually this hits our max of 200.  At that point Tomcat
still seems responsive, but the number of httpd processes climbs until
Apache locks up.  Our monitoring software then kills and restarts
Apache, and we manually restart Tomcat.
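
For reference, the relevant piece of server.xml looks roughly like the
sketch below.  The port and redirectPort are the Tomcat defaults and
only maxThreads is the value we actually set, so the rest may differ
from our real config.

<Connector port="8009" protocol="AJP/1.3"
           maxThreads="200"
           redirectPort="8443" />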

Here is a graph of the number of running AJP threads.  You can see the
sudden jump to 200 threads.  The other dips are most likely reloads
triggered by our configuration management software (Puppet).

http://sporkit.com/thread_spike/spike.jpg

Also interesting to note: these threads (all 200) appear to be in the
keep-alive state.

http://sporkit.com/thread_spike/threads.jpg
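
For the next spike I plan to grab a thread dump (jstack <pid>) and
tally the AJP threads by JVM state with something like the script
below.  It's only a rough sketch: it assumes HotSpot-style jstack
output and Tomcat 7's default AJP thread naming (ajp-bio-8009-exec-N),
so the 'ajp-' match may need adjusting for other connector setups.

import re
import sys
from collections import defaultdict

# Tally AJP worker threads by java.lang.Thread.State from a jstack dump.
# Usage: python ajp_states.py thread_dump.txt
counts = defaultdict(int)
in_ajp_thread = False

for line in open(sys.argv[1]):
    if line.startswith('"'):
        # Thread header line, e.g. "ajp-bio-8009-exec-42" daemon prio=10 ...
        in_ajp_thread = 'ajp-' in line
    elif in_ajp_thread:
        m = re.search(r'java\.lang\.Thread\.State: (\S+)', line)
        if m:
            counts[m.group(1)] += 1
            in_ajp_thread = False

for state, n in sorted(counts.items()):
    print('%-30s %d' % (state, n))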

Our access logs don't indicate a high number of visits or any one
particular page that might be causing this issue (that I can see).
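
For what it's worth, I've been eyeballing request volume with a quick
per-minute bucket count, roughly like the script below (a sketch; it
assumes the usual common/combined LogFormat where the timestamp is the
4th whitespace-separated field, e.g. [10/Oct/2010:13:55:36 -0700]).

import sys
from collections import defaultdict

# Bucket Apache access-log entries per minute to spot request bursts.
# Usage: python per_minute.py access_log
per_minute = defaultdict(int)

for line in open(sys.argv[1]):
    fields = line.split()
    if len(fields) < 4:
        continue
    # The 4th field looks like [10/Oct/2010:13:55:36 -- strip the '['
    # and drop the seconds so the key is day/month/year:hour:minute.
    stamp = fields[3].lstrip('[')
    per_minute[stamp[:17]] += 1

# Lexicographic sort is fine within a single day's log.
for minute, hits in sorted(per_minute.items()):
    print('%s %6d' % (minute, hits))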

At this point we are stumped.  Do we spend our time tracking down memory
leaks?  Is there something we could do to at least mitigate the problem
over the holidays?  Any input greatly appreciated.
