> -----Original Message-----
> From: Christopher Schultz [mailto:ch...@christopherschultz.net]
> Sent: Thursday, May 21, 2009 10:05 AM
> To: Tomcat Users List
> Subject: Re: Running out of tomcat threads - why many threads in
> RUNNABLE stage even with no activity
> 
> 
> Vishwajit,
> 
> On 5/20/2009 3:01 PM, Pantvaidya, Vishwajit wrote:
> > [Pantvaidya, Vishwajit] Ok so RUNNABLE i.e. persistent threads should
> > not be an issue. The only reason I thought they were an issue was
> > that I observed that none of the RUNNABLE connections were being
> > used to serve new requests - only the WAITING ones were - and I know
> > for sure that the RUNNABLE threads were not servicing any existing
> > requests, as I was the only one using the system then.
> 
> It seems pretty clear that this is what your problem is. See if you can
> follow the order of events described below:
> 
> 1. Tomcat and Apache httpd are started. httpd makes one or more
>    (persistent) AJP connections to Tomcat and holds them open (duh).
>    Each connection from httpd->Tomcat puts a Java thread in the RUNNABLE
>    state (though actually blocked on socket read, it's not "really"
>    runnable)
> 
> 2. Some requests are received by httpd and sent over the AJP connections
>    to Tomcat (or not ... it really doesn't matter)
> 
> 3. Time passes, your recycle_timeout (300s) or cache_timeout (600s)
>    expires
> 
> 4. A new request comes in to httpd destined for Tomcat. mod_jk dutifully
>    follows your instructions for closing the connections expired in #3
>    above (note that Tomcat has no idea that the connection has been
>    closed, and so those threads remain in the RUNNABLE state, not
>    connected to anything, lost forever)
> 
> 5. A new connection (or multiple new connections... not sure exactly
>    how mod_jks connection expiration-and-reconnect logic is done)
>    is made to Tomcat which allocates a new thread (or threads)
>    which is/are in the RUNNABLE state
> 
> Rinse, repeat, your server chokes to death when it runs out of threads.
> 
> The above description accounts for your "loss" of 4 threads at a time:
> your web browser requests the initial page followed by 3 other assets
> (image, css, whatever). Each one of them hits step #4 above, causing a
> new AJP connection to be created, with the old one still hanging around
> on the Tomcat side just wasting a thread and memory.
> 
> By setting connectionTimeout on the AJP <Connector>, you are /doing what
> you should have done in the first place, which is match mod_jk
> cache_timeout with Connector connectionTimeout/. This allows the threads
> on the Tomcat side to expire just like those on the httpd side. They
> should expire at (virtually) the same time and everything works as
> expected.
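> 
> As a concrete sketch (the worker name "ajp13" and port here are
> examples, not taken from your config): cache_timeout is in seconds on
> the mod_jk side, while connectionTimeout is in milliseconds on the
> Tomcat side, so a matched pair would look like:
> 
> ```
> # workers.properties (httpd side): close idle AJP connections after 600 s
> worker.ajp13.cache_timeout=600
> 
> <!-- server.xml (Tomcat side): 600 s = 600000 ms -->
> <Connector port="8009" protocol="AJP/1.3" connectionTimeout="600000"/>
> ```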

[Pantvaidya, Vishwajit] Thanks Chris - all this makes a lot of sense. However I 
am not seeing the same problem (Tomcat running out of threads) on other servers 
running exactly the same configuration, except that in those cases there is no 
firewall separating the web server and Tomcat. Here are the RUNNABLE figures on 
3 different Tomcat servers running the same config:

1. Firewall between httpd and tomcat - 120 threads, 112 runnable (93%)
2. No firewall between httpd and tomcat - 40 threads, 11 runnable (27%)
3. No firewall between httpd and tomcat - 48 threads, 2 runnable (4%)

This leads me to believe there is some firewall-related mischief happening with #1.
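
The percentages above can be checked with a quick calculation (figures 
taken from the list; truncation rather than rounding matches the numbers 
quoted):

```python
# Runnable vs. total thread counts reported for the three servers above.
servers = [
    ("firewall between httpd and tomcat", 112, 120),
    ("no firewall (server 2)", 11, 40),
    ("no firewall (server 3)", 2, 48),
]

for name, runnable, total in servers:
    pct = runnable * 100 // total  # integer truncation, as in the figures
    print(f"{name}: {runnable}/{total} RUNNABLE ({pct}%)")
```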


> This problem is compounded by your initial configuration which created
> 10 connections from httpd->Tomcat for every (prefork) httpd process,
> resulting in 9 useless AJP connections for every httpd process. I
> suspect that you were expiring 10 connections at a time instead of just
> one, meaning that you were running out of threads 10 times faster than
> you otherwise would.

[Pantvaidya, Vishwajit] I did not notice connections expiring in multiples of 
10, but I will keep an eye out for this. However, from the cachesize explanation 
at 
http://tomcat.apache.org/connectors-doc/reference/workers.html#Deprecated%20Worker%20Directives
I get the impression that this value imposes an upper limit - meaning it may 
not necessarily create 10 Tomcat/jk connections for each httpd child process.
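
For what it's worth, that same reference lists cachesize as deprecated in 
favour of connection_pool_size, so the equivalent modern directive (worker 
name illustrative) would be:

```
# workers.properties - cachesize is deprecated; current equivalent:
worker.ajp13.connection_pool_size=1   # upper bound on pooled connections
```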


> Suggestions:
> 1. Tell your ops guys we know what we're talking about
> 2. Upgrade mod_jk
> 3. Set connection_pool_size=1, or, better yet, remove the config
>    altogether and let mod_jk determine its own value
> 4. Remove all timeouts unless you know that you have a misbehaving
>    firewall. If you do, enable cping/cpong (the preferred strategy
>    by at least one author of mod_jk)
> 
> - -chris

[Pantvaidya, Vishwajit] I will:
- set cachesize=1 (the doc says jk will autoset this value only for the 
worker MPM, and we use httpd 2.0 prefork)
- remove the cache and recycle timeouts

But before all this, I will retest after removing connectionTimeout in 
server.xml - just to test whether there are firewall-caused issues as mentioned 
above.
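
For reference, if the firewall does turn out to be dropping idle 
connections, the cping/cpong probing Chris suggests is configured in 
workers.properties (directive names per the mod_jk workers reference, 
available in mod_jk 1.2.27 and later; the worker name and values here are 
illustrative):

```
# workers.properties - probe AJP connections with CPing/CPong
worker.ajp13.ping_mode=A        # A = probe on connect, before each
                                #     request, and at regular intervals
worker.ajp13.ping_timeout=10000 # wait up to 10 s for the CPong reply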


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org