This is a little tricky to reproduce but...

Apache 2.0.45
Tomcat 4.1.24 (or .27 for that matter)
mod_jk2 2.0.2
everything built and running on Solaris 2.8

Connect all machines using a 100 Mbit switch or something fast.

Create two instances of Tomcat with Coyote AJP 1.3 connectors where enableLookups is 
false and connectionTimeout is 0.  Use jvmRoutes of portal1 and portal2 (a sample 
server.xml fragment is sketched after the table below).  Under the webapps/ROOT 
directory create a set of files of various sizes like:

SIZE (BYTES)  FILE NAME
------------  ---------
    128       128.txt
    256       256.txt
    512       512.txt
   1024       1024.txt
   2048       2048.txt
   4096       4096.txt

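For reference, the relevant server.xml pieces for one instance look roughly like the 
stock Tomcat 4.1 Coyote/JK2 connector; the port, processor counts and Engine name below 
are illustrative only, the attached configs are authoritative:

<!-- AJP 1.3 connector with lookups off and no connection timeout -->
<Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
           port="8009" minProcessors="5" maxProcessors="75"
           enableLookups="false" redirectPort="8443"
           acceptCount="10" debug="0" connectionTimeout="0"
           useURIValidationHack="false"
           protocolHandlerClassName="org.apache.jk.server.JkCoyoteHandler"/>

<!-- jvmRoute (portal1 on the first instance, portal2 on the second) goes on the Engine -->
<Engine name="Standalone" defaultHost="localhost" debug="0" jvmRoute="portal1">
  ...
</Engine>
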
Set up your Apache with a basic httpd.conf and workers.properties like those included.

Use the load-generating script and Java app like those included, with a command line such as

./test.sh 40 200 http://myserver:myport

to create 40 threads, each making 200 requests for a particular file from the given 
URL.
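
The load generator itself is in the attachment; for readers without it, here is a rough, 
hypothetical sketch of the kind of thing it does (the class name, argument handling and 
the assumption that the target URL names one of the files above are mine, not from the 
attachment):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical reconstruction of the attached load generator:
// "java LoadTest 40 200 http://myserver:myport/2048.txt" starts 40 threads,
// each issuing 200 GET requests for the given file and draining the body.
public class LoadTest {
    public static void main(String[] args) throws Exception {
        int threads         = Integer.parseInt(args[0]);   // e.g. 40
        final int requests  = Integer.parseInt(args[1]);   // e.g. 200
        final String target = args[2];                     // e.g. http://myserver:myport/2048.txt

        for (int t = 0; t < threads; t++) {
            new Thread(new Runnable() {
                public void run() {
                    byte[] buf = new byte[4096];
                    for (int i = 0; i < requests; i++) {
                        try {
                            HttpURLConnection con =
                                (HttpURLConnection) new URL(target).openConnection();
                            InputStream in = con.getInputStream();
                            while (in.read(buf) != -1) { /* drain the response body */ }
                            in.close();
                            con.disconnect();
                        } catch (Exception e) {
                            System.err.println("request failed: " + e);
                        }
                    }
                }
            }).start();
        }
    }
}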

You may want to monitor the Apache server-status page (mod_status) and look for 
frozen 'W' (Sending Reply) slots.
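
If server-status is not already enabled by the attached httpd.conf, the standard 
mod_status stanza for Apache 2.0 looks like the following (the access restriction is 
only an example):

ExtendedStatus On

<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>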

The test script sometimes runs to completion without problems, but more often than 
not it fails to complete and locks up.

At some point the Tomcat instance(s) become saturated and run out of threads; this 
message appears in the Tomcat log:

    [java] SEVERE: All threads are busy, waiting. Please increase maxThreads or check the servlet status75 75

Accompanying this, mod_jk2 inside Apache starts to block on reading headers from 
AJP13 responses (the send of the request succeeds, but the read of the response 
header blocks indefinitely).

mod_jk does not exhibit this behavior.  Setting connectionTimeout on the Coyote 
endpoint prevents the lockup, but performance is noticeably inconsistent as the 
connections time out.
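
For completeness, the workaround is simply a finite connectionTimeout (in milliseconds) 
on the same Connector element; the value here is an arbitrary example, not a 
recommendation:

<Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
           port="8009" enableLookups="false"
           connectionTimeout="60000"
           protocolHandlerClassName="org.apache.jk.server.JkCoyoteHandler"/>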


Attachment: stuff.tar.gz