Hi,
I wonder if anyone has any feedback on a performance change that I am
working on.
One benefit of reducing concurrency in a server application is that a
small number of requests can complete more quickly than if they had to
compete against a large number of running threads for object locks (in
Java, or externally in a database).
I would like to have a Tomcat configuration option to set the maximum number of concurrent threads that can service user requests. You might configure Tomcat to handle 800 HTTP client connections but set the maximum concurrent requests to 20 (perhaps higher if you have more CPUs). I like to refer to the max concurrent requests setting as the throttle size (if there is a better term, let me know).
I modified the Tomcat Thread.run code to use Doug Lea's semaphore
support but didn't expose a configuration option (I haven't learned how
to do that yet). My basic change is to allow users to specify the
maximum number of concurrent servlet requests that can run. If an
application has a high level of concurrency, end users may get more
consistent response times with this change. If an application has a low
level of concurrency, my change doesn't help, since the application
only has a few threads running concurrently anyway.
This also reduces resource use on other tiers. For example, if you are
supporting 500 users with a Tomcat instance, you don't need a database
connection pool size of 500; instead, set the throttle size to 20 and
create a database connection pool of size 20.
Current status of the change:
1. org.apache.tomcat.util.threads.ThreadPool.CONCURRENT_THREADS is
hardcoded to a value of 18; it should be a configurable option.
2. I hacked the build scripts to include Doug Lea's concurrent.jar but
probably didn't make these changes correctly. I could switch to using
the Java 1.5 implementation of the concurrent package, but we would
still need to do something for Java 1.4 compatibility.
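For the Java 1.5 route mentioned in item 2, the built-in java.util.concurrent.Semaphore could replace concurrent.jar. A minimal sketch, assuming a hypothetical tomcat.throttleSize system property as a stand-in for a real configuration attribute (this would not help the Java 1.4 case):

    import java.util.concurrent.Semaphore;

    public class Throttle {
        // Hypothetical property name; defaults to the 18 currently
        // hardcoded in ThreadPool.CONCURRENT_THREADS.
        private static final int SIZE =
            Integer.getInteger("tomcat.throttleSize", 18).intValue();
        private static final Semaphore PERMITS = new Semaphore(SIZE);

        public static void runThrottled(Runnable request) throws InterruptedException {
            PERMITS.acquire();           // wait for one of the SIZE permits
            try {
                request.run();           // service the request
            } finally {
                PERMITS.release();
            }
        }
    }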
Any suggestions on completing this enhancement are appreciated.
Please include my [EMAIL PROTECTED] email address in your response.
I looked at this yesterday, and while it is a cool hack, it is not that useful anymore (and we're also not going to use the concurrent utilities in Tomcat, so it's not really an option before we require Java 5). The main issue is that, because keepalive is done in blocking mode, the actual concurrency in the servlet container is unpredictable: the number of processing threads (maxThreads) will usually be a lot higher than the actual expected concurrency (let's say 100 per CPU). If that issue is solved (we're trying to see if APR is a good solution for it), then the problem goes away.
Your patch is basically a much nicer implementation of maxThreads (assuming it doesn't reduce performance), which would be useful for the regular HTTP connector, so it's cool, but not worth it. Overall, I think the way maxThreads is done in the APR connector is the easiest: if the number of workers is too high, wait a bit without accepting anything.
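A rough sketch of that accept-side approach (illustrative names only, not the actual APR connector code): the acceptor simply pauses while all workers are busy, so concurrency is capped before a request is ever read.

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class ThrottledAcceptor {
        private final int maxThreads;
        private int busyWorkers;                     // guarded by "this"

        public ThrottledAcceptor(int maxThreads) {
            this.maxThreads = maxThreads;
        }

        public void acceptLoop(ServerSocket server)
                throws IOException, InterruptedException {
            while (true) {
                synchronized (this) {
                    // If the number of workers is too high, wait a bit
                    // without accepting anything.
                    while (busyWorkers >= maxThreads) {
                        wait(100);
                    }
                    busyWorkers++;
                }
                final Socket socket = server.accept();
                new Thread(new Runnable() {
                    public void run() {
                        try {
                            handle(socket);          // placeholder for request processing
                        } finally {
                            synchronized (ThrottledAcceptor.this) {
                                busyWorkers--;
                                ThrottledAcceptor.this.notifyAll();
                            }
                        }
                    }
                }).start();
            }
        }

        void handle(Socket socket) { /* request/keepalive handling would go here */ }
    }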
However, reading the text of the message, you don't seem to realize that a lot of the threads which would actually be doing processing are just blocking on keepalive (hence not doing anything useful; maybe you don't see it in your test). Anyway, congratulations on understanding that ThreadPool code (I stopped using it for new code, since I think it has some limitations and is too complex).
Rémy