On Wed, 2005-05-04 at 16:02 +0200, Remy Maucherat wrote:
> Scott Marlow wrote:
> > Hi, 
> > 
> > I wonder if anyone has any feedback on a performance change that I am
> > working on making. 
> > 
> > One benefit of reducing concurrency in a server application is that a
> > small number of requests can complete more quickly than if they had to
> > compete against a large number of running threads for object locks (Java
> > or externally in a database). 
> > 
> > I would like have a Tomcat configuration option to set the max number of
> > concurrent threads that can service user requests.  You might configure
> > Tomcat to handle 800 http client connections but set the max concurrent
> > requests to 20 (perhaps higher if you have more CPUs).  I like to refer
> > to the max concurrent requests setting as the throttle size (if there is
> > a better term, let me know).
> > 
> > I modified the Tomcat Thread.run code to use Doug Lea's semaphore
> > support but didn't expose a configuration option (haven't learned how to
> > do that yet). My basic change is to allow users to specify the max
> > number of concurrent servlet requests that can run. If an application
> > has a high level of concurrency, end users may get more consistent
> > response time with this change. If an application has a low level of
> > concurrency, my change doesn't help as their application only has a few
> > threads running concurrently anyway. 
> > 
> > This also reduces resource use on other tiers. For example, if you are
> > supporting 500 users with a Tomcat instance, you don't need a database
> > connection pool size of 500, instead set the throttle size to 20 and
> > create a database connection pool size of 20. 
> > 
> > Current status of the change: 
> > 
> > 1. org.apache.tomcat.util.threads.ThreadPool.CONCURRENT_THREADS is
> > hardcoded to a value of 18, should be a configurable option. 
> > 2. I hacked the build scripts to include Doug Lea's concurrent.jar but
> > probably didn't make these changes correctly.  I could switch to using
> > the Java 1.5 implementation of the Concurrent package but we would still
> > need to do something for Java 1.4 compatibility.
> > 
> > Any suggestions on completing this enhancement are appreciated.
> > 
> > Please include my [EMAIL PROTECTED] email address in your response.
> 
> I looked at this yesterday, and while it is a cool hack, it is not that 
> useful anymore (and we're also not going to use the concurrent utilities 
> in Tomcat, so it's not really an option before we require Java 5). The 
> main issue is that due to the fact keepalive is done in blocking mode, 
> actual concurrency in the servlet container is unpredictable (the amount 
> of processing threads - maxThreads - will usually be a lot higher than 
> the actual expected concurrency - let's say 100 per CPU). If that issue 
> is solved (we're trying to see if APR is a good solution for it), then 
> the problem goes away.
> 
> Your patch is basically a much nicer implementation of maxThreads 
> (assuming it doesn't reduce performance) which would be useful for the 
> regular HTTP connector, so it's cool, but not worth it. Overall, I think 
> the way maxThreads is done in the APR connector is the easiest (if the 
> amount of workers is too high, wait a bit without accepting anything).
> 
> However, reading the text of the message, you don't seem to realize that 
> a lot of the threads which would actually be doing processing are just 
> blocking for keepalive (hence not doing anything useful; maybe you don't 
> see it in your test). Anyway, congratulations for understanding that 
> ThreadPool code (I stopped using it for new code, since I think it has 
> some limitations and is too complex).
> 
> Rémy
> 

Thank you for all of the replies!

The benefit of reducing concurrency is for the application code more
than for the web container.  I last saw the benefit in action on Novell's
IIOP container while working on publishing spec.org benchmark numbers
(http://www.spec.org/jAppServer2001/results/res2003q4/jAppServer2001-20031118-00016.html).

Prior to setting a maximum on the number of requests allowed to run at
once, I had about 800 communication threads that were also running
application requests.  The application requests would typically do some
local processing and quite a bit of database I/O (the database ran on a
different tier).  With 800 application threads running at once, there
was too much contention on shared Java objects (Java's unfair lock
scheduling made this worse) and on the database.  Some client
requests would take 2 seconds to complete while others would take 40
seconds.

Luckily, the Novell CORBA ORB already had the ability to set the maximum
number of IIOP requests allowed to run concurrently.  Setting this to 18
didn't impact the communication threads' ability to send/receive; it
only restricted the number of application requests being processed at
once to 18.  This mostly eliminated the Java object contention and
shortened the database transactions, since there was much less contention
with only 18 requests running at once.  Running 18 requests at a time
gave a much more consistent response time.
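
To make the throttle idea concrete, here is a rough sketch of the kind
of wrapper I mean, using the Semaphore class from Doug Lea's
concurrent.jar (the class and method names below are just illustrative;
this is not the actual ThreadPool patch):

    import EDU.oswego.cs.dl.util.concurrent.Semaphore;

    // Illustrative wrapper that caps how many requests run at once.
    public class RequestThrottle {
        private final Semaphore permits;

        public RequestThrottle(int maxConcurrent) {
            // e.g. 18, like the hardcoded CONCURRENT_THREADS value above
            this.permits = new Semaphore(maxConcurrent);
        }

        public void runThrottled(Runnable request) throws InterruptedException {
            permits.acquire();      // block until one of the slots is free
            try {
                request.run();      // the actual request processing
            } finally {
                permits.release();  // always hand the slot back
            }
        }
    }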

Other web containers also have the ability to cap the number of requests
allowed to run concurrently.  MySQL's InnoDB storage engine likewise
assumes a small maximum number of concurrent threads (see
innodb_thread_concurrency at
http://dev.mysql.com/doc/mysql/en/innodb-start.html).  Other server
products have encouraged a small number of concurrently running
application requests for similar reasons.

As I mentioned before, I'm just getting started with the Tomcat change
and don't have benchmark results to show yet (and may not for a while).
No worries; I am patient and will get to this at some point, or perhaps
we will try it in a customer application to see if it helps.

Anyway, my point is that this could be a worthwhile enhancement for
applications that run on Tomcat.  What I don't understand yet is whether
the same functionality already exists in Tomcat.

I should point out that some applications shouldn't limit the maximum
number of concurrent requests (long-running requests won't benefit, but
maybe those applications shouldn't run on the web tier anyway :-)

I agree that it is difficult to deal with Java 1.4 versus 1.5 and the
concurrent utilities.  Perhaps we could use the 1.5 support and
implement the equivalent class in Tomcat's Java 1.4 compatibility layer.
The 1.5 java.util.concurrent.Semaphore class would probably be the one
to use for this
(http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/Semaphore.html).
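
As a rough illustration (the names below are made up, not an actual
Tomcat API), the throttle could sit behind a small interface so a 1.5
Semaphore-based implementation and a 1.4 fallback can be swapped without
touching the connector code:

    // Hypothetical interface for the throttle.
    public interface ConcurrencyLimiter {
        void enter() throws InterruptedException;
        void exit();
    }

    // Java 1.5 implementation on top of java.util.concurrent.Semaphore;
    // a Doug Lea concurrent.jar version could implement the same
    // interface for Java 1.4.
    public class Java5Limiter implements ConcurrencyLimiter {
        private final java.util.concurrent.Semaphore permits;

        public Java5Limiter(int maxConcurrent) {
            // "true" asks for fair (FIFO) permit hand-out, which should
            // help with the uneven response times described above
            this.permits = new java.util.concurrent.Semaphore(maxConcurrent, true);
        }

        public void enter() throws InterruptedException { permits.acquire(); }
        public void exit() { permits.release(); }
    }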

-Scott


---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
