On Mon, 11 Sep 2006 19:07:07 -0400, Sandy McArthur wrote:

>> Btw just curious how you test for lock contention etc.? I only know some
>> JDK 1.5/1.6 features that measure the native locks and thread wait time
>> but nothing for 1.4. I could provide test feedback on a shiny new
>> dual-core Opteron.
> 
> I don't have any truly robust tests, just a series of micro-benchmarks
> that try to simulate single threaded and multi-threaded access patterns
> with fake-expensive poolable objects that I've run many times on a single
> cpu, a hyper threaded cpu, and a quad xeon cpu servers. My conclusion a
> few months ago was that the serialization that happens prevented the quad
> cpu server from delivering more than about 1.3 times the throughput of a
> single cpu server. I personally think it's reasonable to expect a quad cpu
> server should give you at least a 3 times performance boost over a single
> cpu server.

At least, yes - preferably 4. :)
The pool is of course a bit of a bottleneck by design, since that is its
purpose in certain situations. Hard to break out of that. On the other
hand, just using something like ConcurrentHashMap for the KeyedObjectPool
would likely make a difference already, assuming the keys are accessed
with a relatively even distribution.
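To make the idea concrete, here is a minimal sketch of what I mean (the class and method names are just made up for illustration, not commons-pool API): each key gets its own lock-free idle queue, and the map itself is a ConcurrentHashMap, so borrows for different keys never contend on one shared monitor.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ConcurrentMap;

// Sketch: per-key idle queues held in a ConcurrentHashMap, so borrows for
// different keys do not serialize on a single synchronized collection.
class ConcurrentKeyedIdleStore<K, V> {
    private final ConcurrentMap<K, Queue<V>> idle =
            new ConcurrentHashMap<K, Queue<V>>();

    private Queue<V> queueFor(K key) {
        Queue<V> q = idle.get(key);
        if (q == null) {
            Queue<V> fresh = new ConcurrentLinkedQueue<V>();
            // putIfAbsent is atomic: if two threads race, both end up
            // using the same queue instance.
            Queue<V> prior = idle.putIfAbsent(key, fresh);
            q = (prior != null) ? prior : fresh;
        }
        return q;
    }

    V tryBorrow(K key) {
        return queueFor(key).poll(); // null when nothing idle for this key
    }

    void giveBack(K key, V obj) {
        queueFor(key).offer(obj);
    }
}
```

Of course this only removes contention on the map itself; the per-object state transitions you describe below are a separate problem.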

> But thread-safe access to the backing idle object pool collection isn't
> the main bottleneck, even though that is where most people seem to look
> to make commons pool faster.
> 
> What slows the provided pool implementations down is the state transitions
> poolable objects go through and the potential expense of activating or
> passivating poolable objects while keeping the limits in check. For
> non-trivial poolable objects, which I assume all are, else you wouldn't be
> pooling them, the pool spends most of its time in the activateObject,
> validateObject, and passivateObject methods. If those can run in parallel

Right, this makes a pool different from a bunch of simple queues.

> then the total throughput of the pool increases greatly, especially on
> multi-cpu servers. The problem is that if you allow the activate, validate,
> and passivateObject methods to run in parallel, it becomes much harder to
> tell whether a poolable object that is transitioning state will cause a
> limit to be exceeded.

I see. The obvious thing would be a single background worker (or more than
one, maybe one per phase, like a pipeline) - that way the 90% of cases
where a fast handoff is OK and does not clash with maxActive/minActive
could be processed in the background while returning to the caller
quickly. SynchronousQueue (which is not really a queue) is made especially
for those cases. Unfortunately that would conflict with the synchronous
nature of borrowObject/returnObject. Maybe an opportunity for something
like returnObject(Object obj, boolean async)?
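Roughly what I have in mind (a sketch only - the class name and the passivation body are placeholders, and a real version would need error handling and limit bookkeeping): the async path hands the object to one background worker and returns to the caller immediately, while the sync path keeps today's behaviour.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of a returnObject(obj, async) variant: passivation either runs
// inline (classic synchronous behaviour) or on a single background worker.
class AsyncReturnPool<T> {
    final BlockingQueue<T> idle = new LinkedBlockingQueue<T>();
    final ExecutorService worker = Executors.newSingleThreadExecutor();

    void passivateObject(T obj) {
        // placeholder for the potentially expensive reset, e.g. rollback
    }

    void returnObject(final T obj, boolean async) {
        if (async) {
            // caller returns right away; worker passivates and re-idles
            worker.execute(new Runnable() {
                public void run() {
                    passivateObject(obj);
                    idle.offer(obj);
                }
            });
        } else {
            passivateObject(obj); // today's synchronous path
            idle.offer(obj);
        }
    }
}
```

(On 1.4 the executor and queue would come from backport-util-concurrent instead of java.util.concurrent.)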

> For example pretty much every pool I've seen simply uses the internal
> Collection's .size() method as the result of getNumIdle(). But without
> full synchronization this isn't sufficient because a previously idle
> poolable object isn't really active until activateObject is done. A naive
> implementation will have a race condition that could allow too many
> database connections to be created, or whatever else is being pooled.

Yes, the JCIP book (great reading btw!) calls these "hot fields" since
they are the single contention points required for correctness. The huge
problem is that the "precision" of the pool cannot be relaxed even in
those cases where you could tolerate a temporarily off-by-one count. Maybe
we can sneak something in based on alternative implementations.
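One way around the race you describe without full synchronization (again only a sketch with made-up names, not a proposal for the actual API): reserve a slot in the active count *before* activateObject runs, via a compare-and-set loop on an atomic counter. Then parallel activations cannot overshoot maxActive even while the idle collection's size() is momentarily "wrong".

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a slot in maxActive is reserved atomically before activation,
// so activateObject can run unsynchronized without breaking the limit.
class BoundedBorrow<T> {
    final Queue<T> idle = new ConcurrentLinkedQueue<T>();
    final AtomicInteger active = new AtomicInteger(0);
    final int maxActive;

    BoundedBorrow(int maxActive) {
        this.maxActive = maxActive;
    }

    T activateObject(T obj) {
        return obj; // stand-in for the real, possibly slow, activation
    }

    T tryBorrow() {
        while (true) {
            int a = active.get();
            if (a >= maxActive) {
                return null; // limit reached, fail fast
            }
            if (active.compareAndSet(a, a + 1)) {
                break; // slot reserved atomically
            }
        }
        T obj = idle.poll();
        if (obj == null) {
            active.decrementAndGet(); // nothing idle: release the slot
            return null;
        }
        return activateObject(obj); // safe to run outside any lock
    }

    void giveBack(T obj) {
        idle.offer(obj);
        active.decrementAndGet();
    }
}
```

The counter is still a hot field, but a CAS on an int is far cheaper than holding a monitor across activateObject.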

> I wasn't aware of backport-util-concurrent, I'll look into it.

What?! run, don't walk! :) It's really great and works extremely well.

> Patches attached to an issue can be submitted as soon as you have them
> ready, but you have to wait on a committer to commit them. Then you have
> to be asked to become a committer, accept, and then it takes some time and
> paperwork to become official. You can read about it here:
> http://jakarta.apache.org/site/getinvolved.html

I already got my CLA faxed in, so that is the least of my problems.

cheers
Holger


