I think that the best way to implement this kind of functionality is through JMX.

Expose the statistics that the pool can gather and let users monitor them and take 
action by creating monitors and listeners.

This would decouple the management of the pool from its implementation. I think this 
is a much cleaner design.
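As a rough sketch of what that could look like (all names here are illustrative, not part of [pool]'s API): a standard MBean interface exposing the pool's counters, plus a plain implementation the pool updates and registers with the platform MBean server.

```java
import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Hypothetical MBean interface exposing the counters a JMX console,
// monitor, or listener could watch and react to. In a real deployment
// this interface must be declared public for standard-MBean compliance;
// it is package-private here only so the sketch compiles as one file.
interface PoolStatisticsMBean {
    int getNumActive();
    int getNumIdle();
    long getBorrowCount();
}

// Implementation the pool would update as objects are created,
// borrowed, and returned.
class PoolStatistics implements PoolStatisticsMBean {
    private final AtomicInteger active = new AtomicInteger();
    private final AtomicInteger idle = new AtomicInteger();
    private final AtomicLong borrows = new AtomicLong();

    void onCreate() { idle.incrementAndGet(); }
    void onBorrow() { active.incrementAndGet(); idle.decrementAndGet(); borrows.incrementAndGet(); }
    void onReturn() { active.decrementAndGet(); idle.incrementAndGet(); }

    public int getNumActive()    { return active.get(); }
    public int getNumIdle()      { return idle.get(); }
    public long getBorrowCount() { return borrows.get(); }

    // Register once under a well-known ObjectName so any JMX client
    // (jconsole, a custom monitor thread, etc.) can read the attributes.
    void register() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(this, new ObjectName("org.example.pool:type=PoolStatistics"));
    }
}
```

A monitor could then poll getNumActive()/getNumIdle() and adjust the pool's configuration from outside, without the pool implementation knowing anything about the tuning policy.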


Mauro

-----Original Message-----
From: Phil Steitz [mailto:[EMAIL PROTECTED]
Sent: Wednesday, September 22, 2004 8:39 PM
To: Jakarta Commons Developers List
Subject: Re: (Pool) Pools should be self tuning


Stephan wrote:
> 
> To solve this problem, in my opinion, the pool should be self-tuning. 
> Just like the Evictor thread (which is just a trigger), another thread 
> would wake up at regular intervals, gather statistical data, and 
> analyse it. As a result, action would be taken (like changing the values 
> of the pool settings) to adapt the pool to the usage pattern.
> 
> How would you go about monitoring the usage pattern of a pool?
> The only ideas I have right now are using a moving average of the active 
> connections and maybe remembering this average throughout the day so that 
> the pool can anticipate usage. Imagine an automatic process that uses 
> the pool once a day at 1:00 and borrows 1000 objects from the pool in a 
> few seconds: the pool could anticipate this regular usage and create 
> more objects in the pool.
> 
> I'm not very good at statistics and I hope that some of you are and 
> could explain how statistics could help in this task.
> 
> This self-tuning thing could be a more general problem and find other 
> applications in other packages.
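The moving-average idea above could be as simple as an exponentially weighted moving average (EWMA) of the active count, which discounts old samples and is cheap for a background thread to update at each interval. A sketch (names and the alpha parameter are illustrative):

```java
// Illustrative sketch: exponentially weighted moving average (EWMA) of a
// pool's active-object count, updated by a background statistics thread.
class ActiveCountEwma {
    private final double alpha; // weight of the newest sample, 0 < alpha <= 1
    private double average;
    private boolean seeded;

    ActiveCountEwma(double alpha) {
        if (alpha <= 0.0 || alpha > 1.0) {
            throw new IllegalArgumentException("alpha must be in (0, 1]");
        }
        this.alpha = alpha;
    }

    // Called at each sampling interval with the current active count.
    // The first sample seeds the average; later samples blend in with
    // weight alpha, so old history decays geometrically.
    void record(int activeCount) {
        if (!seeded) {
            average = activeCount;
            seeded = true;
        } else {
            average = alpha * activeCount + (1 - alpha) * average;
        }
    }

    double average() { return average; }
}
```

To capture the "batch job at 1:00" pattern, one could keep a separate EWMA per hour of day and pre-grow the pool shortly before the hour whose average is high.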

Interesting idea. A "resource manager" component that could support 
different heuristics for monitoring and controlling resource levels. An 
object pool could be a managed resource. Configuration metadata could 
abstract resource management interfaces (so the pool itself, for example, 
would not have to change).  In many cases, very simple reactive growth / 
shrinkage (growth and shrinkage triggered by resource availability 
thresholds) would work fine.  Predictive modelling would require you to 
quantify and measure lots of things -- cost of starvation / overload, cost 
to create/activate, destroy/passivate, cost to maintain idle resources, 
resource request arrival rate, service time (at different load levels), 
etc. Really more operations research than statistics. Could be worthwhile 
in some cases. There are lots of different heuristics that could be 
applied. Defining the interfaces to support a wide range of heuristics 
could be interesting.
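The simple reactive case could be sketched like this (thresholds, step sizes, and names are made up for illustration, not an actual [pool] interface):

```java
// Sketch of threshold-triggered reactive sizing: grow the target size when
// idle capacity drops below a low-water mark (approaching starvation),
// shrink it when idle capacity sits above a high-water mark (waste).
class ReactiveSizer {
    private final int minSize, maxSize, step;
    private final int lowWater, highWater; // idle-count thresholds

    ReactiveSizer(int minSize, int maxSize, int step, int lowWater, int highWater) {
        this.minSize = minSize;
        this.maxSize = maxSize;
        this.step = step;
        this.lowWater = lowWater;
        this.highWater = highWater;
    }

    // Returns the new target pool size given the current idle count.
    // A monitor thread would call this each interval and apply the result
    // to the pool's configuration.
    int adjust(int idle, int currentTarget) {
        if (idle < lowWater)  return Math.min(maxSize, currentTarget + step);
        if (idle > highWater) return Math.max(minSize, currentTarget - step);
        return currentTarget; // within the comfort band: leave it alone
    }
}
```

A heuristic like this stays entirely outside the pool, which is what makes a common "resource manager" interface plausible: the manager only needs to read availability and write a target size.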

One more note on this. Web applications, which probably dominate [pool]'s 
user base, often present "bursty" and irregular load profiles. This 
presents some special problems for both modelling and adaptive heuristics. 
Growing a pool on demand, for example, can fail if ramp-up takes too long 
when a "flash crowd" has shown up. Here is an interesting article 
describing some of the things that come up:
http://lass.cs.umass.edu/~lass/papers/pdf/TR03-37.pdf


Phil

> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 

