Yes, indeed, great idea!
A good way to do this is probably to build a wrapper (instead of a class that
inherits from a ConvergenceChecker): a ConvergenceTracker implemented very much
like your proposal, except that it can be wrapped around any ConvergenceChecker
with a constructor like
ct = new DebugConvergenceTracker(new SimpleValueChecker(...));
That way, later on (after the optimizer has been run), it is possible to get
the trajectory with something like
List<PointValuePair> path = ct.getTrajectory();
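To make the idea concrete, here is a minimal, self-contained sketch of such a wrapper. The real class would implement org.apache.commons.math3.optim.ConvergenceChecker<PointValuePair>; the stand-in interfaces, the toy "optimizer loop", and the class names below are illustrative assumptions, not existing Commons Math code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class DebugConvergenceTrackerDemo {
    // Simplified stand-in for org.apache.commons.math3.optim.PointValuePair.
    static final class PointValuePair {
        final double[] point;
        final double value;
        PointValuePair(double[] point, double value) {
            this.point = point.clone();
            this.value = value;
        }
    }

    // Simplified stand-in for the Commons Math ConvergenceChecker interface.
    interface ConvergenceChecker {
        boolean converged(int iteration, PointValuePair previous, PointValuePair current);
    }

    // The wrapper: records every intermediate point, then delegates the decision.
    static final class DebugConvergenceTracker implements ConvergenceChecker {
        private final ConvergenceChecker delegate;
        private final List<PointValuePair> trajectory = new ArrayList<>();

        DebugConvergenceTracker(ConvergenceChecker delegate) {
            this.delegate = delegate;
        }

        @Override
        public boolean converged(int iteration, PointValuePair previous, PointValuePair current) {
            trajectory.add(current); // record the intermediate result
            return delegate.converged(iteration, previous, current);
        }

        List<PointValuePair> getTrajectory() {
            return Collections.unmodifiableList(trajectory);
        }
    }

    public static void main(String[] args) {
        // Stand-in for SimpleValueChecker: converged when values differ by < 1e-3.
        ConvergenceChecker simple = (it, prev, cur) -> Math.abs(prev.value - cur.value) < 1e-3;
        DebugConvergenceTracker ct = new DebugConvergenceTracker(simple);

        // Toy "optimizer loop" producing a geometrically shrinking objective value.
        PointValuePair prev = new PointValuePair(new double[] {1.0}, 1.0);
        for (int i = 1; i <= 20; i++) {
            PointValuePair cur = new PointValuePair(new double[] {prev.point[0] / 2}, prev.value / 2);
            if (ct.converged(i, prev, cur)) {
                break;
            }
            prev = cur;
        }
        System.out.println("trajectory length = " + ct.getTrajectory().size());
    }
}
```

Because the tracker merely decorates whatever checker is handed to the optimizer, the optimizer itself needs no modification at all.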
The same wrapper principle may be applied to count the number of times the
objective function's "value" method is invoked during the optimization process.
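A sketch of that counting wrapper, under the same assumptions (the interface below is a simplified stand-in for org.apache.commons.math3.analysis.MultivariateFunction, whose single method is double value(double[] point); the class name is hypothetical):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CountingFunctionDemo {
    // Simplified stand-in for org.apache.commons.math3.analysis.MultivariateFunction.
    interface MultivariateFunction {
        double value(double[] point);
    }

    // Wrapper that counts how many times value() is invoked before delegating.
    static final class CountingFunction implements MultivariateFunction {
        private final MultivariateFunction delegate;
        private final AtomicInteger evaluations = new AtomicInteger();

        CountingFunction(MultivariateFunction delegate) {
            this.delegate = delegate;
        }

        @Override
        public double value(double[] point) {
            evaluations.incrementAndGet();
            return delegate.value(point);
        }

        int getEvaluations() {
            return evaluations.get();
        }
    }

    public static void main(String[] args) {
        // Wrap a simple quadratic; an optimizer would be given the wrapper,
        // not the raw function.
        CountingFunction f = new CountingFunction(p -> p[0] * p[0] + p[1] * p[1]);
        for (int i = 0; i < 7; i++) {
            f.value(new double[] {i, -i});
        }
        System.out.println("evaluations = " + f.getEvaluations());
    }
}
```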
In fact, a wrapper is probably a better pattern than inheritance here, because
it does not meddle with the optim class hierarchy and does not require other
developers of the optim package to do anything special.
Indeed, I shall keep coding and testing a lot before I can propose
something to commit to Apache Commons.
yours truly
François Laferrière
> Hello.
>
> > I am trying to extend the optim package for my own needs, keeping in
> > mind that I may possibly contribute to commons-math. So I try to stick
> > to the general design philosophy of commons-math.
> >
> > In my project, I need to compare the performance of different
> > algorithms. So I need to keep track not
> > only of the initial guess, final result and number of iterations, but also
> > of intermediate results. For instance, it would be great to have an
> > option to keep track of the trajectory, e.g. as an array of
> > values that can be retrieved at the end. Something like (using the
> > yet-to-be-developed "fluent" interface):
> >
> > class MyOptimizer implements DebuggableOptim ...
> >
> > MyOptimizer optim = new MyOptimizer().withDebuggable(true);
> > PointValuePair resultPair = optim.optimize(...);
> > PointValuePair[] trajectory = optim.getTrajectory();
>
> Although I agree that it can be useful to keep track of how an
> algorithm reached the result, I'm wary to make it part of the
> optimizer's API.
> In fact a user could define a functionality like this by
> extending the "ConvergenceChecker" classes:
>
> ---CUT---
> public class ConvergenceTracker extends SimpleValueChecker {
>     private final List<PointValuePair> path;
>
>     public ConvergenceTracker(final double relativeThreshold,
>                               final double absoluteThreshold,
>                               final List<PointValuePair> path) {
>         super(relativeThreshold, absoluteThreshold);
>         this.path = path;
>     }
>
>     @Override
>     public boolean converged(final int iteration,
>                              final PointValuePair previous,
>                              final PointValuePair current) {
>         path.add(current);
>         return super.converged(iteration, previous, current);
>     }
> }
> ---CUT---
>
> > For my own purposes, I am developing implementations of different
> > gradient-based optimization methods not
> > yet available in 3.2 (simple gradient, Newton and quasi-Newton methods).
> >
> > To do so I also implemented a general purpose
> >
> > abstract public class NumericallyDerivableMultivariateFunction
> > implements MultivariateFunction {
> > ...
> >
> > This class implements the gradient and Hessian matrix based on finite
> > differences.
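For reference, the central-difference gradient such a class typically computes is df/dx_i ≈ (f(x + h·e_i) - f(x - h·e_i)) / (2h). The sketch below is a stand-alone illustration of that formula only; it is not the actual NumericallyDerivableMultivariateFunction code (which has not been posted), and the class name, stand-in interface, and step size h are all assumptions:

```java
public class FiniteDifferenceDemo {
    // Simplified stand-in for org.apache.commons.math3.analysis.MultivariateFunction.
    interface MultivariateFunction {
        double value(double[] point);
    }

    // Central-difference gradient: df/dx_i ≈ (f(x + h e_i) - f(x - h e_i)) / (2h).
    static double[] gradient(MultivariateFunction f, double[] x, double h) {
        double[] grad = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            double saved = x[i];
            x[i] = saved + h;
            double fPlus = f.value(x);
            x[i] = saved - h;
            double fMinus = f.value(x);
            x[i] = saved; // restore the coordinate before moving on
            grad[i] = (fPlus - fMinus) / (2 * h);
        }
        return grad;
    }

    public static void main(String[] args) {
        // f(x, y) = x^2 + 3y; the exact gradient at (2, 5) is (4, 3).
        MultivariateFunction f = p -> p[0] * p[0] + 3 * p[1];
        double[] g = gradient(f, new double[] {2.0, 5.0}, 1e-6);
        System.out.printf("grad = (%.4f, %.4f)%n", g[0], g[1]);
    }
}
```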
> >
> > If it can be of any use
>
> The features certainly look interesting.
>
> >
> > yours truly
> >
> > François Laferriere
> >
> > P.S. I previously submitted this stuff to the "user" list and Gilles
> > suggested that I push it to the dev list.
>
> Next step would be: code, unit tests, ... :-)
>
> Regards,
> Gilles
>
>
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]