On Sun, 20 Sep 2015 01:12:49 -0500, Ole Ersoy wrote:
Wanted to float some ideas for the LeastSquaresOptimizer (possibly a
general Optimizer) design.  For example, with the
LevenbergMarquardtOptimizer we would do:
`LevenbergMarquardtOptimizer.optimize(OptimizationContext c);`

Rough optimize() outline:
public static void optimize(OptimizationContext c) {
    // Perform the optimization.
    // If successful:
    c.notify(LevenbergMarquardtResultsEnum.SUCCESS, solution);
    // If not successful:
    c.notify(LevenbergMarquardtResultsEnum.TOO_SMALL_COST_RELATIVE_TOLERANCE, diagnostic);
    // or
    c.notify(LevenbergMarquardtResultsEnum.TOO_SMALL_PARAMETERS_RELATIVE_TOLERANCE, diagnostic);
    // etc.
}
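To make the proposal concrete, here is a minimal sketch of what such a context might look like. All names here are hypothetical illustrations of the idea, not an existing Commons Math API:

```java
import java.util.ArrayList;
import java.util.List;

public class ContextSketch {
    // Hypothetical result codes, mirroring the enum names used above.
    enum LevenbergMarquardtResultsEnum {
        SUCCESS,
        TOO_SMALL_COST_RELATIVE_TOLERANCE,
        TOO_SMALL_PARAMETERS_RELATIVE_TOLERANCE
    }

    // Hypothetical context: collects what the optimizer reports
    // instead of throwing an exception.
    static class OptimizationContext {
        private LevenbergMarquardtResultsEnum result;
        private final List<Object> details = new ArrayList<>();

        void notify(LevenbergMarquardtResultsEnum r, Object detail) {
            this.result = r;
            details.add(detail);
        }

        LevenbergMarquardtResultsEnum getResult() { return result; }
        List<Object> getDetails() { return details; }
    }

    public static void main(String[] args) {
        OptimizationContext c = new OptimizationContext();
        // An optimizer would invoke this internally on termination:
        c.notify(LevenbergMarquardtResultsEnum.SUCCESS, new double[] { 1.0, 2.0 });
        System.out.println(c.getResult()); // SUCCESS
    }
}
```

The caller inspects the context after optimize() returns, so both the success and failure paths go through the same reporting channel.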

When tracing is turned on, the diagnostic will contain a trace of the
last N iterations leading up to the failure; when turned off, the
Diagnostic instance only contains the parameters used to detect the
failure.  The diagnostic could be viewed as an indirect way to log
optimizer iterations.
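A "last N iterations" trace suggests a bounded buffer. Here is one possible sketch (the Diagnostic class and its methods are hypothetical, chosen only to illustrate the idea):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class DiagnosticSketch {
    // Hypothetical Diagnostic: retains at most the last N iteration records.
    static class Diagnostic {
        private final int capacity;
        private final boolean traceEnabled;
        private final Deque<String> lastIterations = new ArrayDeque<>();

        Diagnostic(int capacity, boolean traceEnabled) {
            this.capacity = capacity;
            this.traceEnabled = traceEnabled;
        }

        void record(String iteration) {
            // Trace off: keep nothing here (failure parameters live elsewhere).
            if (!traceEnabled) return;
            if (lastIterations.size() == capacity) {
                lastIterations.removeFirst(); // drop the oldest record
            }
            lastIterations.addLast(iteration);
        }

        Deque<String> trace() { return lastIterations; }
    }

    public static void main(String[] args) {
        Diagnostic d = new Diagnostic(3, true);
        for (int i = 1; i <= 5; i++) {
            d.record("iteration " + i + ": cost=" + (1.0 / i));
        }
        System.out.println(d.trace().size()); // 3: only the last N are retained
    }
}
```

The optimizer would call record() once per iteration; on failure, the caller reads back only the tail of the run.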

WDYT?

I'm wary of having several different ways to convey information to the
caller. It seems that the reporting interfaces could quickly overwhelm
the "actual" code (one type of context per algorithm).

The current reporting is based on exceptions, and assumes that if no
exception was thrown, then the user's request completed successfully.
I totally agree that in some circumstances, more information on the
inner working of an algorithm would be quite useful.
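For contrast, the current exception-based contract can be sketched like this. This is a simplified stand-in, not the real Commons Math classes or signatures:

```java
public class ExceptionStyleSketch {
    // Stand-in for the library's convergence failure exception.
    static class ConvergenceException extends RuntimeException {
        ConvergenceException(String msg) { super(msg); }
    }

    // Stand-in for optimize(): either returns a solution or throws.
    static double[] optimize(double start) {
        if (Double.isNaN(start)) {
            throw new ConvergenceException("unable to converge from NaN start");
        }
        return new double[] { start }; // the "solution"
    }

    public static void main(String[] args) {
        try {
            double[] solution = optimize(1.0);
            System.out.println("converged: " + solution[0]);
        } catch (ConvergenceException e) {
            // Failure path: the only information available is the exception.
            System.out.println("failed: " + e.getMessage());
        }
    }
}
```

If optimize() returns normally, the request is assumed to have succeeded; all failure detail is squeezed into the exception, which is exactly the limitation the diagnostic proposal tries to address.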

But I don't see the point in devoting resources to reinventing the wheel:
I have longed several times for the use of a logging library.
The only show-stopper has been the informal "no-dependency" policy...

Best regards,
Gilles


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org
