On 09/24/2015 06:01 PM, Gilles wrote:
On Thu, 24 Sep 2015 17:02:15 -0500, Ole Ersoy wrote:
On 09/24/2015 03:23 PM, Luc Maisonobe wrote:
On 24/09/2015 21:40, Ole Ersoy wrote:
Hi Luc,

I gave this some more thought, and I think I may have tapped out too
soon, even though you are absolutely right about what an exception does
in terms of bubbling execution up to a point where it stops or we handle it.

Suppose we have an Optimizer and an Optimizer observer.  The optimizer
will emit the following events in the process of stepping
through to the maximum number of iterations it is allotted:
- SOLUTION_FOUND
- COULD_NOT_CONVERGE_FOR_REASON_1
- COULD_NOT_CONVERGE_FOR_REASON_2
- END (Max iterations reached)
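
In code, the events could be a simple enum.  This is just a sketch; the
OptimizerEvent name is a placeholder, not an existing CM type:

enum OptimizerEvent {
    SOLUTION_FOUND,
    COULD_NOT_CONVERGE_FOR_REASON_1,
    COULD_NOT_CONVERGE_FOR_REASON_2,
    END // max number of iterations reached
}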

So we have the observer interface:

interface OptimizerObserver {

     void success(Solution solution);
     void update(OptimizerEvent event, Optimizer optimizer);
     void end(Optimizer optimizer);
}

So if the Optimizer notifies the observer of `success`, then the
observer does what it needs to with the results and moves on.  If the
observer gets an `update` notification, that means that given the
current [constraints, number of iterations, data] the optimizer cannot
finish.  But the update method receives the optimizer, so it can adapt
it, and tell it to continue or just trash it and try something
completely different.  If the `END` event is reached then the Optimizer
could not finish given the number of allotted iterations. The Optimizer
is passed back via the callback interface so the observer could allow
more iterations if it wants to...perhaps based on some metric indicating
how close the optimizer is to finding a solution.

What this could do is allow the implementation of the observer to throw
the exception if 'All is lost!', in which case the Optimizer does not
need an exception.  Totally understand that this may not work
everywhere, but it seems like it could work in this case.

WDYT?
With this version, you should also pass the optimizer in case of
success. In most cases, the observer will just ignore it, but in some
cases it may try to solve another problem, or to solve again with
stricter constraints, using the previous solution as the start point
for the more stringent problem. Another case would be to go from a
simple problem to a more difficult problem using some kind of
homotopy.
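
In code, that variant might look like this (only a sketch, reusing the
hypothetical OptimizerEvent from above):

interface OptimizerObserver {

     void success(Solution solution, Optimizer optimizer);
     void update(OptimizerEvent event, Optimizer optimizer);
     void end(Optimizer optimizer);
}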
Great - whoooh - glad you like this version a little better - for a
sec I thought I had completely lost it :).

IIUC, I don't like it: it looks like "GOTO"...

Inside the optimizer it would work like this:

while (!done) {
    if (cantConverge) { // e.g. no progress possible under the current settings
        observer.update(OptimizerEvent.COULD_NOT_CONVERGE_FOR_REASON_1, this);
    }
}

Then in the update method either modify the optimizer's parameters or throw an 
exception.
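
For instance, a client-side observer might look roughly like this.  This is
only a sketch: setMaxIterations/getMaxIterations are hypothetical accessors
on the Optimizer, and the exception type is just illustrative; it also assumes
the success(Solution, Optimizer) variant discussed above:

class MyObserver implements OptimizerObserver {

    @Override
    public void success(Solution solution, Optimizer optimizer) {
        // use the solution and move on
    }

    @Override
    public void update(OptimizerEvent event, Optimizer optimizer) {
        if (event == OptimizerEvent.COULD_NOT_CONVERGE_FOR_REASON_1) {
            // adapt the optimizer and let it continue (hypothetical accessors)
            optimizer.setMaxIterations(optimizer.getMaxIterations() * 2);
        } else {
            // "All is lost!" - the observer, not the optimizer, raises the exception
            throw new IllegalStateException("Optimizer could not converge: " + event);
        }
    }

    @Override
    public void end(Optimizer optimizer) {
        // allotted iterations exhausted; decide whether to grant more or give up
    }
}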



Note to self ... cancel
therapy with Dr. Phil.  BTW - Gilles - this could also be used as a
lightweight logger.

I don't like this either (reinventing the wheel).

You still want me to go and see Dr. Phil? :)



The Optimizer could publish information deemed
interesting on each ITERATION event.

If we'd go for an "OptimizerObserver" that gets called at every
iteration,
there shouldn't be any overlap between it and "Optimizer":
So inside the Optimizer we could have:

while (!done) {
    ...
    if (observer.notifyOnIncrement()) {
        observer.increment(this);
    }
}

That would give us an opportunity to cancel the run if, for example, it's not 
converging fast enough.  In that case we set done to true in the observer, and 
then allow the Optimizer to get to the point where it checks whether it's done, 
calls the END notification on the observer, and then the observer takes it from 
there.
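
As a sketch, assuming the Optimizer exposes a hypothetical requestStop() that
flips the done flag checked by the loop above (getCurrentCost() is likewise
hypothetical), and that the observer interface also declares the
notifyOnIncrement()/increment(Optimizer) hooks:

class CancellingObserver implements OptimizerObserver {

    private static final double MIN_IMPROVEMENT = 1.0e-6; // illustrative threshold

    private double previousCost = Double.POSITIVE_INFINITY;

    @Override
    public boolean notifyOnIncrement() {
        return true;
    }

    @Override
    public void increment(Optimizer optimizer) {
        double cost = optimizer.getCurrentCost();
        if (previousCost - cost < MIN_IMPROVEMENT) {
            optimizer.requestStop(); // done becomes true; the END notification follows
        }
        previousCost = cost;
    }

    @Override
    public void success(Solution solution, Optimizer optimizer) { /* use the solution */ }

    @Override
    public void update(OptimizerEvent event, Optimizer optimizer) { /* adapt or give up */ }

    @Override
    public void end(Optimizer optimizer) { /* decide whether to grant more iterations */ }
}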


The iteration limit should be dealt with by the observer; the iterative
algorithm would just run "forever" until the observer is satisfied
with the current state (the solution is good enough or the allotted
resources - be they time, iterations, evaluations, ... - are
exhausted).

It's possible to do it that way, although I think it's better if that code 
stays in the algorithm, so that the Observer interface (which the client, i.e. the 
person using CM, implements) is as simple as possible to implement.

The observer could then be wired
with SLF4J and perform the same type of logging that the Optimizer
would perform.  So CM could declare SLF4J as a test dependency, and
unit tests could log iterations using it.
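
For instance, an SLF4J-backed observer could look like this (a sketch; the
Optimizer accessors are hypothetical, and the interface is assumed to include
the increment hooks from above):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class LoggingObserver implements OptimizerObserver {

    private static final Logger LOG = LoggerFactory.getLogger(LoggingObserver.class);

    @Override
    public boolean notifyOnIncrement() {
        return LOG.isDebugEnabled();
    }

    @Override
    public void increment(Optimizer optimizer) {
        LOG.debug("iteration {}", optimizer.getIterations()); // hypothetical accessor
    }

    @Override
    public void success(Solution solution, Optimizer optimizer) {
        LOG.info("solution found: {}", solution);
    }

    @Override
    public void update(OptimizerEvent event, Optimizer optimizer) {
        LOG.warn("optimizer cannot finish: {}", event);
    }

    @Override
    public void end(Optimizer optimizer) {
        LOG.info("allotted iterations exhausted");
    }
}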

As a "user", I'm interested in how the algorithms behave on my problem,
not in the CM unit tests.
You could still do that.  I usually take my problem, simplify it down to a data 
set that I think covers all corner cases, and then run it through my unit tests 
while looking at the logging output to get an idea of how my algorithm is 
behaving.


The question remains unanswered: why not use slf4j directly?

From what I understand, classpath dependency conflicts for SLF4J are easily solved 
by excluding the logging dependencies that other libraries bring in and then 
depending directly on the logging implementation that you want to use.  So people 
do run into issues, but I think they are solvable:
http://stackoverflow.com/questions/8921382/maven-slf4j-version-conflict-when-using-two-different-dependencies-that-requi

Lombok also has an @Slf4j annotation that's pretty sweet.  It saves the
SLF4J boilerplate.

I understand that using annotations can be a time-saver, but IMO not
so much for a library like CM; so in this case, the risk of depending
on another library must be weighed against the advantages.
Lombok is compile-time only, so there should be few drawbacks:
http://stackoverflow.com/questions/6107197/how-does-lombok-work
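
For example, the same logging observer written with Lombok (a sketch):

import lombok.extern.slf4j.Slf4j;

// Lombok generates the "private static final Logger log = ..." field at compile time.
@Slf4j
class LoggingObserver {

    void increment(Optimizer optimizer) {
        log.debug("iteration {}", optimizer.getIterations()); // hypothetical accessor, as above
    }
}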

I'll demo it on the LevenbergMarquardtOptimizer experiment, and we can see the 
level of code reduction we are able to achieve.  I think it's going to be 
fairly significant.

Cheers,
- Ole
