I am trying to deal with a maximisation problem in which it is possible
for the objective function to (quite legitimately) return the value
-Inf,

(Just to add to the pedantic part of the discussion, from those of us who do not qualify as younger and wiser:)

Setting log(0) to -Inf is often convenient, but really I think the log function is undefined at zero, so I would not refer to this as "legitimate".

which causes the numerical optimisers that I have tried to fall over.

In theory as well as in practice: you need a function that is defined on the whole domain.


The -Inf values arise from expressions of the form "a * log(b)", with
b = 0.  Under the *starting* values of the parameters, a must equal 0
whenever b = 0, so we can legitimately say that a * log(b) = 0 in

This also is undefined and not "legitimate"; there is no intrinsic reason it should equal zero. We tend to want to set it to the value we think of as the "limit", but that limit depends on the path taken: for fixed a = 0 the expression is 0 for every b > 0, so the limit as b goes to zero is zero; whereas for b = 0 the expression is a * (-Inf) = -Inf for every a > 0, so the limit as a goes to zero along that path is -Inf.

So, you really do need to avoid zero because your function is not defined there, or find a redefinition that works properly at zero. I think you have a solution from another post.
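
A minimal sketch of one such redefinition (the helper name and the
floor value here are illustrative, not from any package):

    ## In R, 0 * log(0) evaluates to NaN, not 0, so the 0*log(0) = 0
    ## convention has to be imposed explicitly.  Return 0 wherever
    ## a == 0, and clamp the -Inf arising when a > 0 and b == 0 to a
    ## large finite penalty so the optimiser sees a usable value.
    a_log_b <- function(a, b, floor = -1e10) {
        out <- ifelse(a == 0, 0, a * log(b))
        pmax(out, floor)
    }
    a_log_b(0, 0)   # 0, by convention
    a_log_b(2, 0)   # -1e+10 rather than -Inf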

Paul

these circumstances.  However, as the maximisation algorithm searches
over parameters it is possible for b to take the value 0 for values of
a that are strictly positive.  (The values of "a" do not change during
this search, although they *do* change between "successive searches".)

Clearly if one is *maximising* the objective then -Inf is not a value of
particular interest, and we should be able to "move away".  But the
optimising function just stops.

It is also clear that "moving away" is not a simple task; you can't
estimate a gradient or Hessian at a point where the function value is -Inf.

Can anyone suggest a way out of this dilemma, perhaps an optimiser that
is equipped to cope with -Inf values in some sneaky way?

Various ad hoc kludges spring to mind, but they all seem to be fraught
with peril.
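
For what it is worth, the commonest such kludge is probably to wrap the
objective so that non-finite values are replaced by a large finite
penalty before the optimiser sees them.  A sketch using stats::optim(),
where obj() and start are stand-ins for the real objective and starting
values:

    ## Replace -Inf (or NaN) by a large finite penalty so that optim()
    ## can still compare function values (and, for gradient-based
    ## methods, form finite differences nearby).  optim() minimises by
    ## default; fnscale = -1 turns the problem into a maximisation.
    safe_obj <- function(par, floor = -1e8) {
        v <- obj(par)
        if (is.finite(v)) v else floor
    }
    fit <- optim(start, safe_obj, control = list(fnscale = -1))

The peril here is that the artificial plateau at the floor value can
itself stall the search, so the floor must sit well below any value the
objective takes at plausible parameters.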

I have tried changing the value returned by the objective function from
"v" to exp(v) --- which maps -Inf to 0, which is nice and finite.
However, this seemed to flatten out the objective surface too much, and
the search stalled at the 0 value, which is the antithesis of optimal.
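
The flattening has a concrete numerical cause: exp() underflows to
exactly 0 in double precision once its argument drops below about -745,
so all sufficiently bad parameter values become indistinguishable:

    exp(-745)   # about 4.9e-324, the smallest positive double
    exp(-746)   # exactly 0: underflow, so the surface is numerically
                # flat here and finite-difference gradients vanish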

The problem arises in the context of applying the EM algorithm, where
the M-step cannot be carried out explicitly, whence the numerical
optimisation.  I can give more detail if anyone thinks that it could be
relevant.

I would appreciate advice from younger and wiser heads! :-)

cheers,

Rolf Turner

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276
