On 07/11/16 13:07, William Dunlap wrote:
    Have you tried reparameterizing, using logb (=log(b)) instead of b?

Uh, no.  I don't think that that makes any sense in my context.

The "b" values are probabilities and must satisfy a "sum-to-1" constraint. To accommodate this constraint I re-parametrise via a "logistic" style parametrisation --- basically

   b_i = exp(z_i) / sum_{j=1}^{n} exp(z_j),   i = 1, ..., n

with the parameters that the optimiser works with being z_1, ...,
z_{n-1} (and with z_n == 0 for identifiability).  The objective
function is of the form sum_i(a_i * log(b_i)), so I transform back
from the z_i to the b_i in order to calculate the value of the
objective function.  But when the z_i get moderately large and
negative, the b_i become numerically 0, log(b_i) becomes -Inf, and
the optimiser falls over.
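
To illustrate (a minimal sketch with made-up values of a and z, not
my actual code):

   a <- c(2, 3, 1)
   z <- c(-800, 1, 0)        # z_n fixed at 0 for identifiability
   b <- exp(z)/sum(exp(z))   # exp(-800) underflows to 0, so b[1] == 0
   sum(a * log(b))           # a[1] * log(0) = -Inf; the optimiser falls over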

cheers,

Rolf


Bill Dunlap
TIBCO Software
wdunlap tibco.com

On Sun, Nov 6, 2016 at 1:17 PM, Rolf Turner <r.tur...@auckland.ac.nz> wrote:


    I am trying to deal with a maximisation problem in which it is
    possible for the objective function to (quite legitimately) return
    the value -Inf, which causes the numerical optimisers that I have
    tried to fall over.

    The -Inf values arise from expressions of the form "a * log(b)",
    with b = 0.  Under the *starting* values of the parameters, a must
    equal 0 whenever b = 0, so we can legitimately say that
    a * log(b) = 0 in these circumstances.  However, as the maximisation
    algorithm searches over parameters it is possible for b to take the
    value 0 for values of a that are strictly positive.  (The values of
    "a" do not change during this search, although they *do* change
    between "successive searches".)
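
    Schematically, in R (the function name "xlogy" is made up for this
    sketch):

        ## treat a*log(b) as 0 whenever a == 0, even if b == 0
        xlogy <- function(a, b) ifelse(a == 0, 0, a * log(b))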

    Clearly if one is *maximising* the objective then -Inf is not a value of
    particular interest, and we should be able to "move away".  But the
    optimising function just stops.

    It is also clear that "moving away" is not a simple task; you can't
    estimate a gradient or Hessian at a point where the function value
    is -Inf.

    Can anyone suggest a way out of this dilemma, perhaps an optimiser
    that is equipped to cope with -Inf values in some sneaky way?

    Various ad hoc kludges spring to mind, but they all seem to be
    fraught with peril.
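
    For instance, one could floor the b values away from zero before
    taking logs --- but the floor "eps" below is an arbitrary made-up
    value, and it biases the objective near the boundary:

        eps <- 1e-12                       # arbitrary; hence the peril
        val <- sum(a * log(pmax(b, eps)))  # finite, but wrong near b == 0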

    I have tried changing the value returned by the objective function
    from "v" to exp(v) --- which maps -Inf to 0, a nice finite value.
    However this seemed to flatten out the objective surface too much,
    and the search stalled at the 0 value, which is the antithesis of
    optimal.
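
    Schematically (with "fn" standing for my actual objective function):

        fn2 <- function(par) exp(fn(par))  # -Inf becomes 0, but the surface flattens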

    The problem arises in the context of applying the EM algorithm,
    where the M-step cannot be carried out explicitly, whence the
    numerical optimisation.
    I can give more detail if anyone thinks that it could be relevant.

    I would appreciate advice from younger and wiser heads! :-)

    cheers,

    Rolf Turner

    --
    Technical Editor ANZJS
    Department of Statistics
    University of Auckland
    Phone: +64-9-373-7599 ext. 88276

--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276

______________________________________________
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

Reply via email to