On Mon, Mar 31, 2008 at 5:14 PM, Hans W. Borchers <[EMAIL PROTECTED]> wrote:
>  > The problem with the DEoptim approach is that it is not guaranteed
>  > to converge to the solution. Moreover, in my experience, it seems
>  > to be quite slow when the optimization problem is high-dimensional
>  > (i.e., has many variables).
>
>  There is a difference between local and global optimization:
>
>  'optim' performs *local* optimization using a gradient-based approach.
>  This is fast, but will get stuck in local optima (except for method SANN).
>  'DEoptim' is one of many approaches to *global* optimization, of which
>  each has its advantages and drawbacks.

Regarding this point, I agree with you.
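
To make the distinction concrete, here is a rough sketch in R (assuming
the DEoptim package is installed; the Rastrigin test function and the
settings below are only illustrative):

## Rastrigin function: many local minima, global minimum 0 at the origin
rastrigin <- function(x) 10 * length(x) + sum(x^2 - 10 * cos(2 * pi * x))

set.seed(1)
x0 <- runif(10, -5, 5)   # random starting point in 10 dimensions

## Local, gradient-based search: fast, but typically ends in the
## local minimum nearest to x0
res.local <- optim(x0, rastrigin, method = "BFGS")
res.local$value

## Global, population-based search over the whole box [-5.12, 5.12]^10
library(DEoptim)
res.global <- DEoptim(rastrigin, lower = rep(-5.12, 10), upper = rep(5.12, 10),
                      control = DEoptim.control(itermax = 200, trace = FALSE))
res.global$optim$bestval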

>  > ...not guaranteed that it converges to the solution.
>
>   As a local optimization routine, 'optim' likewise does not guarantee
>   reaching a (global) optimum.

Yes, but with optim one can be (almost) certain that the returned
solution is at least a local optimum; with DEoptim there is no such
guarantee (the result may not be an optimum at all, locally or globally).
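
For what it is worth, optim at least reports whether its (local) stopping
criterion was met, whereas DEoptim simply stops after a fixed number of
generations. A small sketch (the quadratic is only a placeholder):

f <- function(x) sum((x - 1)^2)    # smooth test function, minimum at rep(1, 5)

fit <- optim(rep(0, 5), f, method = "BFGS")
fit$convergence   # 0 means the local convergence criterion was satisfied
fit$par           # very close to rep(1, 5)

## DEoptim, in contrast, returns the best population member found after
## 'itermax' generations -- with no indication of (local) optimality.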

>  > [DEoptim] seems to be quite slow...
>
>   This is normal for global optimization routines, as they have to
>   search quite a large space.

Certainly, but try solving an optimization problem with 100 variables
with both approaches. Even though it is the same problem, you will
probably find that optim is far faster than DEoptim.
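
Something along these lines, for example (timings depend heavily on the
machine and on the DEoptim control settings, so this is only a sketch):

n <- 100
f <- function(x) sum((x - 1)^2)   # smooth, unimodal test function in 100 variables

## Local, gradient-based: a handful of BFGS iterations
system.time(fit1 <- optim(rep(0, n), f, method = "BFGS"))
fit1$value

## Global, population-based: NP * itermax function evaluations
library(DEoptim)
system.time(fit2 <- DEoptim(f, lower = rep(-10, n), upper = rep(10, n),
                            control = DEoptim.control(NP = 10 * n,
                                                      itermax = 500,
                                                      trace = FALSE)))
fit2$optim$bestval

On a smooth, unimodal problem like this the global search buys you
nothing, which is exactly when the difference in running time is most
striking.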

Paul

