optim(..., hessian=TRUE, ...) returns a list with a component hessian, the matrix of second derivatives of the objective function at the minimum found. If your objective function is -log(likelihood), then optim(..., hessian=TRUE)$hessian is the observed information matrix. If eigen(...$hessian)$values are all positive, with at most a few orders of magnitude between the largest and smallest, the matrix is invertible, and the square roots of the diagonal elements of its inverse give standard errors for the normal approximation to the distribution of the parameter estimates.

With objective functions that are not always well behaved, I find that optim sometimes stops short of the optimum. I run it with method = "Nelder-Mead", "BFGS", and "CG", then restart from the best answer so far using one of the other methods. Doug Bates and Brian Ripley could probably suggest something better, but this has produced acceptable answers for me in several cases, and I did not push it beyond that.
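To make that concrete, here is a minimal sketch (not from the original post; the data x and the negative log-likelihood negll below are made up purely for illustration) of getting standard errors from the returned Hessian:

  set.seed(1)
  x <- rnorm(100, mean = 2, sd = 3)

  ## negative log-likelihood of a normal; par = c(mean, log(sd))
  negll <- function(par)
      -sum(dnorm(x, mean = par[1], sd = exp(par[2]), log = TRUE))

  fit <- optim(c(0, 0), negll, method = "BFGS", hessian = TRUE)

  ## sanity check before inverting: all eigenvalues positive and of
  ## comparable magnitude?
  eigen(fit$hessian, symmetric = TRUE)$values

  ## standard errors from the observed information matrix
  sqrt(diag(solve(fit$hessian)))

  ## if convergence looks doubtful, restart from the best answer
  ## so far with a different method
  fit2 <- optim(fit$par, negll, method = "Nelder-Mead", hessian = TRUE)

Passing fit$par as the starting value to another optim() call with a different method is all the "restart" amounts to.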

Hope this helps.

Jean Eid wrote:

Dear All,
I am trying to solve a Generalized Method of Moments problem, which
requires computing the gradient of the moment conditions to get the
standard errors of the estimates.
I know optim does not output the gradient, but I can use numericDeriv to
get that. My question is: is this the best function to do this?
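For what it's worth, a minimal sketch of such a numericDeriv call (the moment conditions g() and the parameter vector theta below are made up for illustration, not the actual GMM setup):

  x <- rnorm(50, mean = 1, sd = 2)
  ## toy moment conditions based on the first two moments of x
  g <- function(theta) c(mean(x) - theta[1],
                         mean(x^2) - theta[1]^2 - theta[2])
  theta <- c(1, 4)
  d <- numericDeriv(quote(g(theta)), "theta")
  attr(d, "gradient")   ## numerical Jacobian of the moments in theta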

Thank you,
Jean

______________________________________________
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


