Arne Henningsen wrote:
Hi,

I don't know much about non-linear models, but there is another way to fit these models:

1) get some starting values for the parameters
2) take the derivatives of the model with respect to the parameters, evaluated at the starting values
3) perform a linear estimation of this linearized model (using systemfit) to get new parameter estimates
4) go to step 2), using the new parameter estimates in place of the starting values
5) iterate until the parameters are stable from one iteration to the next (a minimal R sketch of this loop is below)
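For concreteness, here is a minimal sketch of that loop in R for a single equation. The exponential model, data, and starting values are made up for illustration, and lm() stands in for systemfit, which would play the same role for a system of equations:

set.seed(1)
x <- seq(0, 2, length.out = 50)
y <- 2 * exp(0.7 * x) + rnorm(50, sd = 0.1)

f     <- function(a, b) a * exp(b * x)   # the non-linear model
theta <- c(a = 1, b = 0.5)               # step 1: starting values

for (i in 1:50) {
  a <- theta["a"]; b <- theta["b"]
  ## step 2: derivatives of the model w.r.t. the parameters
  da <- exp(b * x)
  db <- a * x * exp(b * x)
  ## step 3: linear estimation of the linearized model
  r   <- y - f(a, b)
  fit <- lm(r ~ da + db - 1)
  ## step 4: update the parameters with the linear estimates
  delta <- unname(coef(fit))
  theta <- theta + delta
  ## step 5: stop when the parameters are stable
  if (max(abs(delta)) < 1e-8) break
}
theta           # final parameter estimates
summary(fit)    # SEs from the last (linear) iteration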


This has three advantages:
1) It is not much work to write this function, since systemfit already exists
2) If the model is linear in the parameters, it is identical to its linearization, and thus the first iteration leads directly to the optimum
3) You get the SEs from the last iteration of systemfit


Does this approach also have disadvantages (e.g. non-convergence of parameters in many cases)?

Best wishes,
Arne


OK, you have reinvented the gradient descent method. It has one really big disadvantage: rather poor convergence when the function has long "valleys". The method finds such a valley quickly enough, but it takes a long time to find the minimum along it; successive iterations jump from one wall of the valley to the other. To solve this problem, many methods have been invented that modify the search path so that it deviates from the gradient line (conjugate gradient methods, Levenberg-Marquardt), using an estimate of the Hessian matrix.
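To make the "valley" problem concrete, here is a small illustration in R; the Rosenbrock function is my stand-in objective, not anything from the original post. Plain gradient descent with a fixed step follows the gradient line and crawls along the valley, while a curvature-aware method (optim's BFGS here; Levenberg-Marquardt plays the same role for least-squares problems) converges quickly:

rosen  <- function(p) (1 - p[1])^2 + 100 * (p[2] - p[1]^2)^2
grosen <- function(p) c(-2 * (1 - p[1]) - 400 * p[1] * (p[2] - p[1]^2),
                        200 * (p[2] - p[1]^2))

## plain gradient descent with a fixed step size
p <- c(-1.2, 1)
for (i in 1:10000) p <- p - 1e-3 * grosen(p)
p    # typically still creeping along the valley towards the optimum (1, 1)

## a method that uses curvature information gets there far faster
optim(c(-1.2, 1), rosen, grosen, method = "BFGS")$par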
