On Fri, Mar 15, 2013 at 9:45 AM, Prof J C Nash (U30A) <nas...@uottawa.ca> wrote:
> Actually, it likely won't matter where you start. The Gauss-Newton direction
> is nearly always close to 90 degrees from the gradient, as seen by turning on
> trace=TRUE in the package nlmrt function nlxb(), which does a safeguarded
> Marquardt calculation. This can be used in place of nls(), except you need
> to put your data in a data frame. It finds a solution pretty
> straightforwardly, though with quite a few iterations and function
> evaluations.
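For anyone who wants to try the Marquardt route Nash describes, a minimal sketch (my reconstruction, assuming the nlmrt package is installed; the data are the four points from the example below):

```r
library(nlmrt)

# same data as in the nls() example, already in a data frame as nlxb() requires
DF <- data.frame(x = c(60, 80, 100, 120),
                 y = c(0.8, 6.5, 20.5, 45.9))

# trace = TRUE prints the per-iteration diagnostics Nash refers to;
# note this is run from the poor starting value that makes nls() fail
fit <- nlxb(y ~ exp(a + b*x) + d,
            start = list(a = 0, b = 0, d = 1),
            data = DF, trace = TRUE)
fit
```
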
Interesting observation, but it does converge in 5 iterations with the improved starting value, whereas it fails with a singular gradient from the original starting value.

> Lines <- "
+ x y
+ 60 0.8
+ 80 6.5
+ 100 20.5
+ 120 45.9
+ "
> DF <- read.table(text = Lines, header = TRUE)
>
> # original starting value - singular gradient
> nls(y ~ exp(a + b*x) + d, DF, start = list(a = 0, b = 0, d = 1))
Error in nlsModel(formula, mf, start, wts) :
  singular gradient matrix at initial parameter estimates
>
> # better starting value (from a log-linear fit, since y ~ exp(a + b*x)
> # when d is small) - converges in 5 iterations
> lm1 <- lm(log(y) ~ x, DF)
> st <- setNames(c(coef(lm1), 0), c("a", "b", "d"))
> nls(y ~ exp(a + b*x) + d, DF, start = st)
Nonlinear regression model
  model: y ~ exp(a + b * x) + d
   data: DF
      a       b       d
-0.1492  0.0342 -6.1966
 residual sum-of-squares: 0.5743

Number of iterations to convergence: 5
Achieved convergence tolerance: 6.458e-07

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.