Cristina Silva wrote:

Dear all

I have tried to estimate confidence intervals for the predicted values of a
nonlinear model fitted with nls. The function predict gives the predicted
values and the lower and upper limits of the prediction when the class of
the object is lm or glm. When the object comes from nls, the function
predict (or predict.nls) gives only the predicted values; the se.fit and
interval arguments are simply ignored.
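
For example (toy data, just to show what I mean):

x <- 1:10
y <- 2*x + rnorm(10)
predict(lm(y ~ x), interval = "confidence")               # returns fit, lwr and upr
fit.nls <- nls(y ~ a*x, start = list(a = 1))
predict(fit.nls, se.fit = TRUE, interval = "confidence")  # extra arguments are ignored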

Could anybody tell me how to estimate the confidence intervals for the
predicted values (not the model parameters), using an object of class nls?

Regards,

Cristina

------------------------------------------
Cristina Silva
IPIMAR - Departamento de Recursos Marinhos
Av. de Brasília, 1449-006 Lisboa
Portugal
Tel.: 351 21 3027096
Fax: 351 21 3015948
[EMAIL PROTECTED]


Maybe this example helps:

==============cut here===============
#define a model formula (a and b are the parameters, "f" is "x"):
frml <- k1 ~ f*(1-a*exp(-b/f))

#simulate some data:
a0 <- .6
b0 <- 1.2
f  <- seq(0.01,4,length=20)
k1true<- f*(1-a0*exp(-b0/f))
#include some noise
amp <- .1
k1 <- rnorm(length(k1true), mean=k1true, sd=amp*k1true)

#fit (deriv() builds a function that also returns the gradient as an attribute):
fifu <- deriv(frml, c("a","b"), function(a,b,f){})
rr <- nls(k1 ~ fifu(a,b,f), start=list(a=.5, b=1))
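#note: summary(rr) gives standard errors for the *parameters*;
#below they are propagated to the *predicted* values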

#the derivatives and variance/covariance matrix:
#(the derivatives could also be extracted from fifu)
dk1.da <- D(frml[[3]],'a')
dk1.db <- D(frml[[3]],'b')
covar <- vcov(rr)

#Gaussian error propagation (first order, evaluated at the fitted a and b):
a <- coef(rr)['a']
b <- coef(rr)['b']
vark1 <-
      eval(dk1.da)^2*covar[1,1]+
      eval(dk1.db)^2*covar[2,2]+
      2*eval(dk1.da)*eval(dk1.db)*covar[1,2]

errk1 <- sqrt(vark1)
lower.bound <- fitted(rr) - 2*errk1      #two sigma!
upper.bound <- fitted(rr) + 2*errk1      #ditto

plot(f,k1,pch=1)
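#approximate +/- 1 sd (amp) error bars around each data point: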
ff <- outer(c(1,1),f)
kk <- outer(c(1,1),k1)*c(1-amp,1+amp)
matlines(ff,kk,lty=3,col=1)

matlines(f, cbind(k1true, fitted(rr), lower.bound, upper.bound),
         col=c(1,2,3,3), lty=c(1,1,2,2))
xylim <- par('usr')
xpos <- .1*(xylim[2]-xylim[1])+xylim[1]
ypos <- xylim[4] - .1*(xylim[4]-xylim[3])
legend(xpos, ypos,
       c('data', 'true', 'fit', 'confidence'),
       pch=c(1,-1,-1,-1),
       lty=c(0,1,1,2),
       col=c(1,1,2,3))
==============cut here===============
If you put this in a file and source it a few times from within R, you'll get an
impression of how often the fit deviates from the 'true' curve by more than the
shown confidence limits suggest.
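
To make that impression quantitative, here is a rough sketch (reusing fifu, dk1.da,
dk1.db, f, k1true and amp from the code above, no error handling) that repeats the
simulation and counts how often the 2-sigma band covers the true curve at every point:

coverage <- replicate(200, {
  k1 <- rnorm(length(k1true), k1true, amp*k1true)
  rr <- nls(k1 ~ fifu(a,b,f), start=list(a=.5, b=1))
  a <- coef(rr)['a']; b <- coef(rr)['b']
  vark1 <- eval(dk1.da)^2*vcov(rr)[1,1] + eval(dk1.db)^2*vcov(rr)[2,2] +
           2*eval(dk1.da)*eval(dk1.db)*vcov(rr)[1,2]
  all(abs(fitted(rr) - k1true) < 2*sqrt(vark1))  #true curve inside the band everywhere?
})
mean(coverage)  #the bands are pointwise, so expect somewhat less than ~95% here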


I believe this approach is 'nearly' valid as long as Gaussian error propagation is
valid (i.e. it is only a first-order approximation, and therefore only good for
not-too-large standard errors, am I right?).
To my simple physicist's mind this should suffice to get 'reasonable' (probably, in a
strict sense, not completely correct?) confidence intervals for the fit/the prediction.
If somebody objects, please let me know!
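
For what it's worth, the propagation step can also be written compactly with the
gradient matrix that the deriv()-generated function returns (a sketch, reusing rr,
fifu, a, b, f and vark1 from the code above):

g <- attr(fifu(a, b, f), "gradient")       #n x 2 matrix of derivatives w.r.t. a and b
vark1.g <- rowSums((g %*% vcov(rr)) * g)   #diagonal of g %*% vcov(rr) %*% t(g)
all.equal(as.vector(vark1.g), as.vector(vark1))  #agrees with the explicit formula above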



joerg
