Re: [R] nlme correlated random effects
I haven't seen a reply to this, so I will offer a comment in case you haven't already solved the problem.

Have you consulted the Mixed-Effects Bible for S-Plus / R, Pinheiro and Bates (2000), Mixed-Effects Models in S and S-PLUS (Springer)? If so, have you worked through the appropriate portions of the book and the companion script files shipped with the standard R distribution in ~R\library\nlme\scripts? I just did grep 'pdB' *.* in that directory and found five uses of pdBlocked: three in ch04.R and one each in ch06.R and ch08.R. Also, RSiteSearch("pdBlocked with 2 random effects") produced 69 hits for me just now. You may not find anything useful there, but 69 does not seem too large a list to search, and there seems a reasonable chance of finding something useful.

Beyond this, a recommended approach to difficult questions like this is to simplify the problem to the maximum extent possible. For example, it sounds to me as though your question, "can I use pdBlocked with only 2 random effects", could be answered without the complexity of nlme. What phony data set can you generate, with the minimum number of observations and variables, that could help answer this question? The process of producing such a simplified example might itself produce an answer to your question. If it doesn't, then you can submit that simple, self-contained example to this list. Doing so will increase the chances of a useful reply.

I know this doesn't answer your question, but I hope it helps.

Best Wishes,
Spencer Graves

Daniel O'Shea wrote:
I am examining the following nlme model.
asymporig <- function(x, th1, th2) th1 * (1 - exp(-exp(th2) * x))
mod1 <- nlme(fa20 ~ (ah*habdiv + ad*log(d) + ads*ds + ads2*ds2 + at*trout) +
               asymporig(da.p, th1, th2),
             fixed = ah + ad + ads + ads2 + at + th1 + th2 ~ 1,
             random = th1 + th2 ~ 1,
             start = c(ah = .9124, ad = .9252, ads = .5, ads2 = -.1,
                       at = -1, th1 = 2.842, th2 = -6.917),
             data = pca1.grouped)

However, the two random effects (th1 and th2), which describe the asymptotic relationship between richness (fa20) and area (da.p), are correlated: -0.904, with an approximate 95% CI of -0.99 to -0.32. I compared mod1 (both random effects) against mod2 (just th1) with anova, and mod1 is preferred. I also fit pdDiag(th1 + th2 ~ 1) in another model (mod3), and based on the anova the original mod1 is again preferred. My question is: can I use pdBlocked with only 2 random effects (and should I?), and if so, how would I specify that in the model? Or perhaps the 95% CI for the correlation is wide enough to ignore?

Dan

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
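For what it's worth, here is how such a pdBlocked specification might look. This is an untested sketch built from the mod1 call in the post (the data, function, and start values are taken from there). Note that with only two random effects, one block per effect is structurally the same as pdDiag: the blocks are uncorrelated with each other, so this should reproduce mod3 rather than improve on it.

```r
## Untested sketch, reusing the mod1 call from the post.
## pdBlocked with one single-effect block per parameter forces
## zero correlation between th1 and th2 (equivalent to pdDiag here).
mod4 <- nlme(fa20 ~ (ah*habdiv + ad*log(d) + ads*ds + ads2*ds2 + at*trout) +
               asymporig(da.p, th1, th2),
             fixed  = ah + ad + ads + ads2 + at + th1 + th2 ~ 1,
             random = pdBlocked(list(th1 ~ 1, th2 ~ 1)),  # one block per effect
             start  = c(ah = .9124, ad = .9252, ads = .5, ads2 = -.1,
                        at = -1, th1 = 2.842, th2 = -6.917),
             data   = pca1.grouped)
```

pdBlocked only becomes genuinely different from pdDiag when at least one block contains two or more effects whose correlation you want to keep.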
[R] nlme correlated random effects
I am examining the following nlme model.

asymporig <- function(x, th1, th2) th1 * (1 - exp(-exp(th2) * x))
mod1 <- nlme(fa20 ~ (ah*habdiv + ad*log(d) + ads*ds + ads2*ds2 + at*trout) +
               asymporig(da.p, th1, th2),
             fixed = ah + ad + ads + ads2 + at + th1 + th2 ~ 1,
             random = th1 + th2 ~ 1,
             start = c(ah = .9124, ad = .9252, ads = .5, ads2 = -.1,
                       at = -1, th1 = 2.842, th2 = -6.917),
             data = pca1.grouped)

However, the two random effects (th1 and th2), which describe the asymptotic relationship between richness (fa20) and area (da.p), are correlated: -0.904, with an approximate 95% CI of -0.99 to -0.32. I compared mod1 (both random effects) against mod2 (just th1) with anova, and mod1 is preferred. I also fit pdDiag(th1 + th2 ~ 1) in another model (mod3), and based on the anova the original mod1 is again preferred. My question is: can I use pdBlocked with only 2 random effects (and should I?), and if so, how would I specify that in the model? Or perhaps the 95% CI for the correlation is wide enough to ignore?

Dan
[R] nlme model
I am having trouble figuring out the right form for the nlme arguments. I do have examples in Modern Applied Statistics with S and from other sources, but I still can't figure it out. I am trying to estimate species richness (sr) in streams across Minnesota. My predictor variables are depth (d), habitat diversity (habdiv), drainage area (da), and an indicator variable representing the watersheds/basins that the streams are in (bas: there are 10 watersheds). The variable explaining the greatest amount of variation appears to be da. I have used log(da) to linearize the relationship, but an asymptotic relationship is more appropriate. I have used nls:

B <- c(.007, 1, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, .05, .001, .8)
st <- list(ad = B[1], ahabdiv = B[2], abas = B[3:12], b = B[13], c = B[14], z = B[15])
modnls.a <- nls(sr ~ ad*log(d) + ahabdiv*habdiv + abas[bas] + (b/(c + (da^-z))),
                start = st, trace = TRUE)

I next used a random slope and intercept model using lmer from the lme4 package:

modlme <- lmer(y ~ d + habdiv + log(da) + (log(da) | bas), method = 'ML')

What I would like to do is fit a model similar to modlme, but with (b/(c + (da^-z))) instead of log(da), keeping d and habdiv as fixed effects and the sr-da relationship for each basin as a random effect. For the life of me I cannot figure out the proper form of nlme. Any help would be greatly appreciated.

Fsr <- function(da, b, c, z) { b/(c + (da^-z)) }
modnlme <- nlme(sr ~ d + habdiv + Fsr(da, b, c, z),
                fixed = , random = , start = )
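One possible shape for the call the poster is after: give d and habdiv their own fixed coefficients, and let the asymptote parameters vary by basin. This is an untested sketch; the coefficient names ad and ah, the start values, and the data-frame name streams are made up for illustration.

```r
## Untested sketch: linear fixed terms plus a nonlinear sr-da curve per basin.
Fsr <- function(da, b, c, z) b / (c + da^-z)

modnlme <- nlme(sr ~ ad*d + ah*habdiv + Fsr(da, b, c, z),
                fixed  = ad + ah + b + c + z ~ 1,   # population-level coefficients
                random = b + c + z ~ 1 | bas,       # sr-da relationship varies by basin
                start  = c(ad = 1, ah = 1, b = .05, c = .001, z = .8),  # placeholders
                data   = streams)
```

In nlme every symbol on the right-hand side must be declared as either a fixed or a random parameter (or be a variable in the data), which is why the linear covariates need explicit coefficients here rather than the lm-style shorthand sr ~ d + habdiv.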
[R] {nlme} Multilevel estimation heteroscedasticity
Dear All,

I'm trying to model heteroscedasticity using a multilevel model. To do so, I make use of the nlme package and the weights parameter. Let's say that I hypothesize that the exam score of students (normexam) is influenced by their score on a standardized LR test (standLRT). Students are of course nested in schools. These variables are contained in the Exam data in the mlmRev package.

library(nlme)
library(mlmRev)
lme(fixed = normexam ~ standLRT, data = Exam, random = ~ 1 | school)

If I want to model only a few categories of variance, all works fine. For instance, should I (for whatever reason) hypothesize that the variance of the normexam scores is larger in mixed schools than in boys' schools, I'd use weights = varIdent(form = ~ 1 | type), leading to:

heteroscedastic <- lme(fixed = normexam ~ standLRT, data = Exam,
                       weights = varIdent(form = ~ 1 | type),
                       random = ~ 1 | school)

This gives me nice and clear output, part of which is shown below:

Variance function:
 Structure: Different standard deviations per stratum
 Formula: ~normexam | type
 Parameter estimates:
      Mxd     Sngl
 1.000000 1.034607
Number of Observations: 4059
Number of Groups: 65

Though, should I hypothesize that the variance of the normexam variable is larger in schools that have a higher average score on intake exams (schavg), I run into trouble. I'd use weights = varIdent(form = ~ 1 | schavg), leading to:

heteroscedastic <- lme(fixed = normexam ~ standLRT, data = Exam,
                       weights = varIdent(form = ~ 1 | schavg),
                       random = ~ 1 | school)

This leads to estimation problems. R tells me:

Error in lme.formula(fixed = normexam ~ standLRT, data = Exam, weights = varIdent(form = ~1 | :
  nlminb problem, convergence error code = 1;
  message = iteration limit reached without convergence (9)

Fiddling with maxIter and setting an unreasonable tolerance doesn't help. I think the origin of this problem lies in the large number of categories of schavg (65), which may make estimation troublesome.
This leads to my questions:
- How can this estimation problem be solved?
- Is it possible for varIdent (or more generally, a varFunc) in lme to return a single value, representing a coefficient along which variance is increasing / decreasing?
- In general: how can a variance component / heteroscedasticity be made dependent on some level-2 variable (school level in my examples)?

Many thanks in advance,

Rense Nieuwenhuis
Re: [R] {nlme} Multilevel estimation heteroscedasticity
Rense,

how about

weights = varPower(form = ~ schavg)

or

weights = varConstPower(form = ~ schavg)

or even

weights = varPower(form = ~ schavg | type)

You might find Pinheiro and Bates (2000) to be a valuable investment.

I hope that this helps,

Andrew

On Sun, Jun 10, 2007 at 04:35:58PM +0200, Rense Nieuwenhuis wrote:
[quoted message snipped]

--
Andrew Robinson
Department of Mathematics and Statistics     Tel: +61-3-8344-9763
University of Melbourne, VIC 3010 Australia  Fax: +61-3-8344-4599
http://www.ms.unimelb.edu.au/~andrewpr
http://blogs.mbs.edu/fishing-in-the-bay/
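Spelled out in a full call, the first of these suggestions might look as follows. This is an untested sketch: varPower estimates a single exponent delta, with the variance proportional to |schavg|^(2*delta), so it gives one coefficient rather than 65 strata. It does require the covariate to be nonzero, so varConstPower is the safer choice if schavg can equal zero.

```r
## Untested sketch: variance as a power of the school-level covariate schavg.
library(nlme)
library(mlmRev)  # provides the Exam data

heteroscedastic <- lme(fixed   = normexam ~ standLRT,
                       data    = Exam,
                       random  = ~ 1 | school,
                       weights = varPower(form = ~ schavg))
summary(heteroscedastic)  # the variance-function section reports the power estimate
```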
Re: [R] nlme fixed effects specification
Just to provide some closure on this thread, let me add two comments:

1. Doug's version of my sweep function:

diffid1 <- function(h, id) {
  id <- as.factor(id)[, drop = TRUE]
  apply(as.matrix(h), 2, function(x) x - tapply(x, id, mean)[id])
}

is far more elegant than my original, and works perfectly, but

2. I should have mentioned that the proposed strategy gets the coefficient estimates right; however, their standard errors need a degrees-of-freedom correction, which in the present instance is non-negligible -- sqrt(98/89) -- since the lm() step doesn't know that we have already estimated the fixed effects with the sweep operation.

url: www.econ.uiuc.edu/~roger    Roger Koenker
email: [EMAIL PROTECTED]         Department of Economics
vox: 217-333-4558                University of Illinois
fax: 217-244-6678                Champaign, IL 61820

On May 5, 2007, at 7:16 PM, Douglas Bates wrote:

On 5/5/07, roger koenker [EMAIL PROTECTED] wrote:

On May 5, 2007, at 3:14 PM, Douglas Bates wrote:
As Roger indicated in another reply you should be able to obtain the results you want by sweeping out the means of the groups from both x and y. However, I tried Roger's function and a modified version that I wrote and could not show this. I'm not sure what I am doing wrong.

Doug, isn't it just that you are generating a balanced factor and Ivo is generating an unbalanced one -- he wrote:

fe = as.factor( as.integer( runif(100)*10 ) );

the coefficient on x is the same -- or, aarrgh, is it that you don't like the s.e. being wrong?

I didn't notice this at first. But it shouldn't happen. I'll have to take another look at this.

No, my mistake was much dumber than that. I was comparing the wrong coefficient. For some reason I was comparing the coefficient for x in the second fit to the Intercept from the first fit. I'm glad that it really is working and, yes, you are right, the degrees of freedom are wrong in the second fit because the effect of those 10 degrees of freedom is removed from the data before the model is fit.

I enclose a transcript that shows that I can reproduce the result from Roger's function but it doesn't do what either of us think it should. BTW, I realize that the estimate for the Intercept should be zero in this case.

[earlier quoted messages snipped]

Ivo_Rout.txt
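Roger's degrees-of-freedom point can be sketched numerically (untested): with 100 observations, the demeaned regression estimates an intercept and a slope and so reports 98 residual df, but 10 group means were already swept out, leaving only 89 genuine residual df, hence the sqrt(98/89) inflation of the standard error.

```r
## Untested sketch of the df correction for the sweep estimator.
set.seed(1)
fe <- as.factor(as.integer(runif(100) * 10))
y  <- rnorm(100); x <- rnorm(100)

demean <- function(v, g) v - ave(v, g)     # sweep out group means
fit <- lm(demean(y, fe) ~ demean(x, fe))

se.naive   <- coef(summary(fit))[2, "Std. Error"]
k          <- nlevels(fe)                  # group means already estimated
df.correct <- fit$df.residual - (k - 1)    # 98 - 9 = 89 true residual df
se.correct <- se.naive * sqrt(fit$df.residual / df.correct)
```

The corrected standard error then matches what lm() would report for the full dummy-variable (LSDV) fit.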
Re: [R] nlme fixed effects specification
Ivo, I don't know whether you ever got a proper answer to this question. Here is a kludgy one -- someone else can probably provide a more elegant version of my diffid function. What you want to do is sweep out the mean deviations from both y and x based on the factor fe, and then estimate the simple y-on-x linear model. I have an old function that was originally designed to do panel data models that looks like this:

diffid <- function(h, id) {
  if (is.vector(h)) h <- matrix(h, ncol = 1)
  Ph <- unique(id)
  Ph <- cbind(Ph, table(id))
  for (i in 1:ncol(h)) Ph <- cbind(Ph, tapply(h[, i], id, mean))
  is <- tapply(id, id)
  Ph <- Ph[is, -(1:2)]
  h - Ph
}

With this you can do:

set.seed(1)
fe <- as.factor( as.integer( runif(100)*10 ) )
y <- rnorm(100); x <- rnorm(100)
summary(lm(diffid(y, fe) ~ diffid(x, fe)))

HTH,
Roger

On May 4, 2007, at 3:08 PM, ivo welch wrote:
[quoted message snipped]
Re: [R] nlme fixed effects specification
On 5/4/07, ivo welch [EMAIL PROTECTED] wrote:
[quoted message snipped]

OK, I understand now. You really do want to accommodate the levels of the factor as fixed effects, not as random effects. The lme and lmer functions are fitting a more complicated model, in which the variance of the random effects is chosen to maximize the log-likelihood or the restricted log-likelihood, but they don't give the results that are of interest to you.

As Roger indicated in another reply, you should be able to obtain the results you want by sweeping out the means of the groups from both x and y. However, I tried Roger's function and a modified version that I wrote and could not show this. I'm not sure what I am doing wrong. I enclose a transcript that shows that I can reproduce the result from Roger's function, but it doesn't do what either of us think it should. BTW, I realize that the estimate for the Intercept should be zero in this case.

> set.seed(1)
> y <- rnorm(100); x <- rnorm(100)
> fe <- gl(10, 10)
> head(coef(summary(lm(y ~ x + fe))))
               Estimate Std. Error     t value  Pr(>|t|)
(Intercept)  0.12203179  0.2955477  0.41290050 0.6806726
x            0.02927071  0.1053909  0.27773478 0.7818601
fe2          0.13032049  0.4176603  0.31202505 0.7557513
fe3         -0.25552441  0.4164178 -0.61362502 0.5410283
fe4          0.01123732  0.4227301  0.02658272 0.9788521
fe5          0.02836583  0.4255255  0.0071     0.9470013
> coef(summary(lm(diffid(y, fe) ~ diffid(x, fe))))
                  Estimate Std. Error      t value  Pr(>|t|)
(Intercept)   9.802350e-18 0.08837912 1.109125e-16 1.0000000
diffid(x, fe) 2.927071e-02 0.10043495 2.914394e-01 0.7713312
> diffid1
function (h,
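The transcript above can be condensed into a direct check that the within (demeaned) regression reproduces the dummy-variable coefficient exactly, which is the Frisch-Waugh result the thread relies on. An untested sketch using base R's ave() in place of diffid:

```r
## Untested sketch: within estimator vs. least-squares-dummy-variables estimator.
set.seed(1)
y  <- rnorm(100); x <- rnorm(100)
fe <- gl(10, 10)

demean <- function(v, g) v - ave(v, g)       # subtract group means

b.lsdv   <- coef(lm(y ~ x + fe))["x"]        # coefficient from the dummy fit
b.within <- coef(lm(demean(y, fe) ~ demean(x, fe)))[2]

all.equal(unname(b.lsdv), unname(b.within))  # the two slopes coincide
```

Only the slope transfers; as Roger notes elsewhere in the thread, the reported standard error from the demeaned fit still needs the degrees-of-freedom correction.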
Re: [R] nlme fixed effects specification
On 5/3/07, ivo welch [EMAIL PROTECTED] wrote:

> dear R experts: sorry, I have to ask this again. I know that the answer is in section 7.2 of S Programming, but I don't have the book (and I plan to buy the next edition---which I hope will be titled S/R Programming ;-) ). I believe the following yields a standard fixed-effects estimation:
>
> fixed.effects = as.factor( as.integer( runif(100)*10 ) )
> y=rnorm(100); x=rnorm(100);
> print(summary(lm( Y ~ X + fe)))

Want to try that one again, Ivo? You have defined objects called fixed.effects, y and x, then used the formula Y ~ X + fe.

> I would like to know how to get the same coefficient on X using nlme.

The nlme function or the nlme package? I'm guessing that you are referring to the lme function in the nlme package, but it would help if you told us exactly what you mean. Also, perhaps you could be a bit more explicit about what you mean by "same coefficient". If you fit a model with random effects for a factor then you will get similar, but not identical, values for the coefficients of the fixed-effects terms.

> (I cannot use this ordinary lm method in my real application, simply because I have 10,000 fixed effects.) I tried a variety of arguments to the fixed nlme parameter (e.g., fixed=list(fmid)), but did not get the syntax right. could someone please tell me the magic spell?

If you plan on fitting a linear mixed model then I would suggest

library(lme4)
lmer(Y ~ X + (1|fact))

where fact is the factor specifying the groups in the data. For example:

> set.seed(123454321)
> fact <- sample(10, 100, repl = TRUE)
> y <- rnorm(100); x <- rnorm(100)
> lmer(y ~ x + (1|fact))
Linear mixed-effects model fit by REML
Formula: y ~ x + (1 | fact)
   AIC   BIC logLik MLdeviance REMLdeviance
 294.5 302.3 -144.3      282.7        288.5
Random effects:
 Groups   Name        Variance   Std.Dev.
 fact     (Intercept) 5.0482e-10 2.2468e-05
 Residual             1.0096e+00 1.0048e+00
number of obs: 100, groups: fact, 10

Fixed effects:
            Estimate Std. Error t value
(Intercept)  0.08536    0.10062  0.8483
x            0.08725    0.08787  0.9929

Correlation of Fixed Effects:
  (Intr)
x -0.052

Warning message:
Estimated variance for factor fact is effectively zero

> may I also suggest that such an example be added to the nlme examples documentation, too, please?

First we need to decide what the example is. :-)

Regards,
Doug
Re: [R] nlme fixed effects specification
hi doug: yikes. could I have done better? Oh dear. I tried to make my example clearer half-way through, but made it worse. I meant

set.seed(1)
fe = as.factor( as.integer( runif(100)*10 ) )
y = rnorm(100); x = rnorm(100)
print(summary(lm( y ~ x + fe )))

[deleted]
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   0.1128     0.3680    0.31     0.76
x             0.0232     0.0960    0.24     0.81
fe1          -0.6628     0.5467   -1.21     0.23
[deleted more fe's]
Residual standard error: 0.949 on 89 degrees of freedom
Multiple R-Squared: 0.0838, Adjusted R-squared: -0.0192
F-statistic: 0.814 on 10 and 89 DF, p-value: 0.616

I really am interested only in this linear specification, the coefficient on x (0.0232) and the R^2 of 8.38% (adjusted -1.92%). If I did not have so much data in my real application, I would never have to look at nlme or lme4. I really only want to be able to run this specification through lm with far more observations (100,000) and groups (10,000), and be done with my problem.

now, with a few IQ points more, I would have looked at the lme function instead of the nlme function in library(nlme). [then again, I could understand stats a lot better with a few more IQ points.] I am reading the lme description now, but I still don't understand how to specify that I want to have dummies in my specification, plus the x variable, and that's it. I think I am not understanding the integration of fixed and random effects in the same R functions.

thanks for pointing me at your lme4 library. on linux, version 2.5.0, I did

R CMD INSTALL matrix*.tar.gz
R CMD INSTALL lme4*.tar.gz

and it installed painlessly. (I guess R install packages don't have knowledge of what they rely on; lme4 requires Matrix, which the docs state, but having gotten this wrong, I didn't get an error. no big deal. I guess I am too used to automatic resolution of dependencies from linux installers these days that I did not expect this.)

I now tried your specification:

> library(lme4)
Loading required package: Matrix
Loading required package: lattice
> lmer(y ~ x + (1|fe))
Linear mixed-effects model fit by REML
Formula: y ~ x + (1 | fe)
 AIC BIC logLik MLdeviance REMLdeviance
 282 290   -138        270          276
Random effects:
 Groups   Name        Variance       Std.Dev.
 fe       (Intercept) 0.0445         0.211
 Residual             0.889548532468 0.9431588
number of obs: 100, groups: fe, 10

Fixed effects:
            Estimate Std. Error t value
(Intercept)  -0.0188     0.0943  -0.199
x             0.0528     0.0904   0.585

Correlation of Fixed Effects:
  (Intr)
x -0.022

Warning messages:
1: Estimated variance for factor 'fe' is effectively zero in: `LMEoptimize<-`(`*tmp*`, value = list(maxIter = 200L, tolerance = 0.000149011611938477,
2: $ operator not defined for this S4 class, returning NULL in: x$symbolic.cor

Without being a statistician, I can still determine that this is not the model I would like to work with. The coefficient is 0.0528, not 0.0232. (I am also not sure why I am getting these warning messages on my system, but I don't think it matters.)

is there a simple way to get the equivalent specification for my simple model, using lmer or lme, which does not choke on huge data sets?

regards, /ivo
[R] nlme fixed effects specification
dear R experts: sorry, I have to ask this again. I know that the answer is in section 7.2 of S Programming, but I don't have the book (and I plan to buy the next edition---which I hope will be titled S/R Programming ;-) ). I believe the following yields a standard fixed-effects estimation:

fixed.effects = as.factor( as.integer( runif(100)*10 ) )
y=rnorm(100); x=rnorm(100);
print(summary(lm( Y ~ X + fe)))

I would like to know how to get the same coefficient on X using nlme. (I cannot use this ordinary lm method in my real application, simply because I have 10,000 fixed effects.) I tried a variety of arguments to the fixed nlme parameter (e.g., fixed=list(fmid)), but did not get the syntax right. could someone please tell me the magic spell? may I also suggest that such an example be added to the nlme examples documentation, too, please?

help appreciated.

regards, /ivo
[R] nlme trouble
I am not certain how nlme works, so I followed an example from the web (http://www.menne-biomed.de/gastempt/gastempt1.html). I was able to successfully reproduce the example. However, when I modified the example to use my data and my formula, I get a set of errors having to do with the log() function. I get 10 of them (all exactly the same), and there are 10 levels in my factor variable; this seems significant to me. Below is the code I modified, with comments that should help explain what I am trying to do. If you enter the code in order from top to bottom you will recreate what I did. If you think that the data is at fault and want to see it, I can provide it. Also, under the code are the errors that I get when I run the nlsList function. Also, can nlme handle more than one independent variable at a time? And I was wondering what the model = as.character(mCall[[1]]) line in the code did; removing it does not seem to change the error that I receive. My OS is WinXP. Thank you in advance, Chris

### The code that I modified, with comments ###
library(nlme)
BiLinInit0 <- function(mCall, LHS, data) {
  model <- as.character(mCall[[1]]) # I don't know what this does
  a <- tan(pi/4)   # 45 degree angle (rising slope)
  b <- -tan(pi/4)  # 45 degree angle (descending slope)
  c <- 0           # intercept
  nu <- 0.434
  xy <- sortedXyData(mCall[["x"]], LHS, data)
  x <- xy[["x"]]
  x0 <- (min(x) + max(x))/2 + nu*((log(a, exp(1)) - log(b, exp(1)))/(a + b)) # log defaults to base e
  value <- c(nu, a, b, c, x0)
  names(value) <- mCall[c("nu", "a", "b", "c", "x0")]
  value
}
SSBiLin <- selfStart(~ -nu*log(exp(-a*(x - x0)/nu) + exp(b*(x - x0))/nu, exp(1)) + c,
                     initial = BiLinInit0,
                     parameters = c("nu", "a", "b", "c", "x0"))
# log defaults to base e, but I used exp(1) just to be sure (tried with just log(x) as well)
ge <- read.table("G:\\SSNon-Linear\\BINDING DATA fixed DESCRIPTORS ONLY.csv",
                 sep = ",", header = TRUE)  # load in my data set
ge$study <- as.factor(ge$study)             # force study to be a factor variable
gelist <- nlsList(RBA ~ SSBiLin(Molecular.Volume, nu, a, b, c, x0) | study, data = ge)  # this is where I get the error
contr <- nlmeControl(pnlsTol = 0.3)
ge0.nlme <- nlme(gelist, control = contr, verbose = FALSE)

### The errors ###
gelist <- nlsList(RBA ~ SSBiLin(Molecular.Volume, nu, a, b, c, x0) | study, data = ge)
Error in qr(attr(rhs, gradient)) : NA/NaN/Inf in foreign function call (arg 1)
In addition: Warning message: NaNs produced in: log(x, base)
(the same error and warning are repeated 10 times, once per group)
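[Editor's note: a minimal illustration of my own, not from the thread. The "NaNs produced in: log(x, base)" warning is exactly what log() emits for a negative argument, and the starting-value function above takes log(b) with b = -tan(pi/4), i.e. -1.]

```r
# log() of a negative number yields NaN plus a "NaNs produced" warning,
# which then propagates NaN into the gradient passed to qr().
b <- -tan(pi/4)
val <- suppressWarnings(log(b))
is.nan(val)  # TRUE

# A common guard in self-start functions is to work with the magnitude,
# e.g. log(abs(b)), or to parameterize slopes on the log scale so they
# can never go negative.
log(abs(b))
```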
[R] nlme : convergence problem and other errors
Dear R-users, I am trying to use the R nlme function to fit a nonlinear mixed-effects model. The model I want to fit is an individual somatic growth model with 4 parameters. For all parameters, both fixed and random effects have to be estimated, as well as their covariance matrix (see the formula below). The data are simulated with the same growth model as in the nlme call, with known parameters and covariance matrix. I tried to fit the model with several simulated data sets, but most of the time R returns an error message. There are three main types of errors:
- there is a singular matrix;
- a factor is reduced below the minimum in the PNLS step;
- the maximum number of iterations is reached without convergence.
Do you know how to resolve these problems? Is there a way to modify the parameters of the maximization algorithm to avoid these error messages? Furthermore, do you know if it is possible to fix the values of the fixed effects so that the model only has to estimate the random effects? Thank you for your help and answers.
Regards, Thomas Brunel

pds.fit <- nlme(pds~(exp (lna)/(exp (lnb) + 1/(1 + exp(1000 * (exp (lntmat) - t + 0.5))) * exp (lnc)))^4 * (1 - (1 - ( 0.051 * (1 - 1/(1 + exp(1000 * (exp (lntmat) - t + 0.5 + (exp (lna)/exp (lnb))^4 * (1 - (1 - (0.051)^0.25 * (exp (lnb)/exp (lna))) * exp( - ((exp (lnb) * exp (lntmat))/4)))^4 * 1/(1 + exp(1000 * (exp (lntmat) - t + 0.5^0.25 * ((exp (lnb) + 1/(1 + exp(1000 * (exp (lntmat) - t + 0.5))) * exp (lnc))/exp (lna))) * exp( - (((exp (lnb) + 1/(1 + exp(1000 * (exp (lntmat) - t + 0.5))) * exp (lnc)) *(t - (1/(1 + exp(1000 * (exp (lntmat) - t + 0.5))) * exp (lntmat/4)))^4,
    data = pdsdata,
    fixed = lna + lnb + lnc + lntmat ~ 1,
    random = lna + lnb + lnc + lntmat ~ 1 | indi,
    start = c(lna = log(a2), lnb = log(b2), lnc = log(c2), lntmat = log(1200)))

-- Laboratoire Halieutique de Port-en-Bessin, avenue du Général de Gaulle, 14520 Port-en-Bessin. Tel: 02 31 51 56 00 (switchboard), fax: 02 31 51 56 01. Personal page (CV online): http://www.ifremer.fr/drvrhbr/personnel/brunel/index.htm
[R] nlme
-- David A. Fournier P.O. Box 2040, Sidney, B.C. V8L 3S3 Canada Phone/FAX 250-655-3364 http://otter-rsch.com
[R] nlme sorry about that
Sorry, list, I twitched and sent the wrong stuff. Maybe I have had enough fun for today. -- David A. Fournier P.O. Box 2040, Sidney, B.C. V8L 3S3 Canada Phone/FAX 250-655-3364 http://otter-rsch.com
[R] nlme
Hello! I am trying to fit a mixed nonlinear model using nlme. How can I constrain the fixed-parameter space (add bounds), as in nls? Thank you, Ronen
Re: [R] nlme
On 11/19/06, Fluss [EMAIL PROTECTED] wrote: Hello! I am trying to fit a mixed nonlinear model using nlme. How can I constrain the fixed-parameter space (add bounds), as in nls?

By rewriting the nlme function, an option I wouldn't recommend. :-)
Re: [R] nlme
A more sensible option in my experience would be to transform the parameter space to send the boundaries to +/-Inf. Suggested reading for 'nlme' includes Pinheiro and Bates (2000) Mixed-Effects Models in S and S-Plus (Springer) and Bates and Watts (1988) Nonlinear Regression Analysis and Its Applications (Wiley). Spencer Graves

Douglas Bates wrote: By rewriting the nlme function, an option I wouldn't recommend. :-)
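[Editor's note: to make the transformation idea concrete, here is a minimal sketch of my own, using the built-in Orange data rather than the poster's model. Estimating the log of an asymptote keeps the asymptote positive while the optimizer works on an unconstrained scale.]

```r
library(nlme)

# Logistic growth for the built-in Orange data; the asymptote is
# estimated as exp(lAsym), so it stays positive for any real lAsym.
fm <- nlme(circumference ~ exp(lAsym)/(1 + exp((xmid - age)/scal)),
           data = Orange,
           fixed = lAsym + xmid + scal ~ 1,
           random = lAsym ~ 1 | Tree,
           start = c(lAsym = log(190), xmid = 700, scal = 350))

exp(fixef(fm)[["lAsym"]])  # back-transformed asymptote, positive by construction
```

For a parameter bounded in (0, 1) the same trick works with a logit transform (plogis()/qlogis()).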
Re: [R] nlme Error: Subscript out of bounds
I can't begin to guess. If your example were self-contained (and preferably simple), it would be easier to diagnose. When I have problems like this, I often try to find a simple example that produces the same error message. That process often leads me to a solution. If it doesn't, it usually produces an example simple enough that I can describe it in a few lines of a question to r-help. I might also use the 'debug' function (in the 'base' package; I have not used the 'debug' package, which may be more powerful but possibly harder to use). To access a chain of earlier recommendations on debugging, I tried RSiteSearch("graves debug"), then RSiteSearch("graves debug lme"). The fifth hit from the latter is http://finzi.psych.upenn.edu/R/Rhelp02a/archive/77298.html. Hope this helps. Spencer Graves

Lisa Avery wrote: Hello, I am new to non-linear growth modelling in R and I am trying to reproduce an analysis that was done (successfully) in S-Plus. [...] I get a subscript out of bounds error - which I am not finding very helpful because I don't know where things are going wrong. What am I doing wrong? Any insights would be very much appreciated. Kind Regards, Lisa Avery
[R] nlme Error: Subscript out of bounds
Hello, I am new to non-linear growth modelling in R and I am trying to reproduce an analysis that was done (successfully) in S-Plus. I have a simple non-linear growth model, with no nesting. I have attempted to simplify the call as much as possible (by creating another grouped object instead of using subset=, and compacting the fixed and random expressions). This is what the grouped data object looks like:

levelI.data[1:2,]
Grouped Data: GMAE ~ AGE | STUDYID
   STUDYID        TIMESCORE       INCURVES    MOST   FIRST      AGE
19   10051 ACTUAL (unaided) in JAMA curves Level I Level I 49.11301
20   10052 ACTUAL (unaided) in JAMA curves Level I Level I 56.53745
       GMFM GMFCS  GMAE      YRS
19 91.03394     1 74.16 4.092751
20 95.35018     1 84.05 4.711454

Here is the nlme model:

cp.grad <- deriv(~ (100/(1+exp(-L)))*(1-exp(-exp(logR)*x)),
                 c("L", "logR"),
                 function(x = 0:100, L, logR) NULL)
levelI.nlme <- nlme(GMAE ~ cp.grad(AGE, limit, lograte),
                    data = levelI.data,
                    fixed = limit + lograte ~ 1,
                    random = limit + lograte ~ 1,
                    start = c(2.0, -3.0))

I get a subscript out of bounds error - which I am not finding very helpful because I don't know where things are going wrong. Bill Shipley posted a similar problem with nlme called with a self-starting function - but here I don't use a self-starting function and I give the starting values explicitly, so I assume that this is not the same problem he is having. What am I doing wrong? Any insights would be very much appreciated. Kind Regards, Lisa Avery
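[Editor's note: a generic illustration of what deriv() produces here, not Lisa's model. The returned function evaluates the expression and attaches a "gradient" attribute with one column per parameter, which nlme uses in its linearization steps.]

```r
# deriv() builds a function whose value carries a "gradient" attribute
# holding the partial derivatives with respect to the named parameters.
g <- deriv(~ A * (1 - exp(-k * x)), c("A", "k"),
           function(x, A, k) NULL)
out <- g(x = 1:3, A = 2, k = 0.5)
dim(attr(out, "gradient"))  # 3 rows (one per x value) by 2 columns (A, k)
```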
[R] nlme with a factor in R 2.4.0beta
Hi, the following R lines work fine in R 2.4.0 alpha (and older R versions), but not in R 2.4.0 beta (details below):

library(drc)   # to load the dataset 'PestSci'
library(nlme)

## Starting values
sv <- c(0.328919, 1.956121, 0.097547, 1.642436, 0.208924)

## No error
m1 <- nlme(SLOPE ~ c + (d - c)/(1 + exp(b * (log(DOSE) - log(e)))),
           fixed = list(b + c + d + e ~ 1),
           random = d ~ 1 | CURVE,
           start = sv[c(2, 3, 4, 5)],
           data = PestSci)

## Error: attempt to select more than one element
m2 <- nlme(SLOPE ~ c + (d - c)/(1 + exp(b * (log(DOSE) - log(e)))),
           fixed = list(b ~ HERBICIDE, c + d + e ~ 1),
           random = d ~ 1 | CURVE,
           start = sv,
           data = PestSci)

Output from sessionInfo() for R 2.4.0 alpha:
R version 2.4.0 alpha (2006-09-16 r39365), i386-pc-mingw32
locale: LC_COLLATE=Danish_Denmark.1252;LC_CTYPE=Danish_Denmark.1252;LC_MONETARY=Danish_Denmark.1252;LC_NUMERIC=C;LC_TIME=Danish_Denmark.1252
attached base packages: [1] methods stats graphics grDevices utils datasets [7] base
other attached packages: nlme 3.1-75, drc 1.0-1

Output from sessionInfo() for R 2.4.0 beta:
R version 2.4.0 beta (2006-09-24 r39497), i386-pc-mingw32
locale: (as above)
attached base packages: (as above)
other attached packages: nlme 3.1-76, drc 1.0-1

Christian
Re: [R] nlme with a factor in R 2.4.0beta
Christian Ritz [EMAIL PROTECTED] writes: Hi, the following R lines work fine in R 2.4.0 alpha (and older R versions), but not in R 2.4.0 beta (details below): [...] other attached packages: nlme 3.1-75 ... nlme 3.1-76 ...

I presume this is the real issue: the upgrade of nlme, rather than the change of R itself from alpha to beta status. The culprit would seem to be

pars[, nm] <- f %*% beta[[fmap[[nm]]]]

inside nlme:::getParsNlme(). fmap[[nm]] is not necessarily a scalar, so the outer set of [[ ]] should likely be [ ]. The maintainer of nlme will know for sure.

-- O__ Peter Dalgaard, Dept. of Biostatistics, University of Copenhagen, Øster Farimagsgade 5, Entr. B, PO Box 2099, 1014 Cph. K, Denmark. Ph: (+45) 35327918, FAX: (+45) 35327907, ([EMAIL PROTECTED])
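[Editor's note: Peter's diagnosis can be illustrated with a toy object of my own; `fmap` and `beta` below are hypothetical stand-ins, not nlme's internals. `[[` can only select a single element, which is exactly what produces the "attempt to select more than one element" error when the index has length greater than one.]

```r
# `[` accepts a vector index and returns several elements; `[[` demands
# exactly one element and errors otherwise.
fmap <- list(b = 1:2, c = 3L)   # hypothetical map: parameter b spans 2 columns
beta <- c(10, 20, 30)

beta[fmap[["b"]]]    # fine: `[` with a length-2 index returns two values

res <- try(beta[[fmap[["b"]]]], silent = TRUE)  # `[[` with a length-2 index
inherits(res, "try-error")  # TRUE: attempt to select more than one element
```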
[R] NLME: Limitations of using identify to interact with scatterplots?
I have a quick question regarding the use of identify() to interact with points on a scatterplot. My question is essentially: can identify() be used when one is plotting model objects to generate diagnostic plots? Specifically, I am using nlme. For example, I am plotting the fitted values on the x axis vs a variable called log2game with the following code:

plot(D2C29.nlme, log2game ~ fitted(.), abline = c(0,1))

and then I have tried to use identify as follows:

identify(D2C29.nlme$fitted[,2], Data2$log2game, row.names(Data2))

(If I leave out the [,2] on the fitted attribute then I am told that x and y are not the same length; this appears to be because the fitted attribute has 2 columns.) But I get an error message that plot.new has not been called yet. I am not sure if this is because I am doing something wrong or if identify() simply cannot be used in this context. Many thanks, Greg
Re: [R] NLME: Limitations of using identify to interact with scatterplots?
Most plotting functions in the nlme package use lattice graphics functions based on the grid package. Identify will not work with lattice graphics. I'm not sure if there is a replacement.

On 8/17/06, Greg Distiller [EMAIL PROTECTED] wrote: I have a quick question regarding the use of identify to interact with points on a scatterplot. [...]
Re: [R] NLME: Limitations of using identify to interact with scatterplots?
Hi, take a look at panel.identify() (in the 'lattice' package). I'm not sure if it will help you because I cannot run your example code. Paul

Douglas Bates wrote: Most plotting functions in the nlme package use lattice graphics functions based on the grid package. Identify will not work with lattice graphics. I'm not sure if there is a replacement. [...]

-- Dr Paul Murrell, Department of Statistics, The University of Auckland, Private Bag 92019, Auckland, New Zealand. 64 9 3737599 x85392, [EMAIL PROTECTED], http://www.stat.auckland.ac.nz/~paul/
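[Editor's note: a minimal sketch of my own showing how panel.identify() is typically used in an interactive session; the lme fit on the built-in Orthodont data is just a stand-in for the poster's model.]

```r
library(lattice)
library(nlme)

# A stand-in model; any lattice scatterplot works the same way.
fm <- lme(distance ~ age, data = Orthodont, random = ~ 1 | Subject)
print(xyplot(resid(fm) ~ fitted(fm)))

# Instead of identify(), focus on a panel and label clicked points:
trellis.focus("panel", 1, 1)                          # focus on panel (1,1)
labelled <- panel.identify(labels = rownames(Orthodont))  # click to label
trellis.unfocus()
```

panel.identify() returns the indices of the points clicked, much like identify() does for base graphics.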
Re: [R] NLME: Limitations of using identify to interact with scatterplots?
Many useful diagnostic plots can be recreated in the usual plot() framework with only a little coding effort. In this case, I would imagine that

plot(fitted(D2C29.nlme), dframe$log2game)
abline(0, 1)

should get pretty close, if the name of the data frame containing the variable is 'dframe'. Andrew

On Thu, Aug 17, 2006 at 08:55:41AM -0500, Douglas Bates wrote: Most plotting functions in the nlme package use lattice graphics functions based on the grid package. Identify will not work with lattice graphics. I'm not sure if there is a replacement. [...]

-- Andrew Robinson, Department of Mathematics and Statistics, University of Melbourne, VIC 3010 Australia. Tel: +61-3-8344-9763, Fax: +61-3-8344-4599, Email: [EMAIL PROTECTED], http://www.ms.unimelb.edu.au
Re: [R] nlme iteration process, few questions
Recently I started using nlme intensively, and since it is all new for me, I have some questions. I am running nlme with control=list(verbose=TRUE) and during one lengthy fitting, I started watching the output for some clues on how to speed up the process. I noticed that in one case, the iteration process is alternating between two solutions. [...]

I have sometimes fixed this problem by increasing the number of nlm and PNLS iterations. It can be due to the random-effects estimates being so small that the random effects are not really needed in the model, and I suspect that a high correlation between the random effects can also produce the problem. The usual approach is to fit simpler models first, by removing random effects or using a structured covariance matrix, and then use the estimates of the fixed effects as starting values for the more complex models. HTH, Ken
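[Editor's note: a sketch of that build-up strategy on the built-in Loblolly data, my own toy example rather than the poster's model; whether the richer fit converges always depends on the data.]

```r
library(nlme)

# Step 1: a simple random-effects structure that is easy to fit.
fm1 <- nlme(height ~ SSasymp(age, Asym, R0, lrc),
            data = Loblolly,
            fixed = Asym + R0 + lrc ~ 1,
            random = Asym ~ 1,
            start = c(Asym = 103, R0 = -8.5, lrc = -3.3))

# Step 2: a richer structure (independent random effects via pdDiag),
# started from step 1's fixed effects, with more generous iteration limits.
fm2 <- nlme(height ~ SSasymp(age, Asym, R0, lrc),
            data = Loblolly,
            fixed = Asym + R0 + lrc ~ 1,
            random = pdDiag(Asym + lrc ~ 1),
            start = fixef(fm1),
            control = nlmeControl(maxIter = 200, pnlsMaxIter = 20,
                                  msMaxIter = 200))
```

The same pattern extends to a third step with a fully general (correlated) random-effects covariance, again started from the previous fit's fixed effects.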
[R] nlme iteration process, few questions
Hi all, Recently I started using nlme intensively, and since it is all new for me, I have some questions. I am running nlme with control=list(verbose=TRUE) and during one lengthy fitting, I started watching the output for some clues on how to speed up the process. I noticed that in one case, the iteration process is alternating between two solutions. Here is the output of 5 iterations, after which I stopped the calculation:

**Iteration 22
LME step: Loglik: -1580.5, nlm iterations: 50
reStruct parameters:
        id1         id2         id3         id4         id5         id6
 -2.6387745   5.7964164   3.4153271   4.1734349  48.7541909  -0.1250377
        id7         id8         id9        id10
-36.7011651  30.1780697 -56.0266217 127.3234221
PNLS step: RSS = 137743.3
fixed effects: 24.0656 0.105708 -2.86501 0.542384
iterations: 7
Convergence:     fixed  reStruct
             0.1353935 3.3655468

**Iteration 23
LME step: Loglik: -1579.346, nlm iterations: 50
reStruct parameters:
        id1         id2         id3         id4         id5         id6         id7
  -2.285671    6.380246    3.583532    4.248227   -1.398577   -1.318335  -47.773224
        id8         id9        id10
  12.834766  -63.669545  136.063383
PNLS step: RSS = 151492.2
fixed effects: 20.8075 0.105065 -2.85908 0.544615
iterations: 7
Convergence:     fixed  reStruct
             0.1565794 1.5396836

**Iteration 24
LME step: Loglik: -1580.500, nlm iterations: 50
reStruct parameters:
        id1         id2         id3         id4         id5         id6
 -2.6387708   5.7964464   3.4153368   4.1734411  48.7509855  -0.1246523
        id7         id8         id9        id10
-36.7012641  30.1788064 -56.0254345 127.3242285
PNLS step: RSS = 137743.5
fixed effects: 24.0665 0.105706 -2.86503 0.542385
iterations: 7
Convergence:     fixed  reStruct
             0.1354136 3.3647287

**Iteration 25
LME step: Loglik: -1579.346, nlm iterations: 50
reStruct parameters:
        id1         id2         id3         id4         id5         id6         id7
  -2.285674    6.380234    3.583527    4.248225   -1.399981   -1.318104  -47.772885
        id8         id9        id10
  12.835424  -63.669219  136.063111
PNLS step: RSS = 151495.6
fixed effects: 20.8062 0.105066 -2.85907 0.544616
iterations: 7
Convergence:     fixed  reStruct
             0.1566962 1.5398072

**Iteration 26
LME step: Loglik: -1580.500, nlm iterations: 50
reStruct parameters:
        id1         id2         id3         id4         id5         id6
 -2.6387698   5.7964218   3.4153246   4.1734379  48.7501373  -0.1249832
        id7         id8         id9        id10
-36.7020319  30.1775554 -56.0315196 127.3225180
PNLS step: RSS = 137744.8
fixed effects: 24.066 0.105707 -2.86501 0.542384
iterations: 7
Convergence:     fixed  reStruct
             0.1354528 3.3650687

Is there any particular strategy I can pursue in such a case, besides the usual ones: changing the starting values, changing the model, using different scaling? I am not familiar with the inner workings of the nlme algorithm; maybe this is an indication of some known problem? Thanks for all answers. Vaidotas Zemlys -- Doctorate student, Vilnius University http://www.mif.vu.lt/katedros/eka/katedra/zemlys.php
Re: [R] NLME: Problem with plotting ranef vs a factor
Your question is entirely too complex for me to try to answer in a reasonable amount of time, especially since your example is not self-contained. If you would still like help on this, I suggest you try to generate a self-contained example that is as simple as you can make it and still illustrates your problem, as suggested in the posting guide www.R-project.org/posting-guide.html. With only a modest amount of luck, the things you try while simplifying your example will lead to enlightenment. If that fails, please submit another question that is self-contained, simple and clear. Doing so should substantially increase your chances of getting a quick, useful reply. I know this doesn't answer your question, but I hope it helps. Spencer Graves

Greg Distiller wrote: Hi, I am following the model-building strategy that is outlined in the Pinheiro and Bates book with respect to including covariates, but am having a problem with the plot. [...] I do not get these warnings if I exclude the problematic variable pcat, so it must be something to do with this. Any ideas? Many thanks, Greg
[R] NLME: Problem with plotting ranef vs a factor
Hi

I am following the model-building strategy that is outlined in the Pinheiro and Bates book with respect to including covariates, but am having a problem with the plot. Basically I am using 4 covariates (1 of them is continuous); 3 of them are fine, but the 4th one is being shown as a scatterplot despite the fact that it is a factor. I have explicitly declared this to be a factor (pcat <- as.factor(pcat)) and have also checked, using is.factor() and levels(), that it is a factor. Yet despite this the plot command is not recognising it as a factor.

Here is more information about my problem. I am reading in the data by:

    Data <- read.csv("Data1_93_2.csv", header = TRUE)
    attach(Data)
    Data1_93 <- transform(Data, log2game = log2(gamedens + 1))
    pcat <- as.factor(pcat)
    Data1_93 <- groupedData(log2game ~ day | subjectno, data = Data1_93)
    detach(Data)

Here is the code to check that the covariate called pcat is indeed a factor:

    > levels(pcat)
    [1] "1" "2" "3"
    > is.factor(pcat)
    [1] TRUE

After the model is fitted I extract the random effects:

    D1C2.ran <- ranef(mod11.103nlme, augFrame = TRUE)

and here is an extract from the object:

                        C             R   day gamedens pcat       site   mutcat1 pdens0 log2game
    NA02_259 -1.016987007  0.0162825099 15.75    23.50    1   Namaacha     Mixed  15018 3.761099
    NA02_073 -0.939355374  0.0132589702 10.50    23.75    1   Namaacha Resistant   6170 3.675543
    M00_12   -0.775048474  0.0047124742 10.50    25.00    1 Mpumulanga Sensitive  17525 3.768326
    M00_93   -0.555801118  0.0053872868 14.00    37.50    2 Mpumulanga Sensitive 332000 4.254319
    NA02_053 -0.327990343 -0.0037659864  6.00    39.25    1   Namaacha Resistant  65529 4.292481

Note that this output also seems to indicate that pcat is a factor, as it is summarised correctly. I then generate plots for my random effects:

    plot(D1C2.ran, form = C ~ site + mutcat2 + pcat + pdens0)

and the problem is that the panel for my random effects vs pcat is displayed as a scatterplot rather than as a boxplot.

I am being told to check warnings, and these warnings look like:

    Warning messages:
    1: at 0.99
    2: radius 0.0001
    3: all data on boundary of neighborhood. make span bigger
    4: pseudoinverse used at 0.99
    5: neighborhood radius 0.01
    6: reciprocal condition number -1.#IND
    7: zero-width neighborhood. make span bigger

I do not get these warnings if I exclude the problematic variable pcat, so it must be something to do with this. Any ideas?

Many thanks
Greg

__ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] NLME: Problem with plotting ranef vs a factor
Greg, be careful using attach() and detach(). From the syntax snippets you showed, it seems that you did create an object pcat (a factor variable) in your workspace, but you did not change the respective variable in your data frame. Try removing the free-standing pcat and see what happens to the results of lme()!

Dirk
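A minimal sketch of the fix Dirk is pointing at (the data-frame and column names are taken from Greg's post; the rest is an assumption about his workspace): convert the column inside the data frame itself, so that groupedData() and the plot methods see the factor, rather than creating a detached copy in the workspace.

```r
## Convert pcat inside the data frame; the free-standing pcat created
## after attach(Data) is a separate object that the model and plot
## methods never see.
Data1_93 <- transform(Data1_93, pcat = as.factor(pcat))
is.factor(Data1_93$pcat)   # now TRUE for the copy the plot actually uses
```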
Re: [R] 'nlme' crashes R (was: Using corStruct in nlme)
Thanks for providing such a self-contained example by which 'nlme' crashes R. Could you please also give us 'sessionInfo()'? I don't have time to test it myself now, but perhaps if you identify your platform, you might interest someone else in checking it. I'm sorry I couldn't be more helpful.

Spencer Graves

[EMAIL PROTECTED] wrote:

I am having trouble fitting correlation structures within nlme. I would like to fit corCAR1, corGaus and corExp correlation structures to my data. I either get the error "step halving reduced below minimum in pnls step" or alternatively R crashes. My dataset is similar to the CO2 example in the nlme package. The one major difference is that in my case the 'conc' steps are not the same for each 'Plant'. I have replicated the problem using the CO2 data in nlme (based off of the ch08.R script).

This works (when 'conc' is the same for each 'Plant'):

    (fm1CO2.lis <- nlsList(SSasympOff, CO2))
    (fm1CO2.nlme <- nlme(fm1CO2.lis, control = list(tolerance = 1e-2)))
    (fm2CO2.nlme <- update(fm1CO2.nlme, random = Asym + lrc ~ 1))
    CO2.nlme.var <- update(fm2CO2.nlme,
        fixed = list(Asym ~ Type * Treatment, lrc + c0 ~ 1),
        start = c(32.412, 0, 0, 0, -4.5603, 49.344),
        weights = varConstPower(fixed = list(const = 0.1, power = 1)),
        verbose = TRUE)
    CO2.nlme.CAR   <- update(CO2.nlme.var, corr = corCAR1())
    CO2.nlme.gauss <- update(CO2.nlme.var,
        correlation = corGaus(form = ~ as.numeric(conc) | Plant, nugget = FALSE),
        data = CO2)
    CO2.nlme.exp   <- update(CO2.nlme.var,
        correlation = corExp(form = ~ as.numeric(conc) | Plant, nugget = FALSE),
        data = CO2)

But if I change each of the 'conc' numbers slightly, so that they are no longer identical between subjects, I can only get the corCAR1 correlation to work, while R crashes for both corExp and corGaus:

    for (i in 1:length(CO2$conc)) {
        CO2$conc[i] <- (CO2$conc[i] + rnorm(1))
    }
    (fm1CO2.lis <- nlsList(SSasympOff, CO2))
    (fm1CO2.nlme <- nlme(fm1CO2.lis, control = list(tolerance = 1e-2)))
    (fm2CO2.nlme <- update(fm1CO2.nlme, random = Asym + lrc ~ 1))
    CO2.nlme.var <- update(fm2CO2.nlme,
        fixed = list(Asym ~ Type * Treatment, lrc + c0 ~ 1),
        start = c(32.412, 0, 0, 0, -4.5603, 49.344),
        weights = varConstPower(fixed = list(const = 0.1, power = 1)),
        verbose = TRUE)
    CO2.nlme.CAR   <- update(CO2.nlme.var, corr = corCAR1())
    CO2.nlme.gauss <- update(CO2.nlme.var,
        correlation = corGaus(form = ~ as.numeric(conc) | Plant, nugget = FALSE),
        data = CO2)
    CO2.nlme.exp   <- update(CO2.nlme.var,
        correlation = corExp(form = ~ as.numeric(conc) | Plant, nugget = FALSE),
        data = CO2)

I have read Pinheiro & Bates (2000) and I think that it should be possible to fit these correlation structures to my data, but maybe I am mistaken. I am running R 2.3.1 and have recently updated all packages.

Thanks,
Katie Grieve
Quantitative Ecology & Resource Management
University of Washington
Re: [R] 'nlme' crashes R (was: Using corStruct in nlme)
Thanks Spencer. Here is my sessionInfo():

    Version 2.3.1 (2006-06-01)
    i386-pc-mingw32

    attached base packages:
    [1] methods   stats     graphics  grDevices utils     datasets
    [7] base

    other attached packages:
    nlme
    3.1-75

Katie
Re: [R] nlme: correlation structure in gls and zero distance
Joris De Wolf wrote:

Have you tried to define 'an' as a group? Like in:

    gls(IKAfox ~ an, correlation = corExp(2071, form = ~ x + y | an, nugget = 1.22), data = renliev)

A small data set might help to explain the problem.

Joris

Thanks. It seems to work with a small artificial data set:

    an <- as.factor(rep(2001:2004, each = 10))
    x <- rep(rnorm(10), times = 4)
    y <- rep(rnorm(10), times = 4)
    IKA <- rpois(40, 2)
    site <- as.factor(rep(letters[1:10], times = 4))
    library(nlme)
    mod1 <- gls(IKA ~ an - 1, correlation = corExp(form = ~ x + y))
    Error in getCovariate.corSpatial(object, data = data) :
            Cannot have zero distances in corSpatial
    mod2 <- gls(IKA ~ an - 1, correlation = corExp(form = ~ x + y | an))
    mod2
    Generalized least squares fit by REML
      Model: IKA ~ an - 1
      Data: NULL
      Log-restricted-likelihood: -73.63998

    Coefficients:
      an2001   an2002   an2003   an2004
    1.987611 2.454520 2.429907 2.761011

    Correlation Structure: Exponential spatial correlation
     Formula: ~x + y | an
     Parameter estimate(s):
        range
    0.4304012
    Degrees of freedom: 40 total; 36 residual
    Residual standard error: 1.746205

Patrick
[R] nlme: correlation structure in gls and zero distance
Dear listers,

I am trying to model the distribution of fox density over years in the Doubs department. Measurements have been taken on 470 plots in March each year and georeferenced. Average density is supposed to be different each year. In a first approach, I would like to use a general model of this type, taking spatial correlation into account:

    gls(IKAfox ~ an, correlation = corExp(2071, form = ~ x + y, nugget = 1.22), data = renliev)

but I get:

    Error in getCovariate.corSpatial(object, data = data) :
            Cannot have zero distances in corSpatial

I understand that the 470 geographical coordinates are repeated three times (measurements are taken in each of the three years at the same place), which obviously cannot be handled there. Does anybody know a way to work around that, other than jittering the geographical coordinates slightly?

Thanks in advance,
Patrick
Re: [R] NLME Fitting
I'm not certain what you want, but it sounds like the answer might be in the '?nlme' help page. Toward the end, it describes the 'Value' returned as follows: "an object of class 'nlme' representing the nonlinear mixed-effects model fit. Generic functions such as 'print', 'plot' and 'summary' have methods to show the results of the fit. See 'nlmeObject' for the components of the fit. The functions 'resid', 'coef', 'fitted', 'fixed.effects', and 'random.effects' can be used to extract some of its components." Also, 'str' might help you find what you want.

Spencer Graves

p.s. If you'd like further help from this list, PLEASE do read the posting guide (www.R-project.org/posting-guide.html). It might help you write a question so it is easier for others to understand, thereby increasing the chances of a quick and useful reply.
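As a concrete illustration of the extractors the help page lists, here is a sketch on a stand-in fit (the lme model below on the Orthodont data shipped with nlme is only an example, not Farinaz's model):

```r
library(nlme)

## Stand-in fit: random intercept per subject on the shipped Orthodont data
fm <- lme(distance ~ age, random = ~ 1 | Subject, data = Orthodont)

fm$sigma      # residual (within-group) standard deviation
VarCorr(fm)   # random-effect standard deviations and correlations
ranef(fm)     # the predicted random effects themselves
```

The same extractors apply to an 'nlme' fit, since it inherits from class 'lme'.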
[R] NLME Fitting
Dear Reader,

Is it possible to extract the random part of an nlme fitting analysis (nonlinear mixed-effects model), i.e. sigma(b), in R?

Thank you for responding,
Farinaz
Re: [R] NLME: using the layout option with the plot command
Hi Greg,

Since you haven't yet had a response, you could distill this. It uses the Pixel dataset from nlme as an example.

    ## To get separate files, do this
    postscript("c:/MyGraph%03d.ps", onefile = FALSE)
    plot(Pixel, display = Dog, inner = ~Side, layout = c(4, 1))
    dev.off()

    ## To get your layout into one file, as separate pages, do this
    postscript("c:/MyGraph.ps", onefile = TRUE)
    plot(Pixel, display = Dog, inner = ~Side, layout = c(4, 1))
    dev.off()

If you prefer PDF, then use pdf(filename, onefile = FALSE), &c. At the R prompt do ?postscript (and ?pdf), and go on reading! Also have a look at setps() in package Hmisc.

Regards, Mark.

Mark Difford, Ph.D. candidate, Botany Department, Nelson Mandela Metropolitan University, Port Elizabeth, SA.
[R] NLME: using the layout option with the plot command
Hi

This is the second time I am posting this question, as I never got a reply the first time round - apologies to anybody who might take offense at this, but I don't know what else to do.

I am struggling to split up the plots of the grouped objects in nlme in a usable way. The standard plot command generates plots for each group on a single page. When there are many groups, however, this does not look so good. I have discovered the layout option, which allows one to split these plots over a certain number of pages, but the problem is that it very quickly scrolls through all the pages, leaving only the final page in the viewer. My question is: how does one get to view all these pages? Or, even better, is there an option where the pages change only when prompted by the user?

Thanks
Greg
Re: [R] NLME: using the layout option with the plot command
Hi Greg,

If you are using Windows, set up a plot window and click the Record option in the menu. Then run the command. Now you can scroll back through previous pages by hitting Page Up. Beware that if you save your workspace without clearing the history, you may have a lot of bloat from the graphs.

David
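A device-independent way to get Greg's "change page only when prompted" behaviour is to make R pause before each new page. This is a sketch, not from the thread: par(ask = TRUE) covers base graphics, while grid.prompt() from the grid package plays the same role for the lattice/grid output that nlme's plot methods produce.

```r
library(grid)

par(ask = TRUE)      # base graphics: wait for the user before each new page
grid.prompt(TRUE)    # grid/lattice graphics: same idea

## ... multi-page plot() calls on the grouped object go here ...

par(ask = FALSE)     # restore the defaults afterwards
grid.prompt(FALSE)
```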
[R] nlme: extract covariance matrix of the random effects
Dear all,

I am looking for a function to extract, from an nlme object, the estimated variance-covariance matrix of the random effects.

Many thanks,
Bernard
Re: [R] nlme: extract covariance matrix of the random effects
Marc Bernard (bernarduse1 at yahoo.fr) writes: "I am looking for a function to extract, from an nlme object, the estimated variance-covariance matrix of the random effects."

VarCorr

Dieter Menne
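A minimal illustration of Dieter's answer (the lme fit on the Orthodont data shipped with nlme is only a stand-in; getVarCov() is added on the assumption that the covariance matrix itself, rather than the SD/correlation summary, is what Bernard wants):

```r
library(nlme)

## Stand-in fit with correlated random intercept and slope per subject
fm <- lme(distance ~ age, random = ~ age | Subject, data = Orthodont)

VarCorr(fm)     # standard deviations and correlations of the random effects
getVarCov(fm)   # the estimated variance-covariance matrix itself
```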
Re: [R] nlme model specification
Thanks for providing such a simple, replicable example. When I tried that and similar examples, the response matched what you report. I also tried the following slight modification of the 'nlme' call that worked for you:

    mod4 <- nlme(circumference ~ SSlogis(age, Asym, xmid, scal),
                 data = Orange,
                 fixed = list(Asym + xmid + scal ~ 1),
                 start = fixef(fm1Oran.lis))
    Error in parse(file, n, text, prompt) : syntax error in "~"

This error message matches what you got. I tentatively conclude that 'fixed' does NOT like an argument of class 'list'. To diagnose this further, I tried 'traceback':

    > traceback()
    6: parse(text = paste("~", paste(nVal, collapse = "/")))
    5: eval(parse(text = paste("~", paste(nVal, collapse = "/"
    4: getGroupsFormula.reStruct(reSt)
    3: getGroupsFormula(reSt)
    2: nlme.formula(circumference ~ SSlogis(age, Asym, xmid, scal),
           data = Orange, fixed = list(Asym + xmid + scal ~ 1),
           start = fixef(fm1Oran.lis))
    1: nlme(circumference ~ SSlogis(age, Asym, xmid, scal), data = Orange,
           fixed = list(Asym + xmid + scal ~ 1), start = fixef(fm1Oran.lis))

I then tried to isolate the problem further using 'debug':

    > debug(getGroupsFormula.reStruct)
    Error: object "getGroupsFormula.reStruct" not found

Unfortunately, it's hidden in a namespace:

    > methods(getGroupsFormula)
    [1] getGroupsFormula.default*  getGroupsFormula.gls*
    [3] getGroupsFormula.lme*      getGroupsFormula.lmList*
    [5] getGroupsFormula.reStruct*

       Non-visible functions are asterisked

I could trace this further by making local copies of all the functions in the call stack, but I don't have time for that right now. 'help(package=nlme)' says the 'Author' is Pinheiro, Bates, DebRoy, and Sarkar, and the 'Maintainer' is R-core. I've cc-ed Bates on this reply. If you don't hear from someone else in a few days, you might consider making local copies of the functions in this call stack, then trying 'debug(getGroupsFormula.reStruct)', and generating another post based on what you learn doing that. You might see, for example, whether the example that works also calls 'getGroupsFormula.reStruct'.

I'm sorry I couldn't provide an answer, but with luck my reply will inspire someone else to do something that can enlighten us both.

Best Wishes,
Spencer Graves

p.s. Before you submit another post on this, I suggest you run 'update.packages'. A new version of 'nlme' is now available. It won't solve your problem, but it will at least assure someone else that you are up to date.

    > sessionInfo()
    Version 2.3.0 (2006-04-24)
    i386-pc-mingw32

    attached base packages:
    [1] methods   stats     graphics  grDevices utils     datasets
    [7] base

    other attached packages:
    nlme
    3.1-72
[R] nlme model specification
Hi folks,

I am tearing my hair out on this one. I am using an example from Pinheiro and Bates.

    ### this works
    data(Orange)
    mod.lis <- nlsList(circumference ~ SSlogis(age, Asymp, xmid, scal),
                       data = Orange)

    ### This works
    mod <- nlme(circumference ~ SSlogis(age, Asymp, xmid, scal),
                data = Orange,
                fixed = Asymp + xmid + scal ~ 1,
                start = fixef(mod.lis))

    ### I try a slightly different model specification for the fixed
    ### effects, and it does not work:
    ###   fixed = list(Asymp ~ 1, xmid ~ 1, scal ~ 1)
    ### I tried following the example on page 355.
    mod <- nlme(circumference ~ SSlogis(age, Asymp, xmid, scal),
                data = Orange,
                fixed = list(Asymp ~ 1, xmid ~ 1, scal ~ 1),
                start = fixef(mod.lis))

    ### I get
    Error in parse(file, n, text, prompt) : syntax error in "~"

nlme version 3.1-71

    > version
                   _
    platform       powerpc-apple-darwin8.6.0
    arch           powerpc
    os             darwin8.6.0
    system         powerpc, darwin8.6.0
    status
    major          2
    minor          3.0
    year           2006
    month          04
    day            24
    svn rev        37909
    language       R
    version.string Version 2.3.0 (2006-04-24)

Dr. M. Hank H. Stevens, Assistant Professor
338 Pearson Hall, Botany Department
Miami University, Oxford, OH 45056
Office: (513) 529-4206  Lab: (513) 529-4262  FAX: (513) 529-4243
http://www.cas.muohio.edu/~stevenmh/
http://www.muohio.edu/ecology/
http://www.muohio.edu/botany/
E Pluribus Unum
Re: [R] nlme plot residuals per group
Your example is not self-contained, and I don't know enough to replicate it myself, so I can't tell you what caused it or how to fix it. However, I can outline the type of thing I've often done with this kind of problem:

1. First, can you get the plots you want using examples from the book and distributed with the 'nlme' package? If not, might that provide a simple, self-contained example for a refined post?

2. If you get the plots you want from one of the standard examples, how does your example that doesn't work differ from the published example that does? That comparison might provide the insight you need to solve the problem. If it doesn't, I would then experiment with both my example and the published example until I either solved my problem or produced a revision of the published example that illustrated my problem. That revised example might then provide the core for a refined post.

3. However, if it were me, I'd carry it further: making a local copy of the appropriate function, using 'debug', and walking through the code line by line until I found where it broke. By the time I've done this, I've typically figured out the problem. To do this, you need to know about the 'methods' function, plus possibly 'getAnywhere'.

Hope this helps,
Spencer Graves
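For step 1, a per-group residual plot of the kind Osman is after does work on a dataset shipped with nlme. The fit below on the Orthodont data is only a stand-in for comparison with the failing call, not Osman's model:

```r
library(nlme)

## Stand-in fit on a shipped dataset
fm <- lme(distance ~ age, random = ~ 1 | Subject, data = Orthodont)

## Standardized residuals against fitted values, one panel per Sex;
## this is the same call pattern as the one that fails for Lac.lme
plot(fm, resid(., type = "p") ~ fitted(.) | Sex)
```

Comparing how 'group' enters this working call against the failing one may point at the difference Spencer describes in step 2.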
[R] nlme plot residuals per group
Dear list:

I used the nlme library according to the great Pinheiro/Bates book, on R 2.3, WinXP. Lac.lme is an lme object with unbalanced data; group is a factor variable with three levels. When I tried to plot the residuals by group I got this error message:

    > plot(Lac.lme, resid(., type = 'p') ~ fitted(.) | group)
    Error in limits.and.aspect(prepanel.default.xyplot, prepanel = prepanel, :
            need at least one panel

Also, when I try to use the augPred() function I get the following error message:

    > plot(augPred(Lac.lme))
    Error in tapply(as.character(object[[nm]]), groups, FUN[[dClass]]) :
            arguments must have same length

Any suggestions? Thanks

--
Osman O. Al-Radi, MD, MSc, FRCSC
Fellow, Cardiovascular Surgery
The Hospital for Sick Children
University of Toronto, Canada
Re: [R] nlme for groupedData with inner and outer factors
1. Have you read Pinheiro and Bates (2000), Mixed-Effects Models in S and S-Plus (Springer)? If not, I believe your study of that book will be well rewarded; mine has been.

2. If you've looked at Pinheiro and Bates and still have questions about this, PLEASE do read the posting guide (www.R-project.org/posting-guide.html), especially the bit about developing a toy example that is as simple as you can make it while still illustrating your question. I've solved many of my own problems doing this, and I've answered many questions for people on this list dealing with functions I've not previously used. You could help people like me by providing a few lines of R code that we could copy from your email, paste into R, and use to replicate what you see.

Hope this helps,
Spencer Graves
Re: [R] NLME Covariates
What do you think about the following?

    > set.seed(1)
    > DF0.3 <- data.frame(X = c("a", "a", "b"), y = rnorm(3))
    > lme(y ~ 1, random = ~ 1 | X, data = DF0.3)
    Linear mixed-effects model fit by REML
      Data: DF0.3
      Log-restricted-likelihood: -2.148692
      Fixed: y ~ 1
    (Intercept)
     -0.4261464

    Random effects:
     Formula: ~1 | X
            (Intercept)  Residual
    StdDev: 1.280461e-06 0.5383504

    Number of Observations: 3
    Number of Groups: 2

Hope this helps.
Spencer Graves

p.s. I confess I had difficulties parsing your question. First, I had to Google for "HLM". I figured it was NOT "Home Life Ministries"; "Hierarchical Linear Models" seemed more consistent with the context. Second, I wasn't even sure what you meant by "category". I assume it's what is returned by the 'levels' function when applied to a factor.
Re: [R] NLME Covariates
HLM question? Is there a minimum number of observations required for a category? I have individuals in work teams. I have incomplete data for all the teams; sometimes I only have data for one person in a team. I assume that HLM can't work here! But what would be the minimum? At the moment I have a sample of about 240 in about 100 teams, with team sizes from 2 to 5. Any advice? Thanks Robin

Dr Robin Martin School of Psychology University of Queensland Brisbane, QLD 4072 Australia Level 1, Room 132, McElwain Bldg tel. +61 7 3365 6392 fax. +61 7 3365 4466 email [EMAIL PROTECTED] web-page http://www.psy.uq.edu.au/people/personal.html?id=275

__ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
[R] nlme for groupedData with inner and outer factors
Hello, I am having trouble specifying a suitable nlme model. My data structure is described by

gd <- groupedData(ppath ~ lcut | exp, outer = ~ bait, inner = ~ weight, data = d)

i.e. the response (ppath) of several subjects (sub) was measured at levels of a continuous variable (lcut). Each subject was given one level of a factor (bait), and all subjects were measured at two levels of another factor (weight). Therefore bait varies among subjects and weight varies within subjects. The relationship ppath ~ lcut for each subject and weight appears to follow a logistic curve, with xmid and scal affected by bait and weight. There is also a random effect of subject on xmid and scal. Any help with formulating the correct model would be greatly appreciated. Many thanks, Dan Bebber Department of Plant Sciences University of Oxford

p.s. Part of my data are shown below:

    sub lcut       ppath bait weight
1  pv1_  0.0 1.0            1      0
2  pv1_  0.1 0.827773821    1      0
3  pv1_  0.2 0.380102502    1      0
4  pv1_  0.3 0.209151878    1      0
5  pv1_  0.4 0.076929304    1      0
6  pv1_  0.5 0.065681564    1      0
7  pv1_  0.6 0.020670108    1      0
8  pv1_  0.7 0.012817021    1      0
9  pv1_  0.8 0.008661514    1      0
10 pv1_  0.9 0.011568323    1      0
11 pv19  0.0 1.0            1      0
12 pv19  0.1 0.668390291    1      0
13 pv19  0.2 0.343318462    1      0

__ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
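[Editor's sketch] Dan's description maps fairly directly onto a single nlme call using the self-starting logistic model SSlogis. The following is a sketch only, not a tested fit: the column names (sub, lcut, ppath, bait, weight) follow the prose above, and the starting values and the ordering of the start vector (one value per fixed-effect coefficient, in the order the terms appear in `fixed`) are placeholders that would need adjusting to the real data.

```r
## Sketch: logistic response with bait (between-subject) and weight
## (within-subject) shifting xmid and scal, plus random effects of
## subject on xmid and scal. Starting values are placeholders.
library(nlme)

gd <- groupedData(ppath ~ lcut | sub, outer = ~ bait, inner = ~ weight, data = d)

fit <- nlme(ppath ~ SSlogis(lcut, Asym, xmid, scal),
            data   = gd,
            fixed  = list(Asym ~ 1, xmid ~ bait + weight, scal ~ bait + weight),
            random = xmid + scal ~ 1,          # grouping (| sub) comes from gd
            start  = c(Asym = 1,               # one start per fixed coefficient
                       xmid = 0.3, 0, 0,       # intercept, bait, weight effects
                       scal = 0.1, 0, 0))
summary(fit)
```

Whether Asym also needs covariates, and whether ppath (a proportion) would be better handled on another scale, are questions the data would have to answer.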
Re: [R] nlme predict with se?
>>>>> "Emilio" == Emilio A Laca [EMAIL PROTECTED] on Sat, 18 Mar 2006 22:01:05 -0800 writes:

  Emilio> Berton, What you say makes sense and helps. I could do a parametric
  Emilio> bootstrapping. However, I cannot make predict.nlme work at all, even
  Emilio> without the se's. It would save me a lot of time to be able to use
  Emilio> the predict as the statistic in boot.

  Emilio> Does predict.nlme work at all? It is a listed method in MEMSS. Is there an example?

help(predict.nlme) or, even more directly, example(predict.nlme) show that it does work (at least in some cases). If you read the help page, you can also deduce that an argument 'se' is not made use of. Please do give us a *reproducible* (for us!) example, as you are asked to do by the posting guide. For this you must use an R or nlme example data set, or artificially generated data for which you show the exact generating statements (including a set.seed(*) one if you use random numbers). Martin Maechler, ETH Zurich

__ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] nlme predict with se?
Berton, What you say makes sense and helps. I could do a parametric bootstrapping. However, I cannot make predict.nlme work at all, even without the se's. It would save me a lot of time to be able to use the predict as the statistic in boot. Does predict.nlme work at all? It is a listed method in MEMSS. Is there an example? thanks eal On Fri 17-Mar-06, at 3:08 PM, Berton Gunter wrote: This won't be much help, I'm afraid but ... The problem is, of course, that the prediction is a **nonlinear** __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
[R] nlme predict with se?
I am trying to make predictions with se's using an nlme (kew11.nlme below). I get an error indicating levels for a factor are not allowed. I have searched and read Rnews, MEMSS, MASS, R-help, and other lists in Spanish where I found questions similar to mine but no solution. I do not really care about the method used. Any suggestions to obtain predictions with se's from an nlme would be appreciated. eal

R Version 2.2.1 (2005-12-20 r36812), Mac OS X 10.4.5 on G5 DP

> predict(kew11.nlme, x.for.lw.pred, se=T)
Error in predict.nlme(kew11.nlme, x.for.lw.pred, se = T) :
  Levels 1. Fall-Winter,2. Spring,3. Summer,4. Fall not allowed for Season

> x.for.lw.pred[1:10,]
           Season    ecoreg MBreed ageyr2 sheep    farm
1  1. Fall-Winter    desert   Meat   0.60     1 AksToKa
2       2. Spring    desert   Meat   1.00     1 AksToKa
3       3. Summer    desert   Meat   1.25     1 AksToKa
4         4. Fall    desert   Meat   1.50     1 AksToKa
5  1. Fall-Winter foothills   Half   0.60     1 AksToKa
6       2. Spring foothills   Half   1.00     1 AksToKa
7       3. Summer foothills   Half   1.25     1 AksToKa
8         4. Fall foothills   Half   1.50     1 AksToKa
9  1. Fall-Winter foothills   Meat   0.60     1 AksToKa
10      2. Spring foothills   Meat   1.00     1 AksToKa

> kew11.nlme$call
nlme.formula(model = lw ~ SSasympOff(ageyr2, mw, lgr, age0), data = kew,
    fixed = list(mw + lgr + age0 ~ Season * MBreed + ecoreg + ecoreg:Season),
    random = list(farm = list(mw ~ 1, lgr ~ 1), sheep = list(mw ~ 1)),
    start = c(fixef(kew8.nlme)[1:15], 0, 0, 0, 0, 0, 0, 0, 0, 0,
              fixef(kew8.nlme)[16:30], 0, 0, 0, 0, 0, 0, 0, 0, 0,
              fixef(kew8.nlme)[31:45], 0, 0, 0, 0, 0, 0, 0, 0, 0),
    correlation = corSymm())

> levels(kew$Season) == levels(x.for.lw.pred$Season)
[1] TRUE TRUE TRUE TRUE

> kew[1:10,]
Grouped Data: lw ~ ageyr2 | farm
   X sheep doe   ageyr2        MBreEco         Season     ecoreg    farm  village Breed MBreed     date doy   lw      ebw
1  1     1   1 2.50     Meatsemidesert 1. Fall-Winter semidesert AksToKa Aksenger   KZP   Meat 11/23/02 327 63.8 55.63211
2  2     1 139 2.88     Meatsemidesert      2. Spring semidesert AksToKa Aksenger   KZP   Meat  4/10/03 100 51.7 44.53119
3  3     1 250 3.191667 Meatsemidesert 3.
Summer semidesert AksToKa Aksenger KZP Meat  7/30/03 211 58.3 50.58624
4  4     1 330 3.413889 Meatsemidesert        4. Fall semidesert AksToKa Aksenger   KZP   Meat 10/18/03 291 59.9 52.05413
5  5     2   1 6.00     Meatsemidesert 1. Fall-Winter semidesert AksToKa Aksenger   KZP   Meat 11/23/02 327 58.0 50.31101
6  6     2 139 6.38     Meatsemidesert      2. Spring semidesert AksToKa Aksenger   KZP   Meat  4/10/03 100 41.2 34.89817
7  7     2 250 6.691667 Meatsemidesert      3. Summer semidesert AksToKa Aksenger   KZP   Meat  7/30/03 211 53.3 45.99908
8  8     2 330 6.913889 Meatsemidesert        4. Fall semidesert AksToKa Aksenger   KZP   Meat 10/18/03 291 63.7 55.54037
9  9     3   1 4.00     Meatsemidesert 1. Fall-Winter semidesert AksToKa Aksenger   KZP   Meat 11/23/02 327 62.3 54.25596
10 10    3 139 4.38     Meatsemidesert      2. Spring semidesert AksToKa Aksenger   KZP   Meat  4/10/03 100 47.3 40.49450

    bcs prcfat     cageyrs
1  2.75 26.533 -0.46162659
2  2.00 20.192 -0.07829326
3  2.50 24.478  0.23004008
4  2.50 24.414  0.45226230
5  2.50 24.490  3.03837341
6  1.75 18.337  3.42170674
7  2.25 22.403  3.73004008
8  3.25 31.087  3.95226230
9  2.50 24.318  1.03837341
10 2.00 20.368  1.42170675

__ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] nlme predict with se?
This won't be much help, I'm afraid, but ... The problem is, of course, that the prediction is a **nonlinear** function of the parameters (including possibly the variance components, depending on what level of the variance hierarchy you are trying to predict). So you need to somehow estimate how to propagate the error of a nonlinear function of parameters which, themselves, have an approximate variance-covariance matrix estimated from an approximation to the likelihood surface. Asymptotic approaches (Taylor series/delta method; Laplace approximations) can do this, but they are of questionable accuracy, as Doug Bates has repeatedly emphasized on this list.

A better approach might well be to bootstrap: repeatedly fit and predict your desired values via bootstrapping your original data appropriately. This, too, could be somewhat tricky, as you have to bootstrap from the variance components appropriately (preserving group identities, for example). As I have no real experience with this, take this with more than just a grain of salt. But it might give you some insight into the difficulty and why you were unable to find good answers.

Cheers,
-- Bert Gunter
Genentech Non-Clinical Statistics
South San Francisco, CA
"The business of the statistician is to catalyze the scientific learning process." - George E. P. Box

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Dr. Emilio A. Laca
Sent: Friday, March 17, 2006 1:55 PM
To: R-help@stat.math.ethz.ch
Subject: [R] nlme predict with se?

I am trying to make predictions with se's using an nlme (kew11.nlme below). I get an error indicating levels for a factor are not allowed. I have searched and read Rnews, MEMSS, MASS, R-help, and other lists in Spanish where I found questions similar to mine but no solution. I do not really care about the method used. Any suggestions to obtain predictions with se's from an nlme would be appreciated.
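[Editor's sketch] Bert's bootstrap suggestion can be outlined concretely: resample whole groups (to preserve the grouping structure he mentions), refit, predict, and take the spread of the predictions as an approximate standard error. All object and column names below (fit, dat, farm, newdat) are hypothetical stand-ins, and this is a rough sketch of one possible scheme, not a recommended recipe.

```r
## Rough sketch of a group-level (case) bootstrap for nlme predictions.
## 'fit' is an existing nlme fit to data 'dat' grouped by 'farm';
## 'newdat' holds the covariate values to predict at. Names are hypothetical.
library(nlme)

B <- 200
preds <- matrix(NA_real_, nrow = B, ncol = nrow(newdat))
farms <- unique(dat$farm)

for (b in seq_len(B)) {
  samp <- sample(farms, replace = TRUE)          # resample whole groups
  bootdat <- do.call(rbind, lapply(seq_along(samp), function(i) {
    di <- dat[dat$farm == samp[i], ]
    di$farm <- paste0(samp[i], ".", i)           # keep resampled copies distinct
    di
  }))
  bfit <- try(update(fit, data = bootdat), silent = TRUE)
  if (!inherits(bfit, "try-error"))
    preds[b, ] <- predict(bfit, newdat, level = 0)  # population-level prediction
}

se <- apply(preds, 2, sd, na.rm = TRUE)          # bootstrap se of each prediction
```

Refits will sometimes fail to converge (hence the try()); how to treat those failures, and whether to resample at more than one level of the hierarchy, are exactly the tricky points Bert raises.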
Re: [R] nlme in R v.2.2.1 and S-Plus v. 7.0
I see you got an error message from R. Did you have either the lme4 or the Matrix package in the search path at the same time you ran nlme to get the result you got below? If yes, please rerun with only nlme in the search path. (This may not be necessary, but I always quit R and restart whenever I want to switch between nlme and lme4.)

If this is not the problem, I would encourage you to experiment with smaller models and data sets to try to find the simplest toy example that still produces the error message you got from summary(nlme(...)), then submit that to this list. I routinely copy reproducible examples into R to see if I get the same error message. Whether I do or not, I think my comments are much more likely to be helpful than if I have to guess. And if I'm guessing about an error message I can't generate myself at will, my comments may not be very helpful.

Regarding whether to use S-Plus 7 or R 2.2.1 for this problem: if R gives you an error message when S-Plus does not, doesn't that answer your question? Moreover, the S-Plus logLik is higher than for R, which suggests that it must get closer to the actual maximum of the likelihood function.

If you'd like more help from this list, I suggest you PLEASE do read the posting guide (www.R-project.org/posting-guide.html). Doing so will on average tend to increase the speed and utility of comments you might receive, I believe.

hope this helps, spencer graves

Paolo Ghisletta wrote: Dear R-Users, I am comparing the nlme package in S-Plus (v. 7.0) and R (v. 2.2.1, nlme package version 3.1-68.1; the lattice, Matrix, and lme4 packages have also just been updated today, Jan. 23, 2006) on a PC (2.40 GHz Pentium 4 processor and 1 GB RAM) operating on Windows XP. I am using a real data set with 1,191 units with at most 4 repeated measures per unit (data are incomplete, unbalanced). I use the same code with the same starting values for both programs and obtain slightly different results.
I am aware that at this stage my model is far from being well specified for the given data. Nevertheless, I wonder whether one program is more suited than the other to pursue my modeling. Below I included the input + output code, first for S-Plus, then for R. Many thanks and best regards, Paolo Ghisletta

# S-Plus
# min=4, max=41
logistic4.a <- nlme(jugs ~ 4 + 41 / (1 + xmid*exp(-scal*I(occ-1) + u)),
                    fixed = scal+xmid~1, random = u~1|id,
                    start = c(scal=.2, xmid=155), data = jug,
                    na.action = na.exclude, method = "ML")
summary(logistic4.a)

Nonlinear mixed-effects model fit by maximum likelihood
  Model: jugs ~ 4 + 41/(1 + xmid * exp( - scal * I(occ - 1) + u))
  Data: jug
      AIC      BIC    logLik
 29595.62  29621.3 -14793.81

Random effects:
 Formula: u ~ 1 | id
               u Residual
StdDev: 5.162391 3.718887

Fixed effects: scal + xmid ~ 1
        Value Std.Error   DF  t-value p-value
scal   4.9697    0.0823 3339 60.39508  <.0001
xmid 683.5634  125.8509 3339  5.43153  <.0001

Standardized Within-Group Residuals:
       Min         Q1          Med        Q3      Max
 -10.66576 -0.5039498 0.0002772166 0.1226745 5.453209

Number of Observations: 4531
Number of Groups: 1191

# R
# min=4, max=41
logistic4.a <- nlme(jugs ~ 4 + 41 / (1 + xmid*exp(-scal*I(occ-1) + u)),
                    data = jug, fixed = scal+xmid~1, random = u~1|id,
                    start = c(scal=.2, xmid=155), method = "ML",
                    na.action = na.exclude)
summary(logistic4.a)

Nonlinear mixed-effects model fit by maximum likelihood
  Model: jugs ~ 4 + 41/(1 + xmid * exp(-scal * I(occ - 1) + u))
  Data: jug
      AIC      BIC    logLik
 29678.11 29703.78 -14835.05

Random effects:
 Formula: u ~ 1 | id
               u Residual
StdDev: 5.116542 3.767097

Fixed effects: scal + xmid ~ 1
        Value Std.Error   DF  t-value p-value
scal   4.9244   0.08121 3339 60.63763       0
xmid 633.6956 115.37512 3339  5.49248       0

Error in dim(x): no slot of name "Dim" for this object of class "correlation"

[[alternative HTML version deleted]]

__ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide!
http://www.R-project.org/posting-guide.html
[R] nlme in R v.2.2.1 and S-Plus v. 7.0
Dear R-Users, I am comparing the nlme package in S-Plus (v. 7.0) and R (v. 2.2.1, nlme package version 3.1-68.1; the lattice, Matrix, and lme4 packages have also just been updated today, Jan. 23, 2006) on a PC (2.40 GHz Pentium 4 processor and 1 GB RAM) operating on Windows XP. I am using a real data set with 1,191 units with at most 4 repeated measures per unit (data are incomplete, unbalanced). I use the same code with the same starting values for both programs and obtain slightly different results. I am aware that at this stage my model is far from being well specified for the given data. Nevertheless, I wonder whether one program is more suited than the other to pursue my modeling. Below I included the input + output code, first for S-Plus, then for R. Many thanks and best regards, Paolo Ghisletta

# S-Plus
# min=4, max=41
logistic4.a <- nlme(jugs ~ 4 + 41 / (1 + xmid*exp(-scal*I(occ-1) + u)),
                    fixed = scal+xmid~1, random = u~1|id,
                    start = c(scal=.2, xmid=155), data = jug,
                    na.action = na.exclude, method = "ML")
summary(logistic4.a)

Nonlinear mixed-effects model fit by maximum likelihood
  Model: jugs ~ 4 + 41/(1 + xmid * exp( - scal * I(occ - 1) + u))
  Data: jug
      AIC      BIC    logLik
 29595.62  29621.3 -14793.81

Random effects:
 Formula: u ~ 1 | id
               u Residual
StdDev: 5.162391 3.718887

Fixed effects: scal + xmid ~ 1
        Value Std.Error   DF  t-value p-value
scal   4.9697    0.0823 3339 60.39508  <.0001
xmid 683.5634  125.8509 3339  5.43153  <.0001

Standardized Within-Group Residuals:
       Min         Q1          Med        Q3      Max
 -10.66576 -0.5039498 0.0002772166 0.1226745 5.453209

Number of Observations: 4531
Number of Groups: 1191

# R
# min=4, max=41
logistic4.a <- nlme(jugs ~ 4 + 41 / (1 + xmid*exp(-scal*I(occ-1) + u)),
                    data = jug, fixed = scal+xmid~1, random = u~1|id,
                    start = c(scal=.2, xmid=155), method = "ML",
                    na.action = na.exclude)
summary(logistic4.a)

Nonlinear mixed-effects model fit by maximum likelihood
  Model: jugs ~ 4 + 41/(1 + xmid * exp(-scal * I(occ - 1) + u))
  Data: jug
      AIC      BIC    logLik
 29678.11 29703.78 -14835.05

Random effects:
 Formula: u ~ 1 | id
               u Residual
StdDev: 5.116542 3.767097

Fixed effects: scal + xmid ~ 1
        Value Std.Error   DF  t-value p-value
scal   4.9244   0.08121 3339 60.63763       0
xmid 633.6956 115.37512 3339  5.49248       0

Error in dim(x): no slot of name "Dim" for this object of class "correlation"

[[alternative HTML version deleted]]

__ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] nlme problems
I meant fitting, not maximising; it is a nonlinear mixed effects model, with both fixed and random effects. My assumption is that for the function I am using the approximation approach used in nlme is not quite close enough, and there is nothing much that I can do, except for looking at starting values. I was hoping that someone would have other suggestions, so I will keep attempting to understand the control parameters. I can add an extra parameter to the model and obtain a worse fit. Ken

Dieter Menne writes:

> I'm maximising a reasonably complex function using nlme (version 3.1-65, have also tried 3.1-66) and am having trouble with fixed parameter estimates slightly away from the maximum of the log likelihood. I have profiled the log likelihood and it is a parabola but with some dips. Interestingly, changing the parameterisation moves the dips around slightly. Unfortunately the PNLS step is finding a maximum at the dips rather than the mle. I have tried using starting values for the fixed parameters without change. Any ideas ?

Ken, you should not use nlme for maximising a complex function, because it's a rather specialized tool for mixed-model statistical analysis. Try to use optim directly, which has quite a few methods to choose from, and one of them might work for your problem. Dieter

__ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] nlme problems
Are you familiar with Pinheiro and Bates (2000) Mixed-Effects Models in S and S-Plus (Springer)? I suspect that book, especially the latter half, might contain the information you seek.

spencer graves

p.s. PLEASE do read the posting guide (www.R-project.org/posting-guide.html). Anecdotal evidence suggests that posts more consistent with that guide tend to receive quicker, more useful replies.

Ken Beath wrote: I meant fitting, not maximising; it is a nonlinear mixed effects model, with both fixed and random effects. My assumption is that for the function I am using the approximation approach used in nlme is not quite close enough, and there is nothing much that I can do, except for looking at starting values. I was hoping that someone would have other suggestions, so I will keep attempting to understand the control parameters. I can add an extra parameter to the model and obtain a worse fit. Ken

Dieter Menne writes:

> I'm maximising a reasonably complex function using nlme (version 3.1-65, have also tried 3.1-66) and am having trouble with fixed parameter estimates slightly away from the maximum of the log likelihood. I have profiled the log likelihood and it is a parabola but with some dips. Interestingly, changing the parameterisation moves the dips around slightly. Unfortunately the PNLS step is finding a maximum at the dips rather than the mle. I have tried using starting values for the fixed parameters without change. Any ideas ?

Ken, you should not use nlme for maximising a complex function, because it's a rather specialized tool for mixed-model statistical analysis. Try to use optim directly, which has quite a few methods to choose from, and one of them might work for your problem. Dieter

__ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

-- Spencer Graves, PhD Senior Development Engineer PDF Solutions, Inc.
333 West San Carlos Street Suite 700 San Jose, CA 95110, USA [EMAIL PROTECTED] www.pdf.com http://www.pdf.com Tel: 408-938-4420 Fax: 408-280-7915 __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] nlme problems
Ken Beath kbeath at efs.mq.edu.au writes:

> I'm maximising a reasonably complex function using nlme (version 3.1-65, have also tried 3.1-66) and am having trouble with fixed parameter estimates slightly away from the maximum of the log likelihood. I have profiled the log likelihood and it is a parabola but with some dips. Interestingly, changing the parameterisation moves the dips around slightly. Unfortunately the PNLS step is finding a maximum at the dips rather than the mle. I have tried using starting values for the fixed parameters without change. Any ideas ?

Ken, you should not use nlme for maximising a complex function, because it's a rather specialized tool for mixed-model statistical analysis. Try to use optim directly, which has quite a few methods to choose from, and one of them might work for your problem. Dieter

__ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
[R] nlme problems
I'm maximising a reasonably complex function using nlme (version 3.1-65, have also tried 3.1-66) and am having trouble with fixed parameter estimates slightly away from the maximum of the log likelihood. I have profiled the log likelihood and it is a parabola but with some dips. Interestingly, changing the parameterisation moves the dips around slightly. Unfortunately the PNLS step is finding a maximum at the dips rather than the mle. I have tried using starting values for the fixed parameters without change. Any ideas ? Ken

__ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
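[Editor's sketch] Since Ken mentions "attempting to understand the control parameters": the knobs governing the PNLS step live in nlmeControl(). The values below are illustrative only, and the model formula and starting values are placeholders; whether loosening or tightening pnlsTol helps will depend on the particular likelihood surface.

```r
## Illustrative only: adjusting nlme's optimizer controls for a stubborn fit.
library(nlme)

ctrl <- nlmeControl(pnlsTol     = 1e-4,  # tolerance for the PNLS step
                    pnlsMaxIter = 20,    # max iterations within PNLS
                    msMaxIter   = 200,   # max iterations for the optimizer
                    tolerance   = 1e-7)  # overall convergence tolerance

## 'f', 'dat', and the starting values are placeholders for the real problem.
fit <- nlme(y ~ f(x, th1, th2), data = dat,
            fixed  = th1 + th2 ~ 1,
            random = th1 ~ 1,
            start  = c(th1 = 1, th2 = 0),
            control = ctrl)
```

Reparameterising the model (as Ken notes, the dips move when the parameterisation changes) is often at least as effective as tuning these tolerances.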
Re: [R] nlme question
Deepayan, Yes, thanks for confirming my suspicions. I know mixed models are different, but I did not think they were so different as to preclude estimating the var-cov matrix (via the Hessian in maximum likelihood, as you point out). Thanks for prompting me to think about MCMC. Your suggestion to consider MCMC makes me realize that, using BUGS, I could directly sample from the posterior of the linear combination of parameters, to get its variance and eliminate the extra step using the var-cov matrix. As you say, with results better than the asymptotic approximation. (Maybe I can do the same thing with mcmcsamp? But I'm not familiar with this and will have to take a look at it.)

-----Original Message-----
From: Deepayan Sarkar [mailto:[EMAIL PROTECTED]]
Sent: Thursday, November 17, 2005 2:22 PM
To: Doran, Harold
Cc: Wassell, James T., Ph.D.; r-help@stat.math.ethz.ch
Subject: Re: nlme question

On 11/17/05, Doran, Harold [EMAIL PROTECTED] wrote:

> I think the authors are mistaken. Sigma is random error, and due to its randomness it cannot be systematically related to anything. It is this independence assumption that allows for the likelihood to be expressed as described in Pinheiro and Bates p. 62.

I think not. The issue is dependence between the _estimates_ of sigma, tau, etc., and that may well be present. Presumably, if one can compute the likelihood surface as a function of the 3 parameters, the Hessian at the MLEs would give the estimated covariance. However, I don't think nlme does this.

A different approach you might want to consider is using mcmcsamp in the lme4 package (or more precisely, the Matrix package) to get samples from the joint posterior distribution. This is likely to be better than the asymptotic normal approximation in any case. Deepayan

__ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] nlme question
On 11/21/05, Wassell, James T., Ph.D. [EMAIL PROTECTED] wrote: Deepayan, Yes, thanks for confirming my suspicions. I know mixed models are different but, I did not think they were so different as to preclude estimating the var-cov matrix (via the Hessian in Maximum likelihood, as you point out). Thanks for prompting me to think about MCMC. Your suggestion to consider MCMC makes me realize that using BUGS, I could directly sample from the posterior of the linear combination of parameters - to get its variance and eliminate the extra step using the var-cov matrix. As you say, with results better than the asymptotic approximation. (Maybe I can do the same thing with mcmcsamp?, but I'm not familiar with this and will have to take a look at it.) That should be easy. mcmcsamp produces mcmc objects, which are essentially matrices, with each row containing one set of parameter values. Getting a sample of a linear combination is one call to %*% away. Deepayan __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
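[Editor's sketch] Against the lme4/Matrix API of that era (mcmcsamp has since been removed from lme4), the "one call to %*% away" step Deepayan describes would look roughly like this. The model, the weight vector w, and the assumption that the first three columns of the sample hold the fixed effects are all hypothetical.

```r
## Sketch only, using the old lme4 API this thread refers to;
## mcmcsamp no longer exists in current lme4 versions.
library(lme4)

fit  <- lmer(y ~ x1 + x2 + (1 | group), data = dat)  # hypothetical model
samp <- mcmcsamp(fit, n = 5000)     # posterior sample, one draw per row
M    <- as.matrix(samp)             # columns include the fixed effects

w  <- c(1, 0.5, -0.5)               # hypothetical linear combination of fixed effects
lc <- M[, 1:3] %*% w                # one value of the combination per draw

sd(lc)                              # posterior sd of the combination
quantile(lc, c(0.025, 0.975))       # a 95% credible interval for it
```

The same idea carries over directly to a BUGS sample: compute the combination draw by draw and summarise, with no delta-method step needed.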
Re: [R] nlme questions
Spencer; Thanks for your suggestions. I found the problem is in the library nlme. If you define phi1~factor(trt1)+factor(trt2) instead of phi1~trt1+trt2, the augPred function works. A bug? I don't know. Christian

Spencer Graves [EMAIL PROTECTED] on 17-11-2005 20:19:32
To: Christian Mora [EMAIL PROTECTED]
cc: r-help@stat.math.ethz.ch
Subject: Re: [R] nlme questions

Both your questions seem too vague to me. You might get more useful replies if you provide a simple example in a few lines of R code that a reader could copy from your email into R and see the result (as suggested in the posting guide, www.R-project.org/posting-guide.html). The process of preparing such a simple example might by itself provide the insight you desire. Alternatively, you might work line by line through the code for the R function you are using. Also, if you don't have Pinheiro and Bates (2000) Mixed-Effects Models in S and S-PLUS (Springer), I suggest you get it; it is excellent for things like this. I'm sorry I couldn't help more. spencer graves

Christian Mora wrote: Dear R users; I've got two questions concerning nlme library 3.1-65 (running on R 2.2.0 / Win XP Pro). The first one is related to the augPred function. I've been working with a nonlinear mixed model with no problems so far. However, when the parameters of the model are specified in terms of some other covariates, say treatment (i.e. phi1~trt1+trt2, etc.), the augPred function gives me the following error:

Error in predict.nlme(object, value[1:(nrow(value)/nL),,drop=FALSE], :
  Levels 0,1 not allowed for trt1, trt2

The same model specification as well as the augPred function under S-PLUS 2000 run without problems. The second question has to deal with the time needed for the model to converge. It really takes a lot of time to fit the model in R relative to the time required to fit the same model in S-PLUS. I can imagine this is related to the optimization algorithm or something like that, but I would like to have a different opinion on these two issues. Thanks in advance, Christian Mora

__ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

-- Spencer Graves, PhD Senior Development Engineer PDF Solutions, Inc. 333 West San Carlos Street Suite 700 San Jose, CA 95110, USA [EMAIL PROTECTED] www.pdf.com http://www.pdf.com Tel: 408-938-4420 Fax: 408-280-7915

__ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] nlme questions
Warning: non-expert thoughts to follow. When passing an object to a predict method, the method looks at (a copy of) the original information from the data frame that was used in the fit. Your original data contain information on trt1 and trt2, but factor(trt1) and factor(trt2) cannot be found in the original data. If you did the factor conversion in the original data,

myDat$trt1 <- factor(myDat$trt1)
myDat$trt2 <- factor(myDat$trt2)

and then used myDat as the data frame in the nlme call, all information would be available for the augPred method. That's why it works when you use trt1 and trt2 instead of factor(trt1) and factor(trt2). There is actually an implicit factor conversion happening in the nlme call if trt1 and trt2 are character variables; however, if trt1 and trt2 are defined as numeric (i.e. 0/1) then it will fit as a numeric. --Matt

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Christian Mora
Sent: Friday, November 18, 2005 4:01 AM
To: Spencer Graves
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] nlme questions

Spencer; Thanks for your suggestions. I found the problem is in the library nlme. If you define phi1~factor(trt1)+factor(trt2) instead of phi1~trt1+trt2, the augPred function works. A bug? I don't know. Christian

Spencer Graves [EMAIL PROTECTED] on 17-11-2005 20:19:32
To: Christian Mora [EMAIL PROTECTED]
cc: r-help@stat.math.ethz.ch
Subject: Re: [R] nlme questions

Both your questions seem too vague to me. You might get more useful replies if you provide a simple example in a few lines of R code that a reader could copy from your email into R and see the result (as suggested in the posting guide, www.R-project.org/posting-guide.html). The process of preparing such a simple example might by itself provide the insight you desire. Alternatively, you might work line by line through the code for the R function you are using.
Also, if you don't have Pinheiro and Bates (2000) Mixed-Effects Models in S and S-PLUS (Springer), I suggest you get it; it is excellent for things like this. I'm sorry I couldn't help more. spencer graves Christian Mora wrote: Dear R users; Ive got two questions concerning nlme library 3.1-65 (running on R 2.2.0 / Win XP Pro). The first one is related to augPred function. Ive been working with a nonlinear mixed model with no problems so far. However, when the parameters of the model are specified in terms of some other covariates, say treatment (i.e. phi1~trt1+trt2, etc) the augPred function give me the following error: Error in predict.nlme(object, value[1:(nrow(value)/nL),,drop=FALSE], : Levels 0,1 not allowed for trt1, trt2. The same model specification as well as the augPred function under SPlus 2000 run without problems. The second question has to deal with the time needed for the model to converge. It really takes a lot of time to fit the model on R in relation to the time required to fit the same model on SPlus. I can imagine this is related to the optimization algorithm or something like that, but I would like to have a different opinion on these two issues. Thanks in advance Christian Mora __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html -- Spencer Graves, PhD Senior Development Engineer PDF Solutions, Inc. 333 West San Carlos Street Suite 700 San Jose, CA 95110, USA [EMAIL PROTECTED] www.pdf.com http://www.pdf.com Tel: 408-938-4420 Fax: 408-280-7915 __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html __ R-help@stat.math.ethz.ch mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
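[Editor's sketch] Matt's fix, spelled out end to end: convert the treatment codes to factors in the data frame itself, then fit with the plain variable names, so that predict/augPred can find the factor levels it needs. All names below (myDat, Y, X, b0, b1, b2, startvals) are hypothetical stand-ins for Christian's actual objects.

```r
## Convert once, in the data, before fitting (hypothetical names throughout).
library(nlme)

myDat$trt1 <- factor(myDat$trt1)
myDat$trt2 <- factor(myDat$trt2)

fit <- nlme(Y ~ b0/(1 + exp((b1 - X)/b2)),
            fixed  = b0 + b1 + b2 ~ trt1 + trt2,  # plain names now refer to factors
            random = b0 + b1 + b2 ~ 1,
            data   = myDat,
            start  = startvals)                   # placeholder starting values

plot(augPred(fit))   # prediction now finds the factor levels in the data
```

The point is that the conversion lives in the data, not in the formula, so the model frame built at predict time matches the one built at fit time.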
Re: [R] nlme questions
Yes, I agree. But if you define at the beginning of the code:

data$trt1 <- as.factor(data$trt1)
data$trt2 <- as.factor(data$trt2)

with trt1 and trt2 being dummy variables with values 0 or 1, and then run the model, for instance:

fit_1 <- nlme(Y ~ b0/(1+exp((b1-X)/b2)), fixed=b0+b1+b2~trt1+trt2,
  random=b0+b1+b2~1, data=data, start=fixef(fit_0))

the augPred function doesn't work and returns the error

Error in predict.nlme(object, value[1:(nrow(value)/nL),,drop=FALSE], : Levels 0,1 not allowed for trt1, trt2

but if you modify the code as

fit_2 <- nlme(Y ~ b0/(1+exp((b1-X)/b2)), fixed=b0+b1+b2~factor(trt1)+factor(trt2),
  random=b0+b1+b2~1, data=data, start=fixef(fit_0))

i.e. indicating again that trt1 and trt2 are factors, even when they were previously defined as factors through the function as.factor, then augPred works.
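A minimal sketch of the conversion Matt describes: doing factor() in the data frame itself before fitting, so the factor levels are part of the data that the predict method later sees. The data values and the myDat name are invented for illustration; the nlme call is only indicated in a comment.

```r
## Convert 0/1 treatment dummies to factors in the data frame *before*
## the nlme call, so that predict()/augPred() can find the factor levels.
## myDat and its values are made up for this sketch.
myDat <- data.frame(Y    = c(1.2, 2.3, 1.9, 2.8),
                    trt1 = c(0, 1, 0, 1),
                    trt2 = c(0, 0, 1, 1))
myDat$trt1 <- factor(myDat$trt1)
myDat$trt2 <- factor(myDat$trt2)
## then fit with the plain names, e.g.:
## fit <- nlme(..., fixed = b0 + b1 + b2 ~ trt1 + trt2, data = myDat)
levels(myDat$trt1)
```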
[R] nlme question
-Original Message-
From: Wassell, James T., Ph.D.
Sent: Thursday, November 17, 2005 9:40 AM
To: 'Deepayan Sarkar'
Subject: RE: nlme question

Deepayan, Thanks for your interest. It's difficult in email, but I need the variance of Kappa = mu + 1.645*tau + 1.645*sigma. Just using the standard variance calculation: Var(K) = Var(mu) + 1.645^2 * Var(tau) + 1.645^2 * Var(sigma) + [three covariance terms with constant multipliers]. I can get Var(mu) [or actually the standard error] from the summary function, and Var(tau) and Var(sigma) using the VarCorr function, but I still need the covariance terms. I am trying to duplicate methods in a paper by Nicas & Neuhaus, Variability in Respiratory Protection and the Assigned Protection Factor, J. Occ Environ Health, Feb. 2004, p. 99-109 (see eqn. 12). The authors used Proc Mixed, but I can't figure out how to get the covariance terms with SAS either. Thanks again. Terry Wassell
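The variance James wants is the quadratic form k' V k with k = (1, 1.645, 1.645) and V the 3x3 covariance matrix of the estimates (mu-hat, tau-hat, sigma-hat). A sketch with a made-up V; in practice V must come from the fitted model, and obtaining it is exactly the open question in this thread.

```r
## Var(K) for K = mu + 1.645*tau + 1.645*sigma as a quadratic form.
## V is an invented, purely illustrative covariance matrix of the
## estimates (mu-hat, tau-hat, sigma-hat).
V <- matrix(c( 0.040,  0.002,  0.001,
               0.002,  0.030, -0.004,
               0.001, -0.004,  0.010), nrow = 3, byrow = TRUE)
k <- c(1, 1.645, 1.645)          # weights on (mu, tau, sigma)
varK <- drop(t(k) %*% V %*% k)   # includes the three covariance terms
varK
```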
Re: [R] nlme question
James, By assumption, sigma and tau are uncorrelated, which I believe Deepayan noted. Sigma is also random error, so it is uncorrelated with your fixed effects. What are the covariance terms you are seeking?
Re: [R] nlme question
Thank you for taking the time to think about my problem. The reference states: The covariance structure must be considered, because for unbalanced data the estimates (i.e. the mu, sigma and tau hats) are not typically independent (page 105). It would be nice to simply assume zero covariance terms, but the authors reject this simplification.
Re: [R] nlme question
I think the authors are mistaken. Sigma is random error, and due to its randomness it cannot be systematically related to anything. It is this independence assumption that allows the likelihood to be expressed as described in Pinheiro and Bates, p. 62. If you are finding that sigma is systematically related to a fixed effect, then your model is misspecified and there are omitted characteristics that need to be accounted for.
Re: [R] nlme question
On 11/17/05, Doran, Harold [EMAIL PROTECTED] wrote: I think the authors are mistaken. Sigma is random error, and due to its randomness it cannot be systematically related to anything. It is this independence assumption that allows the likelihood to be expressed as described in Pinheiro and Bates, p. 62.

I think not. The issue is dependence between the _estimates_ of sigma, tau, etc., and that may well be present. Presumably, if one can compute the likelihood surface as a function of the 3 parameters, the Hessian at the MLEs would give the estimated covariance. However, I don't think nlme does this. A different approach you might want to consider is using mcmcsamp in the lme4 package (or, more precisely, the Matrix package) to get samples from the joint posterior distribution. This is likely to be better than the asymptotic normal approximation in any case. Deepayan
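Deepayan's Hessian suggestion can be sketched directly for this random-intercept model, since its marginal log-likelihood has a closed form per subject. Everything below (the simulated data, the (mu, log tau, log sigma) parameterization, the delta-method step back to (mu, tau, sigma)) is my own illustration, not code from the thread.

```r
## Joint covariance of the estimates via the Hessian of the marginal
## log-likelihood of y_ij = mu + b_i + e_ij, b_i ~ N(0, tau^2),
## e_ij ~ N(0, sigma^2). Parameterize as (mu, log tau, log sigma)
## so the optimization is unconstrained.
negll <- function(p, y, g) {
  mu <- p[1]; tau2 <- exp(2 * p[2]); sig2 <- exp(2 * p[3])
  -sum(vapply(split(y, g), function(yi) {
    n <- length(yi); yb <- mean(yi)
    ssw <- sum((yi - yb)^2)   # within-subject sum of squares
    -0.5 * (n * log(2 * pi) + (n - 1) * log(sig2) + log(sig2 + n * tau2) +
            ssw / sig2 + n * (yb - mu)^2 / (sig2 + n * tau2))
  }, numeric(1)))
}

set.seed(42)                   # toy unbalanced data: 16 subjects, 1-4 obs each
g <- rep(1:16, sample(1:4, 16, replace = TRUE))
y <- -5 + rnorm(16, sd = 0.8)[g] + rnorm(length(g), sd = 0.6)

fit <- optim(c(mean(y), log(0.5), log(0.5)), negll, y = y, g = g,
             method = "BFGS", hessian = TRUE)
vc <- solve(fit$hessian)       # cov of (mu, log tau, log sigma)
## delta method back to (mu, tau, sigma), then the variance of
## K = mu + 1.645*tau + 1.645*sigma including the covariance terms:
J <- diag(c(1, exp(fit$par[2]), exp(fit$par[3])))
V <- J %*% vc %*% J
k <- c(1, 1.645, 1.645)
varK <- drop(t(k) %*% V %*% k)
```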
Re: [R] nlme questions
Both your questions seem too vague to me. You might get more useful replies if you provide a simple example in a few lines of R code that a reader could copy from your email into R and see the result (as suggested in the posting guide, www.R-project.org/posting-guide.html). The process of preparing such a simple example might by itself provide the insight you desire. Alternatively, you might work line by line through the code for the R function you are using. Also, if you don't have Pinheiro and Bates (2000) Mixed-Effects Models in S and S-PLUS (Springer), I suggest you get it; it is excellent for things like this. I'm sorry I couldn't help more. spencer graves

Christian Mora wrote: Dear R users; I've got two questions concerning the nlme library 3.1-65 (running on R 2.2.0 / Win XP Pro). The first one is related to the augPred function. I've been working with a nonlinear mixed model with no problems so far. However, when the parameters of the model are specified in terms of some other covariates, say treatment (i.e. phi1~trt1+trt2, etc.), the augPred function gives me the following error: Error in predict.nlme(object, value[1:(nrow(value)/nL),,drop=FALSE], : Levels 0,1 not allowed for trt1, trt2. The same model specification as well as the augPred function run without problems under S-Plus 2000. The second question has to do with the time needed for the model to converge. It really takes a lot of time to fit the model in R relative to the time required to fit the same model in S-Plus. I can imagine this is related to the optimization algorithm or something like that, but I would like to have a different opinion on these two issues. Thanks in advance, Christian Mora

-- Spencer Graves, PhD Senior Development Engineer PDF Solutions, Inc. 333 West San Carlos Street Suite 700 San Jose, CA 95110, USA [EMAIL PROTECTED] www.pdf.com Tel: 408-938-4420 Fax: 408-280-7915
[R] nlme question
I am using the package nlme to fit a simple random effects (variance components) model with 3 parameters: overall mean (fixed effect), between subject variance (random) and within subject variance (random). I have 16 subjects with 1-4 obs per subject. I need a 3x3 variance-covariance matrix that includes all 3 parameters in order to compute the variance of a specific linear combination. But I can't get the 3x3 matrix. Should I specify the formulae in lme differently, or is there some other suggestion that I might try? Thank you very much for any advice. My data and code follow.

mydata <- structure(list(
  subject = c(17, 17, 17, 17, 5, 16, 16, 8, 8, 8, 8, 7, 7, 7, 7, 9, 9, 9,
              10, 10, 11, 11, 11, 12, 12, 12, 12, 14, 14, 14, 14, 15, 15,
              15, 15, 13, 13, 13, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 4, 4, 4),
  y = c(-2.944, -5.521, -4.644, -4.736, -5.799, -4.635, -5.986, -5.011,
        -3.989, -4.682, -6.975, -6.064, -5.991, -8.068, -5.075, -5.298,
        -6.446, -5.037, -6.534, -4.828, -5.886, -4.025, -6.607, -5.914,
        -4.159, -6.757, -4.564, -5.011, -5.416, -5.371, -5.768, -7.962,
        -5.635, -4.575, -5.268, -6.975, -5.598, -7.669, -8.292, -7.265,
        -5.858, -7.003, -3.807, -5.829, -5.613, -3.135, -5.136, -5.394,
        -5.011, -5.598, -4.174)),
  .Names = c("subject", "y"), class = "data.frame", row.names = 1:51)

lme.res <- lme(fixed = y ~ 1, data = mydata, random = ~ 1 | subject, method = "ML")
VarCorr(lme.res)
Re: [R] nlme question
On 11/16/05, Wassell, James T., Ph.D. [EMAIL PROTECTED] wrote: I am using the package nlme to fit a simple random effects (variance components) model with 3 parameters: overall mean (fixed effect), between subject variance (random) and within subject variance (random).

So to paraphrase, your model can be written as (with the index i representing subject)

y_ij = \mu + b_i + e_ij

where b_i ~ N(0, \tau^2), e_ij ~ N(0, \sigma^2), and all b_i's and e_ij's are mutually independent. The model has, as you say, 3 parameters: \mu, \tau and \sigma.

I have 16 subjects with 1-4 obs per subject. I need a 3x3 variance-covariance matrix that includes all 3 parameters in order to compute the variance of a specific linear combination.

Can you specify the 'linear combination' that you want to estimate in terms of the model above? Deepayan
Re: [R] nlme error message
p.s. You may also find useful the process I followed to diagnose this problem. 1. I copied your example into R and confirmed that I could replicate the error. 2. I read the documentation, invoked debug, and tried different things to isolate the problem. For example, I listed the code for corCompSymm. The documentation led me to Initialize(cs), which gave me the same error message. 3. Ultimately, I found in Pinheiro and Bates (2000) Mixed-Effects Models in S and S-PLUS (Springer) an example that looked exactly like yours but did NOT produce the same error message from Initialize. By comparing the example that worked with the superficially identical case that didn't, I found the difference. hope this helps. spencer graves
Re: [R] nlme, predict.nlme, levels not allowed
Going through the R-Dev list, I have found this (from Pedro Afalo), dated 8 April 2004:

Dear Richard, The problem that you report is documented (but no solution given) in the file ch08.R in the scripts directory of the nlme package. I have found the following workaround just by chance, but it may give a clue about what the problem is to those who know how to program in R. The solution is to add an explicit call to factor in the nlme call. In the case of the error reported by Richard, the following call to nlme in the ch08.R file can be used:

fm4CO2.nlme <- update(fm3CO2.nlme,
  fixed = list(Asym + lrc ~ factor(Type) * factor(Treatment), c0 ~ 1),
  start = c(fm3CO2.fix[1:5], 0, 0, 0, fm3CO2.fix[6]))

instead of:

fm4CO2.nlme <- update(fm3CO2.nlme,
  fixed = list(Asym + lrc ~ Type * Treatment, c0 ~ 1),
  start = c(fm3CO2.fix[1:5], 0, 0, 0, fm3CO2.fix[6]))

I hope this helps, Pedro.

I have tried the workaround for my own case and it works... Any news since then about fixing the problem? Patrick
Re: [R] nlme error message
You need repeated measures for a random effect to make any sense. I modified your example as follows, and the error went away.

mytable$RIL2 <- rep(1:4, 1:4)
cs2 <- corCompSymm(value=0.5, form=~1|RIL2)
model2 <- lme(mytrait~myloc, data=mytable, random=~1|RIL2, correlation=cs2)

(I've made similar mistakes and had great difficulty finding the problem.) spencer graves

J.Fu wrote: Dear Friends, I am seeking any help with an error message in the lme functions. I use a mixed model to analyze data with a compound symmetric correlation structure. But I get an error message: Error in corMatrix.corCompSymm(object) : NA/NaN/Inf in foreign function call (arg 1). If I change the correlation structure to corAR1, there is no error. I have no clue how to solve this problem and would highly appreciate any help. Thanks in advance. JY

I attached my data and codes here:

# data: mytable
        mytrait  myloc  RIL
A1   0.590950330      0    1
A2  -0.315469846     -1    2
A3  -0.265690115      0    3
A4   0.342885046      0    4
A5   0.007613402      1    5
A6   0.285997884      0    6
A7   0.333841975      0    7
A8  -0.599817735     -1    8
A9   0.242621036      0    9
A10  0.518959588      1   10

cs <- corCompSymm(0.5, form=~1|RIL, fixed=T)
model <- lme(mytrait~myloc, data=mytable, random=~1|RIL, na.action=na.omit, correlation=cs)
Error in corMatrix.corCompSymm(object) : NA/NaN/Inf in foreign function call (arg 1)
[R] nlme, predict.nlme, levels not allowed
Dear listers, I am trying to fit an nlme model with age and pds as reals, and zone a factor with two levels, Annaba and Boumalek. The best model found is the following:

modm3
Nonlinear mixed-effects model fit by maximum likelihood
  Model: pds ~ Asym/(1 + exp((xmid - age)/scal))
  Data: croispulm
  Log-likelihood: -91.86667
  Fixed: list(Asym ~ zone, xmid ~ zone, scal ~ 1)
 Asym.(Intercept) Asym.zoneBoumalek  xmid.(Intercept) xmid.zoneBoumalek       scal
       9.99551079        0.39423966        4.97981027        0.06969807 2.23116661

Random effects:
 Formula: list(Asym ~ 1, xmid ~ 1)
 Level: nichoir
 Structure: General positive-definite, Log-Cholesky parametrization
                       StdDev Corr
Asym.(Intercept) 1.796565e-06 As.(I)
xmid.(Intercept) 1.219400e-04 0
Residual         6.163282e-01

Correlation Structure: Continuous AR(1)
 Formula: ~age | nichoir
 Parameter estimate(s):
      Phi
0.3395242
Number of Observations: 102
Number of Groups: 17

Everything normal so far. Things become strange when I try to compute predicted values:

pred <- predict(modm3, newdata=mydata, type="response")
Error in predict.nlme(modm3, newdata = mydata, type = "response") :
  Levels Annaba,Boumalek not allowed for zone

I have checked and re-checked that zone in the newdata is indeed a factor with the right levels, and I can hardly understand why these two levels, used when fitting the model, are now rejected when computing predicted values. Any hint welcome. Best regards, Patrick
Re: [R] nlme Singularity in backsolve at level 0, block 1
RSiteSearch("Singularity in backsolve") produced 33 hits, the second of which was the following: http://finzi.psych.upenn.edu/R/Rhelp02a/archive/62691.html. This reply from Peter Dalgaard, dated 8 Oct. 2005, asks: Which version of R and nlme? R 2.2.0 ships with a version where the internal optimizer is changed to nlminb(). As I understand it, this was in response to reports where code that worked in S-PLUS refused to work in R. If you are using R 2.2.0 and the latest version of nlme, then it's possible (likely?) that your model is overparameterized and can't be fit with nlme. Can you fit the model using, e.g., nls, ignoring the random effects, as suggested in ch. 8 of Pinheiro and Bates (2000) Mixed-Effects Models in S and S-PLUS (Springer)? If not, it's virtually certain that nlme won't work either. If I had trouble diagnosing the problem, I might recode it for optim(..., hessian=TRUE), then look at the eigenvalues and eigenvectors of the hessian to understand the deficiencies in the model. If nls works for you but not nlsList, there still might be hope, but it would be more difficult. If I had that problem and it were sufficiently important, I would step through nlme until I could figure out how to modify the code so it would continue, as does optim, and provide answers that would help me diagnose the singularity. (I'd also include a facility for providing a prior distribution that should eliminate the singularities.) Hope this helps. spencer graves

Elizabeth Lawson wrote: Hi, I am hoping someone can help with this. I am using nlme to fit a random coefficients model. It ran for hours before returning Error: Singularity in backsolve at level 0, block 1. Can anyone tell me what this error means and how I can run the model?
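Spencer's optim(..., hessian=T) diagnostic can be illustrated on a deliberately overparameterized toy problem (not Elizabeth's model, whose loglike function isn't shown): a near-zero eigenvalue of the Hessian flags a direction that the data cannot identify, and its eigenvector names the redundant parameter combination.

```r
## Hedged sketch of diagnosing overparameterization via the Hessian.
## negll is a toy least-squares objective in which p[1] and p[2] enter
## only through their sum, so the model is unidentified in p[1] - p[2].
set.seed(1)
x <- rnorm(50)
y <- 2 + x + rnorm(50, sd = 0.5)
negll <- function(p) sum((y - (p[1] + p[2]) - p[3] * x)^2)
fit <- optim(c(0, 0, 0), negll, method = "BFGS", hessian = TRUE)
ev <- eigen(fit$hessian)
ev$values        # one eigenvalue near 0 reveals the redundancy
i <- which.min(abs(ev$values))
ev$vectors[, i]  # points along p[1] - p[2], the unidentified direction
```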
[R] nlme questions
Dear R users; I've got two questions concerning the nlme library 3.1-65 (running on R 2.2.0 / Win XP Pro). The first one is related to the augPred function. I've been working with a nonlinear mixed model with no problems so far. However, when the parameters of the model are specified in terms of some other covariates, say treatment (i.e. phi1~trt1+trt2, etc.), the augPred function gives me the following error: Error in predict.nlme(object, value[1:(nrow(value)/nL),,drop=FALSE], : Levels 0,1 not allowed for trt1, trt2. The same model specification as well as the augPred function run without problems under S-Plus 2000. The second question has to do with the time needed for the model to converge. It really takes a lot of time to fit the model in R relative to the time required to fit the same model in S-Plus. I can imagine this is related to the optimization algorithm or something like that, but I would like to have a different opinion on these two issues. Thanks in advance, Christian Mora
Re: [R] NLME
Guy Forrester ForresterG at landcareresearch.co.nz writes: R : Copyright 2005, The R Foundation for Statistical Computing Version 2.1.1 (2005-06-20), ISBN 3-900051-07-0 Jose Pinheiro, Douglas Bates, Saikat DebRoy and Deepayan Sarkar (2005). ... I am trying to run the scripts from the Mixed Models book and am running into some difficulty - could anyone tell me why this doesn't work (from pages 369-370)?

CO2 #FINE
plot(CO2, outer= ~Treatment*Type, layout=c(4,1)) #FINE

These examples were written for S; until version 2.1.1 (the version you are using), the optimizing engine for R was different from that of S, and some of the examples did not work. Consult nlme/scripts, where the non-converging parts are commented out. From 2.2.0 on, the R and S engines should be similar, so if you update, the examples from the book should run. My mileage varied, however: after installing R 2.2.0 (Windows), I immediately tried to run the ping-pong examples in ch06.R and ch08.R, because I had encountered quite a few of these in my own work. fm1Quin.nlme works now. Strangely, fm2Pheno.nlme still plays merry-go-round. However, when I changed pnlsTol to 0.1, it worked. The results were close to those published, but not exactly the same. I did not try this in previous versions; maybe it would have worked there too with this modification. Dieter
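For anyone trying to reproduce Dieter's pnlsTol change: the tolerance is passed to nlme via nlmeControl() in the control argument. A hedged sketch follows; 0.1 is simply the value Dieter reports, not a general recommendation, and whether the commented-out fit converges depends on your nlme version, as discussed above.

```r
library(nlme)  # nlme ships with the standard R distribution

## Loosen the PNLS convergence tolerance via the control argument.
ctrl <- nlmeControl(pnlsTol = 0.1)
ctrl$pnlsTol

## then, e.g. (may still fail to converge on older versions):
## fm1CO2.lis  <- nlsList(SSasympOff, CO2)
## fm1CO2.nlme <- nlme(fm1CO2.lis, control = ctrl)
```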
[R] NLME
Dear All, Using:

R : Copyright 2005, The R Foundation for Statistical Computing Version 2.1.1 (2005-06-20), ISBN 3-900051-07-0

and

Jose Pinheiro, Douglas Bates, Saikat DebRoy and Deepayan Sarkar (2005). nlme: Linear and nonlinear mixed effects models. R package version 3.1-65.

on a WINDOWS 2000 machine, I am trying to run the scripts from the Mixed Models book and am running into some difficulty - could anyone tell me why this doesn't work (from pages 369-370)?

CO2 #FINE
plot(CO2, outer= ~Treatment*Type, layout=c(4,1)) #FINE
fm1CO2.lis <- nlsList(SSasympOff, CO2) #FINE
fm1CO2.lis #FINE
fm1CO2.nlme <- nlme(fm1CO2.lis) #COPIED DIRECTLY FROM BOOK!!??

gives the error message:

Error in nlme.formula(model = uptake ~ SSasympOff(conc, Asym, lrc, c0), : Maximum number of iterations reached without convergence

Any assistance would be greatly appreciated. Guy Forrester

Guy J Forrester, Biometrician, Manaaki Whenua - Landcare Research, PO Box 69, Lincoln, New Zealand. Tel. +64 3 325 6701 x3738 Fax +64 3 325 2418 E-mail [EMAIL PROTECTED] www.LandcareResearch.co.nz
[R] nlme error message
Dear Friends, I am seeking help with an error message from the lme function. I use a mixed model to analyze data with a compound symmetric correlation structure, but I get an error message:

Error in corMatrix.corCompSymm(object) : NA/NaN/Inf in foreign function call (arg 1)

If I change the correlation structure to corAR1, there is no error. I have no clue how to solve this problem. I would highly appreciate any help. Thanks in advance and looking forward to any help. JY

I attached my data and code here:

# data: mytable
         mytrait  myloc  RIL
A1   0.590950330      0    1
A2  -0.315469846     -1    2
A3  -0.265690115      0    3
A4   0.342885046      0    4
A5   0.007613402      1    5
A6   0.285997884      0    6
A7   0.333841975      0    7
A8  -0.599817735     -1    8
A9   0.242621036      0    9
A10  0.518959588      1   10

cs <- corCompSymm(0.5, form = ~ 1 | RIL, fixed = TRUE)
model <- lme(mytrait ~ myloc, data = mytable,
             random = ~ 1 | RIL, na.action = na.omit,
             correlation = cs)

Error in corMatrix.corCompSymm(object) : NA/NaN/Inf in foreign function call (arg 1)
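One thing worth checking in data shaped like this (a hedged diagnostic sketch, not a reply from the thread): every RIL level in the posted excerpt appears exactly once, and a within-group correlation structure needs at least one group with two or more observations to act on. The column and object names below are taken from the post.

```r
# If every grouping level has size 1, corCompSymm has no within-group
# pairs to correlate, which is one possible source of NA/NaN/Inf errors.
sizes <- table(mytable$RIL)
any(sizes > 1)   # FALSE would mean no within-group pairs exist at all
```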
[R] nlme Singularity in backsolve at level 0, block 1
Hi, I am hoping someone can help with this. I am using nlme to fit a random coefficients model. It ran for hours before returning:

Error: Singularity in backsolve at level 0, block 1

The model is

plavix.nlme <- nlme(PLX_NRX ~ loglike(PLX_NRX, PD4_42D, GAT_34D, VIS_42D, MSL_42D,
                                      SPE_ROL, XM2_DUM, THX_DUM,
                                      b0, b1, b2, b3, b4, b5, b6, b7, alpha),
                    data = data,
                    fixed = list(b0 + b1 + b2 + b3 + b4 + b5 + b6 + b7 + alpha ~ 1),
                    random = b0 + b1 + b2 + b3 + b4 + b5 + b6 + b7 ~ 1 | menum,
                    start = c(b0 = 0, b1 = 0, b2 = 0, b3 = 0, b4 = 0,
                              b5 = 0, b6 = 0, b7 = 0, alpha = 5))

Can anyone tell me what this error means and how I can run the model? Thanks, Elizabeth Lawson
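This error is often associated with an over-parameterized random-effects specification: eight correlated random effects imply a large covariance matrix to estimate. A common first simplification is a diagonal (uncorrelated) random-effects structure via pdDiag(). Since the poster's loglike() function and data are not shown, the self-contained sketch below uses the Loblolly data shipped with nlme as a stand-in.

```r
library(nlme)

# A single random effect wrapped in pdDiag(); with several effects the
# same idea, e.g. pdDiag(b0 + b1 ~ 1), estimates only variances and no
# correlations, greatly reducing the number of covariance parameters.
fm <- nlme(height ~ SSasymp(age, Asym, R0, lrc),
           data = Loblolly,
           fixed = Asym + R0 + lrc ~ 1,
           random = pdDiag(Asym ~ 1),
           start = c(Asym = 103, R0 = -8.5, lrc = -3.3))
fm$modelStruct$reStruct   # shows the diagonal random-effects structure
```

If a diagonal structure fits, random effects can then be added back one at a time until the singularity reappears, which identifies the effects the data cannot support.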
[R] nlme gls() error
Hello, I'm fitting a gls model with a variance-covariance structure and am getting an error message I don't understand. I'm using gls() from the nlme library with the structure defined by

correlation = corSymm(form = ~ 1 | Subject), weights = varIdent(form = ~ 1 | strata)

I get the error

Error in recalc.corSymm(object[[i]], conLin) : NA/NaN/Inf in foreign function call (arg 1)

My dependent variable is highly positively skewed and has many zero values. Any ideas as to the cause of the error? Could I play around with any of the glsControl values to help out? Thanks, Richard -- Dr. Richard Nixon, MRC Biostatistics Unit, Cambridge, UK http://www.mrc-bsu.cam.ac.uk/personal/richard
Re: [R] nlme and spatially correlated errors
Have you tried anova(fit1, fit2), where

fit1 <- lme(one model ...)
fit2 <- lme(a submodel ...)

This anova does about the best that anyone knows how to do -- or at least did 7 years ago when it was written. If the submodel changes the fixed effects, you should use method = 'ML'. If the submodel changes the noise model specification, use method = 'REML'. See Pinheiro and Bates (2000) Mixed-Effects Models in S and S-Plus (Springer). If you need something more precise than the standard approximations, try simulate.lme. buena suerte! spencer graves

Patricia Balvanera wrote: Dear R users, I am using lme and nlme to account for spatially correlated errors as random effects. My basic question is about being able to correct F, p, R2 and parameters of models that do not take into account the nature of such errors using gls, glm or nlm, and replace them with new F, p, R2 and parameters using lme and nlme as random effects. I am studying distribution patterns of 50 tree species along a gradient. That gradient was sampled through 27 transects, with 10 plots within each transect. For each plot I have data on presence/absence, abundance and basal area of the species. I also have data for 4 environmental variables related to water availability (soil water retention capacity, slope, insolation, altitude) and X and Y coordinates for each plot. I explored whether the relationship between any of the response variables (presence/absence, abundance, basal area) and the environmental variables was linear, polynomial, or non-linear. My main interest in this question is that I proceeded to correct for spatial autocorrelation (both within transects and overall) following the procedures suggested by Crawley 2002 for linear models e.g.
(GUAMAC = a species, CRAS = soil water retention capacity, TRANSECTO = transect)

model1 <- gls(GUAMAC ~ CRAS)
model2 <- lme(GUAMAC ~ CRAS, random = ~ 1 | TRANSECTO)
model3 <- lme(GUAMAC ~ CRAS, random = GUAMAC ~ CRAS | TRANSECTO)
model4 <- lme(GUAMAC ~ CRAS, random = GUAMAC ~ CRAS - 1 | TRANSECTO)
AIC(model1, model2, model3, model4)
       df      AIC
model1  3 3730.537
model2  4 3698.849
model3  6 3702.408
model4  4 3704.722
plot(Variogram(model2, form = ~ X + Y))
model5 <- update(model2, corr = corSpher(c(30, 0.8), form = ~ X + Y, nugget = TRUE))
plot(Variogram(model5, resType = "n"))
summary(model5)

In this case I obtain a new F for the independent variable INSOLACION, a new R2 for the whole model and new parameters for the linear model. I have also applied this procedure to polynomial models and to glms with binomial errors (presence/absence) with no problem. I am nevertheless stuck with non-linear models. I am using the protocols you suggested in the 1998 manuals by Pinheiro and Bates, and those suggested by Crawley 2002. Please find enclosed an example with an exponential model (which I chose for being simple). In fact the linear models I am using are a bit more complicated.

(HELLOT is a species, INSOLACION = INSOLATION, basal = basal area of the species, TRANSECTO = transect)

HELLOT ~ exp(A + (B * INSOLACION))
basal.HELLOT <- function(A, B, INSOLACION) exp(A + (B * INSOLACION))
HELLOT ~ basal.HELLOT(A, B, INSOLACION)
basal.HELLOT <- deriv(~ exp(A + (B * INSOLACION)),
                      LETTERS[1:2], function(A, B, INSOLACION){})
model1 <- nlme(model = HELLOT ~ exp(A + (B * INSOLACION)),
               fixed = A + B ~ 1, random = A + B ~ 1,
               groups = ~ TRANSECTO, start = list(fixed = c(5.23, -0.05)))

It runs perfectly and gives new values for parameters A and B, but would only give me F for fixed effects of A and B, while what I am really looking for is F for fixed effects of INSOLACION and the R2 of the new model. Thank you so much in advance for your help Dra.
Patricia Balvanera Centro de Investigaciones en Ecosistemas, UNAM-Campus Morelia Apdo. Postal 27-3, Xangari 58090 Morelia, Michoacán, Mexico Tel. (52-443)3-22-27-07, (52-55) 56-23-27-07 FAX (52-443) 3-22-27-19, (52-55) 56-23-27-19 -- Spencer Graves, PhD Senior Development Engineer PDF Solutions, Inc. 333 West San Carlos Street Suite 700 San Jose, CA 95110, USA [EMAIL PROTECTED] www.pdf.com http://www.pdf.com Tel: 408-938-4420 Fax: 408-280-7915
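Spencer's ML/REML advice can be sketched concretely with the Orthodont data shipped with nlme (a stand-in for the poster's data): when the submodel drops a fixed effect, both fits should use method = "ML" before anova() compares them.

```r
library(nlme)

# Two nested fixed-effects specifications, both fit by maximum
# likelihood so the likelihood-ratio test reported by anova() is
# valid for comparing fixed effects.
fit1 <- lme(distance ~ age + Sex, data = Orthodont,
            random = ~ 1 | Subject, method = "ML")
fit2 <- lme(distance ~ age, data = Orthodont,
            random = ~ 1 | Subject, method = "ML")
anova(fit1, fit2)   # likelihood-ratio test of the Sex term
```

Had the two fits differed only in their random-effects or correlation structure, method = "REML" would be the appropriate choice instead.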
[R] nlme and spatially correlated errors
Dear R users, I am using lme and nlme to account for spatially correlated errors as random effects. My basic question is about being able to correct F, p, R2 and parameters of models that do not take into account the nature of such errors using gls, glm or nlm, and replace them with new F, p, R2 and parameters using lme and nlme as random effects. I am studying distribution patterns of 50 tree species along a gradient. That gradient was sampled through 27 transects, with 10 plots within each transect. For each plot I have data on presence/absence, abundance and basal area of the species. I also have data for 4 environmental variables related to water availability (soil water retention capacity, slope, insolation, altitude) and X and Y coordinates for each plot. I explored whether the relationship between any of the response variables (presence/absence, abundance, basal area) and the environmental variables was linear, polynomial, or non-linear. My main interest in this question is that I proceeded to correct for spatial autocorrelation (both within transects and overall) following the procedures suggested by Crawley 2002 for linear models, e.g.

(GUAMAC = a species, CRAS = soil water retention capacity, TRANSECTO = transect)

model1 <- gls(GUAMAC ~ CRAS)
model2 <- lme(GUAMAC ~ CRAS, random = ~ 1 | TRANSECTO)
model3 <- lme(GUAMAC ~ CRAS, random = GUAMAC ~ CRAS | TRANSECTO)
model4 <- lme(GUAMAC ~ CRAS, random = GUAMAC ~ CRAS - 1 | TRANSECTO)
AIC(model1, model2, model3, model4)
       df      AIC
model1  3 3730.537
model2  4 3698.849
model3  6 3702.408
model4  4 3704.722
plot(Variogram(model2, form = ~ X + Y))
model5 <- update(model2, corr = corSpher(c(30, 0.8), form = ~ X + Y, nugget = TRUE))
plot(Variogram(model5, resType = "n"))
summary(model5)

In this case I obtain a new F for the independent variable INSOLACION, a new R2 for the whole model and new parameters for the linear model. I have also applied this procedure to polynomial models and to glms with binomial errors (presence/absence) with no problem.
I am nevertheless stuck with non-linear models. I am using the protocols you suggested in the 1998 manuals by Pinheiro and Bates, and those suggested by Crawley 2002. Please find enclosed an example with an exponential model (which I chose for being simple). In fact the linear models I am using are a bit more complicated.

(HELLOT is a species, INSOLACION = INSOLATION, basal = basal area of the species, TRANSECTO = transect)

HELLOT ~ exp(A + (B * INSOLACION))
basal.HELLOT <- function(A, B, INSOLACION) exp(A + (B * INSOLACION))
HELLOT ~ basal.HELLOT(A, B, INSOLACION)
basal.HELLOT <- deriv(~ exp(A + (B * INSOLACION)),
                      LETTERS[1:2], function(A, B, INSOLACION){})
model1 <- nlme(model = HELLOT ~ exp(A + (B * INSOLACION)),
               fixed = A + B ~ 1, random = A + B ~ 1,
               groups = ~ TRANSECTO, start = list(fixed = c(5.23, -0.05)))

It runs perfectly and gives new values for parameters A and B, but would only give me F for fixed effects of A and B, while what I am really looking for is F for fixed effects of INSOLACION and the R2 of the new model. Thank you so much in advance for your help. Dra. Patricia Balvanera Centro de Investigaciones en Ecosistemas, UNAM-Campus Morelia Apdo. Postal 27-3, Xangari 58090 Morelia, Michoacán, Mexico Tel. (52-443)3-22-27-07, (52-55) 56-23-27-07 FAX (52-443) 3-22-27-19, (52-55) 56-23-27-19
[R] nlme, MASS and geoRglm for spatial autocorrelation?
Hi. I'm trying to perform what should be a reasonably basic analysis of some spatial presence/absence data but am somewhat overwhelmed by the options available and could do with a helpful pointer. My researches so far indicate that if my data were normal, I would simply use gls() (in nlme) and one of the various corSpatial functions (e.g. corSpher(), to be analogous to a similar analysis in SAS) with form = ~ x + y (and a nugget if appropriate). However, my data are binomial, so I need a different approach. Using various packages I could define a mixed model (e.g. using glmmPQL() in MASS) with a similar correlation structure, but I seem to need to define a random effect to use glmmPQL(), and I don't have any. Could this requirement be switched off and still use the mixed model approach? Alternatively, it may be possible to define the variance appropriately in gls and use logits directly, but I'm not quite sure how and suspect there's a more straightforward alternative. Looking at geoRglm suggests there may be solutions here, but it seems like it might be overkill for what is, at first appearance at least, not such a difficult problem. Maybe I'm just being statistically naive, but I think I'm looking for a function somewhere between gls() and glmmPQL() and would be grateful for any pointers. Thanks very much, Colin Beale ...
Re: [R] nlme, MASS and geoRglm for spatial autocorrelation?
You seem to want to model spatially correlated Bernoulli variables. That's a difficult task, especially as these are Bernoulli and not binomial (n > 1). With a much fuller description of the problem we may be able to help, but I at least have no idea of the aims of the analysis. glmmPQL is designed for independent observations conditional on the random effects. On Wed, 13 Jul 2005, Beale, Colin wrote: Hi. I'm trying to perform what should be a reasonably basic analysis of some spatial presence/absence data but am somewhat overwhelmed by the options available and could do with a helpful pointer. My researches so far indicate that if my data were normal, I would simply use gls() (in nlme) and one of the various corSpatial functions (e.g. corSpher(), to be analogous to a similar analysis in SAS) with form = ~ x + y (and a nugget if appropriate). However, my data are binomial, so I need a different approach. Using various packages I could define a mixed model (e.g. using glmmPQL() in MASS) with a similar correlation structure, but I seem to need to define a random effect to use glmmPQL(), and I don't have any. Could this requirement be switched off and still use the mixed model approach? Alternatively, it may be possible to define the variance appropriately in gls and use logits directly, but I'm not quite sure how and suspect there's a more straightforward alternative. Looking at geoRglm suggests there may be solutions here, but it seems like it might be overkill for what is, at first appearance at least, not such a difficult problem. Maybe I'm just being statistically naive, but I think I'm looking for a function somewhere between gls() and glmmPQL() and would be grateful for any pointers. Thanks very much, Colin Beale ... -- Brian D.
Ripley, [EMAIL PROTECTED] Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/ University of Oxford, Tel: +44 1865 272861 (self) 1 South Parks Road, +44 1865 272866 (PA) Oxford OX1 3TG, UK Fax: +44 1865 272595
Re: [R] nlme, MASS and geoRglm for spatial autocorrelation?
My data are indeed Bernoulli and not binomial, as I indicated. The dataset consists of points (grid refs) that are either locations of events (animals) or random points (with no animal present). For each point I have a suite of environmental covariates describing the habitat at this point. I was anticipating some sort of function that could run:

function(present ~ env1 + env2 + env3 + x + y, correlation = corSpher(form = ~ x + y), family = binomial)

where env1 to env3 are the habitat covariates and x, y the grid refs. If my data were normal, I understand I would use gls() with exactly this, but drop the family requirement. As my data are Bernoulli this is clearly not possible, but I was hoping the analysis might be analogous? The eventual aim is to firstly understand which environmental covariates are important in determining presence and then to use habitat maps to identify the areas expected to be most important. Colin -Original Message- From: Prof Brian Ripley [mailto:[EMAIL PROTECTED] Sent: 13 July 2005 11:30 To: Beale, Colin Cc: r-help@stat.math.ethz.ch Subject: Re: [R] nlme, MASS and geoRglm for spatial autocorrelation? You seem to want to model spatially correlated Bernoulli variables. That's a difficult task, especially as these are Bernoulli and not binomial (n > 1). With a much fuller description of the problem we may be able to help, but I at least have no idea of the aims of the analysis. glmmPQL is designed for independent observations conditional on the random effects. On Wed, 13 Jul 2005, Beale, Colin wrote: Hi. I'm trying to perform what should be a reasonably basic analysis of some spatial presence/absence data but am somewhat overwhelmed by the options available and could do with a helpful pointer. My researches so far indicate that if my data were normal, I would simply use gls() (in nlme) and one of the various corSpatial functions (e.g. corSpher(), to be analogous to a similar analysis in SAS) with form = ~ x + y (and a nugget if appropriate).
However, my data are binomial, so I need a different approach. Using various packages I could define a mixed model (e.g. using glmmPQL() in MASS) with a similar correlation structure, but I seem to need to define a random effect to use glmmPQL(), and I don't have any. Could this requirement be switched off and still use the mixed model approach? Alternatively, it may be possible to define the variance appropriately in gls and use logits directly, but I'm not quite sure how and suspect there's a more straightforward alternative. Looking at geoRglm suggests there may be solutions here, but it seems like it might be overkill for what is, at first appearance at least, not such a difficult problem. Maybe I'm just being statistically naive, but I think I'm looking for a function somewhere between gls() and glmmPQL() and would be grateful for any pointers. Thanks very much, Colin Beale ...
Re: [R] nlme, MASS and geoRglm for spatial autocorrelation?
-Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Beale, Colin Sent: 13 July 2005 10:15 To: Prof Brian Ripley Cc: r-help@stat.math.ethz.ch Subject: Re: [R] nlme, MASS and geoRglm for spatial autocorrelation? My data are indeed Bernoulli and not binomial, as I indicated. The dataset consists of points (grid refs) that are either locations of events (animals) or random points (with no animal present). For each point I have a suite of environmental covariates describing the habitat at this point. I was anticipating some sort of function that could run: function(present ~ env1 + env2 + env3 + x + y, correlation = corSpher(form = ~ x + y), family = binomial) where env1 to env3 are the habitat covariates and x, y the grid refs. If my data were normal, I understand I would use gls() with exactly this, but drop the family requirement. As my data are Bernoulli this is clearly not possible, but I was hoping the analysis might be analogous? The eventual aim is to firstly understand which environmental covariates are important in determining presence and then to use habitat maps to identify the areas expected to be most important. This could be done with geoRglm. I did something similar last week, but without covariates, only the spatial coordinates (i.e. my spatial process had expectation equal to a constant). If you are willing to sacrifice some spatial resolution, you can create cells in your spatial data (say 100 m x 100 m) and in each cell count the number of successes in observing your spatial process and the number of trials. This will be a binomial problem, and it seems to me to be the spatial equivalent of logistic regression where the continuous predictor variable is structured in bins and then events are counted in those bins. You can move to the R-sig-geo list if you have questions about geoRglm https://stat.ethz.ch/mailman/listinfo/r-sig-geo Btw, this can also be done in SAS using the glimmix macro.
Ruben
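The binning idea described above can be sketched in a few lines of base R; the toy coordinates, column names, and 100 m cell size below are illustrative assumptions, not data from the thread.

```r
# Toy point data: coordinates in metres plus a 0/1 presence indicator.
pts <- data.frame(x = c(10, 40, 150, 160, 320),
                  y = c(20, 80, 110, 130, 250),
                  present = c(1, 0, 1, 1, 0))

cell <- 100  # assumed cell size in metres, as in the post
pts$cx <- floor(pts$x / cell)
pts$cy <- floor(pts$y / cell)
pts$trials <- 1

# Per-cell successes and trials: a binomial response suitable for
# e.g. glm(cbind(present, trials - present) ~ ..., family = binomial).
agg <- aggregate(cbind(present, trials) ~ cx + cy, data = pts, FUN = sum)
agg
```

Each row of `agg` is one occupied grid cell, with the count of presences and the number of points falling in that cell.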
[R] nlme plot
Hello, I am running this script from the Pinheiro & Bates book in R Version 2.1.1 (WinXP), but I can't plot Figure 2.3. What's wrong? TIA. Rod.

library(nlme)
names( Orthodont )
[1] "distance" "age"      "Subject"  "Sex"
levels( Orthodont$Sex )
[1] "Male"   "Female"
OrthoFem <- Orthodont[ Orthodont$Sex == "Female", ]
fm1OrthF <- lme( distance ~ age, data = OrthoFem, random = ~ 1 | Subject )
fm2OrthF <- update( fm1OrthF, random = ~ age | Subject )
orthLRTsim <- simulate.lme( fm1OrthF, fm2OrthF, nsim = 1000 )
plot( orthLRTsim, df = c(1, 2) )   # produces Figure 2.3
Error in if ((dfType <- as.double(names(x)[1])) == 1) { : argument is of length zero
Execution halted
Re: [R] nlme plot
On 7/11/05, R V [EMAIL PROTECTED] wrote: Hello, I am running this script from the Pinheiro & Bates book in R Version 2.1.1 (WinXP), but I can't plot Figure 2.3. What's wrong? There was a change in the way that R handles assignments of names of components, and that affected the construction of this plot. I've rewritten the plot construction code (the logic in the current version was bizarre) and will upload a new version of nlme after I test this out. By the way, there is a simpler way of reproducing the examples in our book. Use

source(system.file("scripts/ch02.R", package = "nlme"))

but don't try that with the current version of the nlme package. There is another infelicity (to use Bill Venables' term) that I will correct. TIA. Rod.

library(nlme)
names( Orthodont )
[1] "distance" "age"      "Subject"  "Sex"
levels( Orthodont$Sex )
[1] "Male"   "Female"
OrthoFem <- Orthodont[ Orthodont$Sex == "Female", ]
fm1OrthF <- lme( distance ~ age, data = OrthoFem, random = ~ 1 | Subject )
fm2OrthF <- update( fm1OrthF, random = ~ age | Subject )
orthLRTsim <- simulate.lme( fm1OrthF, fm2OrthF, nsim = 1000 )
plot( orthLRTsim, df = c(1, 2) )   # produces Figure 2.3
Error in if ((dfType <- as.double(names(x)[1])) == 1) { : argument is of length zero
Execution halted
[R] nlme leading minor error
Dear all, I am struggling with nlme and an error message. Even going through the Pinheiro & Bates nlme book did not give me a clue how to avoid this.

fit <- nlme(ce ~ fi1 / ((1 + exp(fi2 - fi3 * tepl))^(1/fi4)),
            data = temp1na.gr,
            start = c(fi1 = 30, fi2 = -100, fi3 = -.05, fi4 = 40),
            fixed = fi1 + fi2 + fi3 + fi4 ~ 1,
            random = pdDiag(fi2 + fi4 ~ 1),
            groups = ~ spol.f)

gives

Error in chol((value + t(value))/2) : the leading minor of order 1 is not positive definite

Is this error due to a lack of experimental points? Here you have one typical part of my data; it is for level spol.f = 3/11.

tepl    ce
 800 28.87
 800 29.35
 825 29.00
 850 28.73
 875 26.83
 900 24.07

I have 1-5 points for each level (2 levels with 5 points, 1 level with 4 points, several levels with 2 and 3 points and a few with only one point). Fitting this model to each level separately led to several sets of coefficients fi1-fi4, and the separate fits were quite OK. Please give me a hint what can be the cause of this error message and how I should organize my data to avoid it. (Lack of experimental points is also an answer, as I can do some subsequent measurements.) R 2.1.0, W 2000, nlme package. Best regards Petr Pikal [EMAIL PROTECTED]
Re: [R] nlme leading minor error
On 6/22/05, Petr Pikal [EMAIL PROTECTED] wrote: Dear all, I am struggling with nlme and an error message. Even going through the Pinheiro & Bates nlme book did not give me a clue how to avoid this.

fit <- nlme(ce ~ fi1 / ((1 + exp(fi2 - fi3 * tepl))^(1/fi4)),
            data = temp1na.gr,
            start = c(fi1 = 30, fi2 = -100, fi3 = -.05, fi4 = 40),
            fixed = fi1 + fi2 + fi3 + fi4 ~ 1,
            random = pdDiag(fi2 + fi4 ~ 1),
            groups = ~ spol.f)

gives

Error in chol((value + t(value))/2) : the leading minor of order 1 is not positive definite

Is this error due to a lack of experimental points? Here you have one typical part of my data; it is for level spol.f = 3/11.

tepl    ce
 800 28.87
 800 29.35
 825 29.00
 850 28.73
 875 26.83
 900 24.07

I have 1-5 points for each level (2 levels with 5 points, 1 level with 4 points, several levels with 2 and 3 points and a few with only one point). Fitting this model to each level separately led to several sets of coefficients fi1-fi4, and the separate fits were quite OK. Please give me a hint what can be the cause of this error message and how I should organize my data to avoid it. (Lack of experimental points is also an answer, as I can do some subsequent measurements.)

The first thing to do is to plot the data for each level of spol.f and see if it is reasonable that you would be able to estimate four parameters from such a curve. Then try setting verbose = TRUE, control = list(msVerbose = TRUE) in your call to nlme to see how the parameters are being changed during the iterations.
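Both diagnostic steps suggested in this reply can be written out explicitly. This is a sketch against the poster's object names (temp1na.gr, spol.f, ce, tepl are taken from the original post, so it is not runnable as-is):

```r
library(nlme)
library(lattice)

# Step 1: eyeball each group -- is a 4-parameter curve plausibly
# estimable from the 1-5 points available per level of spol.f?
xyplot(ce ~ tepl | spol.f, data = temp1na.gr)

# Step 2: watch the optimizer. verbose = TRUE traces the outer nlme
# iterations; msVerbose = TRUE traces the inner optimization, so you
# can see which parameters drift before the Cholesky failure.
fit <- nlme(ce ~ fi1 / ((1 + exp(fi2 - fi3 * tepl))^(1/fi4)),
            data = temp1na.gr,
            start = c(fi1 = 30, fi2 = -100, fi3 = -.05, fi4 = 40),
            fixed = fi1 + fi2 + fi3 + fi4 ~ 1,
            random = pdDiag(fi2 + fi4 ~ 1),
            groups = ~ spol.f,
            verbose = TRUE, control = list(msVerbose = TRUE))
```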
[R] nlme SASmixed in 2.0.1
I assigned a class the first problem in Pinheiro & Bates, which uses the data set PBIB from the SASmixed package. I have recently downloaded 2.0.1 and its associated packages. On trying

library(SASmixed)
data(PBIB)
library(nlme)
plot(PBIB)

I get a warning message

Warning message: replacing previous import: coef in: namespaceImportFrom(self, asNamespace(ns))

after library(nlme), and a pairs-type plot of PBIB. Pressing on I get:

lme1 <- lme(response ~ Treatment, data = PBIB, random = ~ 1 | Block)
summary(lme1)
Error: No slot of name rep for this object of class lme
Error in deviance([EMAIL PROTECTED], REML = REML) : Unable to find the argument object in selecting a method for function deviance

Everything works fine under 1.8.1, and plot(PBIB) is of trellis style, which is what I think the authors intend. Cheers, Murray Jorgensen -- Dr Murray Jorgensen http://www.stats.waikato.ac.nz/Staff/maj.html Department of Statistics, University of Waikato, Hamilton, New Zealand Email: [EMAIL PROTECTED] Fax 7 838 4155 Phone +64 7 838 4773 wk Home +64 7 825 0441 Mobile 021 1395 862
Re: [R] nlme SASmixed in 2.0.1
On Tuesday 05 April 2005 18:40, Murray Jorgensen wrote: I assigned a class the first problem in Pinheiro & Bates, which uses the data set PBIB from the SASmixed package. I have recently downloaded 2.0.1 and its associated packages. On trying

library(SASmixed)
data(PBIB)
library(nlme)
plot(PBIB)

I get a warning message

Warning message: replacing previous import: coef in: namespaceImportFrom(self, asNamespace(ns))

after library(nlme), and a pairs-type plot of PBIB.

SASmixed now depends on lme4, which conflicted (until recently) with nlme. If you had your calls to library(SASmixed) and library(nlme) reversed, you probably would have gotten a warning. I think the simplest thing you can do is not to load SASmixed at all. Instead, use

data(PBIB, package = "SASmixed")

However, these datasets are (for all practical purposes) vanilla data frames, and you won't get trellis-style plots unless you create nlme groupedData objects yourself. (You should get them if you load the latticeExtra package manually, and then use 'gplot' instead of 'plot' to plot them.) Deepayan

Pressing on I get:

lme1 <- lme(response ~ Treatment, data = PBIB, random = ~ 1 | Block)
summary(lme1)
Error: No slot of name rep for this object of class lme
Error in deviance([EMAIL PROTECTED], REML = REML) : Unable to find the argument object in selecting a method for function deviance

Everything works fine under 1.8.1, and plot(PBIB) is of trellis style, which is what I think the authors intend. Cheers, Murray Jorgensen
[R] nlme and pnlsTol
Dear nlme-lovers, I do a 3-parameter fit to physiological data with nlme:

EmptD <- deriv(~ (vol + slope * t) * exp(-t / tempt) ...

The approach described in Pinheiro/Bates of using nlsList as a first approximator was somewhat unstable (many NA), but a direct fit converges quickly and the fits look good:

contr <- nlmeControl(pnlsTol = 0.1)
v.nlme <- nlme(v ~ EmptD(t, vol, slope, tempt), data = acg,
               verbose = FALSE, control = contr,
               fixed = list(tempt + slope ~ treat + what, vol ~ 1),
               random = vol + tempt + slope ~ 1,
               start = c(150, -50, -50, -5, 3, 5, 14, 3, 645))

However, I had to use a rather large pnlsTol of 0.1 to achieve convergence. Questions: 1) What is the price I have to pay when using so large a pnlsTol? Could I miss any more distant optima? 2) When using an even larger value of pnlsTol = 0.3, AIC is lower by 2 and standard errors are somewhat lower. Can I trust this result? Dieter Menne
[R] NLME plottting and Confidence Intervals
All, I have been learning about mixed models and have been able to successfully use lme() and nlme() to fit some simple linear and 4PL logistic models. As a relative newbie I am at a loss as to how I can do the following:

(1) Import a SAS dataset with DATE9.-formatted time values and get them converted into a convenient time variable for use with the nlme package. In particular, I would like to use the lattice package to produce panel plots for diagnostic and exploratory purposes.
(2) Plot the fitted model(s) along with appropriate 95% confidence bounds for the model.
(3) Obtain prediction intervals for given individuals in the datasets.

Sorry for what must be trivial questions! I very much appreciate any insight. Thanks, Greg
[R] nlme vs gls
Dear List: My question is more statistical than R oriented (although it originates from my work with nlme). I know statistical questions are occasionally posted, so I hope my question is relevant to the list, as I cannot turn up a solution anywhere else. I will frame it in the context of an R-related issue. To illustrate the problem, consider student achievement test score data with multiple observations available for each student. One way of modeling these data might be

Y_{ti} = (\mu + \mu_{i}) + (\beta_0 + \beta_{i}) * (time) + \epsilon_{ti}; t indexes time and i indexes student

The nlme code is

tt <- lme(response ~ time, data, random = ~ time | ID)

With this, I can extract the growth rate for each individual in the data set. Conceptually this is the sum of the main effect for time plus the empirical Bayes estimate for each individual: \beta_0 + \beta_{i}. I can use coef(tt, ...) to extract these coefficients. Now, assume that I do not want to include random effects associated with the slope and intercept, but instead use a gls to account for the variances and covariances through an unstructured covariance matrix. For example, assume the following model fit to the same data

Y_{ti} = \mu + \beta_0 * (time) + \epsilon_{ti}; where e ~ N(0, \Sigma)

with \Sigma forming a more complex covariance matrix. We can use the gls option as follows, for example:

tt1 <- gls(response ~ time, data,
           correlation = corSymm(form = ~ 1 | ID),
           weights = varIdent(form = ~ 1 | time))

On p. 254 of PB, they note that the mixed model gives, as a by-product, estimates for the random effects, which may be of interest in themselves. And in my situation they are. Specifically, I want to estimate the growth rate for each individual student. My question boils down to: 1) Is there any way possible to extract or to compute (estimate) the growth rate of individual i when the data have been modeled using gls? 2) Can anyone point me to an example or reference where this has been done?
I have searched but have really turned up empty-handed. It seems that there must be a methodology for doing so, as we are accomplishing a similar task. Would there be information in the new covariance matrix, \Sigma, that would help play this role?

These models only illustrate the issue; the actual model I am dealing with is more complex, but the issue generalizes. Fitting random effects in the current model I am dealing with is not a particularly attractive solution. I actually have the issue laid out in more detail in a paper I am working on and would be happy to share it if requested.

I would appreciate any thoughts you might have on this problem.

Harold
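[Editor's note: for reference, the random-effects route the question already uses can be sketched with the Orthodont data that ships with nlme; this shows the coef() extraction the poster describes, not an answer to the gls question itself:]

```r
library(nlme)

# Random-slope growth model on the built-in Orthodont data
# (distance measured at several ages for each Subject)
fit <- lme(distance ~ age, data = Orthodont, random = ~ age | Subject)

# coef() returns, per subject, the fixed effect plus the empirical
# Bayes estimate -- i.e. beta_0 + beta_i for the slope on age
subject_coefs <- coef(fit)

# The per-subject growth rates are the 'age' column
growth_rates <- subject_coefs[["age"]]
```

By contrast, a gls fit has no `random` argument and hence nothing analogous for coef() to combine, which is exactly the difficulty the question raises.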
RE: [R] nlme vs gls
One thing to be aware of (as Pinheiro and Bates point out on the same page) is that the general random-effects and gls models are not nested. This means that the general covariance matrix you estimate with gls may not correspond to *any* random-effects model. In that case there are no subject-specific coefficients (e.g. slopes) in the random-effects sense.

Rich Raubertas

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Doran, Harold
Sent: Friday, October 08, 2004 1:27 PM
To: [EMAIL PROTECTED]
Subject: [R] nlme vs gls
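[Editor's note: the non-nesting point can be made concrete on the Orthodont data. A random-effects fit implies a structured marginal covariance, Z D Z' + sigma^2 I, while the gls fit below (mirroring the corSymm/varIdent specification in the original post) estimates a fully unstructured one; a sketch, assuming both models converge on these data:]

```r
library(nlme)

# Random-effects fit: its implied within-subject covariance is
# constrained to the form Z D Z' + sigma^2 I
fit_lme <- lme(distance ~ age, data = Orthodont, random = ~ age | Subject)
getVarCov(fit_lme, type = "marginal", individuals = "M01")

# gls fit with an unstructured within-subject covariance: this
# estimated matrix need not match Z D Z' + sigma^2 I for any D,
# which is why the two model classes are not nested
fit_gls <- gls(distance ~ age, data = Orthodont,
               correlation = corSymm(form = ~ 1 | Subject),
               weights = varIdent(form = ~ 1 | age))
getVarCov(fit_gls)
```

Comparing the two printed matrices shows what the gls model can represent that the random-effects model cannot (and, per Rich's point, why no subject-specific slopes exist in the gls case).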