Re: [R] lme ->Error in getGroups.data.frame(dataMix, groups)

2018-11-24 Thread Bert Gunter
In brief, your random effects formula syntax is wrong. You need to (re)study
?lme or suitable tutorials for details of how to do what you want --
if you can with lme (e.g. crossed random effects are very difficult in lme,
much easier in lmer).
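
A minimal, untested sketch of the same random structure in lme4, assuming the
data frame d1 and the variable names from the post below (an illustration, not
code from the original exchange):

## In lmer() each (1 | ...) term is an independent variance component,
## so the two nestings can simply be listed side by side.
library(lme4)
m <- lmer(FinH ~ SoilNkh + Site + dDDSP + dDDSP2 + Provenance +
            Site:Provenance +
            (1 | Provenance:Genotype) +   # Genotype nested within Provenance
            (1 | Site:Block),             # Block nested within Site
          data = d1)
anova(m)   # sequential (Type I) table for the fixed effects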

However, you should probably re-post on the r-sig-mixed-models list to
receive better and *more expert* help.

Cheers,
Bert


On Sat, Nov 24, 2018 at 2:31 PM Boy Possen  wrote:

> The basic idea is to create a linear model in R such that FinH is explained
> by SoilNkh, dDDSP, dDDSP2, Provenance, Site, Genotype and Block, where
> SoilNkh, dDDSP and dDDSP2 are continuous covariates, Provenance, Site,
> Genotype and Block are factors, Site and Provenance are fixed and Genotype
> and Block are random. Also, Genotype is nested within Provenance and Block
> within Site.
>
> Since the order in which the variables enter is important, it should be a
> Type I ANOVA with the terms in the following order:
>
> FinH~SoilNkh,Site,dDDSP,dDDSP2,Provenance,Site:Provenance,Provenance/Genotype,Site/Block
>
>
> For the fixed part I am okay with either:
>
> test31 <-lm(FinH~SoilNkh + Site + dDDSP + dDDSP2 + Provenance +
> Site:Provenance ,data=d1)
>
> test32 <- aov(FinH~SoilNkh + Site + dDDSP + dDDSP2 + Provenance +
> Site:Provenance, data=d1)
>
> When trying to specify the random-part, taking the above text as starting
> point, trouble starts :)
>
> I feel it should be of the form:
>
> test64 <- lme(FinH~SoilNkh + Site + dDDSP + dDDSP2 + Provenance +
> Site:Provenance,
> random = ~1|Provenance/Genotype + ~1|Site/Block,data=d1)
>
> but I can't avoid the error
>
> "Error in getGroups.data.frame(dataMix, groups) : invalid formula for
> groups"
>
> I am lost for clues, really, so any advice would be great! If any data
> should be supplied, I'd be happy to provide of course, but can't (yet)
> figure out how...
>
> Thanks in advance for your time.
>
>
> --
> B.J.H.M. Possen
>
> [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme ->Error in getGroups.data.frame(dataMix, groups)

2018-11-24 Thread Boy Possen
The basic idea is to create a linear model in R such that FinH is explained
by SoilNkh, dDDSP, dDDSP2, Provenance, Site, Genotype and Block, where
SoilNkh, dDDSP and dDDSP2 are continuous covariates, Provenance, Site,
Genotype and Block are factors, Site and Provenance are fixed and Genotype
and Block are random. Also, Genotype is nested within Provenance and Block
within Site.

Since the order in which the variables enter is important, it should be a
Type I ANOVA with the terms in the following order:
FinH~SoilNkh,Site,dDDSP,dDDSP2,Provenance,Site:Provenance,Provenance/Genotype,Site/Block


For the fixed part I am okay with either:

test31 <-lm(FinH~SoilNkh + Site + dDDSP + dDDSP2 + Provenance +
Site:Provenance ,data=d1)

test32 <- aov(FinH~SoilNkh + Site + dDDSP + dDDSP2 + Provenance +
Site:Provenance, data=d1)

When trying to specify the random-part, taking the above text as starting
point, trouble starts :)

I feel it should be of the form:

test64 <- lme(FinH~SoilNkh + Site + dDDSP + dDDSP2 + Provenance +
Site:Provenance,
random = ~1|Provenance/Genotype + ~1|Site/Block,data=d1)

but I can't avoid the error

"Error in getGroups.data.frame(dataMix, groups) : invalid formula for
groups"

I am lost for clues, really, so any advice would be great! If any data
should be supplied, I'd be happy to provide of course, but can't (yet)
figure out how...

Thanks in advance for your time.


-- 
B.J.H.M. Possen

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme to determine if there is a group effect

2016-08-25 Thread Thierry Onkelinx
Dear John,

lme() no longer requires a groupedData object. You can use a data.frame
directly, which makes it easier to specify different models.

You want something like

lme(value ~ time * group, random = ~ time|SS, data = data1)
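
For example, a sketch (untested, assuming the data1 layout quoted below) of
where the group and group-by-time effects would then appear:

library(nlme)
fit <- lme(value ~ time * group, random = ~ time | SS, data = data1)
summary(fit)$tTable   # rows for the group and time:group terms
anova(fit)            # sequential F-tests, including group and time:group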

PS Note that the R-sig-mixed-models list is better suited for this kind of question.

Best regards,

ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Research Institute for Nature and
Forest
team Biometrie & Kwaliteitszorg / team Biometrics & Quality Assurance
Kliniekstraat 25
1070 Anderlecht
Belgium

To call in the statistician after the experiment is done may be no more
than asking him to perform a post-mortem examination: he may be able to say
what the experiment died of. ~ Sir Ronald Aylmer Fisher
The plural of anecdote is not data. ~ Roger Brinner
The combination of some data and an aching desire for an answer does not
ensure that a reasonable answer can be extracted from a given body of data.
~ John Tukey

2016-08-25 0:46 GMT+02:00 John Sorkin :

> I apologize for sending this message again. The last time I sent it, the
> subject line was not correct. I have corrected the subject line.
>
> I am trying to run a repeated measures analysis of data in which each
> subject (identified by SS) has 3 observations at three different times (0,
> 3, and 6). There are two groups of subjects (identified by group). I want
> to know if the response differs in the two groups. I have tried to use
> lme. lme tells me if there is a time effect, but does not tell me if there
> is a group effect. Once I get this to work I will want to know if there is
> a significant group*time effect. Can someone tell me how to get an estimate
> for group? Once I get that, I believe getting an estimate for group*time
> should be straightforward. The code I have tried to use follows.
> Thank you,
> John
>
> > # This is my data
> > data1
>SS group time value baseline
> 1   1  Cont0  9.00 9.00
> 2   2  Cont0  3.00 3.00
> 3   3  Cont0  8.00 8.00
> 4   4  Inte0  5.690702 5.690702
> 5   5  Inte0  7.409493 7.409493
> 6   6  Inte0  7.428018 7.428018
> 7   1  Cont3 13.713148 9.00
> 8   2  Cont3  9.841107 3.00
> 9   3  Cont3 12.843236 8.00
> 10  4  Inte3  9.300899 5.690702
> 11  5  Inte3 10.936389 7.409493
> 12  6  Inte3 12.358499 7.428018
> 13  1  Cont6 18.952390 9.00
> 14  2  Cont6 15.091527 3.00
> 15  3  Cont6 17.578812 8.00
> 16  4  Inte6 12.325499 5.690702
> 17  5  Inte6 15.486513 7.409493
> 18  6  Inte6 18.284965 7.428018
> > # Create a grouped data object. SS identifies each subject
> > # group indentifies group, intervention or control.
> > GD<- groupedData(value~time|SS/group,data=data1,FUN=mean)
> > # Fit the model.
> > fit1 <- lme(GD)
> > cat("The results give information about time, but does not say if the
> groups are different\n")
> The results give information about time, but does not say if the groups
> are different
> > summary(fit1)
> Linear mixed-effects model fit by REML
>  Data: GD
>AIC  BIClogLik
>   74.59447 81.54777 -28.29724
>
> Random effects:
>  Formula: ~time | SS
>  Structure: General positive-definite
> StdDevCorr
> (Intercept) 1.3875111 (Intr)
> time0.2208046 -0.243
>
>  Formula: ~time | group %in% SS
>  Structure: General positive-definite
> StdDevCorr
> (Intercept) 1.3875115 (Intr)
> time0.2208051 -0.243
> Residual0.3800788
>
> Fixed effects: value ~ time
>Value Std.Error DF   t-value p-value
> (Intercept) 6.747442 0.8135067 11  8.294268   0
> time1.588653 0.1326242 11 11.978601   0
>  Correlation:
>  (Intr)
> time -0.268
>
> Standardized Within-Group Residuals:
> Min  Q1 Med  Q3 Max
> -1.11412947 -0.44986535  0.08034174  0.34615610  1.29943887
>
> Number of Observations: 18
> Number of Groups:
>SS group %in% SS
> 6 6
>
>
>
> >
> John David Sorkin M.D., Ph.D.
> Professor of Medicine
> Chief, Biostatistics and Informatics
> University of Maryland School of Medicine Division of Gerontology and
> Geriatric Medicine
> Baltimore VA Medical Center
> 10 North Greene Street
> GRECC (BT/18/GR)
> Baltimore, MD 21201-1524
> (Phone) 410-605-7119
> (Fax) 410-605-7913 (Please call phone number above prior to faxing)
>
> Confidentiality Statement:
> This email message, including any attachments, is for ...{{dropped:16}}

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme to determine if there is a group effect

2016-08-24 Thread Bert Gunter
I never used the groupedData structure, precisely because I found it
confusing, but I think:

1. group is *not* a (random) grouping variable; it's a fixed effect covariate.
2. so I believe your groupedData call should be:

GD <- groupedData(value~time|SS, data=data1, outer = ~ group)

Of course, as you did not give us your data in a convenient form, I
can't check. Please let us know if this is wrong, however, as I don't
want to mislead others down the primrose path.


Cheers,
Bert

Bert Gunter

"The trouble with having an open mind is that people keep coming along
and sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )


On Wed, Aug 24, 2016 at 3:46 PM, John Sorkin
 wrote:
> I apologize for sending this message again. The last time I sent it, the 
> subject line was not correct. I have corrected the subject line.
>
> I am trying to run a repeated measures analysis of data in which each subject 
> (identified by SS) has 3 observations at three different times (0, 3, and 6). 
> There are two groups of subjects (identified by group). I want to know if the 
> response differs in the two groups. I have tried to use lme. lme tells me if 
> there is a time effect, but does not tell me if there is a group effect. Once 
> I get this to work I will want to know if there is a significant group*time 
> effect. Can someone tell me how to get an estimate for group? Once I get 
> that, I believe getting an estimate for group*time should be 
> straightforward. The code I have tried to use follows.
> Thank you,
> John
>
>> # This is my data
>> data1
>SS group time value baseline
> 1   1  Cont0  9.00 9.00
> 2   2  Cont0  3.00 3.00
> 3   3  Cont0  8.00 8.00
> 4   4  Inte0  5.690702 5.690702
> 5   5  Inte0  7.409493 7.409493
> 6   6  Inte0  7.428018 7.428018
> 7   1  Cont3 13.713148 9.00
> 8   2  Cont3  9.841107 3.00
> 9   3  Cont3 12.843236 8.00
> 10  4  Inte3  9.300899 5.690702
> 11  5  Inte3 10.936389 7.409493
> 12  6  Inte3 12.358499 7.428018
> 13  1  Cont6 18.952390 9.00
> 14  2  Cont6 15.091527 3.00
> 15  3  Cont6 17.578812 8.00
> 16  4  Inte6 12.325499 5.690702
> 17  5  Inte6 15.486513 7.409493
> 18  6  Inte6 18.284965 7.428018
>> # Create a grouped data object. SS identifies each subject
>> # group indentifies group, intervention or control.
>> GD<- groupedData(value~time|SS/group,data=data1,FUN=mean)
>> # Fit the model.
>> fit1 <- lme(GD)
>> cat("The results give information about time, but does not say if the gruops 
>> are different\n")
> The results give information about time, but does not say if the gruops are 
> different
>> summary(fit1)
> Linear mixed-effects model fit by REML
>  Data: GD
>AIC  BIClogLik
>   74.59447 81.54777 -28.29724
>
> Random effects:
>  Formula: ~time | SS
>  Structure: General positive-definite
> StdDevCorr
> (Intercept) 1.3875111 (Intr)
> time0.2208046 -0.243
>
>  Formula: ~time | group %in% SS
>  Structure: General positive-definite
> StdDevCorr
> (Intercept) 1.3875115 (Intr)
> time0.2208051 -0.243
> Residual0.3800788
>
> Fixed effects: value ~ time
>Value Std.Error DF   t-value p-value
> (Intercept) 6.747442 0.8135067 11  8.294268   0
> time1.588653 0.1326242 11 11.978601   0
>  Correlation:
>  (Intr)
> time -0.268
>
> Standardized Within-Group Residuals:
> Min  Q1 Med  Q3 Max
> -1.11412947 -0.44986535  0.08034174  0.34615610  1.29943887
>
> Number of Observations: 18
> Number of Groups:
>SS group %in% SS
> 6 6
>
>
>
>>
> John David Sorkin M.D., Ph.D.
> Professor of Medicine
> Chief, Biostatistics and Informatics
> University of Maryland School of Medicine Division of Gerontology and 
> Geriatric Medicine
> Baltimore VA Medical Center
> 10 North Greene Street
> GRECC (BT/18/GR)
> Baltimore, MD 21201-1524
> (Phone) 410-605-7119
> (Fax) 410-605-7913 (Please call phone number above prior to faxing)
>
> Confidentiality Statement:
> This email message, including any attachments, is for ...{{dropped:12}}

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme to determine if there is a group effect

2016-08-24 Thread John Sorkin
I apologize for sending this message again. The last time I sent it, the 
subject line was not correct. I have corrected the subject line.
 
I am trying to run a repeated measures analysis of data in which each subject 
(identified by SS) has 3 observations at three different times (0, 3, and 6). 
There are two groups of subjects (identified by group). I want to know if the 
response differs in the two groups. I have tried to use lme. lme tells me if 
there is a time effect, but does not tell me if there is a group effect. Once I 
get this to work I will want to know if there is a significant group*time 
effect. Can someone tell me how to get an estimate for group? Once I get that, 
I believe getting an estimate for group*time should be straightforward. The 
code I have tried to use follows.
Thank you,
John
 
> # This is my data
> data1
   SS group time value baseline
1   1  Cont0  9.00 9.00
2   2  Cont0  3.00 3.00
3   3  Cont0  8.00 8.00
4   4  Inte0  5.690702 5.690702
5   5  Inte0  7.409493 7.409493
6   6  Inte0  7.428018 7.428018
7   1  Cont3 13.713148 9.00
8   2  Cont3  9.841107 3.00
9   3  Cont3 12.843236 8.00
10  4  Inte3  9.300899 5.690702
11  5  Inte3 10.936389 7.409493
12  6  Inte3 12.358499 7.428018
13  1  Cont6 18.952390 9.00
14  2  Cont6 15.091527 3.00
15  3  Cont6 17.578812 8.00
16  4  Inte6 12.325499 5.690702
17  5  Inte6 15.486513 7.409493
18  6  Inte6 18.284965 7.428018
> # Create a grouped data object. SS identifies each subject
> # group indentifies group, intervention or control.
> GD<- groupedData(value~time|SS/group,data=data1,FUN=mean)
> # Fit the model.
> fit1 <- lme(GD)
> cat("The results give information about time, but does not say if the gruops 
> are different\n")
The results give information about time, but does not say if the gruops are 
different
> summary(fit1)
Linear mixed-effects model fit by REML
 Data: GD 
   AIC  BIClogLik
  74.59447 81.54777 -28.29724

Random effects:
 Formula: ~time | SS
 Structure: General positive-definite
StdDevCorr  
(Intercept) 1.3875111 (Intr)
time0.2208046 -0.243

 Formula: ~time | group %in% SS
 Structure: General positive-definite
StdDevCorr  
(Intercept) 1.3875115 (Intr)
time0.2208051 -0.243
Residual0.3800788   

Fixed effects: value ~ time 
   Value Std.Error DF   t-value p-value
(Intercept) 6.747442 0.8135067 11  8.294268   0
time1.588653 0.1326242 11 11.978601   0
 Correlation: 
 (Intr)
time -0.268

Standardized Within-Group Residuals:
Min  Q1 Med  Q3 Max 
-1.11412947 -0.44986535  0.08034174  0.34615610  1.29943887 

Number of Observations: 18
Number of Groups: 
   SS group %in% SS 
6 6 



> 
John David Sorkin M.D., Ph.D.
Professor of Medicine
Chief, Biostatistics and Informatics
University of Maryland School of Medicine Division of Gerontology and Geriatric 
Medicine
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524
(Phone) 410-605-7119
(Fax) 410-605-7913 (Please call phone number above prior to faxing) 

Confidentiality Statement:
This email message, including any attachments, is for the sole use of the 
intended recipient(s) and may contain confidential and privileged information. 
Any unauthorized use, disclosure or distribution is prohibited. If you are not 
the intended recipient, please contact the sender by reply email and destroy 
all copies of the original message. 
__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme() - MEEM error (singularity in Backsolve) due to user-specified contrasts amount

2016-02-23 Thread Bert Gunter
You are probably overfitting.
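
A minimal sketch of one way the singularity can arise, using a hypothetical
3-level factor rather than the poster's data: a k-level factor carries only
k - 1 degrees of freedom, so at most k - 1 linearly independent contrast
columns can be estimated, and four columns for a 3-level factor are
necessarily rank-deficient.

F_B  <- factor(c("prize1", "prize2", "neutral"),
               levels = c("prize1", "prize2", "neutral"))
C_ok <- cbind(C1 = c(1, -1,  0),   # prize 1 vs prize 2
              C4 = c(1,  1, -2))   # pooled prize vs neutral
qr(C_ok)$rank                      # 2 = k - 1, full column rank
contrasts(F_B) <- C_ok             # accepted without error
C_bad <- cbind(C_ok, C2 = c(1, 0, -1), C3 = c(1, 0, -1))
qr(C_bad)$rank                     # still 2, so four columns are rank-deficient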

This *IS* a statistical and not an R issue, and so does not belong
here. You MAY get useful help by posting on the R-SIG-mixed-models
list, however. But PLEASE post in *plain text*, not HTML, as the
posting guide asks.

Cheers,
Bert
Bert Gunter

"The trouble with having an open mind is that people keep coming along
and sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )


On Tue, Feb 23, 2016 at 2:34 AM, Daniel Preciado  wrote:
> Hello all,
>
> I posted this question also in Stackoverflow, but in hindsight I realise it 
> probably fits better here. I am trying to use lme() to fit and compare 
> different models to data from an experiment in a repeated measures design. My 
> dependent variable is response time (RT, in milliseconds); and I have 2 
> factors: F_A (2 levels) and F_B (3 Levels). For F_B, I have specified the 
> following contrasts:
> F_B_C1 <- c(1, -1, 0)  # Contrast prize 1 and 2 levels
> F_B_C2 <- c(1, 0, -1)  # Contrast prize 1 with Neutral (no prize)
> F_B_C3 <- c(1, 0, -1)  # Contrast prize 2 with Neutral (no prize)
> F_B_C4 <- c(1, 1, -2)  # Contrast prize with Neutral
> contrasts(Data$F_B, how.many=4) <- cbind(F_B_C1, F_B_C2, F_B_C3, F_B_C4)
> Conditions 1 and 2 are 2 levels of the same manipulation, condition 3 is a 
> neutral control. I am interested in the effect of each level (individually) 
> on RT, and overall in the difference between the experimental manipulation 
> (pooling the first 2 conditions of factor B) and the control condition (final 
> condition of factor B).
>
> I defined the lme() models step-wise, starting with a Baseline model, and 
> then updating that one to include each factor individually, and finally the 
> interaction:
> RT_Base <- lme(RT ~ 1, random = ~1|SubjID/F_A/F_B, data=Data, method="ML")  
> #Baseline model
> RT_F_A <- update(RT_Base, .~. + F_A)#Baseline + F_A
> RT_F_B <- update(RT_F_A, .~. + F_B) #(Baseline+F_A) + F_B
> RT_Full <- update(RT_F_B, .~. + F_A:F_B)#Full model (+ interaction)
> However, when I execute the code involving F_B, I get an
> "Error in MEEM (...): Singularity in Backsolve at level 0, block 1).
> I can still inspect the results of the model, but I would like to understand 
> where is this error coming from, what does it mean, and how to avoid it. 
> Furthermore, I realized that if I reduce the amount of contrasts to the 
> default 2, the code runs without any error, so I can only assume that it has 
> something to do with the user-specified comparison pairs. Also, the specified 
> contrasts are not displayed (only the default first 2).
>
> I also read in some answer that the intercept needed to be suppressed in 
> order to prevent this error (by adding RT ~ 0+Factors to the model formulae). 
> I tried that, but it produces the same error.
>
> I would appreciate any feedback regarding this, Thanks!
>
>
>
> [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme function to obtain pvalue for fixed effect

2015-05-26 Thread Ben Bolker
li li hannah.hlx at gmail.com writes:

 
 Hi all,
   I am using the lme function to run a random coefficient model.
 Please see
 output (mod1) as below.

  Please don't cross-post to different R lists (this is explicitly
deprecated by the list policy: http://www.r-project.org/mail.html, 
cross-posting is considered to be impolite).  r-sig-mixed-models
seems to be more appropriate for these questions.

  sincerely
Ben Bolker

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme function to obtain pvalue for fixed effect

2015-05-26 Thread li li
Hi all,
  I am using the lme function to run a random coefficient model. Please see
output (mod1) as below.
  I need to obtain the p-value for the fixed effect. As you can see,
the p-values given by the summary function are different from the
results given by the anova function.
Why should they be different, and which one is the correct one to use?
   Thanks!
  Hanna


> summary(mod1)
Linear mixed-effects model fit by REML
 Data: minus20C1
AIC   BIC   logLik
  -82.60042 -70.15763 49.30021

Random effects:
 Formula: ~1 + months | lot
 Structure: General positive-definite, Log-Cholesky parametrization
StdDev   Corr
(Intercept) 8.907584e-03 (Intr)
months  6.039781e-05 -0.096
Residual4.471243e-02

Fixed effects: ti ~ type * months
                  Value   Std.Error DF   t-value p-value
(Intercept)  0.25831245 0.016891587 31 15.292373  0.0000
type         0.13502089 0.026676101  4  5.061493  0.0072
months       0.00804790 0.001218941 31  6.602368  0.0000
type:months -0.00693679 0.002981859 31 -2.326329  0.0267
 Correlation:
   (Intr) typ months
type-0.633
months -0.785  0.497
type:months  0.321 -0.762 -0.409

Standardized Within-Group Residuals:
  MinQ1   MedQ3   Max
-2.162856e+00 -1.962972e-01 -2.771184e-05  3.749035e-01  2.088392e+00

Number of Observations: 39
Number of Groups: 6
> anova(mod1)
            numDF denDF   F-value p-value
(Intercept)     1    31 2084.0265  <.0001
type            1     4   10.8957  0.0299
months          1    31   38.3462  <.0001
type:months     1    31    5.4118  0.0267
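
A quick check against the output above: anova() reports sequential (Type I)
F-tests while summary() reports marginal t-tests, so the p-values need not
agree term by term; for the last single-degree-of-freedom term they coincide,
with F equal to t squared.

(-2.326329)^2   # = 5.4118, the type:months F-value reported by anova(mod1)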

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme random interactions

2015-03-17 Thread Thierry Onkelinx
Dear Brian,

You want

data$CompLab <- interaction(data$Compound, data$Lab)
lme(data = data, Resp ~ Lab * Compound,
    random = list(CompLab = ~ 1, Date = pdIdent(~ 0 + Lab)),
    weights = varIdent(form = ~ 1 | Lab)
)

Note that this is untested since you didn't provide a reproducible example.

However, you have only very few levels of Date. See "Should I treat factor
xxx as fixed or random?" on http://glmm.wikidot.com/faq
Furthermore, you are estimating a lot of parameters. Make sure that you have
enough data.

Best regards,

ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Research Institute for Nature and
Forest
team Biometrie & Kwaliteitszorg / team Biometrics & Quality Assurance
Kliniekstraat 25
1070 Anderlecht
Belgium

To call in the statistician after the experiment is done may be no more
than asking him to perform a post-mortem examination: he may be able to say
what the experiment died of. ~ Sir Ronald Aylmer Fisher
The plural of anecdote is not data. ~ Roger Brinner
The combination of some data and an aching desire for an answer does not
ensure that a reasonable answer can be extracted from a given body of data.
~ John Tukey

2015-03-16 11:10 GMT+01:00 Middleton, Brian J 
brian.middle...@astrazeneca.com:

 I have a method comparison problem, comparing Labs where a set of
 compounds are assayed on 3 different dates for each lab. Both labs will be
 used to assess compounds in the future, so the scientists will potentially
 contrast a compound assayed at Lab A with one assayed at Lab B. This
 implies I ought to regard the Lab*Compound interaction as random. I also
 have the date within Lab as a random term and the Compound*date as random
 (and as separate variances for each Lab).

 If I regard the Compound*Lab effect as fixed this code works

 lme.out <- lme ( data=data, Resp ~ Lab + Compound + Compound:Lab,
random = pdIdent(~Lab-1|Date)  ,
 weights = varIdent(form=~1|Lab)
 )

 The trouble is when I try to regard it as random, eg.

 lme2.out <- lme ( data=data, Resp ~ Lab + Compound,
random = list( ~Compound:Lab,  pdIdent(~Lab-1|Date)  ),
 weights = varIdent(form=~1|Lab)
   )

 It appears as if the random interaction is not allowed ... Is this right ?
 Is there a way to fit the interaction as random together with the other
 random terms ?

 I have tried lme4 but note that lme4 does not currently implement nlme's
 features for modeling heteroscedasticity but does implement crossed
 random effects. No joy in my hands though. Nor with lmer ...

 Any help gratefully received, thanks,

 Brian (trying to convert from SAS !)



 

 AstraZeneca UK Limited is a company incorporated in Engl...{{dropped:26}}

 __
 R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme random interactions

2015-03-16 Thread Middleton, Brian J
I have a method comparison problem, comparing Labs where a set of compounds are 
assayed on 3 different dates for each lab. Both labs will be used to assess 
compounds in the future, so the scientists will potentially contrast a compound 
assayed at Lab A with one assayed at Lab B. This implies I ought to regard 
the Lab*Compound interaction as random. I also have the date within Lab as a 
random term and the Compound*date as random (and as separate variances for each 
Lab).

If I regard the Compound*Lab effect as fixed this code works

lme.out <- lme ( data=data, Resp ~ Lab + Compound + Compound:Lab,
   random = pdIdent(~Lab-1|Date)  ,
weights = varIdent(form=~1|Lab)
)

The trouble is when I try to regard it as random, eg.

lme2.out <- lme ( data=data, Resp ~ Lab + Compound,
   random = list( ~Compound:Lab,  pdIdent(~Lab-1|Date)  ),
weights = varIdent(form=~1|Lab)
  )

It appears as if the random interaction is not allowed ... Is this right ? Is 
there a way to fit the interaction as random together with the other random 
terms ?

I have tried lme4 but note that lme4 does not currently implement nlme's 
features for modeling heteroscedasticity but does implement crossed random 
effects. No joy in my hands though. Nor with lmer ...

Any help gratefully received, thanks,

Brian (trying to convert from SAS !)





AstraZeneca UK Limited is a company incorporated in Engl...{{dropped:26}}

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme: Can not find groupData object in a function could this be a scoping problem?

2015-02-14 Thread JS Huang
Hi,

  Unless you defined SS somewhere before you execute data <-
data.frame(group=c(rep("Cont",SS),rep("Exp",SS)), pre=pre, post=post), SS is
not assigned.  Maybe TS is what you intended?

doit <- function(TS,rho,premean,presd,RxEffect) { 
. 
. 
. 
  # Prepare data frames for regression analyses. 
  data <- data.frame(group=c(rep("Cont",SS),rep("Exp",SS)), 
                     pre=pre, post=post)   



--
View this message in context: 
http://r.789695.n4.nabble.com/lme-Can-not-find-groupData-object-in-a-function-could-this-be-a-scoping-problem-tp4703243p4703247.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme: Can not find groupData object in a function could this be a scoping problem?

2015-02-13 Thread John Sorkin
I resolved my problem by restarting RStudio . . .
Thank you,
John


R 3.1.0, RStudio  0.98.95
Windows 7

I have written a function that uses lme:

doit <- function(TS,rho,premean,presd,RxEffect) {
.
.
.
  # Prepare data frames for regression analyses.
  data <- data.frame(group=c(rep("Cont",SS),rep("Exp",SS)),
                     pre=pre, post=post)  
.
.
.
  
previous <- data.frame(time=c("pre","post"),
                       cbind(subject=i, group=data[i,"group"], t(data[i,c("pre","post")])))
. 

.
.
  inter <- groupedData(value~as.integer(time)+as.integer(group)+
                         as.integer(time)*as.integer(group)|subject,
                       inner=~group, data=previous)

  print(inter)
  lmeinter <- lme(inter)
.
.
.
}

When I run the code, at the statement lmeinter <- lme(inter) I get a message:
Error in is.data.frame(data) : object 'inter' not found.
Please note that the print statement, print(inter) prints the groupedData 
object!
The code works fine when it is not in a function, i.e. 


  inter <- groupedData(value~as.integer(time)+as.integer(group)+
                         as.integer(time)*as.integer(group)|subject,
                       inner=~group, data=previous)

  lmeinter <- lme(inter)



runs and has no problem finding inter.


Can someone suggest what I might change to fix the problem? I think I may have 
a scoping problem but I am not knowledgeable enough to know (1) how to check 
this and (2) what to do to fix it.


Thanks,
John




John David Sorkin M.D., Ph.D.
Professor of Medicine
Chief, Biostatistics and Informatics
University of Maryland School of Medicine Division of Gerontology and Geriatric 
Medicine
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524
(Phone) 410-605-7119
(Fax) 410-605-7913 (Please call phone number above prior to faxing) 



John David Sorkin M.D., Ph.D.
Professor of Medicine
Chief, Biostatistics and Informatics
University of Maryland School of Medicine Division of Gerontology and Geriatric 
Medicine
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524
(Phone) 410-605-7119
(Fax) 410-605-7913 (Please call phone number above prior to faxing) 


Confidentiality Statement:
This email message, including any attachments, is for the sole use of the 
intended recipient(s) and may contain confidential and privileged information. 
Any unauthorized use, disclosure or distribution is prohibited. If you are not 
the intended recipient, please contact the sender by reply email and destroy 
all copies of the original message. 
__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme: Can not find groupData object in a function could this be a scoping problem?

2015-02-13 Thread John Sorkin
R 3.1.0, RStudio  0.98.95
Windows 7


I have written a function that uses lme:


doit <- function(TS,rho,premean,presd,RxEffect) {
.
.
.
  # Prepare data frames for regression analyses.
  data <- data.frame(group=c(rep("Cont",SS),rep("Exp",SS)),
                     pre=pre, post=post)  
.
.
.
  
previous <- data.frame(time=c("pre","post"),
                       cbind(subject=i, group=data[i,"group"], t(data[i,c("pre","post")])))
. 

.
.
  inter <- groupedData(value~as.integer(time)+as.integer(group)+
                         as.integer(time)*as.integer(group)|subject,
                       inner=~group, data=previous)

  print(inter)
  lmeinter <- lme(inter)
.
.
.
}


When I run the code, at the statement lmeinter <- lme(inter) I get a message:
Error in is.data.frame(data) : object 'inter' not found.
Please note that the print statement, print(inter) prints the groupedData 
object!
The code works fine when it is not in a function, i.e. 


  inter <- groupedData(value~as.integer(time)+as.integer(group)+
                         as.integer(time)*as.integer(group)|subject,
                       inner=~group, data=previous)

  lmeinter <- lme(inter)



runs and has no problem finding inter.


Can someone suggest what I might change to fix the problem? I think I may have 
a scoping problem but I am not knowledgeable enough to know (1) how to check 
this and (2) what to do to fix it.
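
A commonly suggested workaround, sketched here with the names from the post
(untested; the random-intercept term is an assumption, and this is not the fix
the poster eventually reported): spell out the formula, random term and data=
argument instead of passing only the groupedData object, so lme() does not
have to re-locate 'inter' when it re-evaluates its call inside the function.

  lmeinter <- lme(value ~ as.integer(time) * as.integer(group),
                  random = ~ 1 | subject,   # assumed random intercept per subject
                  data = previous)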


Thanks,
John




John David Sorkin M.D., Ph.D.
Professor of Medicine
Chief, Biostatistics and Informatics
University of Maryland School of Medicine Division of Gerontology and Geriatric 
Medicine
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524
(Phone) 410-605-7119
(Fax) 410-605-7913 (Please call phone number above prior to faxing) 


Confidentiality Statement:
This email message, including any attachments, is for the sole use of the 
intended recipient(s) and may contain confidential and privileged information. 
Any unauthorized use, disclosure or distribution is prohibited. If you are not 
the intended recipient, please contact the sender by reply email and destroy 
all copies of the original message. 
__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme slopes and intercepts differences

2013-10-09 Thread Cleber Chaves
Hello all,
I need to infer whether the slopes and intercepts of each group in my lme
analysis, shown below, are different from each other, like in a Tukey test. I
don't need to compare the slope and intercept of each subject in my fixed
data, but the slope and intercept of each group of them.


 m1 <- lme(fvfm~term*sp*env*est*time, random=~term|id, table)


Do you know how I can do this?


Thanks in advance!


Cleber Chaves

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme: object is not a matrix

2013-08-05 Thread Puschner, Bernd
Dear JM,

 

creating a data frame did indeed solve my problem. Thank you so much.

 

Best wishes

 

Bernd

 

From: zelfortin [mailto:jmichel.for...@gmail.com] 
Sent: Friday, 2 August 2013 22:10
To: r-help-arch...@googlegroups.com
Cc: r-help@r-project.org; Puschner, Bernd
Subject: Re: [R] lme: object is not a matrix

 

Hi, 

make sure your data is in a data frame:

Data <- data.frame(t, can2p, code)

Just specify in which object those specific variables are found by using object$ 
before each one.

I hope this solves the problem. If not, post your script; it's really hard to 
work with only an error message!

Cheers

JM


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme: object is not a matrix

2013-08-02 Thread Puschner, Bernd
Dear all,

 

when running in R a simple lme call (3 variables in data set) which
works perfectly fine in S-PLUS, I keep getting the error message

 

 Error in model.frame.default(formula = ~t + can2p + code, data = list(
: 

  object is not a matrix

 

Any help would be very much appreciated.

 

Thanks.

 

Bernd 

 


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme: object is not a matrix

2013-08-02 Thread Bert Gunter
R is not S-PLUS. Read ?lme.

Bert

Sent from my iPhone -- please excuse typos.

On Aug 2, 2013, at 5:45 AM, Puschner, Bernd bernd.pusch...@bkh-guenzburg.de 
wrote:

 Dear all,
 
 
 
 when running in R a simple lme call (3 variables in data set) which
 works perfectly fine in S-PLUS, I keep getting the error message
 
 
 
  Error in model.frame.default(formula = ~t + can2p + code, data = list(
 : 
 
  object is not a matrix
 
 
 
 Any help would be very much appreciated.
 
 
 
 Thanks.
 
 
 
 Bernd 
 
 
 
 
[[alternative HTML version deleted]]
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme: object is not a matrix

2013-08-02 Thread zelfortin
Hi, 

make sure your data is in a data frame:

Data <- data.frame(t, can2p, code)

Just specify in which object those specific variables are found by using 
object$ before each one.

I hope this solves the problem. If not, post your script; it's really hard 
to work with only an error message!
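
For example, a hedged sketch with a hypothetical model (only the variable
names t, can2p and code come from the error message; the formula and random
term are assumptions for illustration):

library(nlme)
Data <- data.frame(t = t, can2p = can2p, code = code)  # gather the vectors
fit  <- lme(can2p ~ t, data = Data, random = ~ 1 | code)
summary(fit)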

Cheers

JM
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme (weights) and glht

2013-07-26 Thread ONKELINX, Thierry
Dear Sibylle,

Have you tried to create a new variable?

ME$fDiversity <- factor(ME$Diversity)

H08_lme <- lme(
  log(Height2008_mean) ~ fDiversity,
  data = ME,
  random = ~ 1 | Plot/SubPlot,
  weights = varPower(form = ~Diversity),
  na.action = na.omit,
  subset = ME$Species == "Ace_pse",
  method = "ML"
)
summary(H08_lme)
anova(H08_lme)
g_H08_lme <- glht(H08_lme, linfct = mcp(fDiversity = "Tukey"))

Please note that I’ve added (a lot of) whitespace to your code to make it 
easier to read.

Best regards,

Thierry

ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Research Institute for Nature and 
Forest
team Biometrie  Kwaliteitszorg / team Biometrics  Quality Assurance
Kliniekstraat 25
1070 Anderlecht
Belgium
+ 32 2 525 02 51
+ 32 54 43 61 85
thierry.onkel...@inbo.bemailto:thierry.onkel...@inbo.be
www.inbo.behttp://www.inbo.be/

To call in the statistician after the experiment is done may be no more than 
asking him to perform a post-mortem examination: he may be able to say what the 
experiment died of.
~ Sir Ronald Aylmer Fisher

The plural of anecdote is not data.
~ Roger Brinner

The combination of some data and an aching desire for an answer does not ensure 
that a reasonable answer can be extracted from a given body of data.
~ John Tukey

From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On behalf of 
Sibylle Stöckli
Sent: Thursday, 25 July 2013 12:16
To: r-help@r-project.org
Subject: [R] lme (weights) and glht

Dear R members,

I tried to fit an lme model and to use the glht function of multcomp.
However, the glht function gives me some errors when using
weights=varPower().
The glht error makes sense as glht needs factor levels and the model
works fine without weights=. Does anyone know a solution so I do not
have to change the lme model?

Thanks
Sibylle

-- works fine
ME$Diversity=factor(ME$Diversity)
H08_lme<-lme(log(Height2005_mean)~Diversity, data=ME, random=~1|Plot/
SubPlot, na.action=na.omit, subset=ME$Species=="Pse_men", method="ML")
summary(H08_lme)
anova(H08_lme)
g_H08_lme<-glht(H08_lme, linfct=mcp(Diversity="Tukey"))
print(summary(g_H08_lme))

-- using lme with weights I changed the order of factor() and
introduced as.factor in the model

H08_lme<-lme(log(Height2008_mean)~as.factor(Diversity), data=ME,
random=~1|Plot/SubPlot, weights=varPower(form=~Diversity),
na.action=na.omit, subset=ME$Species=="Ace_pse", method="ML")
summary(H08_lme)
anova(H08_lme)
ME$Diversity=factor(ME$Diversity)
g_H08_lme<-glht(H08_lme, linfct=mcp(Diversity="Tukey"))

Error in mcp2matrix(model, linfct = linfct) :
   Variable(s) 'Diversity' have been specified in 'linfct' but cannot
be found in 'model'!
[[alternative HTML version deleted]]
__
R-help@r-project.orgmailto:R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
* * * * * * * * * * * * * D I S C L A I M E R * * * * * * * * * * * * *
Dit bericht en eventuele bijlagen geven enkel de visie van de schrijver weer en 
binden het INBO onder geen enkel beding, zolang dit bericht niet bevestigd is 
door een geldig ondertekend document.
The views expressed in this message and any annex are purely those of the 
writer and may not be regarded as stating an official position of INBO, as long 
as the message is not confirmed by a duly signed document.

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme (weights) and glht

2013-07-25 Thread Sibylle Stöckli
Dear R members,

I tried to fit an lme model and to use the glht function of multcomp.  
However, the glht function gives me some errors when using  
weights=varPower().
The glht error makes sense as glht needs factor levels and the model  
works fine without weights=. Does anyone know a solution so I do not  
have to change the lme model?

Thanks
Sibylle

-- works fine
ME$Diversity=factor(ME$Diversity)
H08_lme<-lme(log(Height2005_mean)~Diversity, data=ME, random=~1|Plot/
SubPlot, na.action=na.omit, subset=ME$Species=="Pse_men", method="ML")
summary(H08_lme)
anova(H08_lme)
g_H08_lme<-glht(H08_lme, linfct=mcp(Diversity="Tukey"))
print(summary(g_H08_lme))

-- using lme with weights I changed the order of factor() and
introduced as.factor in the model

H08_lme<-lme(log(Height2008_mean)~as.factor(Diversity), data=ME,
random=~1|Plot/SubPlot, weights=varPower(form=~Diversity),
na.action=na.omit, subset=ME$Species=="Ace_pse", method="ML")
summary(H08_lme)
anova(H08_lme)
ME$Diversity=factor(ME$Diversity)
g_H08_lme<-glht(H08_lme, linfct=mcp(Diversity="Tukey"))

Error in mcp2matrix(model, linfct = linfct) :
   Variable(s) ‘Diversity’ have been specified in ‘linfct’ but cannot  
be found in ‘model’! 
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme function cannot find object

2013-06-06 Thread Ben Bolker
Pfeiffer, Steven pfeiffss at miamioh.edu writes:


 I have been using the function lme() from package 'nlme' for several months
 now without any problems.  Suddenly, it cannot find a factor in my data.
 Is this a new bug of some kind?  My code and output are below.
 Thanks for your help!
 -Steve Pfeiffer

  You have an extra comma at the end of the first line of
your lme code (indicated with ^^^), so R thinks your interaction term is a
separate argument and tries to interpret it outside the context
of the formula ...

library(nlme)
fit.1 <- lme(soil.moisture ~ Trenching + DaysSinceEvent,
                                                     ^^^
 + Trenching:DaysSinceEvent,
 random = ~ DaysSinceEvent | Plot,
 data=Dat.middle,
 method="ML"
)
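
The corrected call would then read (a sketch following directly from the
diagnosis above, untested against the poster's data):

fit.1 <- lme(soil.moisture ~ Trenching + DaysSinceEvent +
               Trenching:DaysSinceEvent,   # interaction kept inside the formula
             random = ~ DaysSinceEvent | Plot,
             data = Dat.middle,
             method = "ML")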

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme function cannot find object

2013-06-05 Thread Pfeiffer, Steven
Hello,
I have been using the function lme() from package 'nlme' for several months
now without any problems.  Suddenly, it cannot find a factor in my data.
Is this a new bug of some kind?  My code and output are below.
Thanks for your help!
-Steve Pfeiffer

require(xlsx)
require(nlme)

Dat.middle <- read.xlsx(
   "C:\\Users\\S\\Google Drive\\RESEARCH (flash
drive)\\Data\\sm reorganized 2.xlsx",
   sheetName="Middle R Friendly", colIndex=1:5)

class(Dat.middle)
  [1] "data.frame"

dim(Dat.middle)
  [1] 380   5

names(Dat.middle)
  [1] "Honeysuckle"    "Trenching"      "Plot"           "DaysSinceEvent"
  [5] "soil.moisture"

Dat.middle[1:10,]
   Honeysuckle Trenching  Plot DaysSinceEvent soil.moisture
1         Ctrl        tr TE.tr              0      15.21690
2         Ctrl        tr TE.tr              1      16.68770
3         Ctrl        tr TE.tr              2      16.11459
4         Ctrl        tr TE.tr              3      14.57441
5         Ctrl        tr TE.tr              4      15.16364
6         Ctrl        tr TE.tr              5      15.32327
7         Ctrl        tr TE.tr              8      13.26730
8         Ctrl        tr TE.tr              9      11.81761
9         Ctrl        tr TE.tr             10      13.87007
10        Ctrl        tr TE.tr             11      12.26743

fit.1 <- lme(soil.moisture ~ Trenching + DaysSinceEvent,
+ Trenching:DaysSinceEvent,
random = ~ DaysSinceEvent | Plot,
data=Dat.middle,
method="ML"
   )
Error in lme.formula(sm ~ Trenching + DaysSinceEvent,
+Trenching:DaysSinceEvent,  :
  object 'Trenching' not found

class(Dat.middle$Trenching)   # Oh, look, here is the factor it said it
couldn't find!
  [1] "factor"

length(Dat.middle$Trenching)
  [1] 380

Dat.middle$Trenching[1:10]
  [1] tr tr tr tr tr tr tr tr tr tr
   Levels: tr un

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme: subject-specific slopes.

2012-12-04 Thread John Sorkin
I am running a random intercept random slope regression:
 
fitRIRT <- lme(echogen~time, random=~
1+time|subject, data=repeatdata, na.action=na.omit)
summary(fitRIRT)
 
I would like to get the subject-specific slopes, i.e. the slope that the model 
computes for each subject. If I have 10 subjects I should have 10 slopes. I 
don't see the slopes when I look at the items listed in 
names(summary(fitRIRT))
nor when I look at the items listed in 
names(fitRIRT)
 
Thanks 
John
 
 

 
 
 
John David Sorkin M.D., Ph.D.
Chief, Biostatistics and Informatics
University of Maryland School of Medicine Division of Gerontology
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524
(Phone) 410-605-7119
(Fax) 410-605-7913 (Please call phone number above prior to faxing)
Confidentiality Statement:
This email message, including any attachments, is for the sole use of the 
intended recipient(s) and may contain confidential and privileged information.  
Any unauthorized use, disclosure or distribution is prohibited.  If you are not 
the intended recipient, please contact the sender by reply email and destroy 
all copies of the original message. 
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme: subject-specific slopes.

2012-12-04 Thread John Sorkin
Ken,
Thank you for your help. ranef(fitRIRT) does not give me what I expect. The 
subject-specific slopes and subject-specific intercepts are not anywhere close 
to what I would expect them to be; the mean of the subject-specific values 
should be close to those reported by 
summary(fitRIRT), and they are not. As you will see by examining the material 
below, the subject-specific slopes are off by many orders of magnitude. The 
intercepts are also far from the value reported in summary(fitRIRT). Do you 
have any thoughts?
Thanks,
John
 
 fitRIRT <- lme(echogen~time, random=~
 1+time|subject, data=repeatdata, na.action=na.omit)
 summary(fitRIRT)
 Linear mixed-effects model fit by REML
 Data: repeatdata 
       AIC      BIC    logLik
  495.097 507.2491 -241.5485

Random effects:
 Formula: ~1 + time | subject
 Structure: General positive-definite, Log-Cholesky parametrization
StdDev   Corr  
(Intercept) 1.917511e+01 (Intr)
time2.032276e-04 0 
Residual1.044601e+01   

Fixed effects: echogen ~ time 
               Value Std.Error DF   t-value p-value
(Intercept) 64.54864  4.258235 32 15.158543  0.0000
time         0.35795  0.227080 32  1.576307  0.1248
 Correlation: 
 (Intr)
time -0.242

Standardized Within-Group Residuals:
Min  Q1 Med  Q3 Max 
-1.61362755 -0.52710871  0.02948008  0.41793322  1.77340082 

Number of Observations: 58
Number of Groups: 25 
 ranef(fitRIRT)
   (Intercept)          time
1    -3.278112  2.221016e-09
2   -35.400618  4.314995e-08
3    11.493110 -6.797543e-09
4   -16.209586 -7.070834e-08
5     3.585227 -2.389705e-08
6     1.614320 -1.967700e-09
7     8.346905  5.827094e-08
8    30.917812 -3.768584e-08
9    -0.394101 -9.158251e-09
10    4.437509 -4.057971e-08
11   31.956597 -2.126275e-08
12   41.567402 -4.853942e-08
13  -10.723993  1.307152e-08
14   -4.554837  5.551908e-09
15   -4.554501  4.815086e-08
16   13.296985 -3.743967e-08
17   -8.255439  1.733238e-08
18  -21.317239  2.203885e-08
19  -13.480159  2.194016e-08
20  -13.044766  2.269168e-08
21   11.639198 -1.418706e-08
22  -27.457388 -1.154099e-08
23    2.194001 -5.509119e-09
24   -3.992646  7.682188e-08
25    1.614320 -1.967700e-09
 
 


 
 
John David Sorkin M.D., Ph.D.
Chief, Biostatistics and Informatics
University of Maryland School of Medicine Division of Gerontology
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524
(Phone) 410-605-7119
(Fax) 410-605-7913 (Please call phone number above prior to faxing) Kenneth 
Frost kfr...@wisc.edu 12/4/2012 10:44 AM 
I think this might be close to what you want. ranef(fitRIRT)

On 12/04/12, John Sorkin  wrote:
 I am running a random intercept random slope regression:
 
 fitRIRT - lme(echogen~time,random=~ 
 1+time|subject,data=repeatdata,na.action=na.omit)
 summary(fitRIRT)
 
 I would like to get the subject-specific slopes, i.e. the slope that the 
 model computes for each subject. If I have 10-subjects I should have 
 10-slopes. I don't see the slope when I look at the items listed in 
 names(summary(fitRIRT)
 nor when I look at the items listed in 
 names(fitRIRT)
 
 Thanks 
 John
 
 
 
 
 
 
 John David Sorkin M.D., Ph.D.
 Chief, Biostatistics and Informatics
 University of Maryland School of Medicine Division of Gerontology
 Baltimore VA Medical Center
 10 North Greene Street
 GRECC (BT/18/GR)
 Baltimore, MD 21201-1524
 (Phone) 410-605-7119
 (Fax) 410-605-7913 (Please call phone number above prior to faxing)
 Confidentiality Statement:
 This email message, including any attachments, is for the sole use of the 
 intended recipient(s) and may contain confidential and privileged 
 information. Any unauthorized use, disclosure or distribution is prohibited. 
 If you are not the intended recipient, please contact the sender by reply 
 email and destroy all copies of the original message. 
 
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

--
Kenneth Frost
Graduate Research Assistant - Dept. of Plant Pathology
University of Wisconsin - Madison
Lab: (608) 262-9914
Mobile: (608) 556-9637
kfr...@wisc.edu


Confidentiality Statement:
This email message, including any attachments, is for the sole use of the 
intended recipient(s) and may contain confidential and privileged information.  
Any unauthorized use, disclosure or distribution is prohibited.  If you are not 
the intended recipient, please contact the sender by reply email and destroy 
all copies of the original message. 
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme: subject-specific slopes.

2012-12-04 Thread Kenneth Frost
I think the random effects represent the subject adjustments to the population 
averages. You may have to do the addition yourself to get the subject specific 
slopes and intercepts. Someone will hopefully correct me if I'm wrong.
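
In nlme that addition is already available: coef() on an lme fit returns
fixef() plus ranef(), i.e. the per-subject intercepts and slopes. For example,
using the fitted object from this thread:

fixef(fitRIRT)   # population-average intercept and time slope
ranef(fitRIRT)   # per-subject deviations from those averages
coef(fitRIRT)    # per-subject coefficients = fixef() + ranef()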

On 12/04/12, John Sorkin  wrote:
 
 
 
 Ken,
 Thank you for your help. ranef(fitRIRT) does not give me what I expect. The 
 subject-specific slopes, and subject-specific intercepts are not anywhere 
 close to what I would expect them to be; the mean of the subject-specfic 
 values should be close to those reported by 
 summary(fitRIRT) and they are not. As you will see by examining the material 
 below, the subject-specific slopes are off by many order of magnitude. The 
 intercepts are also far from the value reported in summary(fitRIRT). Do you 
 have any thoughts?
 Thanks,
 John
 
 
  fitRIRT - lme(echogen~time,random=~ 
  1+time|subject,data=repeatdata,na.action=na.omit)  summary(fitRIRT) Linear 
  mixed-effects model fit by REML Data: repeatdata AIC BIC logLik 495.097 
  507.2491 -241.5485 Random effects: Formula: ~1 + time | subject Structure: 
  General positive-definite, Log-Cholesky parametrization StdDev Corr 
  (Intercept) 1.917511e+01 (Intr) time 2.032276e-04 0 Residual 1.044601e+01 
  Fixed effects: echogen ~ time Value Std.Error DF t-value p-value 
  (Intercept) 64.54864 4.258235 32 15.158543 0. time 0.35795 0.227080 32 
  1.576307 0.1248 Correlation: (Intr) time -0.242 Standardized Within-Group 
  Residuals: Min Q1 Med Q3 Max -1.61362755 -0.52710871 0.02948008 0.41793322 
  1.77340082 Number of Observations: 58 Number of Groups: 25  ranef(fitRIRT) 
  (Intercept) time 1 -3.278112 2.221016e-09 2 -35.400618 4.314995e-08 3 
  11.493110 -6.797543e-09 4 -16.209586 -7.070834e-08 5 3.585227 -2.389705e-08 
  6 1.614320 -1.967700e-09 7 8.346905 5.827094e-08 8 30.917812 -3.768584e-08 
  9 -0.394101 -9.158251e-09 10 4.437509 -4.057971e-08 11 31.956597 
  -2.126275e-08 12 41.567402 -4.853942e-08 13 -10.723993 1.307152e-08 14 
  -4.554837 5.551908e-09 15 -4.554501 4.815086e-08 16 13.296985 -3.743967e-08 
  17 -8.255439 1.733238e-08 18 -21.317239 2.203885e-08 19 -13.480159 
  2.194016e-08 20 -13.044766 2.269168e-08 21 11.639198 -1.418706e-08 22 
  -27.457388 -1.154099e-08 23 2.194001 -5.509119e-09 24 -3.992646 
  7.682188e-08 25 1.614320 -1.967700e-09
 
 
 
 
 
 
 
 John David Sorkin M.D., Ph.D.
 Chief, Biostatistics and Informatics
 University of Maryland School of Medicine Division of Gerontology
 Baltimore VA Medical Center
 10 North Greene Street
 GRECC (BT/18/GR)
 Baltimore, MD 21201-1524
 (Phone) 410-605-7119
 (Fax) 410-605-7913 (Please call phone number above prior to faxing) 
 Kenneth Frost kfr...@wisc.edu 12/4/2012 10:44 AM 
 I think this might be close to what you want. ranef(fitRIRT)
 
 On 12/04/12, John Sorkin wrote:
  I am running a random intercept random slope regression:
  
  fitRIRT - lme(echogen~time,random=~ 
  1+time|subject,data=repeatdata,na.action=na.omit)
  summary(fitRIRT)
  
  I would like to get the subject-specific slopes, i.e. the slope that the 
  model computes for each subject. If I have 10-subjects I should have 
  10-slopes. I don't see the slope when I look at the items listed in 
  names(summary(fitRIRT)
  nor when I look at the items listed in 
  names(fitRIRT)
  
  Thanks 
  John
  
  
  
  
  
  
  John David Sorkin M.D., Ph.D.
  Chief, Biostatistics and Informatics
  University of Maryland School of Medicine Division of Gerontology
  Baltimore VA Medical Center
  10 North Greene Street
  GRECC (BT/18/GR)
  Baltimore, MD 21201-1524
  (Phone) 410-605-7119
  (Fax) 410-605-7913 (Please call phone number above prior to faxing)
  Confidentiality Statement:
  This email message, including any attachments, is for the sole use of the 
  intended recipient(s) and may contain confidential and privileged 
  information. Any unauthorized use, disclosure or distribution is 
  prohibited. If you are not the intended recipient, please contact the 
  sender by reply email and destroy all copies of the original message. 
  
  
  __
  R-help@r-project.org mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained, reproducible code.
 
 --
 Kenneth Frost
 Graduate Research Assistant - Dept. of Plant Pathology
 University of Wisconsin - Madison
 Lab: (608) 262-9914
 Mobile: (608) 556-9637
 kfr...@wisc.edu
 
 
 
 
 Confidentiality Statement: 
 
 This email message, including any attachments, is for the sole use of the 
 intended recipient(s) and may contain confidential and privileged 
 information. Any unauthorized use, disclosure or distribution is prohibited. 
 If you are not the intended recipient, please contact the sender by reply 
 email and destroy all copies of the original message.

--
Kenneth Frost
Graduate Research Assistant - Dept. of Plant Pathology
University of Wisconsin 

Re: [R] lme: subject-specific slopes.

2012-12-04 Thread John Sorkin
Yes, you are correct. 
Thanks,
John

 
John David Sorkin M.D., Ph.D.
Chief, Biostatistics and Informatics
University of Maryland School of Medicine Division of Gerontology
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524
(Phone) 410-605-7119
(Fax) 410-605-7913 (Please call phone number above prior to faxing) Kenneth 
Frost kfr...@wisc.edu 12/4/2012 11:07 AM 
I think the random effects represent the subject adjustments to the population 
averages. You may have to do the addition yourself to get the subject specific 
slopes and intercepts. Someone will hopefully correct me if I'm wrong.

On 12/04/12, John Sorkin  wrote:
 
 
 
 Ken,
 Thank you for your help. ranef(fitRIRT) does not give me what I expect. The 
 subject-specific slopes, and subject-specific intercepts are not anywhere 
 close to what I would expect them to be; the mean of the subject-specfic 
 values should be close to those reported by 
 summary(fitRIRT) and they are not. As you will see by examining the material 
 below, the subject-specific slopes are off by many order of magnitude. The 
 intercepts are also far from the value reported in summary(fitRIRT). Do you 
 have any thoughts?
 Thanks,
 John
 
 
  fitRIRT <- lme(echogen~time, random=~1+time|subject,
                 data=repeatdata, na.action=na.omit)
  summary(fitRIRT)
 Linear mixed-effects model fit by REML
  Data: repeatdata
        AIC      BIC    logLik
    495.097 507.2491 -241.5485

 Random effects:
  Formula: ~1 + time | subject
  Structure: General positive-definite, Log-Cholesky parametrization
             StdDev       Corr
 (Intercept) 1.917511e+01 (Intr)
 time        2.032276e-04 0
 Residual    1.044601e+01

 Fixed effects: echogen ~ time
                Value Std.Error DF   t-value p-value
 (Intercept) 64.54864  4.258235 32 15.158543  0.0000
 time         0.35795  0.227080 32  1.576307  0.1248
  Correlation:
      (Intr)
 time -0.242

 Standardized Within-Group Residuals:
         Min          Q1         Med          Q3         Max
 -1.61362755 -0.52710871  0.02948008  0.41793322  1.77340082

 Number of Observations: 58
 Number of Groups: 25

  ranef(fitRIRT)
    (Intercept)          time
 1    -3.278112  2.221016e-09
 2   -35.400618  4.314995e-08
 3    11.493110 -6.797543e-09
 4   -16.209586 -7.070834e-08
 5     3.585227 -2.389705e-08
 6     1.614320 -1.967700e-09
 7     8.346905  5.827094e-08
 8    30.917812 -3.768584e-08
 9    -0.394101 -9.158251e-09
 10    4.437509 -4.057971e-08
 11   31.956597 -2.126275e-08
 12   41.567402 -4.853942e-08
 13  -10.723993  1.307152e-08
 14   -4.554837  5.551908e-09
 15   -4.554501  4.815086e-08
 16   13.296985 -3.743967e-08
 17   -8.255439  1.733238e-08
 18  -21.317239  2.203885e-08
 19  -13.480159  2.194016e-08
 20  -13.044766  2.269168e-08
 21   11.639198 -1.418706e-08
 22  -27.457388 -1.154099e-08
 23    2.194001 -5.509119e-09
 24   -3.992646  7.682188e-08
 25    1.614320 -1.967700e-09
 
 
 
 
 
 
 
 John David Sorkin M.D., Ph.D.
 Chief, Biostatistics and Informatics
 University of Maryland School of Medicine Division of Gerontology
 Baltimore VA Medical Center
 10 North Greene Street
 GRECC (BT/18/GR)
 Baltimore, MD 21201-1524
 (Phone) 410-605-7119
 (Fax) 410-605-7913 (Please call phone number above prior to faxing) 
 Kenneth Frost kfr...@wisc.edu 12/4/2012 10:44 AM 
 I think this might be close to what you want. ranef(fitRIRT)
 
 On 12/04/12, John Sorkin wrote:
  I am running a random intercept random slope regression:
  
   fitRIRT <- lme(echogen~time, random=~1+time|subject,
   data=repeatdata, na.action=na.omit)
  summary(fitRIRT)
  
  I would like to get the subject-specific slopes, i.e. the slope that the 
  model computes for each subject. If I have 10-subjects I should have 
  10-slopes. I don't see the slope when I look at the items listed in 
   names(summary(fitRIRT))
  nor when I look at the items listed in 
  names(fitRIRT)
  
  Thanks 
  John
  
  
  
  
  
  
  John David Sorkin M.D., Ph.D.
  Chief, Biostatistics and Informatics
  University of Maryland School of Medicine Division of Gerontology
  Baltimore VA Medical Center
  10 North Greene Street
  GRECC (BT/18/GR)
  Baltimore, MD 21201-1524
  (Phone) 410-605-7119
  (Fax) 410-605-7913 (Please call phone number above prior to faxing)
  Confidentiality Statement:
  This email message, including any attachments, is for the sole use of the 
  intended recipient(s) and may contain confidential and privileged 
  information. Any unauthorized use, disclosure or distribution is 
  prohibited. If you are not the intended recipient, please contact the 
  sender by reply email and destroy all copies of the original message. 
  
  
  __
  R-help@r-project.org mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained, reproducible code.
 
 --
 Kenneth Frost
 Graduate Research Assistant - Dept. of Plant Pathology
 University of Wisconsin - Madison
 Lab: (608) 262-9914
 Mobile: (608) 556-9637
 kfr...@wisc.edu
 
 
 
 
 Confidentiality Statement: 
 
 This email message, including any 

Re: [R] lme: subject-specific slopes.

2012-12-04 Thread Brian S Cade
John:  I think you want the output from coef(fitRIRT).  ranef(fitRIRT) 
will give you the subject-specific random-effect deviations from the 
fixed-effect means.  coef(fitRIRT) will give you the combination of the 
fixed-effect means with the subject-specific random-effect deviations, 
and is therefore on the scale you are expecting.
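
A minimal sketch of the arithmetic described above (assuming the fitted 
object is called fitRIRT, as in John's code; not run here):

fe <- fixef(fitRIRT)    # population-average intercept and slope
re <- ranef(fitRIRT)    # per-subject deviations from those averages
fe["time"] + re$time    # subject-specific slopes assembled by hand
coef(fitRIRT)           # the same addition, done for all terms at once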

Brian

Brian S. Cade, PhD

U. S. Geological Survey
Fort Collins Science Center
2150 Centre Ave., Bldg. C
Fort Collins, CO  80526-8818

email:  brian_c...@usgs.gov
tel:  970 226-9326



From:
John Sorkin jsor...@grecc.umaryland.edu
To:
r-help@r-project.org, Kenneth Frost kfr...@wisc.edu
Date:
12/04/2012 09:10 AM
Subject:
Re: [R] lme: subject-specific slopes.
Sent by:
r-help-boun...@r-project.org



Ken,
Thank you for your help. ranef(fitRIRT) does not give me what I expect. 
The subject-specific slopes, and subject-specific intercepts are not 
anywhere close to what I would expect them to be; the mean of the 
subject-specific values should be close to those reported by 
summary(fitRIRT) and they are not. As you will see by examining the 
material below, the subject-specific slopes are off by many orders of 
magnitude. The intercepts are also far from the value reported in 
summary(fitRIRT). Do you have any thoughts?
Thanks,
John
 
 fitRIRT <- lme(echogen~time, random=~1+time|subject,
                data=repeatdata, na.action=na.omit)
 summary(fitRIRT)
Linear mixed-effects model fit by REML
 Data: repeatdata 
       AIC      BIC    logLik
   495.097 507.2491 -241.5485

Random effects:
 Formula: ~1 + time | subject
 Structure: General positive-definite, Log-Cholesky parametrization
            StdDev       Corr 
(Intercept) 1.917511e+01 (Intr)
time        2.032276e-04 0 
Residual    1.044601e+01 

Fixed effects: echogen ~ time 
               Value Std.Error DF   t-value p-value
(Intercept) 64.54864  4.258235 32 15.158543  0.0000
time         0.35795  0.227080 32  1.576307  0.1248
 Correlation: 
 (Intr)
time -0.242

Standardized Within-Group Residuals:
Min  Q1 Med  Q3 Max 
-1.61362755 -0.52710871  0.02948008  0.41793322  1.77340082 

Number of Observations: 58
Number of Groups: 25 
 ranef(fitRIRT)
   (Intercept)          time
1    -3.278112  2.221016e-09
2   -35.400618  4.314995e-08
3    11.493110 -6.797543e-09
4   -16.209586 -7.070834e-08
5     3.585227 -2.389705e-08
6     1.614320 -1.967700e-09
7     8.346905  5.827094e-08
8    30.917812 -3.768584e-08
9    -0.394101 -9.158251e-09
10    4.437509 -4.057971e-08
11   31.956597 -2.126275e-08
12   41.567402 -4.853942e-08
13  -10.723993  1.307152e-08
14   -4.554837  5.551908e-09
15   -4.554501  4.815086e-08
16   13.296985 -3.743967e-08
17   -8.255439  1.733238e-08
18  -21.317239  2.203885e-08
19  -13.480159  2.194016e-08
20  -13.044766  2.269168e-08
21   11.639198 -1.418706e-08
22  -27.457388 -1.154099e-08
23    2.194001 -5.509119e-09
24   -3.992646  7.682188e-08
25    1.614320 -1.967700e-09
 
 


 
 
John David Sorkin M.D., Ph.D.
Chief, Biostatistics and Informatics
University of Maryland School of Medicine Division of Gerontology
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524
(Phone) 410-605-7119
(Fax) 410-605-7913 (Please call phone number above prior to faxing) 
Kenneth Frost kfr...@wisc.edu 12/4/2012 10:44 AM 
I think this might be close to what you want. ranef(fitRIRT)

On 12/04/12, John Sorkin  wrote:
 I am running a random intercept random slope regression:
 
 fitRIRT <- lme(echogen~time, random=~1+time|subject,
 data=repeatdata, na.action=na.omit)
 summary(fitRIRT)
 
 I would like to get the subject-specific slopes, i.e. the slope that the 
model computes for each subject. If I have 10-subjects I should have 
10-slopes. I don't see the slope when I look at the items listed in 
 names(summary(fitRIRT))
 nor when I look at the items listed in 
 names(fitRIRT)
 
 Thanks 
 John
 
 
 
 
 
 
 John David Sorkin M.D., Ph.D.
 Chief, Biostatistics and Informatics
 University of Maryland School of Medicine Division of Gerontology
 Baltimore VA Medical Center
 10 North Greene Street
 GRECC (BT/18/GR)
 Baltimore, MD 21201-1524
 (Phone) 410-605-7119
 (Fax) 410-605-7913 (Please call phone number above prior to faxing)
 Confidentiality Statement:
 This email message, including any attachments, is for the sole use of 
the intended recipient(s) and may contain confidential and privileged 
information. Any unauthorized use, disclosure or distribution is 
prohibited. If you are not the intended recipient, please contact the 
sender by reply email and destroy all copies of the original message. 
 
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

--
Kenneth Frost
Graduate Research Assistant - Dept. of Plant Pathology
University of Wisconsin - Madison
Lab

[R] lme help configuring random effects

2012-10-01 Thread Julie Lee-Yaw
Hi Everyone, 

Sorry to ask what I think is a basic question but I really haven't found my 
answer yet in the archives. 

I am trying to run a mixed effects model in R using the lme package. My 
experiment is such that I am interested in the effects of Temperature (2 
levels) and Species (3 levels) on Growth. I collected individuals from three 
populations within each species. Because individuals within a population are 
potentially more similar to each other than individuals among populations, I 
want to include population as a random factor in my model. 

I would have thought that I would structure the model as follows: 

z <- lme(Growth~Temp*Species, random=~1|Species/Population) 

But the summary for this model includes NAs (e.g. for two of the species). 

I've considered a model such as 

z <- lme(Growth~Temp*Species, random=~1|Population) 

but it strikes me that I need to associate/nest population and species 
(as not all the Species treatments are applicable to every population). 

I'm also confused as to the naming of populations. Currently I've got the 
populations named 1,2,3,4,5 ...9. Should I be naming them A1,A2,A3, B1,B2,B3, 
C1,C2, C3 (where the letters represent different species)? When does that 
matter? 

Any help with this would be greatly appreciated! 

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme help configuring random effects

2012-10-01 Thread Ben Bolker
Julie Lee-Yaw julleeyaw at yahoo.ca writes:


 

[snip]
 
 I am trying to run a mixed effects model in R using the lme
 package. My experiment is such that I am interested in the effects
 of Temperature (2 levels) and Species (3 levels) on Growth. I
 collected individuals from three populations within each
 species. Because individuals within a population are potentially
 more similar to each other than individuals among populations, I
 want to include population as a random factor in my model. 

 You may have some practical difficulties incorporating a random
effect with only three populations ...

 
 I would have thought that I would structure the model as follows: 
 z <- lme(Growth~Temp*Species, random=~1|Species/Population) 
 But the summary for this model includes NAs (e.g. for two of the species). 
 I've considered a model such as 

  z <- lme(Growth~Temp*Species, random=~1|Population) 

I think you need

  z <- lme(Growth~Temp*Species, random=~1|Species:Population)

 Your specification (Species/Population) expands to Species+
Species:Population , which ends up including Species as both
a random and a fixed factor.

  You probably don't have the data, but in principle you
should consider  random=~Temp|Species:Population (see a paper
by Schielzeth on incorporating treatment-by-block interactions)

[snip]

 I'm also confused as to the naming of populations. 
 Currently I've got the populations named 1,2,3,4,5
 ...9. Should I be naming them A1,A2,A3, B1,B2,B3, C1,C2, C3 
 (where the letters represent different
 species)? When does that matter? 

  This is the difference between implicit and explicit
nesting.  In general naming them A1, ... C1, ... is better,
because it reduces the probability of making mistakes.
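
  A small sketch of the explicit-labels idea (dat and PopID are made-up
names; the model line just restates the Species:Population suggestion above):

  dat$PopID <- factor(paste(dat$Species, dat$Population, sep = "."))  # "A.1", "B.1", ...
  ## interaction(dat$Species, dat$Population, drop = TRUE) gives the same grouping
  z <- lme(Growth ~ Temp * Species, random = ~ 1 | PopID, data = dat)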

  http://glmm.wikidot.com/faq may be useful.
  Further questions should probably go to the r-sig-mixed-models
mailing list.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme(y ~ ns(x, df=splineDF)) error

2012-09-26 Thread Jacob Wegelin


I would like to fit regression models of the form

y ~ ns(x, df=splineDF)

where splineDF is passed as an argument to a wrapper function.

This works fine if the regression function is lm(). But with lme(),
I get two different errors, depending on how I handle splineDF
inside the wrapper function.

A workaround is to turn the lme() command, along with the appropriate
value of splineDF, into a text string and then use
eval(parse(text=mystring)).  Is there not a more elegant solution?
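
One possible alternative to eval(parse()), sketched here but not run against
this example: splice the numeric value of splineDF into the fixed formula
before the call, so that no free splineDF symbol is left for lme() to look up.

fixed.form <- as.formula(paste0("y ~ ns(x, df = ", splineDF, ")"))
mylme <- lme(fixed = fixed.form,
             random = ~ 1 | ID,
             correlation = corAR1(),
             data = longdat)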

The function below demonstrates this. According to the value of
WhichApproach (1, 2, or 3), it attempts one of the approaches that
does not work or the workaround using the text string.


sessionInfo()

R version 2.15.1 (2012-06-22)
Platform: x86_64-apple-darwin9.8.0/x86_64 (64-bit)

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] splines   stats graphics  grDevices utils datasets  methods
base

other attached packages:
[1] nlme_3.1-104  ggplot2_0.9.1

loaded via a namespace (and not attached):
  [1] colorspace_1.1-1   dichromat_1.2-4digest_0.5.2
  grid_2.15.1labeling_0.1
   [6] lattice_0.20-6 MASS_7.3-18memoise_0.1
   munsell_0.3plyr_1.7.1
   [11] proto_0.3-9.2  RColorBrewer_1.0-5 reshape2_1.2.1
   scales_0.2.1   stringr_0.6
   [16] tools_2.15.1


wrapper <- function(WhichApproach=1, splineDF=4 ){
#   Create toy dataset
nID <- 25
IDnames <- LETTERS[1:nID]

longdat <- data.frame( x=rep(-10:10, nID) / 3 , ID= rep( IDnames , each=21)
)
set.seed(5)
longdat$x <- longdat$x + rnorm(nrow(longdat))/10
IDeffect <- rnorm(nID) * 20
names(IDeffect) <- IDnames
longdat$IDeffect <- IDeffect[as.character(longdat$ID)]
longdat$y <- (longdat$x
+ longdat$x^3
+ (longdat$x-1)^4  / 5
+ 1/(abs(longdat$x/50) + 0.02)
+ longdat$IDeffect
+ rnorm(1:nrow(longdat)) * 2
)
longdat <- longdat[order(longdat$x),]

library(splines)
#   Calling ns within lm works fine:
mylm <- lm( y ~ ns(x,df=splineDF), data=longdat)
longdat$lmfit <- predict(mylm)
library(ggplot2)
print(
ggplot(longdat, aes(x, y))
+ geom_point(shape=1)
+ geom_line(aes(x=x, y=lmfit), color="red")
)


cat("Enter to attempt lme.")
readline()
library(nlme)

if(WhichApproach==1) {
# returns: Error in eval(expr, envir, enclos) : object 'splineDF' not found
#
# First make sure splineDF does not exist in .GlobalEnv, else we would be in
# WhichApproach==2.
if(exists("splineDF", where=.GlobalEnv)) remove(list="splineDF",
pos=.GlobalEnv)
mylme <- lme(fixed= y ~ ns(x, df=splineDF)
, random= ~ 1 | ID
, correlation = corAR1()
, data=longdat
)
} else if(WhichApproach==2) {
# returns: Error in model.frame.default(formula = ~ID + y + x + splineDF, data = list( :
# variable lengths differ (found for 'splineDF')

assign(x="splineDF", value=splineDF, pos=.GlobalEnv)
mylme <- lme(fixed= y ~ ns(x, df=splineDF)
, random= ~ 1 | ID
, correlation = corAR1()
, data=longdat
)
} else if (WhichApproach==3) {

thecommandstring <- paste("mylme <- lme(fixed= y ~ ns(x, df=", splineDF,
") , random= ~ 1 | ID , correlation = corAR1() , data=longdat)")
thecommandstring
eval(parse(text=thecommandstring))

} else {
stop(paste("WhichApproach=", WhichApproach, " not valid."))
}

mylme
longdat$fullfit <- predict(mylme)

library(ggplot2)
print(
ggplot( longdat, aes(x,y))
+ geom_point(shape=1)
+ facet_wrap( ~ ID )
+ geom_line( aes(x, fullfit), color="red")
)

invisible(mylme)
}

Jacob A. Wegelin
Assistant Professor
Department of Biostatistics
Virginia Commonwealth University
830 E. Main St., Seventh Floor
P. O. Box 980032
Richmond VA 23298-0032
U.S.A.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] translating SAS proc mixed into R lme()

2012-09-22 Thread Ben Bolker
Zoya Pyrkina zoyapyrkina at gmail.com writes:

 I need  help with translating these SAS codes into R with lme()? I have a
 longitudinal data with repeated measures (measurements are equally spaced
 in time, subjects are measured several times a year). I need to allow slope
 and intercept vary.
 
 SAS codes are:
 
 proc mixed data = survey method=reml;
 
 class  subject var1 var3 var2 time;
 
 model score = var2 score_base var4 var5 var3 var6 var7 var1 time/ noint
 solution;
 
 random intercept timecontinious / subject=subject type=un g gcorr v vcorr;
 
 run;
 

  You might have more luck submitting this to r-sig-mixed-models at
r-project.org.  Have you looked at the lme documentation and examples
(and Pinheiro and Bates's 2000 book)?  You probably want something
*approximately* like

lme(score~var2+score_base+var4+var5+var3+var6+var7+var1+time-1,
   random=~time|subject,
   data=...)

A reproducible example would be nice too.

   Ben Bolker

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] translating SAS proc mixed into R lme()

2012-09-21 Thread Zoya Pyrkina
Dear R users,

I need  help with translating these SAS codes into R with lme()? I have a
longitudinal data with repeated measures (measurements are equally spaced
in time, subjects are measured several times a year). I need to allow slope
and intercept vary.

SAS codes are:

proc mixed data = survey method=reml;

class  subject var1 var3 var2 time;

model score = var2 score_base var4 var5 var3 var6 var7 var1 time/ noint
solution;

random intercept timecontinious / subject=subject type=un g gcorr v vcorr;

run;

Thank you a lot!

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme( y ~ ns(x, df=splineDF)) error

2012-09-06 Thread Jacob Wegelin


I would like to fit regression models of the form

y ~ ns(x, df=splineDF)

where splineDF is passed as an argument to a wrapper function.

This works fine if the regression function is lm(). But with lme(),
I get two different errors, depending on how I handle splineDF
inside the wrapper function.

A workaround is to turn the lme() command, along with the appropriate
value of splineDF, into a text string and then use
eval(parse(text=mystring)).  Is there not a more elegant solution?

The function below demonstrates this.


sessionInfo()

R version 2.15.1 (2012-06-22)
Platform: x86_64-apple-darwin9.8.0/x86_64 (64-bit)

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] splines   stats graphics  grDevices utils datasets  methods
base

other attached packages:
[1] nlme_3.1-104  ggplot2_0.9.1

loaded via a namespace (and not attached):
 [1] colorspace_1.1-1   dichromat_1.2-4digest_0.5.2
 grid_2.15.1labeling_0.1
  [6] lattice_0.20-6 MASS_7.3-18memoise_0.1
  munsell_0.3plyr_1.7.1
  [11] proto_0.3-9.2  RColorBrewer_1.0-5 reshape2_1.2.1
  scales_0.2.1   stringr_0.6
  [16] tools_2.15.1

The following function can also be downloaded from
http://jacobwegelin.net/tmp/r-help/.

wrapper <- function(WhichApproach=1, splineDF=4 ){
#   Create toy dataset
nID <- 25
IDnames <- LETTERS[1:nID]

longdat <- data.frame( x=rep(-10:10, nID) / 3 , ID= rep( IDnames , each=21)
)
set.seed(5)
longdat$x <- longdat$x + rnorm(nrow(longdat))/10
IDeffect <- rnorm(nID) * 20
names(IDeffect) <- IDnames
longdat$IDeffect <- IDeffect[as.character(longdat$ID)]
longdat$y <- (longdat$x
+ longdat$x^3
+ (longdat$x-1)^4  / 5
+ 1/(abs(longdat$x/50) + 0.02)
+ longdat$IDeffect
+ rnorm(1:nrow(longdat)) * 2
)
longdat <- longdat[order(longdat$x),]

library(splines)
#   Calling ns within lm works fine:
mylm <- lm( y ~ ns(x,df=splineDF), data=longdat)
longdat$lmfit <- predict(mylm)
library(ggplot2)
print(
ggplot(longdat, aes(x, y))
+ geom_point(shape=1)
+ geom_line(aes(x=x, y=lmfit), color="red")
)


cat("Enter to attempt lme.")
readline()
library(nlme)

if(WhichApproach==1) {
# returns: Error in eval(expr, envir, enclos) : object 'splineDF' not found
#
# First make sure splineDF does not exist in .GlobalEnv, else we would be in
# WhichApproach==2.
if(exists("splineDF", where=.GlobalEnv)) remove(list="splineDF",
pos=.GlobalEnv)
mylme <- lme(fixed= y ~ ns(x, df=splineDF)
, random= ~ 1 | ID
, correlation = corAR1()
, data=longdat
)
} else if(WhichApproach==2) {
# returns: Error in model.frame.default(formula = ~ID + y + x + splineDF, data = list( :
# variable lengths differ (found for 'splineDF')

assign(x="splineDF", value=splineDF, pos=.GlobalEnv)
mylme <- lme(fixed= y ~ ns(x, df=splineDF)
, random= ~ 1 | ID
, correlation = corAR1()
, data=longdat
)
} else if (WhichApproach==3) {

thecommandstring <- paste("mylme <- lme(fixed= y ~ ns(x, df=", splineDF,
") , random= ~ 1 | ID , correlation = corAR1() , data=longdat)")
thecommandstring
eval(parse(text=thecommandstring))

} else {
stop(paste("WhichApproach=", WhichApproach, " not valid."))
}

mylme
longdat$fullfit <- predict(mylme)

library(ggplot2)
print(
ggplot( longdat, aes(x,y))
+ geom_point(shape=1)
+ facet_wrap( ~ ID )
+ geom_line( aes(x, fullfit), color="red")
)

invisible(mylme)
}

Jacob A. Wegelin
Assistant Professor
Department of Biostatistics
Virginia Commonwealth University
830 E. Main St., Seventh Floor
P. O. Box 980032
Richmond VA 23298-0032
U.S.A.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme random effects in additive models with interaction

2012-06-21 Thread elifnurdogruoz
Hello,
I work with a mixed model with 4 predictor variables Time, Size, Charge,
Density and Size, Charge, Density are factors, all with two levels. Hence I
want to put their interactions with Time into the model. But, I have two
data sets (Replication 1 and 2) and I want that Replication is random
effect. Here is my code:


knots <- default.knots(Time)

z <- outer(Time, knots, "-")
z <- z * (z > 0)
z <- z^2

i.size50 <- I(Size==50)
i.chargepos <- I(Charge=="+")
i.densitylow <- I(Density==0.001)


X <- cbind( I(Time^2),Time*i.size50,Time*i.chargepos,Time*i.densitylow)
Z <- cbind( z, z*i.size50, z*i.chargepos,z*i.densitylow)
K <- length(knots)

block.ind <- list(1:K, (K+1):(2*K),(2*K+1):(3*K),(3*K+1):(4*K))
Z.block <- list()
for (i in 1:length(block.ind))
Z.block[[i]] <-
as.formula(paste("~Z[,c(",paste(block.ind[[i]],collapse=","),")]-1"))

group <- rep(1, length(Time))
model.data <- groupedData(y~X|group, data=data.frame(X, y))

fit <- lme(y~-1+X, data=model.data, random=pdBlocked(list(
pdBlocked(Z.block,pdClass=pdIdent), pdIdent(~-1+ Replication) ))
,control=list(maxIter=1000, msMaxIter=1000, niterEM=1000))


It gives the error: Error: getResponseFormula(el) : Form must be a two
sided formula

Can anybody help with how to write the random part? 
Thanks..


--
View this message in context: 
http://r.789695.n4.nabble.com/lme-random-effects-in-additive-models-with-interaction-tp4634067.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme random effects in additive models with interaction

2012-06-21 Thread elifnurdogruoz
Thanks for your answer, I would like to make clear my question: 

My data is like following and there is a response variable y:

Time  Size   Charge Density Replication
3 small +   low 1
.   .  ..   .
.   .  ..   .   
9 small +   low 1  
3 big +   low   .   
.   .  ..   .
.   .  ..   .   

9 big + low 1   
3 small -   low 1  
.   .  ..   .
.   .  ..   .   
   
9 small -   low 1   
3 big -   low   1   
.   .  ..   . 
.   .  ..   .   
   
9 big   -   low 1
3 small +   high1   
.   .  ..   .
.   .  ..   .   
9 small +   high1 
3 big +   high  1 
.   .  ..   . 
.   .  ..   .   
  
9 big + high1   
   
3 small -   high1 
.   .  ..   . 
.   .  ..   1   

9 small -   high1
3 big -   high  1
.   .  ..   . 
.   .  ..   .   
  
9 big -   high  1
3 small +   low 2
.   .  ..   .
.   .  ..   .   
9 small +   low 2  
3 big +   low   2 
.   .  ..   .
.   .  ..   .   

9 big + low 2   
3 small -   low 2  
.   .  ..   .
.   .  ..   .   
   
9 small -   low 2   
3 big -   low   2   
.   .  ..   . 
.   .  ..   .   
   
9 big   -   low 2
3 small +   high2   
.   .  ..   .
.   .  ..   .   
9 small +   high2 
3 big +   high  2 
.   .  ..   . 
.   .  ..   .   
  
9 big + high2   
   
3 small -   high2 
.   .  ..   . 
.   .  ..   .   

9 small -   high2
3 big -   high  2   
.   .  ..   . 
.

[R] lme: extract result-function

2012-06-13 Thread Christof Kluß
Hi,

mod <- lme(A ~ -1 + B+C+D+E+F+G, random = ~1 | ...)

results in summary(mod)$coeff

B C D E F G (Intercept)
b c d e f g i

Now I'm interested in the function

f <- function(B,C,D,E,F,G) {
  return(i + b*B + c*C + d*D + e*E + f*F + g*G)
}

Is there an easier way to create such a function, with a flexible number of
coefficients, than doing it by hand?

thx
Christof

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme: extract result-function

2012-06-13 Thread Rui Barradas

Hello,

Try


f <- function(response, regressors)
as.formula(paste(response, paste(regressors, collapse = " + "), sep = " ~ "))


(resp <- "A")
(regr <- c(-1, LETTERS[2:7]))

fmla <- f(resp, regr)


Hope this helps,

Rui Barradas

Em 13-06-2012 09:21, Christof Kluß escreveu:

Hi,

mod - lme(A ~ -1 + B+C+D+E+F+G, random = ~1 | ...)

results in summary(mod)$coeff

B C D E F G (Intercept)
b c d e f g i

Now I'm interested in the function

f - function(B,C,D,E,F,G) - {
   return(i + b*B + c*C + d*D + e*E + f*F + g*G)
}

Is there a easier way to create such function with flexible number of
coefficient, than do it by hand?

thx
Christof

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme: extract result-function

2012-06-13 Thread ONKELINX, Thierry
Dear Christof,

You want the predict() function. See ?predict.lme for the details.
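
For example (an untested sketch reusing the mod above; the values in newdat
are made up):

newdat <- data.frame(B = 1, C = 0, D = 2, E = 1, F = 0, G = 3)
predict(mod, newdata = newdat, level = 0)  # prediction from the fixed effects only
fitted(mod, level = 0:1)                   # fitted values without and with the random term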

Best regards,

Thierry

PS Questions on lme() can be asked at r-sig-mixed models.

ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Research Institute for Nature and 
Forest
team Biometrie & Kwaliteitszorg / team Biometrics & Quality Assurance
Kliniekstraat 25
1070 Anderlecht
Belgium
+ 32 2 525 02 51
+ 32 54 43 61 85
thierry.onkel...@inbo.be
www.inbo.be

To call in the statistician after the experiment is done may be no more than 
asking him to perform a post-mortem examination: he may be able to say what the 
experiment died of.
~ Sir Ronald Aylmer Fisher

The plural of anecdote is not data.
~ Roger Brinner

The combination of some data and an aching desire for an answer does not ensure 
that a reasonable answer can be extracted from a given body of data.
~ John Tukey

-Oorspronkelijk bericht-
Van: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] Namens 
Christof Kluß
Verzonden: woensdag 13 juni 2012 10:21
Aan: r-h...@stat.math.ethz.ch
Onderwerp: [R] lme: extract result-function

Hi,

mod - lme(A ~ -1 + B+C+D+E+F+G, random = ~1 | ...)

results in summary(mod)$coeff

B C D E F G (Intercept)
b c d e f g i

Now I'm interested in the function

f - function(B,C,D,E,F,G) - {
  return(i + b*B + c*C + d*D + e*E + f*F + g*G) }

Is there a easier way to create such function with flexible number of 
coefficient, than do it by hand?

thx
Christof

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
* * * * * * * * * * * * * D I S C L A I M E R * * * * * * * * * * * * *
Dit bericht en eventuele bijlagen geven enkel de visie van de schrijver weer en 
binden het INBO onder geen enkel beding, zolang dit bericht niet bevestigd is 
door een geldig ondertekend document.
The views expressed in this message and any annex are purely those of the 
writer and may not be regarded as stating an official position of INBO, as long 
as the message is not confirmed by a duly signed document.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme: extract result-function

2012-06-13 Thread Dieter Menne

Christof Kluß wrote
 
 mod <- lme(A ~ -1 + B+C+D+E+F+G, random = ~1 | ...)
 ...
 f <- function(B,C,D,E,F,G) {
   return(i + b*B + c*C + d*D + e*E + f*F + g*G)
 }
 

It looks like you are trying to compute contrasts the ugly way. Check
estimable in package gmodels or the vignette of the multcomp package; in the
latter, the usage example of lme is a bit hidden, but it works nevertheless.
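
Along those lines, a rough sketch of the multcomp route on an lme fit (not
run; the contrast row is made up and needs one entry per element of
fixef(mod), here assumed to be the six slopes B..G from the -1 model):

library(multcomp)
K <- rbind("B - C" = c(1, -1, 0, 0, 0, 0))
summary(glht(mod, linfct = K))   # estimate, SE and test for the linear combination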

Dieter


--
View this message in context: 
http://r.789695.n4.nabble.com/lme-extract-result-function-tp4633220p4633222.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme: extract result-function

2012-06-13 Thread Pascal Oettli

Hello,

Is it to re-create what predict is able to do?

Best Regards,
Pascal


Le 12/06/13 17:21, Christof Kluß a écrit :

Hi,

mod- lme(A ~ -1 + B+C+D+E+F+G, random = ~1 | ...)

results in summary(mod)$coeff

B C D E F G (Intercept)
b c d e f g i

Now I'm interested in the function

f- function(B,C,D,E,F,G)- {
   return(i + b*B + c*C + d*D + e*E + f*F + g*G)
}

Is there a easier way to create such function with flexible number of
coefficient, than do it by hand?

thx
Christof

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme random slope results the same as random slope and intercept model

2012-06-12 Thread John Sorkin
R 2.15.0
Windows XP
 

 
Can someone help me understand why a random intercept model gives the same 
results as the random intercept and slope models?
 
I am rather surprised by the results I am getting from lme. I am running three 
models
(1) random intercept
 fitRI   <- lme(echogen~time, random=~ 1
|subject, data=repeatdata, na.action=na.omit)
(2) random slope
fitRT   <- lme(echogen~time, random=~
 -1+time|subject, data=repeatdata, na.action=na.omit)
(3) random intercept and slope.
 fitRIRT <- lme(echogen~time, random=~
1+time|subject, data=repeatdata, na.action=na.omit)
The results of the (1) random intercept model are different from the (2) random 
slope model,not a surprise.
The results of the (1) random intercept model and the (3) random intercept and 
slope models are exactly the same, a surprise! 
 
Below I copy the results for each model. Further below I give all my output.
 
RESULTS FROM EACH MODEL
(1) Random intercept results:
Random effects:
 Formula: ~1 | subject
(Intercept) Residual
StdDev: 19.1751 10.44601
 
Fixed effects: echogen ~ time 
   Value Std.Error DF   t-value p-value
(Intercept) 64.54864  4.258235 32 15.158545  0.
time 0.35795  0.227080 32  1.576307  0.1248
 Correlation: 
 (Intr)
time -0.242

 
(2) Random slope results
Random effects:
 Formula: ~-1 + time | subject
 time Residual
StdDev: 0.6014915 19.63638
 
Fixed effects: echogen ~ time 
   Value Std.Error DF   t-value p-value
(Intercept) 65.03691  3.494160 32 18.613032  0.
time 0.22688  0.467306 32  0.485503  0.6306
 Correlation: 
 (Intr)
time -0.625
 
 
(3) Random intercept and slope results
Random effects:
 Formula: ~1 + time | subject
 Structure: General positive-definite, Log-Cholesky parametrization
StdDev   Corr  
(Intercept) 1.917511e+01 (Intr)
time        2.032072e-04 0 
Residual    1.044601e+01 
 
Fixed effects: echogen ~ time 
   Value Std.Error DF   t-value p-value
(Intercept) 64.54864  4.258235 32 15.158543  0.
time 0.35795  0.227080 32  1.576307  0.1248
 Correlation: 
 (Intr)
time -0.242
 
 
 
 
 
 
COMPLETE OUTPUT
 repeatdata
subject time value echogen
1 1122  63
2 1340  60
3 1   NANA  NA
4 1   NANA  NA
5 1   NANA  NA
6 2139  19
7 2   NANA  NA
8 2   NANA  NA
9 2   NANA  NA
102   NANA  NA
113147  76
123643  82
133   NANA  NA
143   NANA  NA
153   NANA  NA
164144  44
174350  50
184767  67
194   2139  39
204   NANA  NA
215142  58
225360  78
235786  85
245   1956  60
255   3539  84
266157  67
276   NANA  NA
286   NANA  NA
296   NANA  NA
306   NANA  NA
317171  58
327355  67
337   1057  95
347   1762  94
357   2547  73
368179 105
378   NANA  NA
388   NANA  NA
398   NANA  NA
408   NANA  NA
419160  70
429364  62
439968  65
449   NANA  NA
459   NANA  NA
46   10147  75
47   10349  73
48   10946  70
49   10   1748  70
50   10   NANA  NA
51   11157  97
52   11675 108
53   11   NANA  NA
54   11   NANA  NA
55   11   NANA  NA
56   12185 116
57   12377 110
58   12   NANA  NA
59   12   NANA  NA
60   12   NANA  NA
61   13134  51
62   13   NANA  NA
63   13   NANA  NA
64   13   NANA  NA
65   13   NANA  NA
66   14130  59
67   143NA  NA
68   14   NANA  NA
69   14   NANA  NA
70   14   NANA  NA
71   15142  47
72   15350  62
73   15   1133  75
74   15   NANA  NA
75   15   NANA  NA
76   161NA  83
77   167NA  88
78   16   13NA  74
79   16   NANA  NA
80   16   NANA  NA
81   171NA  51
82   177NA  62
83   17   NANA  NA
84   17   NANA  NA
85   17   NANA  NA
86   181NA  39
87   187NA  44
88   18   NANA  NA
89   18   NANA  NA
90   18  

Re: [R] lme random slope results the same as random slope and intercept model

2012-06-12 Thread John Sorkin
Thierry,
Thank you for your thoughts. I agree with your analysis, but am still surprised 
that the results are not approximately, but exactly the same to the limit of 
the precision of the printed results. The exact comparability of the results 
makes me wonder if something else is going on that I have missed.
Thanks,
John


 
John David Sorkin M.D., Ph.D.
Chief, Biostatistics and Informatics
University of Maryland School of Medicine Division of Gerontology
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524
(Phone) 410-605-7119
(Fax) 410-605-7913 (Please call phone number above prior to faxing) 
ONKELINX, Thierry thierry.onkel...@inbo.be 6/12/2012 11:14 AM 
Dear John,

R-sig-mixed-models is a better list for this kind of questions.

It looks like the model finds no evidence for a random slope. Notice the very 
small variance of the random slope. In the model without random intercept, the 
random slope tries to mimic the effect of a random intercept.

Best regards,

Thierry

ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Research Institute for Nature and 
Forest
team Biometrie & Kwaliteitszorg / team Biometrics & Quality Assurance
Kliniekstraat 25
1070 Anderlecht
Belgium
+ 32 2 525 02 51
+ 32 54 43 61 85
thierry.onkel...@inbo.be
www.inbo.be

To call in the statistician after the experiment is done may be no more than 
asking him to perform a post-mortem examination: he may be able to say what the 
experiment died of.
~ Sir Ronald Aylmer Fisher

The plural of anecdote is not data.
~ Roger Brinner

The combination of some data and an aching desire for an answer does not ensure 
that a reasonable answer can be extracted from a given body of data.
~ John Tukey


-Oorspronkelijk bericht-
Van: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] Namens 
John Sorkin
Verzonden: dinsdag 12 juni 2012 16:52
Aan: r-help@r-project.org
Onderwerp: [R] lme random slope results the same as random slope and intercept 
model

R 2.15.0
Windows XP



Can someone help me understand why a random intercept model gives the same 
results as the random intercept and slope models?

I am rather surprised by the results I am getting from lme. I am running three 
models
(1) random intercept
 fitRI   <- lme(echogen~time, random=~ 1
|subject, data=repeatdata, na.action=na.omit)
(2) random slope
fitRT   <- lme(echogen~time, random=~
 -1+time|subject, data=repeatdata, na.action=na.omit)
(3) random intercept and slope.
 fitRIRT <- lme(echogen~time, random=~
1+time|subject, data=repeatdata, na.action=na.omit)
The results of the (1) random intercept model are different from the (2) random 
slope model,not a surprise.
The results of the (1) random intercept model and the (3) random intercept and 
slope models are exactly the same, a surprise!

Below I copy the results for each model. Further below I give all my output.

RESULTS FROM EACH MODEL
(1) Random intercept results:
Random effects:
Formula: ~1 | subject
(Intercept) Residual
StdDev: 19.1751 10.44601

Fixed effects: echogen ~ time
   Value Std.Error DF   t-value p-value
(Intercept) 64.54864  4.258235 32 15.158545  0.
time 0.35795  0.227080 32  1.576307  0.1248
Correlation:
 (Intr)
time -0.242


(2) Random slope results
Random effects:
Formula: ~-1 + time | subject
 time Residual
StdDev: 0.6014915 19.63638

Fixed effects: echogen ~ time
   Value Std.Error DF   t-value p-value
(Intercept) 65.03691  3.494160 32 18.613032  0.
time 0.22688  0.467306 32  0.485503  0.6306
Correlation:
 (Intr)
time -0.625


(3) Random intercept and slope results
Random effects:
Formula: ~1 + time | subject
Structure: General positive-definite, Log-Cholesky parametrization
StdDev   Corr
(Intercept) 1.917511e+01 (Intr)
time2.032072e-04 0
Residual1.044601e+01

Fixed effects: echogen ~ time
   Value Std.Error DF   t-value p-value
(Intercept) 64.54864  4.258235 32 15.158543  0.
time 0.35795  0.227080 32  1.576307  0.1248
Correlation:
 (Intr)
time -0.242






COMPLETE OUTPUT
 repeatdata
subject time value echogen
1 1122  63
2 1340  60
3 1   NANA  NA
4 1   NANA  NA
5 1   NANA  NA
6 2139  19
7 2   NANA  NA
8 2   NANA  NA
9 2   NANA  NA
102   NANA  NA
113147  76
123643  82
133   NANA  NA
143   NANA  NA
153   NANA  NA
164144  44
174350  50
184767  67
194   2139  39
204   NANA  NA
215142  58
225360  78
235786  85
245   1956  60
255   3539  84
266157

Re: [R] lme random slope results the same as random slope and intercept model

2012-06-12 Thread ONKELINX, Thierry
Dear John,

R-sig-mixed-models is a better list for this kind of questions.

It looks like the model finds no evidence for a random slope. Notice the very 
small variance of the random slope. In the model without random intercept, the 
random slope tries to mimic the effect of a random intercept.
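
One way to see that in the fits themselves (a sketch, not run):

VarCorr(fitRIRT)        # slope SD is ~2e-4, essentially zero on the scale of the data
anova(fitRI, fitRIRT)   # likelihood-ratio comparison of intercept-only vs intercept + slope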

Best regards,

Thierry

ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Research Institute for Nature and 
Forest
team Biometrie & Kwaliteitszorg / team Biometrics & Quality Assurance
Kliniekstraat 25
1070 Anderlecht
Belgium
+ 32 2 525 02 51
+ 32 54 43 61 85
thierry.onkel...@inbo.be
www.inbo.be

To call in the statistician after the experiment is done may be no more than 
asking him to perform a post-mortem examination: he may be able to say what the 
experiment died of.
~ Sir Ronald Aylmer Fisher

The plural of anecdote is not data.
~ Roger Brinner

The combination of some data and an aching desire for an answer does not ensure 
that a reasonable answer can be extracted from a given body of data.
~ John Tukey


-Oorspronkelijk bericht-
Van: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] Namens 
John Sorkin
Verzonden: dinsdag 12 juni 2012 16:52
Aan: r-help@r-project.org
Onderwerp: [R] lme random slope results the same as random slope and intercept 
model

R 2.15.0
Windows XP



Can someone help me understand why a random intercept model gives the same 
results as the random intercept and slope models?

I am rather surprised by the results I am getting from lme. I am running three 
models
(1) random intercept
 fitRI   <- lme(echogen~time, random=~ 1
|subject, data=repeatdata, na.action=na.omit)
(2) random slope
fitRT   <- lme(echogen~time, random=~
 -1+time|subject, data=repeatdata, na.action=na.omit)
(3) random intercept and slope.
 fitRIRT <- lme(echogen~time, random=~
1+time|subject, data=repeatdata, na.action=na.omit)
The results of the (1) random intercept model are different from the (2) random 
slope model,not a surprise.
The results of the (1) random intercept model and the (3) random intercept and 
slope models are exactly the same, a surprise!

Below I copy the results for each model. Further below I give all my output.

RESULTS FROM EACH MODEL
(1) Random intercept results:
Random effects:
 Formula: ~1 | subject
(Intercept) Residual
StdDev: 19.1751 10.44601

Fixed effects: echogen ~ time
   Value Std.Error DF   t-value p-value
(Intercept) 64.54864  4.258235 32 15.158545  0.
time 0.35795  0.227080 32  1.576307  0.1248
 Correlation:
 (Intr)
time -0.242


(2) Random slope results
Random effects:
 Formula: ~-1 + time | subject
 time Residual
StdDev: 0.6014915 19.63638

Fixed effects: echogen ~ time
   Value Std.Error DF   t-value p-value
(Intercept) 65.03691  3.494160 32 18.613032  0.
time 0.22688  0.467306 32  0.485503  0.6306
 Correlation:
 (Intr)
time -0.625


(3) Random intercept and slope results
Random effects:
 Formula: ~1 + time | subject
 Structure: General positive-definite, Log-Cholesky parametrization
StdDev   Corr
(Intercept) 1.917511e+01 (Intr)
time2.032072e-04 0
Residual1.044601e+01

Fixed effects: echogen ~ time
   Value Std.Error DF   t-value p-value
(Intercept) 64.54864  4.258235 32 15.158543  0.
time 0.35795  0.227080 32  1.576307  0.1248
 Correlation:
 (Intr)
time -0.242






COMPLETE OUTPUT
 repeatdata
subject time value echogen
1 1122  63
2 1340  60
3 1   NANA  NA
4 1   NANA  NA
5 1   NANA  NA
6 2139  19
7 2   NANA  NA
8 2   NANA  NA
9 2   NANA  NA
102   NANA  NA
113147  76
123643  82
133   NANA  NA
143   NANA  NA
153   NANA  NA
164144  44
174350  50
184767  67
194   2139  39
204   NANA  NA
215142  58
225360  78
235786  85
245   1956  60
255   3539  84
266157  67
276   NANA  NA
286   NANA  NA
296   NANA  NA
306   NANA  NA
317171  58
327355  67
337   1057  95
347   1762  94
357   2547  73
368179 105
378   NANA  NA
388   NANA  NA
398   NANA  NA
408   NANA  NA
419160  70
429364  62
439968  65
449   NANA  NA
459   NANA  NA
46   10147  75
47   10349  73
48   10946  70
49   10

[R] lme: Testing within-group coefficients

2012-05-29 Thread Dan Bebber
I need to determine whether a variable y2e increases with a covariate xt within 
individuals, where individuals a-j are measured several times.
Is it possible to test whether within-group coefficients are significantly 
different from zero?
The coefficients from an lmList fitted to my data give:
 coef(lml2)
  (Intercept)  xt
a   3.5689877 -0.05413678
b  -0.1432629  0.02558787
c   6.9933976 -0.04593475
d  -1.2205123  0.03419385
e  11.4861355 -0.02997357
f -13.1410819  0.06999514
g  25.1284971 -0.05560643
h  26.9868990 -0.04947859
i  23.1811000 -0.03006984
j -18.3750713  0.05958911

doing summary(lme(lml2)) appears to give the coefficient across groups, which is 
not of interest:

 summary(lme(lml2))
...
Fixed effects: y2e ~ xt
 Value Std.Error  DF  t-value p-value
(Intercept) 0.28520893 0.7168887 489 0.397843  0.6909
xt  0.02133808 0.0023420 489 9.110898  0.
...

Many thanks,
Dan Bebber

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme or lmer for unbalance data

2012-05-05 Thread Ben Bolker
agent dunham crosspide at hotmail.com writes:

 I'd like to fit a mixed model. I have unbalanced data; what should I use: 
 lme in the nlme package, or lmer in lme4?
 
 Thanks, user at host.com as user at host.com
 

  More advanced mixed model questions belong on r-sig-mixed-models at
r-project.org , but the answer to this one is: either should be fine,
unless you want one of the features that only lme has (reported denominator
df/p values, R-side effects such as correlation and heteroscedasticity 
models) or one of the features that only lmer has (efficient fitting of
crossed random effects, GLMMs).
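
  For reference, the same random-intercept model in the two packages
(a sketch; y, x, g and dat are placeholders):

  library(nlme); m1 <- lme(y ~ x, random = ~ 1 | g, data = dat)
  library(lme4); m2 <- lmer(y ~ x + (1 | g), data = dat)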

  Ben Bolker

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme or lmer for unbalance data

2012-05-04 Thread agent dunham
Dear community, 

I'd like to fit a mixed model. I have unbalanced data; what should I use: 
lme in the nlme package, or lmer in lme4?

Thanks, u...@host.com as u...@host.com

--
View this message in context: 
http://r.789695.n4.nabble.com/lme-or-lmer-for-unbalance-data-tp4608425.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme random intercept model vs. random intercept, random slope model yield difference in fixed effects but not difference in the anova.

2012-04-14 Thread John Sorkin
I am running two mixed  effects regressions. One (fitRandomIntercept) has a 
random intercept, the second (fitRandomInterceptSlope) has a random intercept 
and a random slope. In the random slope regression, the fixed effect for time 
is not significant. In the random intercept random slope model, the fixed 
effect for time is significant.  Despite the difference in the results obtained 
from the two models. a comparison of the two models, performed via 
anova(fitRandomIntercept,fitRandomInterceptSlope), shows that there is no 
significant difference between the two models. I don't understand how this can 
happen, and I don't know which model I should report. The random intercept 
random slope model makes physiologic sense, but the principle of parsimony 
would suggest I report the random intercept model given that it is simpler than 
the random intercept random slope model, and there is no significant difference 
between the two models.

Can someone help me understand (1) why one model has a significant slope where 
as other does not, and (2) given the difference in the two models why the ANOVA 
comparison of the two model is not significant.
Thanks,
John

Log and code follows: 

 # Define data.
 line   <- 
 c(1,2,6,11,12,16,17,18,19,21,22,23,24,25,26,31,32,33,34,35,36,41,42,43,46,47,48,49,51,52,56,57,61,66,67,71,72,73,77,82,87,92,97,107,112,117)
  
 subject <- 
 c(1,1,2,3,3,4,4,4,4,5,5,5,5,5,6,7,7,7,7,7,8,9,9,9,10,10,10,10,11,11,12,12,13,14,14,15,15,15,16,17,18,19,20,22,23,24)
  
 time <- 
 c(1,3,1,1,6,1,3,7,4,1,3,7,3,35,1,1,3,10,2,25,1,1,3,9,1,3,9,2,1,6,1,3,1,1,3,1,3,11,7,7,7,7,7,7,7,6)
  
 value <- 
 c(22,4,39,47,5,34,3,33,21,42,9,86,56,39,57,71,8,57,62,47,79,60,10,68,47,6,46,48,57,11,85,12,34,30,1,42,7,33,1,1,1,1,1,1,1,2)
  
 # Add data to dataframe.
 repeatdatax <- data.frame(line=line,subject=subject,time=time,value=value)
 # Print the data.
 repeatdatax
   line subject time value
1 1   1122
2 2   13 4
3 6   2139
411   3147
512   36 5
616   4134
717   43 3
818   4733
919   4421
10   21   5142
11   22   53 9
12   23   5786
13   24   5356
14   25   5   3539
15   26   6157
16   31   7171
17   32   73 8
18   33   7   1057
19   34   7262
20   35   7   2547
21   36   8179
22   41   9160
23   42   9310
24   43   9968
25   46  10147
26   47  103 6
27   48  10946
28   49  10248
29   51  11157
30   52  11611
31   56  12185
32   57  12312
33   61  13134
34   66  14130
35   67  143 1
36   71  15142
37   72  153 7
38   73  15   1133
39   77  167 1
40   82  177 1
41   87  187 1
42   92  197 1
43   97  207 1
44  107  227 1
45  112  237 1
46  117  246 2
 
 # Run random effects regression, with random intercept.
 library(nlme)
 
 #random intercept
 fitRandomIntercept <- lme(value~time, random=~1
 |subject, data=repeatdatax)
 summary(fitRandomIntercept)
Linear mixed-effects model fit by REML
 Data: repeatdatax 
   AIC  BIClogLik
  432.7534 439.8902 -212.3767

Random effects:
 Formula: ~1 | subject
(Intercept) Residual
StdDev: 5.78855 25.97209

Fixed effects: value ~ time 
   Value Std.Error DF   t-value p-value
(Intercept) 31.70262  5.158094 22  6.146189  0.
time-0.26246  0.632612 22 -0.414888  0.6822
 Correlation: 
 (Intr)
time -0.611

Standardized Within-Group Residuals:
   Min Q1Med Q3Max 
-1.0972719 -0.9923743  0.1216571  0.6734183  2.0290540 

Number of Observations: 46
Number of Groups: 23 
 
 #random intercept and slope
 fitRandomInterceptSlope <- 
 lme(value~time, random=~1+time|subject, data=repeatdatax)
 summary(fitRandomInterceptSlope)
Linear mixed-effects model fit by REML
 Data: repeatdatax 
   AIC  BIClogLik
  434.7684 445.4735 -211.3842

Random effects:
 Formula: ~1 + time | subject
 Structure: General positive-definite, Log-Cholesky parametrization
StdDev  Corr  
(Intercept)  0.05423581 (Intr)
time 2.05242164 -0.477
Residual    23.70228346   

Fixed effects: value ~ time 
   Value Std.Error DF   t-value p-value
(Intercept) 38.85068  5.205499 22  7.463392  0.
time-2.45621  1.081599 22 -2.270903  0.0333
 Correlation: 
 (Intr)
time -0.648

Standardized Within-Group Residuals:
Min  Q1 Med  Q3 Max 
-1.30668304 -0.63797284 -0.09662859  0.69620396  2.05363165 

Number of Observations: 46
Number of Groups: 23 
 
 # Compare random intercept model to random 

Re: [R] lme random intercept model vs. random intercept, random slope model yield difference in fixed effects but not difference in the anova.

2012-04-14 Thread Joshua Wiley
Hi John,

R-help is not really for statistical questions (see something like
statsexchange) or mixed models in R (there is a SIG mailing list for
those).  Just a note for next time.

1) The estimate for the time effect when it is a fixed effect versus
random effect are different things.  The former is the average effect
in the population, the latter something like (in loose terms) the
expectation of the distribution of effects (if you add other
predictors, the conditional expectation).  Suppose in each subject
there is a negative relationship between value and time, however, some
subjects have both low values of time and value and others have high
values of time and value.  In this case, the population effect could
be positive, but the individual effect negative.

2) First of all, just because a parameter is non-significant in one
model and significant in another does not imply that the models
overall are significantly different.  Remember that the t-value for time
is testing against 0, but the effect in your first model was not 0; the
anova() is comparing the fit of the first model to the second, not to a
null model.  Second, your second model includes two additional
parameters---the variance of the slope effect and the covariance of
the slope and intercept.  So you have 2 additional parameters
estimated and only ~2 change in deviance, which is clearly not
significantly different.
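
The arithmetic behind that anova() comparison, using the log-likelihoods
you posted (a sketch):

ll0 <- -212.3767                         # random intercept only
ll1 <- -211.3842                         # random intercept + random slope
lrt <- 2 * (ll1 - ll0)                   # about 1.99
pchisq(lrt, df = 2, lower.tail = FALSE)  # about 0.37, nowhere near significant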

If you have not already, I would strongly encourage you to graph your data.


library(ggplot2)
## base plot
p <- ggplot(repeatdatax, aes(x = time, y = value)) +
  geom_point() +
  stat_smooth(method = "lm", se = FALSE)

p ## overall
p + facet_wrap(~ subject) ## individual

The first graph shows the overall association, the latter breaks it
down by subject.  If this is your full data, considering you only have
more than 1 observation on 50% of your subjects, I would strongly tend
toward parsimony and only include the random intercept.

Cheers,

Josh

On Sat, Apr 14, 2012 at 8:02 PM, John Sorkin
jsor...@grecc.umaryland.edu wrote:
 I am running two mixed  effects regressions. One (fitRandomIntercept) has a 
 random intercept, the second (fitRandomInterceptSlope) has a random intercept 
 and a random slope. In the random slope regression, the fixed effect for time 
 is not significant. In the random intercept random slope model, the fixed 
 effect for time is significant.  Despite the difference in the results 
 obtained from the two models. a comparison of the two models, performed via 
 anova(fitRandomIntercept,fitRandomInterceptSlope), shows that there is no 
 significant difference between the two models. I don't understand how this 
 can happen, and I don't know which model I should report. The random 
 intercept random slope model makes physiologic sense, but the principle of 
 parsimony would suggest I report the random intercept model given that it is 
 simpler than the random intercept random slope model, and there is no 
 significant difference between the two models.

 Can someone help me understand (1) why one model has a significant slope 
 where as other does not, and (2) given the difference in the two models why 
 the ANOVA comparison of the two model is not significant.
 Thanks,
 John

 Log and code follows:

 # Define data.
 line   - 
 c(1,2,6,11,12,16,17,18,19,21,22,23,24,25,26,31,32,33,34,35,36,41,42,43,46,47,48,49,51,52,56,57,61,66,67,71,72,73,77,82,87,92,97,107,112,117)
 subject- 
 c(1,1,2,3,3,4,4,4,4,5,5,5,5,5,6,7,7,7,7,7,8,9,9,9,10,10,10,10,11,11,12,12,13,14,14,15,15,15,16,17,18,19,20,22,23,24)
 time - 
 c(1,3,1,1,6,1,3,7,4,1,3,7,3,35,1,1,3,10,2,25,1,1,3,9,1,3,9,2,1,6,1,3,1,1,3,1,3,11,7,7,7,7,7,7,7,6)
 value - 
 c(22,4,39,47,5,34,3,33,21,42,9,86,56,39,57,71,8,57,62,47,79,60,10,68,47,6,46,48,57,11,85,12,34,30,1,42,7,33,1,1,1,1,1,1,1,2)
 # Add data to dataframe.
 repeatdatax - data.frame(line=line,subject=subject,time=time,value=value)
 # Print the data.
 repeatdatax
   line subject time value
 1     1       1    1    22
 2     2       1    3     4
 3     6       2    1    39
 4    11       3    1    47
 5    12       3    6     5
 6    16       4    1    34
 7    17       4    3     3
 8    18       4    7    33
 9    19       4    4    21
 10   21       5    1    42
 11   22       5    3     9
 12   23       5    7    86
 13   24       5    3    56
 14   25       5   35    39
 15   26       6    1    57
 16   31       7    1    71
 17   32       7    3     8
 18   33       7   10    57
 19   34       7    2    62
 20   35       7   25    47
 21   36       8    1    79
 22   41       9    1    60
 23   42       9    3    10
 24   43       9    9    68
 25   46      10    1    47
 26   47      10    3     6
 27   48      10    9    46
 28   49      10    2    48
 29   51      11    1    57
 30   52      11    6    11
 31   56      12    1    85
 32   57      12    3    12
 33   61      13    1    34
 34   66      14    1    30
 35   67      14    3     1
 36   71      15    1    42
 37   72      15    3     7
 38   73      15 

Re: [R] lme code help

2012-03-15 Thread ONKELINX, Thierry
You want lme(logSSP~logM + K,random=~logM + K|species,data=data1)

Best regards,

ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Research Institute for Nature and 
Forest
team Biometrie & Kwaliteitszorg / team Biometrics & Quality Assurance
Kliniekstraat 25
1070 Anderlecht
Belgium
+ 32 2 525 02 51
+ 32 54 43 61 85
thierry.onkel...@inbo.be
www.inbo.be

To call in the statistician after the experiment is done may be no more than 
asking him to perform a post-mortem examination: he may be able to say what the 
experiment died of.
~ Sir Ronald Aylmer Fisher

The plural of anecdote is not data.
~ Roger Brinner

The combination of some data and an aching desire for an answer does not ensure 
that a reasonable answer can be extracted from a given body of data.
~ John Tukey


-Oorspronkelijk bericht-
Van: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] Namens 
harkiran
Verzonden: woensdag 14 maart 2012 21:13
Aan: r-help@r-project.org
Onderwerp: [R] lme code help

Hi guys, 

Got a few days left and I need to model a random effect of species on the body 
mass (logM) and temperature (K) slopes. This is what i've done so far that 
works: 
model1 <- lme(logSSP~logM + K,random=~1|species,data=data1) 
model2 <- lme(logSSP~logM + K,random=~K|species,data=data1) 
model3 <- lme(logSSP~logM + K,random=~logM|species,data=data1) 

The one I now want is: 
model4 <- lme(logSSP~logM + K,random=~logM|species,K|species,data=data1)
#I need the random effect of spp on both slopes of logM and K, but this code 
doesn't work so how do i change the code??

:( Any help will be greatly appreciated 



--
View this message in context: 
http://r.789695.n4.nabble.com/lme-code-help-tp4473008p4473008.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme code help

2012-03-14 Thread harkiran
Hi guys, 

Got a few days left and I need to model a random effect of species on the
body mass (logM) and temperature (K) slopes. This is what i've done so far
that works: 
model1 <- lme(logSSP~logM + K,random=~1|species,data=data1) 
model2 <- lme(logSSP~logM + K,random=~K|species,data=data1) 
model3 <- lme(logSSP~logM + K,random=~logM|species,data=data1) 

The one I now want is: 
model4 <- lme(logSSP~logM + K,random=~logM|species,K|species,data=data1) 
#I need the random effect of spp on both slopes of logM and K, but this code
doesn't work so how do i change the code??

:( Any help will be greatly appreciated 



--
View this message in context: 
http://r.789695.n4.nabble.com/lme-code-help-tp4473008p4473008.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme, lmer, convergence

2012-02-07 Thread Erin McMullen Jonaitis
Hello, all,

I am running some simulations to estimate power for a complicated 
epidemiological study, and am using lme and lmer to get these estimates.  I 
have to run a few thousand iterations, and once in a great while, an iteration 
will create fake data such that the model won't converge.  I see from Google 
searches that this is not an uncommon situation.

My question: is there a way to extract the convergence value from an lme or 
lmer model?  It prints on the screen, but how can I get hold of it to evaluate 
it with some function?  What I'd like to do is build a failsafe into my program 
so that if a particular model in an iteration doesn't converge, I call a redo 
on that iteration.  This way the program will keep running and not stop in a 
fit of pique in the middle of my long simulation.

If I can't do this, my fallback will be to try setting lmeControl options such 
that even bad models return parameter estimates etc -- once or twice in 
10,000 iterations should not ruin things too badly -- but I'd like to try it 
the cleaner way first.


Erin Jonaitis, Ph.D.
Assistant Scientist, Wisconsin Alzheimer's Institute
7818 Big Sky Drive
Madison, WI 53719

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme, lmer, convergence

2012-02-07 Thread Ben Bolker
Erin McMullen Jonaitis jonaitis at wisc.edu writes:

  Hello, all, I am running some simulations to estimate power for a
 complicated epidemiological study, and am using lme and lmer to get
 these estimates.  I have to run a few thousand iterations, and once
 in a great while, an iteration will create fake data such that the
 model won't converge.  I see from Google searches that this is not
 an uncommon situation.
 
 My question: is there a way to extract the convergence value from an
 lme or lmer model?  It prints on the screen, but how can I get hold
 of it to evaluate it with some function?  What I'd like to do is
 build a failsafe into my program so that if a particular model in an
 iteration doesn't converge, I call a redo on that iteration.  This
 way the program will keep running and not stop in a fit of pique in
 the middle of my long simulation.
 
 If I can't do this, my fallback will be to try setting lmeControl
 options such that even bad models return parameter estimates etc
 -- once or twice in 10,000 iterations should not ruin things too
 badly -- but I'd like to try it the cleaner way first.

  There's a somewhat hack-ish solution, which is to use options(warn=2)
to 'upgrade' warnings to errors, and then use try() or tryCatch() to
catch them.
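
  A minimal sketch of that idea (the model formula and the data name simdat
are placeholders here, not Erin's actual code):

library(nlme)
op <- options(warn = 2)                       # warnings now signal errors
fit <- try(lme(y ~ time, random = ~ 1 | id, data = simdat), silent = TRUE)
options(op)                                   # restore the old warning level
if (inherits(fit, "try-error")) {
  ## redo this iteration with freshly simulated data
}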

  More fancily, I used code that looked something like this to save
warnings as I went along (sorry about the <<-) in a recent simulation
study.  You could also check w$message to do different things in the
case of different warnings.

withCallingHandlers(tryCatch(fun(n=nvec[j],tau=tauvec[i],...),
    error = function(e) {
      warn[k,i,j] <<- paste("ERROR:", e$message)
      NA_ans}),
    warning = function(w) {
      warn[k,i,j] <<- w$message
      invokeRestart("muffleWarning")
    })

In the slightly longer run, we are working on getting the development
(lme4Eigen) version of lme4 to save convergence warnings in an
accessible slot.  I don't know if I would hold my breath for this
to be back-ported to nlme, though ...

  Some of these discussions might be better suited for r-sig-mixed-models
at r-project.org


  Ben Bolker

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme model specification problem (Error in MEEM...)

2012-01-06 Thread Pascal A. Niklaus

Dear all,

In lme, models in which a factor is fully contained in another lead to 
an error. This is not the case when using lm/aov.


I understand that these factors are aliased, but believe that such 
models make sense when the factors are fitted sequentially. For example, 
I sometimes fit a factor first as linear term (continuous variable with 
discrete levels, e.g. 1,2,4,6), and then as factor, with the latter then 
accounting for the deviation from linearity.


If x contains the values 1,2,4,6, the model formula then would be y ~ x 
+ as.factor(x)


A complete example is here:

library(nlme)

d <- data.frame(x=rep(c(1,2,4,6),each=2),subj=factor(rep(1:16,each=2)))
d$y <- rnorm(32) + rnorm(length(levels(d$subj)))[d$subj]

anova(lme(y~x,random=~1|subj,data=d))   ## works
anova(lme(y~as.factor(x),random=~1|subj,data=d))## works

anova(lme(y~x+as.factor(x),random=~1|subj,data=d))  ## fails:
# Error in MEEM(object, conLin, control$niterEM) :
#  Singularity in backsolve at level 0, block 1

summary(aov(y~x+as.factor(x)+Error(subj),data=d))   ## works:

# Error: subj
#  Df Sum Sq Mean Sq F value Pr(>F)
# x 1  8.434   8.434   4.780 0.0493 *
# as.factor(x)  2 10.459   5.230   2.964 0.0900 .
# Residuals12 21.176   1.765
# ... rest of output removed ...

I understand I can to some extent work around this limitation by 
modifying the contrast encoding, but I then still don't get an overall 
test for as.factor(x) (in the example above, a test for deviation from 
linearity).


d$xfac <- factor(d$x)
contrasts(d$xfac) <- c(1,2,4,6)
summary(lme(y~xfac,random=~1|subj,data=d))

Is there a way to work around this limitation of lme? Or do I 
mis-specify the model? Thanks for your advice.


Pascal

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme model specification problem (Error in MEEM...)

2012-01-06 Thread Ben Bolker
Pascal A. Niklaus pascal.niklaus at ieu.uzh.ch writes:

 In lme, models in which a factor is fully contained in another lead to 
 an error. This is not the case when using lm/aov.
 
 I understand that these factors are aliased, but believe that such 
 models make sense when the factors are fitted sequentially. For example, 
 I sometimes fit a factor first as linear term (continuous variable with 
 discrete levels, e.g. 1,2,4,6), and then as factor, with the latter then 
 accounting for the deviation from linearity.
 
 If x contains the values 1,2,4,6, the model formula then would be y ~ x 
 + as.factor(x)
 
 A complete example is here:
 
 library(nlme)
 
 d <- data.frame(x=rep(c(1,2,4,6),each=2),subj=factor(rep(1:16,each=2)))
 d$y <- rnorm(32) + rnorm(length(levels(d$subj)))[d$subj]
 
 anova(lme(y~x,random=~1|subj,data=d))   ## works
 anova(lme(y~as.factor(x),random=~1|subj,data=d))## works
 
 anova(lme(y~x+as.factor(x),random=~1|subj,data=d))  ## fails:
 # Error in MEEM(object, conLin, control$niterEM) :
 #  Singularity in backsolve at level 0, block 1
 
 summary(aov(y~x+as.factor(x)+Error(subj),data=d))   ## works:
 
 # Error: subj
 #  Df Sum Sq Mean Sq F value Pr(>F)
 # x 1  8.434   8.434   4.780 0.0493 *
 # as.factor(x)  2 10.459   5.230   2.964 0.0900 .
 # Residuals12 21.176   1.765
 # ... rest of output removed ...
 
 I understand I can to some extent work around this limitation by 
 modifying the contrast encoding, but I then still don't get an overall 
 test for as.factor(x) (in the example above, a test for deviation from 
 linearity).
 
 d$xfac <- factor(d$x)
 contrasts(d$xfac) <- c(1,2,4,6)
 summary(lme(y~xfac,random=~1|subj,data=d))
 
 Is there a way to work around this limitation of lme? Or do I 
 mis-specify the model? Thanks for your advice.

  This question would probably be more appropriate for 
the r-sig-mixed-mod...@r-project.org mailing list.

  A short answer (if you re-post to r-sig-mixed-models I might
answer at greater length) would be that you should be able to quantify
the difference between

  y~x

 and

  y~factor(x)

by an anova comparison of the two *models*, i.e.

anova(m1,m2)
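
A sketch of what that looks like with the objects from the example above
(m1/m2 are just placeholder names; the fits use ML because the fixed
effects of the two models differ):

m1 <- lme(y ~ x,         random = ~ 1 | subj, data = d, method = "ML")
m2 <- lme(y ~ factor(x), random = ~ 1 | subj, data = d, method = "ML")
anova(m1, m2)   # likelihood ratio test for the deviation from linearity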

Another thing to consider would be using orthogonal
polynomial contrasts for factor(x), where summary() would
give you a result for the constant, linear, quadratic ...
terms
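
Roughly, as an untested sketch:

d$xfac <- factor(d$x)
contrasts(d$xfac) <- contr.poly(nlevels(d$xfac))
summary(lme(y ~ xfac, random = ~ 1 | subj, data = d))  # xfac.L, xfac.Q, xfac.C terms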

  Ben Bolker

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme with nested factor and random effect

2011-12-15 Thread Mari Pesek
Hello all,

I'm having difficulty with setting up a mixed model using lme in the
nlme package. To summarize my study, I am testing for effects of
ornamentation on foraging behavior of wolf spiders. I tested spiders
at two different ages (penultimate vs. mature) and of two different
phenotypes (one species tested lacks ornamentation throughout life
[non-ornamented males] while the other acquires ornamentation upon
maturation [i.e. brush-legged males]). I tested a sample of
brush-legged and non-ornamented males (as both penultimates and
matures) in 2009, and an additional sample of brush-legged males in
2010 (as both penultimates and matures again) because I had a very
small sample of brush-legged males in 2009.

I would like to set up my lme so the fixed effects are age
(penultimate vs mature), phenotype (non-ornamented vs brush-legged),
and year (2009 vs 2010) nested within phenotype to test for
differences between the two samples of brush-legged males.
Additionally I want to include id (a unique identification number
given to each spider tested) as a random factor to account for testing
each individual twice (once as a penultimate and once as a mature).

So far I have the following code: lme(behavior ~ age*phenotype,
random=~1|maturity/id, data)

But I don't know how to include the code to nest year within phenotype
while testing for all possible interactions. Any help would be greatly
appreciated.

Thank you!
Mari Pesek

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme with nested factor and random effect

2011-12-15 Thread Ben Bolker
Mari Pesek marifrances at gmail.com writes:

 
 Hello all,
 
 I'm having difficulty with setting up a mixed model using lme in the
 nlme package. To summarize my study, I am testing for effects of
 ornamentation on foraging behavior of wolf spiders. I tested spiders
 at two different ages (penultimate vs. mature) and of two different
 phenotypes (one species tested lacks ornamentation throughout life
 [non-ornamented males] while the other acquires ornamentation upon
 maturation [i.e. brush-legged males]). I tested a sample of
 brush-legged and non-ornamented males (as both penultimates and
 matures) in 2009, and an additional sample of brush-legged males in
 2010 (as both penultimates and matures again) because I had a very
 small sample of brush-legged males in 2009.
 
 I would like to set up my lme so the fixed effects are age
 (penultimate vs mature), phenotype (non-ornamented vs brush-legged),
 and year (2009 vs 2010) nested within phenotype to test for
 differences between the two samples of brush-legged males.
 Additionally I want to include id (a unique identification number
 given to each spider tested) as a random factor to account for testing
 each individual twice (once as a penultimate and once as a mature).
 
 So far I have the following code: lme(behavior ~ age*phenotype,
 random=~1|maturity/id, data)
 
 But I don't know how to include the code to nest year within phenotype
 while testing for all possible interactions. Any help would be greatly
 appreciated.

   I have some thoughts on this.  I think your best bet is

lme(behavior~age*phenotype*year, random=~age|id, data)

or possibly

lme(behavior~age*phenotype + phenotype:year, random=~age|id, data)

(crossing for fixed effects is more or less equivalent to
creating an interaction.  You should also make sure that you
have converted 'year' to a factor rather than a numeric variable ...)
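
For the last point, something like the following (a sketch; "data" stands in
for whatever the data frame is actually called):

data$year <- factor(data$year)   # treat 2009 vs 2010 as levels, not a numeric trend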

  but if you re-post this to the r-sig-mixed mod...@r-project.org list I will
answer more fully ...

  Ben Bolker

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme with nested factor and random effect

2011-12-15 Thread R. Michael Weylandt michael.weyla...@gmail.com


On Dec 15, 2011, at 2:07 PM, Ben Bolker bbol...@gmail.com wrote:

 Mari Pesek marifrances at gmail.com writes:
 
 
 Hello all,
 
 I'm having difficulty with setting up a mixed model using lme in the
 nlme package. To summarize my study, I am testing for effects of
 ornamentation on foraging behavior of wolf spiders. I tested spiders
 at two different ages (penultimate vs. mature) and of two different
 phenotypes (one species tested lacks ornamentation throughout life
 [non-ornamented males] while the other acquires ornamentation upon
 maturation [i.e. brush-legged males]). I tested a sample of
 brush-legged and non-ornamented males (as both penultimates and
 matures) in 2009, and an additional sample of brush-legged males in
 2010 (as both penultimates and matures again) because I had a very
 small sample of brush-legged males in 2009.
 
 I would like to set up my lme so the fixed effects are age
 (penultimate vs mature), phenotype (non-ornamented vs brush-legged),
 and year (2009 vs 2010) nested within phenotype to test for
 differences between the two samples of brush-legged males.
 Additionally I want to include id (a unique identification number
 given to each spider tested) as a random factor to account for testing
 each individual twice (once as a penultimate and once as a mature).
 
 So far I have the following code: lme(behavior ~ age*phenotype,
 random=~1|maturity/id, data)
 
 But I don't know how to include the code to nest year within phenotype
 while testing for all possible interactions. Any help would be greatly
 appreciated.
 
   I have some thoughts on this.  I think your best bet is
 
 lme(behavior~age*phenotype*year, random=~age|id, data)
 
 or possibly
 
 lme(behavior~age*phenotype + phenotype:year, random=~age|id, data)
 
 (crossing for fixed effects is more or less equivalent to
 creating an interaction.  You should also make sure that you
 have converted 'year' to a factor rather than a numeric variable ...)
 
  but if you re-post this to the r-sig-mixed mod...@r-project.org list I will
 answer more fully ...

Note a hyphen got lost along the way (or at least it didn't make it to my 
machine): the email address is r-sig-mixed-mod...@r-project.org

M

 
  Ben Bolker
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme contrast Error in `$<-.data.frame`(`*tmp*`, df, value = numeric(0)) :

2011-11-23 Thread rahulprabhuh
I had the same problem while doing classification using rpart. The mistake I
had made was that the column names in the data frames had spaces and other
special characters. I got the output when I changed this. Hope this helps.
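
If that is the culprit, one generic way to sanitize the names (a sketch; df is
a placeholder for the offending data frame):

names(df) <- make.names(names(df))   # replace spaces/special characters with syntactically valid names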

--
View this message in context: 
http://r.789695.n4.nabble.com/lme-contrast-Error-in-data-frame-tmp-df-value-numeric-0-tp4081333p4099895.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme contrast Error in `$<-.data.frame`(`*tmp*`, df, value = numeric(0)) :

2011-11-17 Thread Tanu Soni
I am trying to run a lme model and some contrast for a matrix .
lnY
 [1] 10.911628 11.198557 11.316971 11.464869 11.575233 11.612101 11.755903
11.722035 11.757705 11.863744 11.846515 11.852721 11.866936 11.838452
11.946680 11.885509
[17] 11.583309 11.750082 11.756005 11.630797 11.705536 11.566722 11.679448
11.703521        NA 11.570949 11.716919 11.573343 11.733770 11.720801
11.804124 11.775074
[33] 11.801669 11.856955 11.875859 11.851852 11.830149 11.920156 11.954247
11.880917 11.806162  7.823646 11.909182        NA        NA 11.912386
12.048816 11.958284
[49] 11.929021 11.986062 11.968418 11.967999 11.911608



plate
 [1] 2 1 2 2 1 1 1 2 2 1 2 1 3 3 3 3 3 3 4 4 4 4 4 4 5 5 5 5 5 5 6 6 6 6 6
6 7 7 7 7 7 7 8 8 8 8 8 9 9 9 9 9 9
Levels: 1 2 3 4 5 6 7 8 9
 gb
 [1] tSac  tSac  tAceK tAceK cDMSO cDMSO tAceK tSac  cDMSO tAceK cDMSO
tSac  cDMSO cDMSO tSac  tSac  tAceK tAceK tSac  cDMSO tSac  tAceK cDMSO
tAceK tSac  cDMSO tAceK cDMSO
[29] tAceK tSac  cDMSO cDMSO tSac  tAceK tSac  tAceK tSac  tAceK cDMSO
cDMSO tAceK tSac  tAceK tSac  cDMSO tAceK tSac  tSac  cDMSO tAceK tSac
tAceK cDMSO
Levels: cDMSO tAceK tSac
 time
 [1] 1hr 1hr 1hr 1hr 1hr 1hr 1hr 1hr 1hr 1hr 1hr 1hr 1hr 1hr 1hr 1hr 1hr
1hr 4hr 4hr 4hr 4hr 4hr 4hr 4hr 4hr 4hr 4hr 4hr 4hr 4hr 4hr 4hr 4hr 4hr 4hr
15m 15m 15m 15m 15m 15m
[43] 15m 15m 15m 15m 15m 15m 15m 15m 15m 15m 15m
Levels: 15m 1hr 4hr


metab2 <- data.frame(plate,lnY,gb,time)
fm1 <- lme(lnY ~ time*gb, random=~1|plate, metab2, na.action=na.omit)
t1 <- contrast(fm1, a = list(gb = "cDMSO", time = "15m"),
               b = list(gb = "tAceK", time = "15m"))
t2 <- contrast(fm1, a = list(gb = "cDMSO", time = "15m"),
               b = list(gb = "tSac",  time = "15m"))
I am doing similar contrasts in 1hr and 4hr time
result:

t1
lme model parameter contrast

   Contrast      S.E.       Lower     Upper    t df Pr(>|t|)
 0.01466447 0.3880718  -0.7459424 0.7752713 0.04 39     0.97
 t2
lme model parameter contrast

  Contrast     S.E.      Lower    Upper    t df Pr(>|t|)
 0.8007098 0.401809 0.01317859 1.588241 1.99 39   0.0533

but it doesn't work when my lnY is


lnY
 [1] 14.08164 14.03683 15.23784 14.86681 15.69648 15.62681 15.38057
13.79152 15.59356 15.26301 15.49928 14.02714 15.54317 15.44776 14.51406
14.26436 14.76043 15.01506
[19] 13.75356 15.36528 13.86303 14.40074 15.39995 14.34945 14.32001
15.41146 14.43210 15.87487 14.31152 13.75980 15.44153 15.72775 13.83677
14.35888 14.08998 14.40057
[37] 15.25646 15.21430 15.21883 15.09338 15.24249 15.15223 15.19692
15.10101 15.16232 15.81154 15.30002 15.31443 15.25059 15.10284 15.38775
15.28618 15.38108
I am able to fit the model, i.e. I am getting my fm1

 t1
lme model parameter contrast

Error in `$<-.data.frame`(`*tmp*`, df, value = numeric(0)) :
  replacement has 0 rows, data has 1

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme convergence error

2011-07-26 Thread ONKELINX, Thierry
Dear Ben,

Maybe the model converges more slowly than other models. Running more iterations 
might solve the problem. Have a look at ?lme and ?lmeControl.
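
For instance (a sketch only -- the iteration counts are arbitrary, and the
model call is copied from your post):

ctrl <- lmeControl(maxIter = 200, msMaxIter = 200, niterEM = 50)
BI.lme <- lme(cd4 ~ time + genBI + genBI:time + C1 + C2 + C11 + C12,
              random = ~ 1 + time | IID, data = d, control = ctrl)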

Best regards,

Thierry

PS R-sig-mixedmodels is a better list for questions on lme().


ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek
team Biometrie & Kwaliteitszorg
Gaverstraat 4
9500 Geraardsbergen
Belgium

Research Institute for Nature and Forest
team Biometrics & Quality Assurance
Gaverstraat 4
9500 Geraardsbergen
Belgium

tel. + 32 54/436 185
thierry.onkel...@inbo.be
www.inbo.be

To call in the statistician after the experiment is done may be no more than 
asking him to perform a post-mortem examination: he may be able to say what the 
experiment died of.
~ Sir Ronald Aylmer Fisher

The plural of anecdote is not data.
~ Roger Brinner

The combination of some data and an aching desire for an answer does not ensure 
that a reasonable answer can be extracted from a given body of data.
~ John Tukey


 -Oorspronkelijk bericht-
 Van: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
 Namens Ben Grannan
 Verzonden: maandag 25 juli 2011 23:00
 Aan: r-help@r-project.org
 Onderwerp: [R] lme convergence error
 
 Hello, I am working from a linux 64 machine on a server with R-2.12 (I can't
 update to 2.13). I am iterating through many linear mixed models for
 longitudinal data and I occasionally receive the following convergence
 error:
 
  BI.lme <- lme(cd4 ~ time + genBI + genBI:time + C1 + C2 + C11 + C12,
 random =~ 1 + time | IID, data = d)
 
 Error in lme.formula(cd4 ~ time + genBI + genBI:time + PC1 +  :
   nlminb problem, convergence error code = 1
   message = false convergence (8)
 
 From iteration to iteration, genBI is the only variable which changes
 and it
 is a numerical variable equal to 1, 2, or 3 for a given IID.  When I remove 
 the
 interaction term genBI:cd4years there is no problem with convergence.
 Also, I have played around with scaling time (days to years, eg) which can 
 solve
 the problem but then leads to convergence issues in other iterations.
 When I shift the time (say in units of years by +2) the model also converges. 
  So
 it seems like the genBI:time is the issue.  Do you have any suggestions for
 exploring why I get these convergence errors only on certain iterations?  I 
 really
 appreciate any suggestions!
 
 Ben
 
 Here is a summary of the included variables just in case it is helpful:
 
  IID (808 groups)  cd4  time ( in years)
 C1
  ACTG_617 :   26   Min.   :   0.0   Min.   :0.   Min.   :-0.032700
  ACTG_1055:   25   1st Qu.: 288.0   1st Qu.:0.4603   1st Qu.: 0.001600
  ACTG_1590:   25   Median : 434.0   Median :1.1534   Median : 0.004300
  ACTG_200 :   23   Mean   : 461.3   Mean   :1.2107   Mean   : 0.002139
  ACTG_420 :   23   3rd Qu.: 599.0   3rd Qu.:1.8630   3rd Qu.: 0.006100
  ACTG_454 :   23   Max.   :2022.0   Max.   :2.7808   Max.   : 0.011300
  (Other)  :11992
 
  C2  C11  C12
 Sex
  Min.   :-0.105800   Min.   :0.000   Min.   :0.   Min.   :1.000
  1st Qu.: 0.002100   1st Qu.:0.000   1st Qu.:0.   1st Qu.:1.000
  Median : 0.007900   Median :0.000   Median :0.   Median :1.000
  Mean   : 0.001803   Mean   :0.367   Mean   :0.3020   Mean   :1.095
  3rd Qu.: 0.011700   3rd Qu.:1.000   3rd Qu.:1.   3rd Qu.:1.000
  Max.   : 0.030800   Max.   :1.000   Max.   :1.   Max.   :2.000
 
  genBI
  Min.   :1.000
  1st Qu.:1.000
  Median :1.000
  Mean   :1.369
  3rd Qu.:2.000
  Max.   :3.000
 
   [[alternative HTML version deleted]]
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme convergence error

2011-07-26 Thread Bert Gunter
1. This thread belongs on the r-sig-mixed-models list ,

2. I would bet that for some of the data sets, the model is
unbalanced/overspecified, causing te convergence issues (imagine
climbing a narrow ridge by zig-zagging back and forth across it).

Cheers,
Bert

On Tue, Jul 26, 2011 at 12:56 AM, ONKELINX, Thierry
thierry.onkel...@inbo.be wrote:
 Dear Ben,

 Maybe the model converges more slowly than other models. Running more 
 iterations might solve the problem. Have a look at ?lme and ?lmeControl.

 Best regards,

 Thierry

 PS R-sig-mixedmodels is a better list for questions on lme().

 
 ir. Thierry Onkelinx
 Instituut voor natuur- en bosonderzoek
 team Biometrie & Kwaliteitszorg
 Gaverstraat 4
 9500 Geraardsbergen
 Belgium

 Research Institute for Nature and Forest
 team Biometrics & Quality Assurance
 Gaverstraat 4
 9500 Geraardsbergen
 Belgium

 tel. + 32 54/436 185
 thierry.onkel...@inbo.be
 www.inbo.be

 To call in the statistician after the experiment is done may be no more than 
 asking him to perform a post-mortem examination: he may be able to say what 
 the experiment died of.
 ~ Sir Ronald Aylmer Fisher

 The plural of anecdote is not data.
 ~ Roger Brinner

 The combination of some data and an aching desire for an answer does not 
 ensure that a reasonable answer can be extracted from a given body of data.
 ~ John Tukey


 -Oorspronkelijk bericht-
 Van: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
 Namens Ben Grannan
 Verzonden: maandag 25 juli 2011 23:00
 Aan: r-help@r-project.org
 Onderwerp: [R] lme convergence error

 Hello, I am working from a linux 64 machine on a server with R-2.12 (I can't
 update to 2.13). I am iterating through many linear mixed models for
 longitudinal data and I occasionally receive the following convergence
 error:

   BI.lme <- lme(cd4 ~ time + genBI + genBI:time + C1 + C2 + C11 + C12,
 random =~ 1 + time | IID, data = d)

 Error in lme.formula(cd4 ~ time + genBI + genBI:time + PC1 +  :
   nlminb problem, convergence error code = 1
   message = false convergence (8)

 From iteration to iteration, genBI is the only variable which changes
 and it
 is a numerical variable equal to 1, 2, or 3 for a given IID.  When I remove 
 the
 interaction term genBI:cd4years there is no problem with convergence.
 Also, I have played around with scaling time (days to years, eg) which can 
 solve
 the problem but then leads to convergence issues in other iterations.
 When I shift the time (say in units of years by +2) the model also 
 converges.  So
 it seems like the genBI:time is the issue.  Do you have any suggestions for
 exploring why I get these convergence errors only on certain iterations?  I 
 really
 appreciate any suggestions!

 Ben

 Here is a summary of the included variables just in case it is helpful:

  IID (808 groups)      cd4              time ( in years)
 C1
  ACTG_617 :   26   Min.   :   0.0   Min.   :0.   Min.   :-0.032700
  ACTG_1055:   25   1st Qu.: 288.0   1st Qu.:0.4603   1st Qu.: 0.001600
  ACTG_1590:   25   Median : 434.0   Median :1.1534   Median : 0.004300
  ACTG_200 :   23   Mean   : 461.3   Mean   :1.2107   Mean   : 0.002139
  ACTG_420 :   23   3rd Qu.: 599.0   3rd Qu.:1.8630   3rd Qu.: 0.006100
  ACTG_454 :   23   Max.   :2022.0   Max.   :2.7808   Max.   : 0.011300
  (Other)  :11992

          C2                  C11                  C12
 Sex
  Min.   :-0.105800   Min.   :0.000   Min.   :0.   Min.   :1.000
  1st Qu.: 0.002100   1st Qu.:0.000   1st Qu.:0.   1st Qu.:1.000
  Median : 0.007900   Median :0.000   Median :0.   Median :1.000
  Mean   : 0.001803   Mean   :0.367   Mean   :0.3020   Mean   :1.095
  3rd Qu.: 0.011700   3rd Qu.:1.000   3rd Qu.:1.   3rd Qu.:1.000
  Max.   : 0.030800   Max.   :1.000   Max.   :1.   Max.   :2.000

      genBI
  Min.   :1.000
  1st Qu.:1.000
  Median :1.000
  Mean   :1.369
  3rd Qu.:2.000
  Max.   :3.000

       [[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Men by nature long to get on to the ultimate truths, and will often
be impatient with elementary studies or fight shy of them. If it were
possible to reach the ultimate truths without the elementary studies
usually prefixed to them, these would not be preparatory studies but
superfluous diversions.

-- Maimonides (1135-1204)

Bert Gunter
Genentech Nonclinical Biostatistics

[R] lme convergence error

2011-07-25 Thread Ben Grannan
Hello, I am working from a linux 64 machine on a server with R-2.12 (I can't
update to 2.13). I am iterating through many linear mixed models for
longitudinal data and I occasionally receive the following convergence
error:

 BI.lme <- lme(cd4 ~ time + genBI + genBI:time + C1 + C2 + C11 + C12,
random =~ 1 + time | IID, data = d)

Error in lme.formula(cd4 ~ time + genBI + genBI:time + PC1 +  :
  nlminb problem, convergence error code = 1
  message = false convergence (8)

From iteration to iteration, genBI is the only variable which changes and it
is a numerical variable equal to 1, 2, or 3 for a given IID.  When I remove
the interaction term genBI:cd4years there is no problem with convergence.
Also, I have played around with scaling time (days to years, eg) which can
solve the problem but then leads to convergence issues in other iterations.
When I shift the time (say in units of years by +2) the model also
converges.  So it seems like the genBI:time is the issue.  Do you have any
suggestions for exploring why I get these convergence errors only on certain
iterations?  I really appreciate any suggestions!

Ben

Here is a summary of the included variables just in case it is helpful:

 IID (808 groups)  cd4  time ( in years)
C1
 ACTG_617 :   26   Min.   :   0.0   Min.   :0.   Min.   :-0.032700
 ACTG_1055:   25   1st Qu.: 288.0   1st Qu.:0.4603   1st Qu.: 0.001600
 ACTG_1590:   25   Median : 434.0   Median :1.1534   Median : 0.004300
 ACTG_200 :   23   Mean   : 461.3   Mean   :1.2107   Mean   : 0.002139
 ACTG_420 :   23   3rd Qu.: 599.0   3rd Qu.:1.8630   3rd Qu.: 0.006100
 ACTG_454 :   23   Max.   :2022.0   Max.   :2.7808   Max.   : 0.011300
 (Other)  :11992

 C2  C11  C12
Sex
 Min.   :-0.105800   Min.   :0.000   Min.   :0.   Min.   :1.000
 1st Qu.: 0.002100   1st Qu.:0.000   1st Qu.:0.   1st Qu.:1.000
 Median : 0.007900   Median :0.000   Median :0.   Median :1.000
 Mean   : 0.001803   Mean   :0.367   Mean   :0.3020   Mean   :1.095
 3rd Qu.: 0.011700   3rd Qu.:1.000   3rd Qu.:1.   3rd Qu.:1.000
 Max.   : 0.030800   Max.   :1.000   Max.   :1.   Max.   :2.000

 genBI
 Min.   :1.000
 1st Qu.:1.000
 Median :1.000
 Mean   :1.369
 3rd Qu.:2.000
 Max.   :3.000

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] LME and overall treatment effects

2011-07-15 Thread ONKELINX, Thierry
Dear Mark,

Interpreting one of the main effects when they are part of an interaction is, 
AFAIK, not possible.
Your statement about comparing treatments when Year is continuous is not 
correct. The parameters of treatment assume that Year == 0! Which might lead to 
very strange effect when year is not centered to a year close to the observed 
years. Have a look at the example below

set.seed(123456)
dataset <- expand.grid(cYear = 0:20, Treatment = factor(c("A", "B", "C")),
                       Obs = 1:3)
dataset$Year <- dataset$cYear + 2000
Trend <- c(A = 1, B = 0.1, C = -0.5)
TreatmentEffect <- c(A = 2, B = -1, C = 0.5)
sdNoise <- 1
dataset$Value <- with(dataset, TreatmentEffect[Treatment] + Trend[Treatment] *
                        cYear) + rnorm(nrow(dataset), sd = sdNoise)

lm(Value ~ Year * Treatment, data = dataset)
lm(Value ~ cYear * Treatment, data = dataset)


If you want to focus on the treatment effect alone but take the year effect 
into account, then add Year as a random effect.

library(lme4)
lmer(Value ~ 0 + Treatment + (0 + Treatment|Year), data = dataset)

In your case you want to cross the random effect of year with those of plot. 
Crossed random effects are hard to do with the nlme package but easy with the 
lme4 package.

Model <- lmer(Species ~ 0 + Treatment + (0 + Treatment|Year) + (1|Plot/Quadrat),
              na.action = na.omit, data = UDD)

Best regards,

Thierry


ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek
team Biometrie & Kwaliteitszorg
Gaverstraat 4
9500 Geraardsbergen
Belgium

Research Institute for Nature and Forest
team Biometrics & Quality Assurance
Gaverstraat 4
9500 Geraardsbergen
Belgium

tel. + 32 54/436 185
thierry.onkel...@inbo.be
www.inbo.be

To call in the statistician after the experiment is done may be no more than 
asking him to perform a post-mortem examination: he may be able to say what the 
experiment died of.
~ Sir Ronald Aylmer Fisher

The plural of anecdote is not data.
~ Roger Brinner

The combination of some data and an aching desire for an answer does not ensure 
that a reasonable answer can be extracted from a given body of data.
~ John Tukey

 -Oorspronkelijk bericht-
 Van: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
 Namens Mark Bilton
 Verzonden: donderdag 14 juli 2011 22:05
 Aan: Bert Gunter
 CC: r-help@r-project.org
 Onderwerp: Re: [R] LME and overall treatment effects
 
 Ok...lets try again with some code...
 
 ---
 Hello fellow R users,
 
 I am having a problem finding the estimates for some overall treatment effects
 for my mixed models using 'lme' (package nlme). I hope someone can help.
 
 Firstly then, the model:
 The data: Plant biomass (log transformed) Fixed Factors: Treatment(x3 Dry, 
 Wet,
 Control) Year(x8 2002-2009) Random Factors: 5 plots per treatment, 10 quadrats
 per plot (N=1200 (3*5*10*8)).
 
 I am modelling this in two ways, firstly with year as a continuous variable
 (interested in the difference in estimated slope over time in each treatment
 'year*treatment'), and secondly with year as a categorical variable 
 (interested in
 difference between 'treatments').
 
 --
 ie: (with Year as either numeric or factor)
 
  Model <- lme(Species~Year*Treatment,random=~1|Plot/Quadrat,na.action =
  na.omit,data=UDD)
 -
 
 When using Year as a continuous variable, the output of the lme means that I
 can compare the 3 treatments within my model...
 i.e. it takes one of the Treatment*year interactions as the baseline and
 compares (contrasts) the other two to that.
 -
 ie
 
 Fixed effects: Species ~ Year * Treatment
                      Value Std.Error   DF   t-value p-value
  (Intercept)     1514.3700  352.7552 1047  4.292978  0.0000
  Year              -0.7519    0.1759 1047 -4.274786  0.0000
  Treatment0      -461.9500  498.8711   12 -0.925991  0.3727
  Treatment1     -1355.0450  498.8711   12 -2.716222  0.0187
  Year:Treatment0    0.2305    0.2488 1047  0.926537  0.3544
  Year:Treatment1    0.6776    0.2488 1047  2.724094  0.0066
 
 so Year:Treatment0 differs from baseline Year:Treatment-1 by 0.2305 and
 Year:Treatment1 is significantly different (p=0.0066) from
 Year:Treatment-1
 --
 
 I can then calculate the overall treatment*year effect using 
 'anova.lme(Model)'.
 -
  anova.lme(Model1)
                 numDF denDF   F-value p-value
  (Intercept)        1  1047 143.15245  <.0001
  Year               1  1047  19.56663  <.0001
  Treatment          2    12   3.73890  0.0547
  Year:Treatment     2  1047   3.83679  0.0219
 
 so there is an overall difference in slope between treatments (Year:Treatment
 interaction) p=0.0219
 --
 
 However, the problem comes when I use Year as a categorical variable.
 Here, I am interested solely in the Treatment effect (not the interaction with
 year). However, the output

[R] LME and overall treatment effects

2011-07-14 Thread Mark Bilton

Hello fellow R users,

I am having a problem finding the estimates for some overall treatment  
effects for my mixed models using 'lme' (package nlme). I hope someone  
can help.


Firstly then, the model:
The data: Plant biomass (log transformed)
Fixed Factors: Treatment(x3 Dry, Wet, Control) Year(x8 2002-2009)
Random Factors: 5 plots per treatment, 5 quadrats per plot (N=594 (3*5*5*8)
with 6 missing values).

I am modelling this in two ways, firstly with year as a continuous variable
(interested in the difference in estimated slope over time in each treatment
'year*treatment'), and secondly with year as a categorical variable
(interested in difference between 'treatments').

When using Year as a continuous variable, the output of the lme means  
that I can compare the 3 treatments within my model...
i.e. it takes one of the Treatment*year interactions as the baseline  
and compares (contrasts) the other two to that.
I can then calculate the overall treatment*year effect using  
'anova.lme(Model).


However, the problem comes when I use Year as a categorical variable.  
Here, I am interested solely in the Treatment effect (not the  
interaction with year). However, the output for the two labelled  
'Treatment's against the third comparison, are not the overall effect  
but are a comparison within a year (2002). I can still get my overall  
effect (using anova.lme) but how do I calculate the estimates (with  
P-Values if possible) for each separate overall treatment comparison
(not within year). I tried 'glht' (package 'multcomp') but this only
works if there are no interactions present, otherwise again it gives a  
comparison for one particular year.


Very grateful for any assistance,
Mark

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] LME and overall treatment effects

2011-07-14 Thread Bert Gunter
Probably no hope of help until you do as the posting guide asks:

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and *** provide commented, minimal, self-contained, reproducible code.***


-- Bert

On Thu, Jul 14, 2011 at 11:02 AM, Mark Bilton
mark.bil...@uni-tuebingen.de wrote:
 Hello fellow R users,

 I am having a problem finding the estimates for some overall treatment
 effects for my mixed models using 'lme' (package nlme). I hope someone can
 help.

 Firstly then, the model:
 The data: Plant biomass (log transformed)
 Fixed Factors: Treatment(x3 Dry, Wet, Control) Year(x8 2002-2009)
 Random Factors: 5 plots per treatment, 5 quadrats per plot (N=594 (3*5*5*8)
 with 6 missing values).

 I am modelling this in two ways, firstly with year as a continuous variable
 (interested in the difference in estimated slope over time in each treatment
 'year*treatment'), and secondly with year as a categorical variable
 (interested in difference between 'treatments').

 When using Year as a continuous variable, the output of the lme means that I
 can compare the 3 treatments within my model...
 i.e. it takes one of the Treatment*year interactions as the baseline and
 compares (contrasts) the other two to that.
 I can then calculate the overall treatment*year effect using
 'anova.lme(Model).

 However, the problem comes when I use Year as a categorical variable. Here,
 I am interested solely in the Treatment effect (not the interaction with
 year). However, the output for the two labelled 'Treatment's against the
 third comparison, are not the overall effect but are a comparison within a
 year (2002). I can still get my overall effect (using anova.lme) but how do
 I calculate the estimates (with P-Values if possible) for each separate
 overall treatment comparison (not within year). I tried 'glht' (package
 'multcomp') but this only works if there are no interactions present,
 otherwise again it gives a comparison for one particular year.

 Very grateful for any assistance,
 Mark

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Men by nature long to get on to the ultimate truths, and will often
be impatient with elementary studies or fight shy of them. If it were
possible to reach the ultimate truths without the elementary studies
usually prefixed to them, these would not be preparatory studies but
superfluous diversions.

-- Maimonides (1135-1204)

Bert Gunter
Genentech Nonclinical Biostatistics

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] LME and overall treatment effects

2011-07-14 Thread Mark Bilton

Ok...lets try again with some code...

---
Hello fellow R users,

I am having a problem finding the estimates for some overall treatment  
effects for my mixed models using 'lme' (package nlme). I hope someone  
can help.


Firstly then, the model:
The data: Plant biomass (log transformed)
Fixed Factors: Treatment(x3 Dry, Wet, Control) Year(x8 2002-2009)
Random Factors: 5 plots per treatment, 10 quadrats per plot (N=1200  
(3*5*10*8)).


I am modelling this in two ways, firstly with year as a continuous variable
(interested in the difference in estimated slope over time in each treatment
'year*treatment'), and secondly with year as a categorical variable
(interested in difference between 'treatments').

--
ie: (with Year as either numeric or factor)

Model <- lme(Species~Year*Treatment,random=~1|Plot/Quadrat,na.action =
na.omit,data=UDD)

-

When using Year as a continuous variable, the output of the lme means  
that I can compare the 3 treatments within my model...
i.e. it takes one of the Treatment*year interactions as the baseline  
and compares (contrasts) the other two to that.

-
ie

Fixed effects: Species ~ Year * Treatment
                    Value Std.Error   DF   t-value p-value
(Intercept)     1514.3700  352.7552 1047  4.292978  0.0000
Year              -0.7519    0.1759 1047 -4.274786  0.0000
Treatment0      -461.9500  498.8711   12 -0.925991  0.3727
Treatment1     -1355.0450  498.8711   12 -2.716222  0.0187
Year:Treatment0    0.2305    0.2488 1047  0.926537  0.3544
Year:Treatment1    0.6776    0.2488 1047  2.724094  0.0066

so Year:Treatment0 differs from baseline Year:Treatment-1 by 0.2305
and Year:Treatment1 is significantly different (p=0.0066) from  
Year:Treatment-1

--

I can then calculate the overall treatment*year effect using  
'anova.lme(Model)'.

-

anova.lme(Model1)

               numDF denDF   F-value p-value
(Intercept)        1  1047 143.15245  <.0001
Year               1  1047  19.56663  <.0001
Treatment          2    12   3.73890  0.0547
Year:Treatment     2  1047   3.83679  0.0219

so there is an overall difference in slope between treatments  
(Year:Treatment interaction) p=0.0219

--

However, the problem comes when I use Year as a categorical variable.  
Here, I am interested solely in the Treatment effect (not the  
interaction with year). However, the output for the two labelled  
'Treatment's against the third comparison, are not the overall effect  
but are a comparison within a year (2002).

--
Fixed effects: Species ~ Year * Treatment
Value Std.Error   DF   t-value p-value
(Intercept)  6.42  1.528179 1029  4.201079  0.0000
Year2003 4.10  1.551578 1029  2.642471  0.0084
Year2004 5.00  1.551578 1029  3.222526  0.0013
Year2005-1.52  1.551578 1029 -0.979648  0.3275
Year2006-3.08  1.551578 1029 -1.985076  0.0474
Year2007-2.40  1.551578 1029 -1.546813  0.1222
Year2008 2.24  1.551578 1029  1.443692  0.1491
Year2009-4.30  1.551578 1029 -2.771372  0.0057
Treatment0   0.46  2.161171   12  0.212848  0.8350
Treatment1   0.50  2.161171   12  0.231356  0.8209
Year2003:Treatment0 -2.46  2.194262 1029 -1.121106  0.2625
Year2004:Treatment0 -1.34  2.194262 1029 -0.610684  0.5415
Year2005:Treatment0  0.34  2.194262 1029  0.154950  0.8769
Year2006:Treatment0  1.60  2.194262 1029  0.729174  0.4661
Year2007:Treatment0  1.76  2.194262 1029  0.802092  0.4227
Year2008:Treatment0 -3.22  2.194262 1029 -1.467464  0.1426
Year2009:Treatment0  1.80  2.194262 1029  0.820321  0.4122
Year2003:Treatment1  0.22  2.194262 1029  0.100261  0.9202
Year2004:Treatment1  3.48  2.194262 1029  1.585954  0.1131
Year2005:Treatment1  5.00  2.194262 1029  2.278670  0.0229
Year2006:Treatment1  4.70  2.194262 1029  2.141950  0.0324
Year2007:Treatment1  6.08  2.194262 1029  2.770863  0.0057
Year2008:Treatment1  2.32  2.194262 1029  1.057303  0.2906
Year2009:Treatment1  5.56  2.194262 1029  2.533881  0.0114

so Treatment0 (in year 2002) is different to baseline Treatment-1 (in  
year 2002)  by 0.46 p=0.8350



I can still get my overall effect (using anova.lme) but how do I  
calculate the estimates (with P-Values if possible) for each separate  
overall treatment comparison (not within year). I can do this in SAS  
using 'estimate' or 'lsmeans', but for various reasons I want to do it  
in R as well.
I tried 'glht' (package 'multcomp') but this only works if there are  
no interactions present, otherwise again it gives a comparison for one  
particular year.


--
ie
require(multcomp)
summary(glht(Model, linfct = mcp(Treatment = "Tukey")))

(sorry, I can't get this to work at home but trust me that it's 

[R] lme convergence failure within a loop

2011-06-22 Thread Sam Nicol

Hi R-users,

I'm attempting to fit a number of mixed models, all with the same 
structure, across a spatial grid with data points collected at various 
time points within each grid cell. I'm trying to use a 'for' loop to try 
the model fit on each grid cell. In some cells lme does not converge, 
giving me the error:


Error message: In lme.formula(logarea ~ year + summ_d, data = grid2, 
random = ~year +  :

  nlminb problem, convergence error code = 1
  message = iteration limit reached without convergence (9)

When I get the error, the program aborts, and my loop stops.

I'm not too worried about the error, as a generic mixed model structure 
may not be the best fit for every cell. I expect the optimization to 
fail in some places. I want to be able to detect when the algorithm has 
failed to converge automatically, so that I can continue my loop and 
record the places where the model does fit. I've used the lmeControl 
method with returnObject=TRUE options to allow me to continue looping, 
however I want to be able to flag the places where the convergence 
failed so that I can reject these gridcells and not mistakenly claim 
that the model fits at these points. Is there a way to do this?


My example code shows the relevant lines of code-- what I'm hoping for 
is a way to determine that the convergence failed and record this as a 
boolean value, or something similar.


Thanks,

Sam Nicol


#(set working directory)

#read data
grid2 <- read.csv("grid2.csv", header = TRUE, sep = ",", na.strings = "-1")

library(nlme)

#attempt to fit model after setting control options
lmeCtlList <- lmeControl(maxIter=50, msMaxIter=50, tolerance=1e-4, msTol=1e-5,
                         nlmStepMax=5, returnObject=TRUE) #msVerbose=TRUE
global_model3 <- lme(logarea ~ year + summ_d, data = grid2,
                     random = ~ year + summ_d | subject, control = lmeCtlList)
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme convergence failure within a loop

2011-06-22 Thread Daniel Malter
I am not an expert in this. But try try()   :)

hth,
Daniel
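
A minimal sketch of what that could look like inside the loop (cellData is a
placeholder for the current grid cell's data, and 'converged' is a made-up
flag name):

fit <- try(lme(logarea ~ year + summ_d, data = cellData,
               random = ~ year + summ_d | subject), silent = TRUE)
converged <- !inherits(fit, "try-error")   # FALSE when lme stopped with an error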


Sam Nicol wrote:
 
 Hi R-users,
 
 I'm attempting to fit a number of mixed models, all with the same 
 structure, across a spatial grid with data points collected at various 
 time points within each grid cell. I'm trying to use a 'for' loop to try 
 the model fit on each grid cell. In some cells lme does not converge, 
 giving me the error:
 
 Error message: In lme.formula(logarea ~ year + summ_d, data = grid2, 
 random = ~year +  :
nlminb problem, convergence error code = 1
message = iteration limit reached without convergence (9)
 
 When I get the error, the program aborts, and my loop stops.
 
 I'm not too worried about the error, as a generic mixed model structure 
 may not be the best fit for every cell. I expect the optimization to 
 fail in some places. I want to be able to detect when the algorithm has 
 failed to converge automatically, so that I can continue my loop and 
 record the places where the model does fit. I've used the lmeControl 
 method with returnObject=TRUE options to allow me to continue looping, 
 however I want to be able to flag the places where the convergence 
 failed so that I can reject these gridcells and not mistakenly claim 
 that the model fits at these points. Is there a way to do this?
 
 My example code shows the relevant lines of code-- what I'm hoping for 
 is a way to determine that the convergence failed and record this as a 
 boolean value, or something similar.
 
 Thanks,
 
 Sam Nicol
 
 
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.
 

--
View this message in context: 
http://r.789695.n4.nabble.com/lme-convergence-failure-within-a-loop-tp3618778p3618902.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme, stepAIC, predict: scope and visibility

2011-06-06 Thread Boris Hayete
Hello all,

I've run into a problem where I can't run predict.lme on an object simplified 
via a stepAIC.  A similar post has been recorded on this list:
https://stat.ethz.ch/pipermail/r-help/2008-May/162047.html
but in my case, I'm going to great lengths to not repeat that poster's error 
and still coming up short.  Any advice would be much appreciated.  It would 
seem that, after stepAIC, predict.lme cannot find the training data for the lme 
object (and why is it even needed?).

Here's some example code:

foo = function() {
x = c(1:20, 1:20)
y = c(1:20, 5 + (1:20)) + rnorm(40)
s = c(rep(1, 20), rep(2, 20))

library(lattice)
xyplot(y~x|s)

dframe = data.frame(x, y, s)

m = lme(y~x, random=~1|s, method='ML')

newdf = data.frame(x=40, s = 2)
res = predict(m, newdata=newdf)
print(res)

m2 = stepAIC(m, k=log(nrow(dframe)))
#res2 = predict(m2, newdata=newdf)
res2 = eval(substitute(predict(mL, newdata=nL), list(mL=m2, nL=newdf)))
print(res2)
}

 foo()
   2 
45.86875 
attr(,label)
[1] Predicted values
Start:  AIC=136.4
y ~ x

Error in eval(expr, envir, enclos) : object 'y' not found
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme error: Error in getGroups.data.frame(dataMix, groups)

2011-02-28 Thread John Sorkin
R 2.10.0
Windows XP

I am trying to run lme. I receive the following error message:
My lme code is: 
fitRandom <- lme(values ~ factor(subject),
data=withindata)

Below I have printed the console output, and at the bottom of this message, I 
have printed my code. 
I hope someone can tell my what I am doing wrong.
Thank you,
John
 
 print(withindata)
   subject  values
11   2.3199639
21  -8.5795802
31  -4.1901241
41   0.4588128
51  16.9128232
61   8.9856358
71   1.9303254
81  -1.4320313
91 -15.4225123
10   1   5.9293529
11   2 -29.2014153
12   2  -8.9684986
13   2 -11.9062170
14   2  13.2133887
15   2   1.2491941
16   2  -8.0613768
17   2  -5.6340179
18   2   3.1916857
19   2  -7.7447932
20   2   2.2316354
21   3   0.6444938
22   3   4.6912677
23   3  20.9135073
24   3   2.1433533
25   3  -0.8057022
26   3 -13.0187979
27   3   8.9634065
28   3  13.4815344
29   3   4.6148061
30   3 -18.4781373
31   4  15.5263564
32   4  -2.1993412
33   4   5.1830260
34   4  16.2311097
35   4  -2.5781897
36   4  -3.0167290
37   4  -0.1119353
38   4   1.1983126
39   4  -8.8212143
40   4   3.8895263
 fitRandom <- lme(values ~ factor(subject),
+ data=withindata)
Error in getGroups.data.frame(dataMix, groups) : 
  Invalid formula for groups
 summary(fitRandom)
Error in summary(fitRandom) : object 'fitRandom' not found
 

My code:

library(nlme)

# Define essential constants.
# Number of subject studied.
NSubs <- 4
# Number of observations per subject.
NObs <- 10 
# Between study SD
tau <- 4 
# Within study SD.
sigma <- 8 
# END Define essential constants.

# Define between subject variation
between <- matrix(nrow=10,ncol=1)
between <- rnorm(NSubs,0,tau)
between
# END Define between subject variation.
 
# Define within subject varation.
within <- matrix(nrow=NObs*NSubs,ncol=2)
for (subject in 1:NSubs) {
  # Create a variable defining subject.
  within[c(1:NObs)+((subject-1)*NObs),1] <- subject
  # Create within subject variation.
  within[c(1:NObs)+((subject-1)*NObs),2] <- rnorm(NObs,between[subject],sigma)
}
# END Define within subject variation.

# Create a dataframe to hold values.
withindata - data.frame(subject=within[,1],values=within[,2])
print(withindata[1:4,])

print(withindata)
fitRandom - lme(values ~ subject,
data=withindata)
summary(fitRandom)



John David Sorkin M.D., Ph.D.
Chief, Biostatistics and Informatics
University of Maryland School of Medicine Division of Gerontology
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524
(Phone) 410-605-7119
(Fax) 410-605-7913 (Please call phone number above prior to faxing)

Confidentiality Statement:
This email message, including any attachments, is for th...{{dropped:6}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme error message: Error in getGroups.data.frame(dataMix, groups) :

2011-02-28 Thread John Sorkin
Windows XP 
R 2.10


I am trying to run lme and get the following error:

> fitRandom <- lme(values ~ subject,
+ data=withindata)
Error in getGroups.data.frame(dataMix, groups) : 
  Invalid formula for groups

my data follows, below which is a copy of all my code


 
> print(withindata)
   subject      values
1        1   2.3199639
2        1  -8.5795802
3        1  -4.1901241
4        1   0.4588128
5        1  16.9128232
6        1   8.9856358
7        1   1.9303254
8        1  -1.4320313
9        1 -15.4225123
10       1   5.9293529
11       2 -29.2014153
12       2  -8.9684986
13       2 -11.9062170
14       2  13.2133887
15       2   1.2491941
16       2  -8.0613768
17       2  -5.6340179
18       2   3.1916857
19       2  -7.7447932
20       2   2.2316354
21       3   0.6444938
22       3   4.6912677
23       3  20.9135073
24       3   2.1433533
25       3  -0.8057022
26       3 -13.0187979
27       3   8.9634065
28       3  13.4815344
29       3   4.6148061
30       3 -18.4781373
31       4  15.5263564
32       4  -2.1993412
33       4   5.1830260
34       4  16.2311097
35       4  -2.5781897
36       4  -3.0167290
37       4  -0.1119353
38       4   1.1983126
39       4  -8.8212143
40       4   3.8895263



my code:
library(nlme)

# Define essential constants.
# Number of subject studied.
NSubs <- 4
# Number of observations per subject.
NObs <- 10 
# Between study SD
tau <- 4 
# Within study SD.
sigma <- 8 
# END Define essential constants.

# Define between subject variation
between <- matrix(nrow=10,ncol=1)
between <- rnorm(NSubs,0,tau)
between
# END Define between subject variation.
 
# Define within subject variation.
within <- matrix(nrow=NObs*NSubs,ncol=2)
for (subject in 1:NSubs) {
  # Create a variable defining subject.
  within[c(1:NObs)+((subject-1)*NObs),1] <- subject
  # Create within subject variation.
  within[c(1:NObs)+((subject-1)*NObs),2] <- rnorm(NObs,between[subject],sigma)
}
# END Define within subject variation.

# Create a dataframe to hold values.
withindata <- data.frame(subject=within[,1],values=within[,2])
print(withindata[1:4,])

print(withindata)
fitRandom <- lme(values ~ subject,
data=withindata)
summary(fitRandom)


John David Sorkin M.D., Ph.D.
Chief, Biostatistics and Informatics
University of Maryland School of Medicine Division of Gerontology
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524
(Phone) 410-605-7119
(Fax) 410-605-7913 (Please call phone number above prior to faxing)

Confidentiality Statement:
This email message, including any attachments, is for th...{{dropped:6}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme error: Error in getGroups.data.frame(dataMix, groups)

2011-02-28 Thread Dennis Murphy
Hi:

On Mon, Feb 28, 2011 at 8:42 AM, John Sorkin jsor...@grecc.umaryland.eduwrote:

 R 2.10.0
 Windows XP

 I am trying to run lme. I receive the following error message:
 My lme code is:
 fitRandom - lme(values ~ factor(subject),
 data=withindata)


Where's the random factor? Perhaps you mean

lme(values ~ 1, random = ~ 1 | subject, data = withindata)

or in lme4,

lmer(values ~ 1 + (1 | subject), data = withindata)

HTH,
Dennis

PS: Use of dput() output is more helpful than copying from the console, as
it retains the same classes you have on your end.


 dput(withindata)
structure(list(subject = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L,
3L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L),
values = c(2.3199639, -8.5795802, -4.1901241, 0.4588128,
16.9128232, 8.9856358, 1.9303254, -1.4320313, -15.4225123,
5.9293529, -29.2014153, -8.9684986, -11.906217, 13.2133887,
1.2491941, -8.0613768, -5.6340179, 3.1916857, -7.7447932,
2.2316354, 0.6444938, 4.6912677, 20.9135073, 2.1433533, -0.8057022,
-13.0187979, 8.9634065, 13.4815344, 4.6148061, -18.4781373,
15.5263564, -2.1993412, 5.183026, 16.2311097, -2.5781897,
-3.016729, -0.1119353, 1.1983126, -8.8212143, 3.8895263)), .Names =
c("subject",
"values"), class = "data.frame", row.names = c("1", "2", "3",
"4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15",
"16", "17", "18", "19", "20", "21", "22", "23", "24", "25", "26",
"27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37",
"38", "39", "40"))

Below I have printed the console output, and at the bottom of this message,
 I have printed my code.
 I hope someone can tell my what I am doing wrong.
 Thank you,
 John

  print(withindata)
   subject  values
 11   2.3199639
 21  -8.5795802
 31  -4.1901241
 41   0.4588128
 51  16.9128232
 61   8.9856358
 71   1.9303254
 81  -1.4320313
 91 -15.4225123
 10   1   5.9293529
 11   2 -29.2014153
 12   2  -8.9684986
 13   2 -11.9062170
 14   2  13.2133887
 15   2   1.2491941
 16   2  -8.0613768
 17   2  -5.6340179
 18   2   3.1916857
 19   2  -7.7447932
 20   2   2.2316354
 21   3   0.6444938
 22   3   4.6912677
 23   3  20.9135073
 24   3   2.1433533
 25   3  -0.8057022
 26   3 -13.0187979
 27   3   8.9634065
 28   3  13.4815344
 29   3   4.6148061
 30   3 -18.4781373
 31   4  15.5263564
 32   4  -2.1993412
 33   4   5.1830260
 34   4  16.2311097
 35   4  -2.5781897
 36   4  -3.0167290
 37   4  -0.1119353
 38   4   1.1983126
 39   4  -8.8212143
 40   4   3.8895263
  fitRandom - lme(values ~ factor(subject),
 + data=withindata)
 Error in getGroups.data.frame(dataMix, groups) :
  Invalid formula for groups
  summary(fitRandom)
 Error in summary(fitRandom) : object 'fitRandom' not found
 

 My code:

 library(nlme)

 # Define essential constants.
 # Number of subject studied.
 NSubs - 4
 # Number of observations per subject.
 NObs - 10
 # Between study SD
 tau - 4
 # Within study SD.
 sigma - 8
 # END Define essential constants.

 # Define between subject variation
 between - matrix(nrow=10,ncol=1)
 between - rnorm(NSubs,0,tau)
 between
 # END Define between subject variation.

 # Define within subject varation.
 within - matrix(nrow=NObs*NSubs,ncol=2)
 for (subject in 1:NSubs) {
  # Create a variable defining subject.
  within[c(1:NObs)+((subject-1)*NObs),1] - subject
  # Create within subject variation.
  within[c(1:NObs)+((subject-1)*NObs),2] -
 rnorm(NObs,between[subject],sigma)
 }
 # END Define within subject variation.

 # Create a dataframe to hold values.
 withindata - data.frame(subject=within[,1],values=within[,2])
 print(withindata[1:4,])

 print(withindata)
 fitRandom - lme(values ~ subject,
 data=withindata)
 summary(fitRandom)



 John David Sorkin M.D., Ph.D.
 Chief, Biostatistics and Informatics
 University of Maryland School of Medicine Division of Gerontology
 Baltimore VA Medical Center
 10 North Greene Street
 GRECC (BT/18/GR)
 Baltimore, MD 21201-1524
 (Phone) 410-605-7119
 (Fax) 410-605-7913 (Please call phone number above prior to faxing)

 Confidentiality Statement:
 This email message, including any attachments, is for ...{{dropped:13}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme error: Error in getGroups.data.frame(dataMix, groups)

2011-02-28 Thread John Sorkin
Dennis,
Thank you,
John




John David Sorkin M.D., Ph.D.
Chief, Biostatistics and Informatics
University of Maryland School of Medicine Division of Gerontology
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524
(Phone) 410-605-7119
(Fax) 410-605-7913 (Please call phone number above prior to faxing) Dennis 
Murphy djmu...@gmail.com 2/28/2011 4:08 PM 
Hi:

On Mon, Feb 28, 2011 at 8:42 AM, John Sorkin jsor...@grecc.umaryland.eduwrote:

 R 2.10.0
 Windows XP

 I am trying to run lme. I receive the following error message:
 My lme code is:
 fitRandom - lme(values ~ factor(subject),
 data=withindata)


Where's the random factor? Perhaps you mean

lme(values ~ 1, random = ~ 1 | subject, data = withindata)

or in lme4,

lmer(values ~ 1 + (1 | subject), data = withindata)

HTH,
Dennis

PS: Use of dput() output is more helpful than copying from the console, as
it retains the same classes you have on your end.


 dput(withindata)
structure(list(subject = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L,
3L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L),
values = c(2.3199639, -8.5795802, -4.1901241, 0.4588128,
16.9128232, 8.9856358, 1.9303254, -1.4320313, -15.4225123,
5.9293529, -29.2014153, -8.9684986, -11.906217, 13.2133887,
1.2491941, -8.0613768, -5.6340179, 3.1916857, -7.7447932,
2.2316354, 0.6444938, 4.6912677, 20.9135073, 2.1433533, -0.8057022,
-13.0187979, 8.9634065, 13.4815344, 4.6148061, -18.4781373,
15.5263564, -2.1993412, 5.183026, 16.2311097, -2.5781897,
-3.016729, -0.1119353, 1.1983126, -8.8212143, 3.8895263)), .Names =
c(subject,
values), class = data.frame, row.names = c(1, 2, 3,
4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26,
27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
38, 39, 40))

Below I have printed the console output, and at the bottom of this message,
 I have printed my code.
 I hope someone can tell my what I am doing wrong.
 Thank you,
 John

  print(withindata)
   subject  values
 11   2.3199639
 21  -8.5795802
 31  -4.1901241
 41   0.4588128
 51  16.9128232
 61   8.9856358
 71   1.9303254
 81  -1.4320313
 91 -15.4225123
 10   1   5.9293529
 11   2 -29.2014153
 12   2  -8.9684986
 13   2 -11.9062170
 14   2  13.2133887
 15   2   1.2491941
 16   2  -8.0613768
 17   2  -5.6340179
 18   2   3.1916857
 19   2  -7.7447932
 20   2   2.2316354
 21   3   0.6444938
 22   3   4.6912677
 23   3  20.9135073
 24   3   2.1433533
 25   3  -0.8057022
 26   3 -13.0187979
 27   3   8.9634065
 28   3  13.4815344
 29   3   4.6148061
 30   3 -18.4781373
 31   4  15.5263564
 32   4  -2.1993412
 33   4   5.1830260
 34   4  16.2311097
 35   4  -2.5781897
 36   4  -3.0167290
 37   4  -0.1119353
 38   4   1.1983126
 39   4  -8.8212143
 40   4   3.8895263
  fitRandom - lme(values ~ factor(subject),
 + data=withindata)
 Error in getGroups.data.frame(dataMix, groups) :
  Invalid formula for groups
  summary(fitRandom)
 Error in summary(fitRandom) : object 'fitRandom' not found
 

 My code:

 library(nlme)

 # Define essential constants.
 # Number of subject studied.
 NSubs - 4
 # Number of observations per subject.
 NObs - 10
 # Between study SD
 tau - 4
 # Within study SD.
 sigma - 8
 # END Define essential constants.

 # Define between subject variation
 between - matrix(nrow=10,ncol=1)
 between - rnorm(NSubs,0,tau)
 between
 # END Define between subject variation.

 # Define within subject varation.
 within - matrix(nrow=NObs*NSubs,ncol=2)
 for (subject in 1:NSubs) {
  # Create a variable defining subject.
  within[c(1:NObs)+((subject-1)*NObs),1] - subject
  # Create within subject variation.
  within[c(1:NObs)+((subject-1)*NObs),2] -
 rnorm(NObs,between[subject],sigma)
 }
 # END Define within subject variation.

 # Create a dataframe to hold values.
 withindata - data.frame(subject=within[,1],values=within[,2])
 print(withindata[1:4,])

 print(withindata)
 fitRandom - lme(values ~ subject,
 data=withindata)
 summary(fitRandom)



 John David Sorkin M.D., Ph.D.
 Chief, Biostatistics and Informatics
 University of Maryland School of Medicine Division of Gerontology
 Baltimore VA Medical Center
 10 North Greene Street
 GRECC (BT/18/GR)
 Baltimore, MD 21201-1524
 (Phone) 410-605-7119
 (Fax) 410-605-7913 (Please call phone number above prior to faxing)

 Confidentiality Statement:
 This email message, including any attachments, is for ...{{dropped:17}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme in loop help

2011-02-25 Thread Ram H. Sharma
Dear R users

I am a new R user; excuse me for bothering you, but I have worked hard to
find a solution:

# data
ID <- c(1:100)
set.seed(21)
y <- rnorm(100, 10,2)
x1 <- rnorm(100, 10,2)
x2 <- rnorm(100, 10,2)
x3 <- rnorm(100, 10,2)
x4 <- rnorm(100, 10,2)
x5 <- rnorm(100, 10,2)
x6 <- rnorm(100, 10,2)
mydf <- data.frame(ID,y, x1,x2, x3, x4, x5, x6)
# just separate analyses
require(nlme)
mod1 <- lme(fixed= y ~ x1 + x2, random = ~ 1 | ID) # I want to put subject/ID
# as random -- is that correct??
m1 <- anova(mod1)
m1[1,4] # I am not getting the exact value below .0001???, how to get one?
m1[2,4]
m1[3,4]
# putting in a loop (this is the part I could not get to work)
for(i in length(mydf)){
  mylme <- NULL
  print(m1[i+1,] <- lme(fixed= mydf$y ~ mydf$x[,i] + mydf$x[,i+1], random = ~
1 | ID))}

I could not get this to work! However, I have the following output in
mind:
# The output I have in mind: a data frame with the following
model  p-intercept  variable1  p-value1  variable2  p-value2
1      <.0001       x1         0.9452    x2         0.5455
2      <.0001       x3         0.3301    x4         0.9905
3      <.0001       x5         0.9971    x6         0.0487

I need a solution as I have tons of variables to work on. I am showing these
six just as an example.
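
A sketch of one way to automate this (assuming the predictors really are
used in fixed pairs x1+x2, x3+x4, x5+x6; whether a per-ID random intercept
makes sense with only one row per ID is a separate question):

library(nlme)

prs <- list(c("x1", "x2"), c("x3", "x4"), c("x5", "x6"))

res <- do.call(rbind, lapply(seq_along(prs), function(i) {
  vars <- prs[[i]]
  form <- reformulate(vars, response = "y")     # builds y ~ x1 + x2, etc.
  fit  <- lme(fixed = form, random = ~ 1 | ID, data = mydf)
  a    <- anova(fit)                            # indexing gives exact p-values
  data.frame(model = i,
             p.intercept = a["(Intercept)", "p-value"],
             variable1 = vars[1], p.value1 = a[vars[1], "p-value"],
             variable2 = vars[2], p.value2 = a[vars[2], "p-value"])
}))
res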

Thank you in advance;

ram sharma

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme-post hoc

2011-01-18 Thread Prabhath Pushpakumara
Hi all,

I analysed my data with lme, and after that I spent a lot of time on
mean separation of treatments (post hoc), but I still couldn't get
through it. These are my data set and the R scripts I tried.

replication fertilizer variety plot height
1           level1     var1    1504     52
1           level1     var3    1506     59
1           level1     var4    1509     54
1           level1     var2    1510     48
2           level1     var1    2604     47
2           level1     var4    2606     51
2           level1     var3    2607     55
2           level1     var2    2609     44
3           level1     var2    3401     46
3           level1     var3    3402     64
3           level1     var4    3403     64
3           level1     var1    3404     50
1           level2     var3    1601     59
1           level2     var1    1605     56
1           level2     var2    1610     53
1           level2     var4    1611     53
2           level2     var2    2403     56
2           level2     var1    2405     61
2           level2     var4    2407     69
2           level2     var3    2413     70
3           level2     var3    3508     67
3           level2     var4    3511     73
3           level2     var2    3512     62
3           level2     var1    3513     67
1           level3     var4    1406     77
1           level3     var3    1408     74
1           level3     var1    1409     71
1           level3     var2    1410     69
2           level3     var4    2501     62
2           level3     var3    2507     58
2           level3     var2    2508     56
2           level3     var1    2513     63
3           level3     var3    3601     73
3           level3     var2    3603     59
3           level3     var1    3609     56
3           level3     var4    3612     61

modela <- lme(height~variety*fertilizer, random=~1|replication)

summary(modela)

anova(modela)


library(multcomp)

hgt <- glht(modela, linfct=mcp(fertilizer="Tukey"))

summary(hgt)


If anybody can help me carry out Tukey (or LSD) comparisons with lme, that
would be highly appreciated.
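
A sketch of the usual pattern (assuming the data are in a data frame, called
dat here just for this example, and that replication, fertilizer and variety
are factors; with a variety:fertilizer interaction in the model, multcomp
will warn that main-effect comparisons can be hard to interpret):

library(nlme)
library(multcomp)

dat$replication <- factor(dat$replication)
dat$fertilizer  <- factor(dat$fertilizer)
dat$variety     <- factor(dat$variety)

modela <- lme(height ~ variety * fertilizer, random = ~ 1 | replication,
              data = dat)
hgt <- glht(modela, linfct = mcp(fertilizer = "Tukey"))
summary(hgt)        # all pairwise (Tukey-style) comparisons of fertilizer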

Prabhath

University of Saskatchewan



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme-post hoc

2011-01-18 Thread Dieter Menne


plsc wrote:
 
 
 I analysed my data with lme and after that I spent a lot of time for
  mean separation of treatments (post hoc). But still I couldn’t make 
 through it. This is my data set and R scripts I tried.
 
 
 
 3 level3  var4361261
 
 modela <- lme(height~variety*fertilizer, random=~1|replication)
 library(multcomp)
 hgt <- glht(modela, linfct=mcp(fertilizer="Tukey"))
 Any body can help me to proceed tukey (or lsd) with lme that would be
 highly appreciated.
 
 

Try a search on tukey and lme, and make sure that you have a current version
of R installed. The combination was broken on some earlier versions.

http://r-project.markmail.org/search/?q=tukey+lme

Dieter

-- 
View this message in context: 
http://r.789695.n4.nabble.com/lme-post-hoc-tp3224436p3224655.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] [R-lme] Extract estimated variances from output of lme?

2010-12-01 Thread Tingting Zhan
Hi all,

I have the output of summary() of an lme object called lme.exp1, for
example

#
 summary(lme.exp1)

Linear mixed-effects model fit by REML
  Data: DATA
  Log-restricted-likelihood: -430.8981
  Fixed: fixed.exp1


Random effects:
 Formula: ~-1 + mu1 + log.sig1 | animID
 Structure: Diagonal
 mu1  log.sig1 Residual
StdDev: 2.605223 0.1013134 3.876431

Correlation Structure: General
 Formula: ~1 | animID/fieldN
 Parameter estimate(s):
 Correlation:
  1
2 0.612
Variance function:
 Structure: Different standard deviations per stratum
 Formula: ~1 | type
 Parameter estimates:
   mu1   log.sig1
1. 0.03722675
Number of Observations: 376
Number of Groups: 32
##

If I want to reuse these estimates, I can use

 summary(lme.exp1)$coefficients$fixed;

to extract fixed effect estimates, and

  summary(lme.exp1)$sigma

for the common variance parameter sigma.   But if I need the covariance
estimates 0.612 and 0.0372, do I have a function to extract these numbers
too?   I'm looking for something similar to function ranef() [of course
ranef() does not serve my purpose].  Any advice appreciated!

Tingting

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] [R-lme] Extract estimated variances from output of lme?

2010-12-01 Thread Ben Bolker
Tingting Zhan tingting.zhan at jefferson.edu writes:

 
 Hi all,
 
 I have the output of summary() of an lme object called lme.exp1, for
 example
 
 #
  summary(lme.exp1)


  [snip]

 for the common variance parameter sigma.   But if I need the covariance
 estimates 0.612 and 0.0372, do I have a function to extract these numbers
 too?   I'm looking for something similar to function ranef() [of course
 ranef() does not serve my purpose].  Any advice appreciated!
 
 Tingting


  ?VarCorr

and

str(VarCorr(lme.exp1))

to figure out how to extract the values you want (it's
a slightly odd object, a matrix of mode 'character' with
labels interspersed with numeric values ...)
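
A small sketch of what that extraction can look like (assuming the fitted
object is lme.exp1, as in the summary above):

vc <- VarCorr(lme.exp1)         # character matrix of variances / std. devs
as.numeric(vc[, "StdDev"])      # e.g. the StdDev column as plain numbers

The correlation-structure and variance-function estimates (the 0.612 and
0.0372 in the summary) are not part of VarCorr(); one place they can be
read from is the fitted model structure, e.g.

coef(lme.exp1$modelStruct$corStruct, unconstrained = FALSE)
coef(lme.exp1$modelStruct$varStruct, unconstrained = FALSE, allCoef = TRUE)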

  r-sig-mixed-models is probably the best list for this
sort of query ...

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme with time series

2010-11-24 Thread María José Sorrondegui
Hi:

I'm trying to fit a linear mixed effect model to two time series. My data
base has 3 columns, number of observation, y, and x. Both y and x are market
and specific asset return (both measured on a daily basis for the last 3
years). I want to explain y in terms of x.

Response=y
fixed=intercept
random=x

I've tried lme(y~1,random=~x) and got: Error en
getGroups.data.frame(dataMix, groups) :   Invalid formula for groups

Then I tried lme(rcopec~1,random=~ripsa|obs) (I don't have groups; each
group is a period of time and has one observation per group). I got
Error in lme.formula(rcopec ~ 1, random = ~ripsa | obs) :
  nlminb problem, convergence error code = 1   message = false convergence
(8)
Warning message:
In lme.formula(rcopec ~ 1, random = ~ripsa | obs) :
  Fewer observations than random effects in all level 1 groups

I can't make it work. Is there any other function besides lme to do this?
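
One candidate, sketched below (assuming the goal is a single-level
regression of the asset return on the market return with serially
correlated daily errors; mydata is a placeholder name for the 3-column
data frame):

library(nlme)
fit <- gls(rcopec ~ ripsa, data = mydata,
           correlation = corAR1(form = ~ obs))  # AR(1) errors indexed by obs
summary(fit)

With no real grouping structure, gls() (one level, plus a correlation
structure) may be a better match than lme().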

Can some one help me out?

Thanks a lot in advance.

Regards.

María

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme Random Effects and Covariates

2010-11-18 Thread patze003

1.  I'm attempting to test for Random Effects.  I've grouped the data on
subject (grid) but want to use lme to build the model without subject as a
RE then add it and do anova between the 2 models.  This is the result I get
and it appears it's adding Random Effects.

tmp.dat4 <- groupedData(Trials ~ 1 | grid, data = tmp.dat4)

mod2a <- lme(Trials ~ factor(group_id) + reversal, data = tmp.dat4,
na.action = na.omit, method = "REML")

 summary(mod2a)
Linear mixed-effects model fit by REML
 Data: tmp.dat4 
   AIC  BIClogLik
  4544.054 4587.718 -2262.027

Random effects:
 Formula: ~factor(group_id) + reversal | grid
 Structure: General positive-definite
  StdDevCorr 
(Intercept)   10.505303 (Intr) fc(_)2
factor(group_id)2  9.830679 -0.778   
reversal2  7.106839 -0.275  0.023
Residual   9.995963  

Fixed effects: Trials ~ factor(group_id) + reversal 
  Value Std.Error  DF   t-value p-value
(Intercept)   23.275874  1.876185 510 12.405960   0e+00
factor(group_id)2 -7.639842  2.151004  72 -3.551757   7e-04
reversal2  7.681495  1.206858 510  6.364869   0e+00
 Correlation: 
  (Intr) fc(_)2
factor(group_id)2 -0.785   
reversal2 -0.308 -0.015

Standardized Within-Group Residuals:
   Min Q1Med Q3Max 
-2.6884393 -0.5059063 -0.1892908  0.4944976  2.8477377 

Number of Observations: 585
Number of Groups: 74 

2.  Secondly, is this the correct way to add covariates (such as age)?

mod2i <- lme(Trials ~ factor(group_id)*factor(reversal) * age, data =
tmp.dat4, random = ~ 1 | grid, na.action = na.omit, method = "ML")
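
On question 1: when lme() is given a groupedData object and no random
argument, the default random-effects formula is taken from the fixed-effects
formula, which is why the fit above shows random effects for
factor(group_id) and reversal rather than just a random intercept. A sketch
of one common way to test the subject random effect (not the only approach;
the likelihood-ratio p-value is conservative because the null hypothesis
puts the variance on the boundary):

library(nlme)
m.gls <- gls(Trials ~ factor(group_id) + reversal, data = tmp.dat4,
             na.action = na.omit, method = "REML")
m.lme <- lme(Trials ~ factor(group_id) + reversal, random = ~ 1 | grid,
             data = tmp.dat4, na.action = na.omit, method = "REML")
anova(m.gls, m.lme)   # compares fits without / with the random intercept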

-- 
View this message in context: 
http://r.789695.n4.nabble.com/lme-Random-Effects-and-Covariates-tp3049181p3049181.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme weights glht

2010-11-17 Thread Sibylle Stöckli

Dear R-user

I used lme to fit a linear mixed model including weights=varPower().
Additionally, I wanted to use glht to calculate Tukey-Kramer multiple
comparisons.


error:

> glht(modelF, linfct=mcp(Species="Tukey"))
Error in glht.matrix(model = list(modelStruct = list(reStruct =  
list(SubPlot = -0.305856275920955,  :

  ‘ncol(linfct)’ is not equal to ‘length(coef(model))’
> glht(modelF, linfct=mcp(Diversity="Tukey"))
Error in mcp2matrix(model, linfct = linfct) :
  Variable(s) ‘Diversity’ of class ‘integer’ is/are not contained as  
a factor in ‘model’.



Does anyone know whether glht fails when weights/interactions are used
in lme?


Thanks
Sibylle


 LME MODEL

 Kaltenborn <- read.table("Kaltenborn_YEARS.txt", na.strings="*",
header=TRUE)

 library(nlme)
 modelF <- lme(asin(sqrt(PropMortality))~Diversity+Management+Species
+Height+Year+Diversity*Year, data=Kaltenborn, random=~1|Plot/SubPlot,
na.action=na.omit, weights=varPower(form=~Diversity),
subset=Kaltenborn$ADDspecies!=1, method="REML")



> anova(modelF, type="marginal")
               numDF denDF  F-value p-value
(Intercept)        1   162  7.12789  0.0084
Diversity          1    14 12.89284  0.0030
Management         2    30  5.52544  0.0091
Species            3   162 41.81003  <.0001
Height             1   162  2.59134  0.1094
Year               1   162  7.07660  0.0086
Diversity:Year     1   162 12.88095  0.0004
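
On the second error: mcp() can only build Tukey contrasts for a term that
enters the model as a factor, and here Diversity is an integer covariate. A
sketch of what a comparison among diversity levels could look like
(fDiversity is a name introduced just for this example; note this changes
the model, treating Diversity as discrete levels rather than a slope, and
with an fDiversity:Year interaction multcomp will warn that main-effect
comparisons may be misleading):

Kaltenborn$fDiversity <- factor(Kaltenborn$Diversity)
modelF2 <- lme(asin(sqrt(PropMortality)) ~ fDiversity + Management + Species +
                 Height + Year + fDiversity:Year,
               data = Kaltenborn, random = ~ 1 | Plot/SubPlot,
               na.action = na.omit, weights = varPower(form = ~ Diversity),
               subset = ADDspecies != 1, method = "REML")
summary(glht(modelF2, linfct = mcp(fDiversity = "Tukey")))

The first error ('ncol(linfct)' not equal to 'length(coef(model))') is the
kind of problem that has been reported with older multcomp/nlme versions, so
updating both packages is worth a try.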


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme vs. lmer results

2010-10-26 Thread Dimitri Liakhovitski
Hello,
and sorry for asking a question without the data - hope it can still
be answered:

I've run two things on the same data:

# Using lme:
mix.lme <- lme(DV ~a+b+c+d+e+f+h+i, random = ~ e+f+h+i|
group, data = mydata)

# Using lmer
mix.lmer <- lmer(DV
~a+b+c+d+(1|group)+(e|group)+(f|group)+(h|group)+(i|group), data =
mydata)

lme provided an output (fixed effects and random effects coefficients).
lmer gave me an error: Error in mer_finalize(ans) : Downdated X'X is
not positive definite, 10.
I've rerun lmer with - but specifying the random effects for 2 fewer
predictors. This time it ran and provided an output. (BTW, the random
effects of lmer with 2 fewer predictors specified as random were very
close to the output of lme).

Question:
Looks like lmer could not invert the matrix, right? But how come lme
(which I thought was an earlier version of lmer) COULD invert it?
Greatly appreciate a clarification!


-- 
Dimitri Liakhovitski
Ninah Consulting
www.ninah.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme vs. lmer results

2010-10-26 Thread Douglas Bates
On Tue, Oct 26, 2010 at 12:27 PM, Dimitri Liakhovitski
dimitri.liakhovit...@gmail.com wrote:
 Hello,
 and sorry for asking a question without the data - hope it can still
 be answered:

 I've run two things on the same data:

 # Using lme:
 mix.lme - lme(DV ~a+b+c+d+e+f+h+i, random = random = ~ e+f+h+i|
 group, data = mydata)

 # Using lmer
 mix.lmer - lmer(DV
 ~a+b+c+d+(1|group)+(e|group)+(f|group)+(h|group)+(i|group), data =
 mydata)

Those models aren't the same and the model for lmer doesn't make
sense.  You would need to write the random effects terms as
(0+e|group), etc. because (e|group) is the same as (1 + e|group) so
you are including (Intercept) random effects for group in each of
those 5 terms.

To generate the same model as you fit with lme you would use

mix.lmer - lmer(DV ~a+b+c+d+(e+f+g+h+ii|group), mydata)

I wouldn't recommend it though as this requires estimating  21
variance and covariance parameters for the random effects.  Almost
certainly the estimated variance-covariance matrix will end up being
singular.  Unless you are careful you may not notice this.
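
A more parsimonious sketch along the lines described above keeps the random
slopes but drops the covariances among them (variable names as in the
original post; this assumes independent random effects are acceptable):

mix.lmer2 <- lmer(DV ~ a + b + c + d + e + f + h + i +
                    (1 | group) + (0 + e | group) + (0 + f | group) +
                    (0 + h | group) + (0 + i | group),
                  data = mydata)

That is 5 variance parameters instead of the 15 variances and covariances
implied by (e + f + h + i | group).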

 lme provided an output (fixed effects and random effects coefficients).

lme is not as picky about singularity of the variance-covariance
matrix as lmer is.

 lmer gave me an error: Error in mer_finalize(ans) : Downdated X'X is
 not positive definite, 10.
 I've rerun lmer with - but specifying the random effects for 2 fewer
 predictors. This time it ran and provided an output. (BTW, the random
 effects of lmer with 2 fewer predictors specified as random were very
 close to the output of lme).

Yes, lmer could converge in such as case but the parameter estimates
are not meaningful because of the ambiguity described above.

 Question:
 Looks like lmer could not invert the matrix, right?

Well, lmer never tries to invert matrices but it does factor them and
that is where the problem is recognized. However, I think that
singularity is a symptom of the problem, not the cause.

 But how come lme
 (which I thought was an earlier version of lmer) COULD invert it?

The computational methods in the two packages are quite different.  I
think that the methods in lme4 are superior because we have learned a
bit in the last 10 years.


 Greatly appreciate a clarification!


 --
 Dimitri Liakhovitski
 Ninah Consulting
 www.ninah.com

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme vs. lmer results

2010-10-26 Thread Dimitri Liakhovitski
Thanks a lot, Douglas. It's very helpful.

A clarification question about specifying the model in lmer. You said
it should be:
mix.lmer - lmer(DV ~a+b+c+d+(e+f+g+h+ii|group), mydata)

I assume it was a typo and you meant that the last predictor in
brackets should be i (rather than ii), right?

Also you said: I wouldn't recommend it though as this requires
estimating  21 variance and covariance parameters for the random
effects.  Almost certainly the estimated variance-covariance matrix
will end up being singular.  Unless you are careful you may not notice
this.

Question: What would you recommend one to do in the following
situation: I have quite a few predictors that are all fixed (and are
usually estimated using simple OLS regression). But now I have the
same situation, however, my observations come from different groupings
(the same number of observations per grouping, the same set of
predictors and the same DV in all groupings). I thought I would use
the factor group to define the random effects model - because I
didn't want predictor coefficients to vary like crazy from group to
group - I wanted them to be anchored in the pooled model's predictor
coefficients. But if it's not feasible mathematically - what should
one do?
Maybe run a bunch of models like this:
mix.lmer.e <- lmer(DV ~a+b+c+d+ (e|group)+ f+g+h+i, mydata)
mix.lmer.f <- lmer(DV ~a+b+c+d+ e + (f|group)+ g+h+i, mydata)
mix.lmer.g <- lmer(DV ~a+b+c+d+ e + f + (g|group) + h+i, mydata)
etc.
Look at the random effect for each and every predictor in question?
But would it be right?

My ultimate goal - is to accurately estimate the original regression
model (DV ~a+b+c+d+e+f+g+h+i) - but for each of the different
groups...

Thanks a lot for your advice!
Dimitri


On Tue, Oct 26, 2010 at 3:57 PM, Douglas Bates ba...@stat.wisc.edu wrote:
 On Tue, Oct 26, 2010 at 12:27 PM, Dimitri Liakhovitski
 dimitri.liakhovit...@gmail.com wrote:
 Hello,
 and sorry for asking a question without the data - hope it can still
 be answered:

 I've run two things on the same data:

 # Using lme:
 mix.lme - lme(DV ~a+b+c+d+e+f+h+i, random = random = ~ e+f+h+i|
 group, data = mydata)

 # Using lmer
 mix.lmer - lmer(DV
 ~a+b+c+d+(1|group)+(e|group)+(f|group)+(h|group)+(i|group), data =
 mydata)

 Those models aren't the same and the model for lmer doesn't make
 sense.  You would need to write the random effects terms as
 (0+e|group), etc. because (e|group) is the same as (1 + e|group) so
 you are including (Intercept) random effects for group in each of
 those 5 terms.

 To generate the same model as you fit with lme you would use

 mix.lmer - lmer(DV ~a+b+c+d+(e+f+g+h+ii|group), mydata)

 I wouldn't recommend it though as this requires estimating  21
 variance and covariance parameters for the random effects.  Almost
 certainly the estimated variance-covariance matrix will end up being
 singular.  Unless you are careful you may not notice this.

 lme provided an output (fixed effects and random effects coefficients).

 lme is not as picky about singularity of the variance-covariance
 matrix as lmer is.

 lmer gave me an error: Error in mer_finalize(ans) : Downdated X'X is
 not positive definite, 10.
 I've rerun lmer with - but specifying the random effects for 2 fewer
 predictors. This time it ran and provided an output. (BTW, the random
 effects of lmer with 2 fewer predictors specified as random were very
 close to the output of lme).

 Yes, lmer could converge in such as case but the parameter estimates
 are not meaningful because of the ambiguity described above.

 Question:
 Looks like lmer could not invert the matrix, right?

 Well, lmer never tries to invert matrices but it does factor them and
 that is where the problem is recognized. However, I think that
 singularity is a symptom of the problem, not the cause.

 But how come lme
 (which I thought was an earlier version of lmer) COULD invert it?

 The computational methods in the two packages are quite different.  I
 think that the methods in lme4 are superior because we have learned a
 bit in the last 10 years.


 Greatly appreciate a clarification!


 --
 Dimitri Liakhovitski
 Ninah Consulting
 www.ninah.com

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.





-- 
Dimitri Liakhovitski
Ninah Consulting
www.ninah.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme with log-normal distribution of parameters

2010-10-20 Thread Hoai Thu Thai
Dear R-users,

Do you know if we can use the function lme in R for log-normal 
distribution of parameters as used in Nonmem ?
theta=theta0*exp(eta)

In our model, the parameters follow a log-normal distribution, so it is 
not reasonable to assume a normal distribution, which gives us negative 
values in simulation.

Thanks for your help,

Thu



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme with log-normal distribution of parameters

2010-10-20 Thread Ben Bolker
Hoai Thu Thai hoai-thu.thai at inserm.fr writes:

 
 Dear R-users,
 
 Do you know if we can use the function lme in R for log-normal 
 distribution of parameters as used in Nonmem ?
 theta=theta0*exp(eta)
 
 In our model, the parameters follow the log-normal distribution so it's 
 not reasonable to deal with normal distribution which gives us negative 
 values in simulation
 

  No.
  There are various tools (AD Model Builder, WinBUGS, JAGS) which have
interfaces to R which could be used, but neither of the main R packages
(nlme or lme4) will handle non-normal random effect distributions
(it's possible that some of Jim Lindsey's packages at 
popgen.unimaas.nl/~jlindsey/rcode.html will handle this case;
I don't remember).

   You might want to try this question on r-sig-mixed-mod...@r-project.org

  Ben Bolker

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] LME with 2 factors with 3 levels each

2010-10-13 Thread Ista Zahn
Hi Laura,

If you want ANOVA output, ask for it! A general strategy that almost
always works in R is to fit 2 models, one without the term(s) you want
to test, and one with. Then use the anova() function to test them.
(models must be nested, and in the lmer() case you need to use REML =
FALSE).

So, try something like this:

m1 <- lmer(PTR ~ Test + Group + (1 | student), data=ptr, REML = FALSE)
m2 <- lmer(PTR ~ Test * Group + (1 | student), data=ptr, REML = FALSE)
anova(m1, m2)

Best,
Ista

On Tue, Oct 12, 2010 at 11:59 PM, Laura Halderman lk...@pitt.edu wrote:
 Hello.  I am new to R and new to linear mixed effects modeling.  I am trying 
 to model some data which has two factors.  Each factor has three levels 
 rather than continuous data.  Specifically, we measured speech at Test 1, 
 Test 2 and Test 3.  We also had three groups of subjects: RepTP, RepNTP and 
 NoRepNTP.

 I am having a really hard time interpreting this data since all the examples 
 I have seen in the book I am using (Baayen, 2008) either have continuous 
 variables or factors with only two levels.  What I find particularly 
 confusing are the interaction terms in the output.  The output doesn't 
 present the full interaction (3 X 3) as I would expect with an ANOVA.

Instead, it only presents an interaction term for one Test and one
Group, presumably comparing it to the reference Test and reference
Group.  Therefore, it is hard to know what to do with the interactions
that aren't significant.  In the book, non-significant interactions
are dropped from the model.  However, in my model, I'm only ever
seeing the 2 X 2 interactions, not the full 3 X 3 interaction, so it's
not clear what I should do when only two levels of group and two
levels of test interact but the third group doesn't.

 If anyone can assist me in interpreting the output, I would really appreciate 
 it.  I may be trying to interpret it too much like an ANOVA where you would 
 be looking for main effects of Test (was there improvement from Test 1 to 
 Test 2), main effects of Group (was one of the Groups better than the other) 
 and the interactions of the two factors (did one Group improve more than 
 another Group from Test 1 to Test 2, for example).  I guess another question 
 to pose here is, is it pointless to do an LME analysis with more than two 
 levels of a factor?  Is it too much like trying to do an ANOVA?  
 Alternatively, it's possible that what I'm doing is acceptable, I'm just not 
 able to interpret it correctly.

 I have provided output from my model to hopefully illustrate my question.  
 I'm happy to provide additional information/output if someone is interested 
 in helping me with this problem.

 Thank you,
  Laura

 Linear mixed model fit by REML
 Formula: PTR ~ Test * Group + (1 | student)
   Data: ptr
 AIC             BIC             logLik  deviance        REMLdev
  -625.7         -559.8          323.9           -706.5          -647.7
 Random effects:
  Groups Name            Variance        Std.Dev.
  student        (Intercept)     0.0010119       0.03181
  Residual                       0.0457782       0.21396
 Number of obs: 2952, groups: studentID, 20

 Fixed effects:
                                Estimate        Std. Error      t value
 (Intercept)                     0.547962        0.016476        33.26
 Testtest2                       -0.007263       0.015889        -0.46
 Testtest1                       -0.050653       0.016305        -3.11
 GroupNoRepNTP   0.008065        0.022675        0.36
 GroupRepNTP             -0.018314       0.025483        -0.72
 Testtest2:GroupNoRepNTP  0.006073   0.021936    0.28
 Testtest1:GroupNoRepNTP  0.013901   0.022613    0.61
 Testtest2:GroupRepNTP   0.046684        0.024995        1.87
 Testtest1:GroupRepNTP   0.039994        0.025181        1.59

 Note: The reference level for Test is Test3.  The reference level for Group 
 is RepTP.  The interaction p value (after running pvals.fnc with the MCMC) 
 for Testtest2:GroupRepNTP is p = .062 which I'm willing to accept and 
 interpret since speech data with English Language Learners is particularly 
 variable.
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Ista Zahn
Graduate student
University of Rochester
Department of Clinical and Social Psychology
http://yourpsyche.org

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] LME with 2 factors with 3 levels each

2010-10-13 Thread Dennis Murphy
Hi:

On Tue, Oct 12, 2010 at 8:59 PM, Laura Halderman lk...@pitt.edu wrote:

 Hello.  I am new to R and new to linear mixed effects modeling.  I am
 trying to model some data which has two factors.  Each factor has three
 levels rather than continuous data.  Specifically, we measured speech at
 Test 1, Test 2 and Test 3.  We also had three groups of subjects: RepTP,
 RepNTP and NoRepNTP.


Do you have three groups of subjects, where each subject is tested on three
separate occasions? Are the tests meant to be replicates, or is there some
other purpose for why they should be represented in the model? Based on this
description, it would appear to me that the groups constitute one factor,
the students nested within groups another, with three measurements taken on
each student. How many students per group?


 I am having a really hard time interpreting this data since all the
 examples I have seen in the book I am using (Baayen, 2008) either have
 continuous variables or factors with only two levels.  What I find
 particularly confusing are the interaction terms in the output.  The output
 doesn't present the full interaction (3 X 3) as I would expect with an
 ANOVA.  Instead, it only presents an interaction term for one Test and one
 Group, presumably comparing it to the reference Test and reference Group.
  Therefore, it is hard to know what to do with the interactions that aren't
 significant.  In the book, non-significant interactions are dropped from the
 model.  However, in my model, I'm only ever seeing the 2 X 2 interactions,
 not the full 3 X 3 interaction, so it's not clear what I should do when only
 two levels of group and two levels of test interact but the third group
 doesn't.


Let's get the design straight first and the model will work itself out...

Dennis


 If anyone can assist me in interpreting the output, I would really
 appreciate it.  I may be trying to interpret it too much like an ANOVA where
 you would be looking for main effects of Test (was there improvement from
 Test 1 to Test 2), main effects of Group (was one of the Groups better than
 the other) and the interactions of the two factors (did one Group improve
 more than another Group from Test 1 to Test 2, for example).  I guess
 another question to pose here is, is it pointless to do an LME analysis with
 more than two levels of a factor?  Is it too much like trying to do an
 ANOVA?  Alternatively, it's possible that what I'm doing is acceptable, I'm
 just not able to interpret it correctly.

 I have provided output from my model to hopefully illustrate my question.
  I'm happy to provide additional information/output if someone is interested
 in helping me with this problem.

 Thank you,
  Laura





 Linear mixed model fit by REML
 Formula: PTR ~ Test * Group + (1 | student)
   Data: ptr
 AIC BIC logLik  devianceREMLdev
  -625.7 -559.8  323.9   -706.5  -647.7
 Random effects:
  Groups NameVarianceStd.Dev.
  student(Intercept) 0.0010119   0.03181
  Residual   0.0457782   0.21396
 Number of obs: 2952, groups: studentID, 20

 Fixed effects:
EstimateStd. Error  t value
 (Intercept) 0.5479620.01647633.26
 Testtest2   -0.007263   0.015889-0.46
 Testtest1   -0.050653   0.016305-3.11
 GroupNoRepNTP   0.0080650.0226750.36
 GroupRepNTP -0.018314   0.025483-0.72
 Testtest2:GroupNoRepNTP  0.006073   0.0219360.28
 Testtest1:GroupNoRepNTP  0.013901   0.0226130.61
 Testtest2:GroupRepNTP   0.0466840.0249951.87
 Testtest1:GroupRepNTP   0.0399940.0251811.59

 Note: The reference level for Test is Test3.  The reference level for Group
 is RepTP.  The interaction p value (after running pvals.fnc with the MCMC)
 for Testtest2:GroupRepNTP is p = .062 which I'm willing to accept and
 interpret since speech data with English Language Learners is particularly
 variable.
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] LME with 2 factors with 3 levels each

2010-10-12 Thread Laura Halderman
Hello.  I am new to R and new to linear mixed effects modeling.  I am trying to 
model some data which has two factors.  Each factor has three levels rather 
than continuous data.  Specifically, we measured speech at Test 1, Test 2 and 
Test 3.  We also had three groups of subjects: RepTP, RepNTP and NoRepNTP.  

I am having a really hard time interpreting this data since all the examples I 
have seen in the book I am using (Baayen, 2008) either have continuous 
variables or factors with only two levels.  What I find particularly confusing 
are the interaction terms in the output.  The output doesn't present the full 
interaction (3 X 3) as I would expect with an ANOVA.  Instead, it only presents 
an interaction term for one Test and one Group, presumably comparing it to the 
reference Test and reference Group.  Therefore, it is hard to know what to do 
with the interactions that aren't significant.  In the book, non-significant 
interactions are dropped from the model.  However, in my model, I'm only ever 
seeing the 2 X 2 interactions, not the full 3 X 3 interaction, so it's not 
clear what I should do when only two levels of group and two levels of test 
interact but the third group doesn't.

If anyone can assist me in interpreting the output, I would really appreciate 
it.  I may be trying to interpret it too much like an ANOVA where you would be 
looking for main effects of Test (was there improvement from Test 1 to Test 2), 
main effects of Group (was one of the Groups better than the other) and the 
interactions of the two factors (did one Group improve more than another Group 
from Test 1 to Test 2, for example).  I guess another question to pose here is, 
is it pointless to do an LME analysis with more than two levels of a factor?  
Is it too much like trying to do an ANOVA?  Alternatively, it's possible that 
what I'm doing is acceptable, I'm just not able to interpret it correctly.

I have provided output from my model to hopefully illustrate my question.  I'm 
happy to provide additional information/output if someone is interested in 
helping me with this problem.

Thank you,
 Laura

Linear mixed model fit by REML 
Formula: PTR ~ Test * Group + (1 | student) 
   Data: ptr 
AIC BIC logLik  devianceREMLdev
 -625.7 -559.8  323.9   -706.5  -647.7
Random effects:
 Groups NameVarianceStd.Dev.
 student(Intercept) 0.0010119   0.03181 
 Residual   0.0457782   0.21396 
Number of obs: 2952, groups: studentID, 20

Fixed effects:
EstimateStd. Error  t value
(Intercept) 0.5479620.01647633.26
Testtest2   -0.007263   0.015889-0.46
Testtest1   -0.050653   0.016305-3.11
GroupNoRepNTP   0.0080650.0226750.36
GroupRepNTP -0.018314   0.025483-0.72
Testtest2:GroupNoRepNTP  0.006073   0.0219360.28
Testtest1:GroupNoRepNTP  0.013901   0.0226130.61
Testtest2:GroupRepNTP   0.0466840.0249951.87
Testtest1:GroupRepNTP   0.0399940.0251811.59

Note: The reference level for Test is Test3.  The reference level for Group is 
RepTP.  The interaction p value (after running pvals.fnc with the MCMC) for 
Testtest2:GroupRepNTP is p = .062 which I'm willing to accept and interpret 
since speech data with English Language Learners is particularly variable.
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme vs. lmer, how do they differ?

2010-09-10 Thread John Sorkin
windows Vista
R 2.10.1

What is the difference (or differences) between lme and lmer? Both appear to 
perform mixed effects regression analyses.
Thanks
John 


John David Sorkin M.D., Ph.D.
Chief, Biostatistics and Informatics
University of Maryland School of Medicine Division of Gerontology
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524
(Phone) 410-605-7119
(Fax) 410-605-7913 (Please call phone number above prior to faxing)

Confidentiality Statement:
This email message, including any attachments, is for th...{{dropped:6}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme, groupedData, random intercept and slope

2010-09-10 Thread John Sorkin
Windows Vista
R 2.10.1




Does the following use of groupedData and lme produce an analysis with both 
random intercept and slope, or only random slope?


zz <- groupedData(y~time |  Subject, data=data.frame(data),
  labels = list( x = "Time",
y = "y" ),
  units = list( x = "(yr)", y = "(mm)")
)
plot(zz)

fit10 <- lme(zz)
summary(fit10)

Linear mixed-effects model fit by REML
 Data: zz 
AIC   BIC  logLik
  -123.1942 -115.2010 67.5971

Random effects:
 Formula: ~time | Subject
 Structure: General positive-definite
            StdDev       Corr  
(Intercept) 6.054897e+00 (Intr)
time        4.160662e-05 1 
Residual    9.775954e-04   

Fixed effects: y ~ time 
Value Std.Error DF   t-value p-value
(Intercept) 15.000217  1.914727 19 7.834   0
time-1.51  0.000219 19 -4566.598   0
 Correlation: 
 (Intr)
time 0.059 

Standardized Within-Group Residuals:
Min  Q1 Med  Q3 Max 
-1.73706837 -0.36289558  0.06892484  0.59777067  1.69095476 

Number of Observations: 30
Number of Groups: 10 


John David Sorkin M.D., Ph.D.
Chief, Biostatistics and Informatics
University of Maryland School of Medicine Division of Gerontology
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524
(Phone) 410-605-7119
(Fax) 410-605-7913 (Please call phone number above prior to faxing)

Confidentiality Statement:
This email message, including any attachments, is for th...{{dropped:6}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme vs. lmer, how do they differ?

2010-09-10 Thread S Ellison
 John Sorkin jsor...@grecc.umaryland.edu 10/09/2010 13:21:09 
What is the difference (or differences) between lme and lmer? Both
appear to perform mixed effects regression analyses.

From a user's point of view:
- lme only accepts nested random effects; lmer handles crossed random
effects
- lme has a convenient method of handling heteroscedasticity; lmer
doesn't.
- lme gives you p-values; lmer doesn't (there is an explanation of why at
https://stat.ethz.ch/pipermail/r-help/2006-May/094765.html which, I
guess, should not be all that reassuring for lme users either)
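
For what it's worth, the same random-intercept model looks like this in the
two packages (a sketch assuming a response y, a predictor x and a grouping
factor g in a data frame d):

library(nlme)
library(lme4)
fit.lme  <- lme(y ~ x, random = ~ 1 | g, data = d)   # nlme syntax
fit.lmer <- lmer(y ~ x + (1 | g), data = d)          # lme4 syntax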


Steve Ellison


***
This email and any attachments are confidential. Any use...{{dropped:8}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme vs. lmer, how do they differ?

2010-09-10 Thread Ben Bolker
John Sorkin jsorkin at grecc.umaryland.edu writes:

 
 windows Vista
 R 2.10.1
 
 What is the difference (or differences) between lme and lmer? Both appear to
perform mixed effects
 regression analyses.
 Thanks
 John 
 

  in a nutshell:

  lmer is newer, much faster, handles crossed random effects well (and
generalized linear mixed models), has some support for producing
likelihood profiles (in the development version), and is under rapid 
development.
It does not attempt to estimate residual degrees of freedom and
hence does not give p-values for significance of effects.
  lme is older, better documented (Pinheiro and Bates 2000), more
stable, and handles 'R-side' structures (heteroscedasticity,
within-group correlations).

  r-sig-mixed-models is a good place for questions about these
packages.

  Ben Bolker

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme, groupedData, random intercept and slope

2010-09-10 Thread array chip
from the output, I think it's both.
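
A sketch of the equivalent explicit call (assuming the same groupedData
object zz as in the original post), which makes the random intercept and
slope visible in the call rather than implied by the groupedData formula:

fit10b <- lme(y ~ time, random = ~ time | Subject, data = zz)
summary(fit10b)   # Random effects: (Intercept) and time, as in the output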



- Original Message 
From: John Sorkin jsor...@grecc.umaryland.edu
To: r-help@r-project.org
Sent: Fri, September 10, 2010 5:25:44 AM
Subject: [R] lme, groupedData, random intercept and slope

Windows Vista
R 2.10.1




Does the following use of groupedData and lme produce an analysis with both 
random intercept and slope, or only random slope?


zz-groupedData(y~time |  Subject,data=data.frame(data),
  labels = list( x = Time,
y = y ),
  units = list( x = (yr), y = (mm))
)
plot(zz)

fit10-lme(zz)
summary(fit10)

Linear mixed-effects model fit by REML
Data: zz 
AIC   BIC  logLik
  -123.1942 -115.2010 67.5971

Random effects:
Formula: ~time | Subject
Structure: General positive-definite
StdDev   Corr  
(Intercept) 6.054897e+00 (Intr)
time4.160662e-05 1
Residual9.775954e-04  

Fixed effects: y ~ time 
Value Std.Error DF   t-value p-value
(Intercept) 15.000217  1.914727 19 7.834   0
time-1.51  0.000219 19 -4566.598   0
Correlation: 
 (Intr)
time 0.059 

Standardized Within-Group Residuals:
Min  Q1 Med  Q3 Max 
-1.73706837 -0.36289558  0.06892484  0.59777067  1.69095476 

Number of Observations: 30
Number of Groups: 10 


John David Sorkin M.D., Ph.D.
Chief, Biostatistics and Informatics
University of Maryland School of Medicine Division of Gerontology
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524
(Phone) 410-605-7119
(Fax) 410-605-7913 (Please call phone number above prior to faxing)

Confidentiality Statement:
This email message, including any attachments, is for th...{{dropped:12}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme, spline

2010-06-15 Thread Farhad Shokoohi
Dear All,
I have a problem running this program in R. Z is a spline basis matrix which
is random.
 fit <- lme(anc~X, random=pdIdent(~Z))
Error in getGroups.data.frame(dataMix, groups) : 
  Invalid formula for groups
What have I done wrong?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme, spline

2010-06-15 Thread Joris Meys
On Tue, Jun 15, 2010 at 9:47 AM, Farhad Shokoohi stat...@gmail.com wrote:
 Dear All,
 I have a problem running this program on R. Z is a  matrix of spline which is 
 random
 fit-lme(anc~X,random=pdIdent(~Z))
 Error in getGroups.data.frame(dataMix, groups) :
  Invalid formula for groups
 What I have done wrong?
You didn't read the posting guide.  http://www.R-project.org/posting-guide.html
A little more information on the data is necessary. Please provide
minimal, self-contained, reproducible code that mimicks the error.

This said, I do not understand what you want to do with
random=pdldent(~Z). I cannot find a function pdldent, so this
notation doesn't make any sense.

Cheers
Joris


 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Joris Meys
Statistical consultant

Ghent University
Faculty of Bioscience Engineering
Department of Applied mathematics, biometrics and process control

tel : +32 9 264 59 87
joris.m...@ugent.be
---
Disclaimer : http://helpdesk.ugent.be/e-maildisclaimer.php

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lme, spline (revised question)

2010-06-15 Thread Farhad Shokoohi
Dear All,
I am revising my question about the problem I have.
Take a look at the article 
http://www.jstatsoft.org/v09/i01
and download the attached code.
Try running one of the code examples, for example the one from section 2.1, in R.
Here is the code:

fossil <- read.table("fossil.dat",header=T)
x <- fossil$age
y <- 10*fossil$strontium.ratio
knots <- seq(94,121,length=25)
n <- length(x)
X <- cbind(rep(1,n),x)
Z <- outer(x,knots,"-")
Z <- Z*(Z>0)
fit <- lme(y~-1+X,random=pdIdent(~-1+Z))

there is an error which prevents running the lme function. The error is 
something like this
 Error in getGroups.data.frame(dataMix, groups) :
 Invalid formula for groups

Can you let me know what's wrong?
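
One workaround that has been reported to make this older JSS code run with
more recent nlme versions (a sketch only; the one-level grouping factor g
below is introduced just to give lme() the grouping it insists on, and
leaves the penalized-spline fit itself unchanged):

g <- factor(rep(1, n))                      # dummy one-level grouping factor
fit <- lme(y ~ -1 + X, random = list(g = pdIdent(~ -1 + Z)))
summary(fit)
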
Best

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme, spline

2010-06-15 Thread Farhad Shokoohi

Dear All,
I revise my question about the problem I have.
Take a look at the article 
http://www.jstatsoft.org/v09/i01
and download the attached code.
try to run one of the codes for example section 2.1 in R
here is the code

fossil <- read.table("fossil.dat",header=T)
x <- fossil$age
y <- 10*fossil$strontium.ratio
knots <- seq(94,121,length=25)
n <- length(x)
X <- cbind(rep(1,n),x)
Z <- outer(x,knots,"-")
Z <- Z*(Z>0)
fit <- lme(y~-1+X,random=pdIdent(~-1+Z))

there is an error which prevents running the lme function. The error is
something like this
Error in getGroups.data.frame(dataMix, groups) :
Invalid formula for groups

Let me know what's wrong?
Best
-- 
View this message in context: 
http://r.789695.n4.nabble.com/lme-spline-tp2255500p2255903.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

