[R] lme and aov

2007-08-03 Thread Gang Chen
I have a mixed balanced ANOVA design with a between-subject factor  
(Grp) and a within-subject factor (Rsp). When I tried the following  
two commands which I thought are equivalent,

  fit.lme <- lme(Beta ~ Grp*Rsp, random = ~1|Subj, Model);
  fit.aov <- aov(Beta ~ Rsp*Grp+Error(Subj/Rsp)+Grp, Model);

I got totally different results. What did I do wrong?

Thanks,
Gang

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] lme and aov

2007-08-03 Thread Peter Dalgaard
Gang Chen wrote:
 I have a mixed balanced ANOVA design with a between-subject factor  
 (Grp) and a within-subject factor (Rsp). When I tried the following  
 two commands which I thought are equivalent,

   fit.lme <- lme(Beta ~ Grp*Rsp, random = ~1|Subj, Model);
   fit.aov <- aov(Beta ~ Rsp*Grp+Error(Subj/Rsp)+Grp, Model);
   
 I got totally different results. What did I do wrong?

   
Except for not telling us what your data are and what you mean by 
"totally different"?

One model has a random interaction between Subj and Rsp, the other does 
not. This may make a difference, unless the interaction term is aliased 
with the residual error.

If your data are unbalanced, aov is not guaranteed to give meaningful 
results.
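Concretely, the correspondence between the two interfaces can be sketched as follows (a sketch, not tested on the poster's data; it assumes a data frame Model containing factors Subj, Grp, and Rsp). Note that when there is only one observation per Subj-by-Rsp cell, the Subj:Rsp stratum is aliased with the residual error, which is the caveat mentioned above.

```r
library(nlme)

## aov() with Error(Subj): only a random subject intercept;
## this matches random = ~1|Subj in lme().
fit.aov1 <- aov(Beta ~ Grp*Rsp + Error(Subj), data = Model)
fit.lme1 <- lme(Beta ~ Grp*Rsp, random = ~1|Subj, data = Model)

## aov() with Error(Subj/Rsp): adds a random Subj:Rsp stratum;
## the lme() analogue nests Rsp within Subj in the random formula.
fit.aov2 <- aov(Beta ~ Grp*Rsp + Error(Subj/Rsp), data = Model)
fit.lme2 <- lme(Beta ~ Grp*Rsp, random = ~1|Subj/Rsp, data = Model)
```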

-pd



Re: [R] lme and aov

2007-08-03 Thread Gang Chen
Thanks for the response!

It is indeed a balanced design. The results are different in the  
sense that all the F tests for the main effects differ. Do you mean  
that a random interaction is modeled in the aov command? If so, what  
would be the aov command equivalent to the one with lme?

Thanks,
Gang

On Aug 3, 2007, at 3:52 PM, Peter Dalgaard wrote:

 Gang Chen wrote:
 I have a mixed balanced ANOVA design with a between-subject  
 factor  (Grp) and a within-subject factor (Rsp). When I tried the  
 following  two commands which I thought are equivalent,

   fit.lme <- lme(Beta ~ Grp*Rsp, random = ~1|Subj, Model);
   fit.aov <- aov(Beta ~ Rsp*Grp+Error(Subj/Rsp)+Grp, Model);
  
 I got totally different results. What did I do wrong?


 Except for not telling us what your data are and what you mean by  
 totally different?

 One model has a random interaction between Subj and Rsp, the other  
 does not. This may make a difference, unless the interaction term  
 is aliased with the residual error.

 If your data are unbalanced, aov is not guaranteed to give  
 meaningful results.

-pd



Re: [R] lme and aov

2007-08-03 Thread Doran, Harold
Gang:

I think what Peter is asking for is for you to put some of your output
in an email. If the values of the fixed effects are the same across
models, but the F-tests are different, then there is a whole other
thread we will point you to for an explanation. (I don't presume to
speak for other people, btw, and I'm happy to stand corrected)

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Gang Chen
 Sent: Friday, August 03, 2007 4:01 PM
 To: Peter Dalgaard
 Cc: r-help@stat.math.ethz.ch
 Subject: Re: [R] lme and aov
 
 Thanks for the response!
 
 It is indeed a balanced design. The results are different in 
 the sense all the F tests for main effects are not the same. 
 Do you mean that a random interaction is modeled in the aov 
 command? If so, what would be an equivalent command of aov to 
 the one with lme?
 
 Thanks,
 Gang
 
 On Aug 3, 2007, at 3:52 PM, Peter Dalgaard wrote:
 
  Gang Chen wrote:
  I have a mixed balanced ANOVA design with a 
 between-subject factor  
  (Grp) and a within-subject factor (Rsp). When I tried the 
 following  
  two commands which I thought are equivalent,
 
 fit.lme <- lme(Beta ~ Grp*Rsp, random = ~1|Subj, Model);
 fit.aov <- aov(Beta ~ Rsp*Grp+Error(Subj/Rsp)+Grp, Model);
 
  I got totally different results. What did I do wrong?
 
 
  Except for not telling us what your data are and what you mean by 
  totally different?
 
  One model has a random interaction between Subj and Rsp, the other 
  does not. This may make a difference, unless the 
 interaction term is 
  aliased with the residual error.
 
  If your data are unbalanced, aov is not guaranteed to give 
 meaningful 
  results.
 
 -pd
 




Re: [R] lme and aov

2007-08-03 Thread Peter Dalgaard
Gang Chen wrote:
 Thanks a lot for the clarification! I started learning R only a week 
 ago, and wanted to try a simple mixed design of balanced 
 ANOVA with a between-subject factor
 (Grp) and a within-subject factor (Rsp), but I'm not sure whether I'm 
 modeling the data correctly with either of the command lines.

 Here is the result. Any help would be highly appreciated.

  fit.lme <- lme(Beta ~ Grp*Rsp, random = ~1|Subj, Model);
  summary(fit.lme)
 Linear mixed-effects model fit by REML
 Data: Model
       AIC      BIC    logLik
   233.732 251.9454 -108.8660

 Random effects:
 Formula: ~1 | Subj
         (Intercept)  Residual
 StdDev:    1.800246 0.3779612

 Fixed effects: Beta ~ Grp * Rsp
                  Value Std.Error DF   t-value p-value
 (Intercept)  1.1551502 0.5101839 36  2.2641837  0.0297
 GrpB-1.1561248 0.7215090 36 -1.6023706  0.1178
 GrpC-1.2345321 0.7215090 36 -1.7110417  0.0957
 RspB-0.0563077 0.1482486 36 -0.3798196  0.7063
 GrpB:RspB   -0.3739339 0.2096551 36 -1.7835665  0.0829
 GrpC:RspB0.3452539 0.2096551 36  1.6467705  0.1083
 Correlation:
   (Intr) GrpB   GrpC   RspB   GrB:RB
 GrpB  -0.707
 GrpC  -0.707  0.500
 RspB  -0.145  0.103  0.103
 GrpB:RspB  0.103 -0.145 -0.073 -0.707
 GrpC:RspB  0.103 -0.073 -0.145 -0.707  0.500

 Standardized Within-Group Residuals:
 Min  Q1 Med  Q3 Max
 -1.72266114 -0.41242552  0.02994094  0.41348767  1.72323563

 Number of Observations: 78
 Number of Groups: 39

  fit.aov <- aov(Beta ~ Rsp*Grp+Error(Subj/Rsp)+Grp, Model);
  fit.aov

 Call:
 aov(formula = Beta ~ Rsp * Grp + Error(Subj/Rsp) + Grp, data = Model)

 Grand Mean: 0.3253307

 Stratum 1: Subj

 Terms:
  Grp
 Sum of Squares  5.191404
 Deg. of Freedom1

 1 out of 2 effects not estimable
 Estimated effects are balanced

 Stratum 2: Subj:Rsp

 Terms:
  Rsp
 Sum of Squares  7.060585e-05
 Deg. of Freedom1

 2 out of 3 effects not estimable
 Estimated effects are balanced

 Stratum 3: Within

 Terms:
   Rsp   Grp   Rsp:Grp Residuals
 Sum of Squares0.33428  36.96518   1.50105 227.49594
 Deg. of Freedom 1 2 270

 Residual standard error: 1.802760
 Estimated effects may be unbalanced

This looks odd.  It is a standard split-plot layout, right? 3 groups of 
13 subjects, each measured with two kinds of Rsp = 3x13x2 = 78 
observations.

In that case you shouldn't see the same effect allocated to multiple 
error strata. I suspect you forgot to declare Subj as factor.

Also: summary(fit.aov) is nicer, and anova(fit.lme) should be informative.
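A quick way to check for, and fix, the suspected problem (a sketch; it assumes the data frame is called Model, as in the calls above):

```r
## Check how Subj is stored:
is.factor(Model$Subj)             # FALSE would explain the odd strata above
Model$Subj <- factor(Model$Subj)  # declare Subj as a factor, then refit
```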



Re: [R] lme and aov

2007-08-03 Thread Gang Chen
Thanks a lot for the clarification! I started learning R only a week  
ago, and wanted to try a simple mixed design of balanced  
ANOVA with a between-subject factor
(Grp) and a within-subject factor (Rsp), but I'm not sure whether I'm  
modeling the data correctly with either of the command lines.

Here is the result. Any help would be highly appreciated.

  fit.lme <- lme(Beta ~ Grp*Rsp, random = ~1|Subj, Model);
  summary(fit.lme)
Linear mixed-effects model fit by REML
Data: Model
      AIC      BIC    logLik
  233.732 251.9454 -108.8660

Random effects:
Formula: ~1 | Subj
        (Intercept)  Residual
StdDev:    1.800246 0.3779612

Fixed effects: Beta ~ Grp * Rsp
                 Value Std.Error DF   t-value p-value
(Intercept)  1.1551502 0.5101839 36  2.2641837  0.0297
GrpB-1.1561248 0.7215090 36 -1.6023706  0.1178
GrpC-1.2345321 0.7215090 36 -1.7110417  0.0957
RspB-0.0563077 0.1482486 36 -0.3798196  0.7063
GrpB:RspB   -0.3739339 0.2096551 36 -1.7835665  0.0829
GrpC:RspB0.3452539 0.2096551 36  1.6467705  0.1083
Correlation:
   (Intr) GrpB   GrpC   RspB   GrB:RB
GrpB  -0.707
GrpC  -0.707  0.500
RspB  -0.145  0.103  0.103
GrpB:RspB  0.103 -0.145 -0.073 -0.707
GrpC:RspB  0.103 -0.073 -0.145 -0.707  0.500

Standardized Within-Group Residuals:
 Min  Q1 Med  Q3 Max
-1.72266114 -0.41242552  0.02994094  0.41348767  1.72323563

Number of Observations: 78
Number of Groups: 39

  fit.aov <- aov(Beta ~ Rsp*Grp+Error(Subj/Rsp)+Grp, Model);
  fit.aov

Call:
aov(formula = Beta ~ Rsp * Grp + Error(Subj/Rsp) + Grp, data = Model)

Grand Mean: 0.3253307

Stratum 1: Subj

Terms:
  Grp
Sum of Squares  5.191404
Deg. of Freedom1

1 out of 2 effects not estimable
Estimated effects are balanced

Stratum 2: Subj:Rsp

Terms:
  Rsp
Sum of Squares  7.060585e-05
Deg. of Freedom1

2 out of 3 effects not estimable
Estimated effects are balanced

Stratum 3: Within

Terms:
   Rsp   Grp   Rsp:Grp Residuals
Sum of Squares0.33428  36.96518   1.50105 227.49594
Deg. of Freedom 1 2 270

Residual standard error: 1.802760
Estimated effects may be unbalanced



Thanks,
Gang



On Aug 3, 2007, at 4:09 PM, Doran, Harold wrote:

 Gang:

 I think what Peter is asking for is for you to put some of your output
 in an email. If the values of the fixed effects are the same across
 models, but the F-tests are different, then there is a whole other
 thread we will point you to for an explanation. (I don't presume to
 speak for other people, btw, and I'm happy to stand corrected)

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Gang Chen
 Sent: Friday, August 03, 2007 4:01 PM
 To: Peter Dalgaard
 Cc: r-help@stat.math.ethz.ch
 Subject: Re: [R] lme and aov

 Thanks for the response!

 It is indeed a balanced design. The results are different in
 the sense all the F tests for main effects are not the same.
 Do you mean that a random interaction is modeled in the aov
 command? If so, what would be an equivalent command of aov to
 the one with lme?

 Thanks,
 Gang

 On Aug 3, 2007, at 3:52 PM, Peter Dalgaard wrote:

 Gang Chen wrote:
 I have a mixed balanced ANOVA design with a
 between-subject factor
 (Grp) and a within-subject factor (Rsp). When I tried the
 following
 two commands which I thought are equivalent,

 fit.lme <- lme(Beta ~ Grp*Rsp, random = ~1|Subj, Model);
 fit.aov <- aov(Beta ~ Rsp*Grp+Error(Subj/Rsp)+Grp, Model);

 I got totally different results. What did I do wrong?


 Except for not telling us what your data are and what you mean by
 totally different?

 One model has a random interaction between Subj and Rsp, the other
 does not. This may make a difference, unless the
 interaction term is
 aliased with the residual error.

 If your data are unbalanced, aov is not guaranteed to give
 meaningful
 results.

-pd





Re: [R] lme and aov

2007-08-03 Thread Gang Chen
 This looks odd.  It is a standard split-plot layout, right? 3  
 groups of 13 subjects, each measured with two kinds of Rsp = 3x13x2  
 = 78 observations.

Yes, that is right.


 In that case you shouldn't see the same effect allocated to  
 multiple error strata. I suspect you forgot to declare Subj as factor.


This is exactly the problem I had: Model$Subj was not a factor! Now  
the two fits agree. A lesson well learned.

Thanks a lot for the help,
Gang


On Aug 3, 2007, at 4:53 PM, Peter Dalgaard wrote:




[R] lme v. aov?

2003-11-27 Thread John Christie
I am trying to better understand an analysis of mean RT in various 
conditions in a within-subjects design, with each subject's overall 
mean RT as one of the factors.  lme seems to be the right way to do 
this, using something like m <- lme(rt ~ a*b*subjectRT, random = 
~1|subject) and then anova(m, type = "marginal").  My understanding is 
that lme is an easy interface for dummy coding variables and doing a 
multiple regression (and that could be wrong).  But what is aov doing 
in this instance? MANOVA?  I also haven't been able to find anything 
really useful on what to properly assign to "random" in the lme 
formula.  For repeated measures, the usage above is what always 
appears in the examples.
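For comparison, the classical univariate repeated-measures formulation with aov uses an Error() term rather than a random argument. A sketch (dat, rt, a, b, and subject stand in for the poster's variables, which are not shown in the post):

```r
## Each within-subject term gets its own error stratum nested in subject:
m.aov <- aov(rt ~ a*b + Error(subject/(a*b)), data = dat)
summary(m.aov)
```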



Re: [R] lme v. aov?

2003-11-27 Thread Spencer Graves
 Do you want to make inference about the specific subjects in your 
study?  If yes, the subjects are a fixed effect.  If instead you want to 
make inference about the societal processes that will generate the 
subjects you will get in the future, that is a random effect.  The 
function "lme" handles both fixed and random effects, as does 
"varcomp".  The functions "aov" and "lm" are restricted to fixed effects 
only.  You can use dummy coding for "lm" and "aov" as well. 
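A minimal sketch of the two codings (hypothetical data frame dat with a response y, a treatment factor treat, and a subject factor; none of these names come from the thread):

```r
library(nlme)

## subject as a fixed effect: dummy-coded; inference is about these subjects.
fit.fixed  <- lm(y ~ treat + subject, data = dat)

## subject as a random effect: inference is about the population of subjects.
fit.random <- lme(y ~ treat, random = ~1 | subject, data = dat)
```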

 The distinction between fixed and random effects seems to 
me to be the same as what Deming called the difference between 
"enumerative" and "analytic" studies:  With a fixed effect / enumerative 
study, the objective is to determine the disposition of the sampling 
frame.  For example, Deming managed a survey of food distribution in 
Japan in 1946 or so, right after World War II.  The purpose was to 
determine where to deliver food the next day, etc., to keep people from 
dying of starvation.  That was an enumerative study.  If the purpose had 
been to advance economic theories for use not only in Japan or in 
1946-47, that would have been an analytic study. 

 Do you have the book Pinheiro and Bates (2000), Mixed-Effects 
Models in S and S-PLUS (Springer)?  If you have more than one use for 
analyzing data on human subjects, I suggest you get and study this book 
if you haven't already.  Doug Bates and several of his graduate students 
developed "lme".  I am not current in the absolute latest 
literature in that area of statistics, but Bates seems to me to be among 
the leaders in that area and specifically in statistical computing for 
that kind of problem. 

 hope this helps.  spencer graves

John Christie wrote:

I am trying to understand better an analysis mean RT in various 
conditions in a within subjects design with the overall mean RT / 
subject as one of the factors.  LME seems to be the right way to do 
this. using something like m <- lme(rt~ a *b *subjectRT, random= 
~1|subject) and then anova(m, type = "marginal").  My understanding is 
that lme is an easy interface for dummy coding variables and doing a 
multiple regression (and that could be wrong).  But, what is aov doing 
in this instance? MANOVA?  I also haven't been able to find anything 
really useful on what to properly assign to  random in the lme 
formula.  For repeated measures the use above is always in the examples.



Re: [R] lme v. aov?

2003-11-27 Thread John Christie
It's not so much that I wasn't getting the difference between fixed and 
random effects.  Although, I do like the way you put the comment below. 
 For my purposes subject is a random effect.  It was more about the 
correct notation in lme for repeated-measures designs (my a and b are 
repeated, while the mean subjectRT is between-subjects), and about 
whether the way aov treats repeated measures might best be called a 
MANOVA method.

On Nov 27, 2003, at 12:54 PM, Spencer Graves wrote:

 Do you want to make inference about the specific subjects in your 
study?  If yes, the subjects are a fixed effect.  If instead you want 
to make inference about the societal processes that will generate the 
subjects you will get in the future, that is a random effect.  The 
function lme handles both fixed and random effects, as does 
varcomp.  The functions aov and lm are restricted to fixed 
effects only.  You can use dummy coding for lm and aov as well.
 The distinction between fixed and random effects seems to 
me to be the same as what Deming called the difference between 
enumerative and analytic studies:  With a fixed effect / 
enumerative study, the objective is to determine the disposition of 
the sampling frame.  For example, Deming managed a survey of food 
distribution in Japan in 1946 or so, right after World War II.  The 
purpose was to determine where to deliver food the next day, etc., to 
keep people from dying of starvation.  That was an enumerative study.  
If the purpose had been to advance economic theories for use not only 
in Japan or in 1946-47, that is an analytic study.
 Do you have the book Pinheiro and Bates (2000) Mixed-Effects 
Models in S and S-Plus (Springer)?  If you have more than one use for 
analyzing data on human subjects, I suggest you get and study this 
book if you haven't already.  Doug Bates and several of his graduate 
students have developed lme.  I am not current in the absolute 
latest literature in that area of statistics, but Bates seems to me to 
be among the leaders in that area and specifically in statistical 
computing for that kind of problem.
 hope this helps.  spencer graves

John Christie wrote:

I am trying to understand better an analysis mean RT in various 
conditions in a within subjects design with the overall mean RT / 
subject as one of the factors.  LME seems to be the right way to do 
this. using something like m <- lme(rt~ a *b *subjectRT, random= 
~1|subject) and then anova(m, type = "marginal").  My understanding is 
that lme is an easy interface for dummy coding variables and doing a 
multiple regression (and that could be wrong).  But, what is aov 
doing in this instance? MANOVA?  I also haven't been able to find 
anything really useful on what to properly assign to  random in the 
lme formula.  For repeated measures the use above is always in the 
examples.



[R] lme vs. aov with Error term

2003-10-02 Thread array chip
Hi,

I have a question about using lme and aov for the
following dataset. If I understand correctly, using
aov with an Error term in the formula is equivalent
to using lme with default settings, i.e. both assume a
compound-symmetry correlation structure, and I have
found that equivalency in the past. However, with the
following dataset I got different answers from
lme and aov; can anyone explain what happened
here? I have 2 different response variables,
x and y, in the following dataset; they are
actually only slightly different (just 3 of their
values differ). With y I achieved the equivalency
between lme and aov, but with x I got different
p-values for the ANOVA table.

---

x <- c(-0.0649,-0.0923,-0.0623,0.1809,0.0719,0.1017,0.0144,-0.1727,-0.1332,0.0986,0.304,-0.4093,0.2054,0.251,-0.1062,0.3833,0.0649,0.2908,0.1073,0.0919,0.1167,0.2369,0.306,0.1379)

y <- c(-0.0649,-0.0923,0.32,0.08,0.0719,0.1017,0.05,-0.1727,-0.1332,0.15,0.304,-0.4093,0.2054,0.251,-0.1062,0.3833,0.0649,0.2908,0.1073,0.0919,0.1167,0.2369,0.306,0.1379)

treat <- as.factor(c(1,1,1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,2,2))
time <- as.factor(c(1,1,1,1,2,2,2,2,3,3,3,3,1,1,1,1,2,2,2,2,3,3,3,3))
sex <- as.factor(c('F','F','M','M','F','F','M','M','F','F','M','M','F','F','M','M','F','F','M','M','F','F','M','M'))
subject <- as.factor(c(rep(1:4,3),rep(5:8,3)))
xx <- cbind(x=data.frame(x),y=y,treat=treat,time=time,sex=sex,subject=subject)

## using x as the dependent variable

xx.lme <- lme(x~treat*sex*time,random=~1|subject,xx)
xx.aov <- aov(x~treat*sex*time+Error(subject),xx)

summary(xx.aov)

Error: subject
          Df   Sum Sq  Mean Sq F value  Pr(>F)  
treat  1 0.210769 0.210769  6.8933 0.05846 .
sex1 0.005775 0.005775  0.1889 0.68627  
treat:sex  1 0.000587 0.000587  0.0192 0.89649  
Residuals  4 0.122304 0.030576  
---
Signif. codes:  0 `***' 0.001 `**' 0.01 `*' 0.05 `.'
0.1 ` ' 1 

Error: Within
               Df  Sum Sq Mean Sq F value Pr(>F)
time2 0.00102 0.00051  0.0109 0.9891
treat:time  2 0.00998 0.00499  0.1066 0.9002
sex:time2 0.02525 0.01263  0.2696 0.7704
treat:sex:time  2 0.03239 0.01619  0.3458 0.7178
Residuals   8 0.37469 0.04684 

anova(xx.lme)
   numDF denDF  F-value p-value
(Intercept)1 8 3.719117  0.0899
treat  1 4 5.089022  0.0871
sex1 4 0.139445  0.7278
time   2 8 0.012365  0.9877
treat:sex  1 4 0.014175  0.9110
treat:time 2 8 0.120538  0.8880
sex:time   2 8 0.304878  0.7454
treat:sex:time 2 8 0.391012  0.6886

## using y as the dependent variable

xx.lme2 <- lme(y~treat*sex*time,random=~1|subject,xx)
xx.aov2 <- aov(y~treat*sex*time+Error(subject),xx)

summary(xx.aov2)

Error: subject
          Df   Sum Sq  Mean Sq F value Pr(>F)
treat  1 0.147376 0.147376  2.0665 0.2239
sex1 0.000474 0.000474  0.0067 0.9389
treat:sex  1 0.006154 0.006154  0.0863 0.7836
Residuals  4 0.285268 0.071317   

Error: Within
               Df   Sum Sq  Mean Sq F value Pr(>F)
time2 0.009140 0.004570  0.1579 0.8565
treat:time  2 0.012598 0.006299  0.2177 0.8090
sex:time2 0.043132 0.021566  0.7453 0.5049
treat:sex:time  2 0.069733 0.034866  1.2050 0.3488
Residuals   8 0.231480 0.028935   

anova(xx.lme2)
   numDF denDF   F-value p-value
(Intercept)1 8 3.0667809  0.1180
treat  1 4 2.0664919  0.2239
sex1 4 0.0066516  0.9389
time   2 8 0.1579473  0.8565
treat:sex  1 4 0.0862850  0.7836
treat:time 2 8 0.2177028  0.8090
sex:time   2 8 0.7453185  0.5049
treat:sex:time 2 8 1.2049883  0.3488



[R] lme vs. aov

2003-09-30 Thread array chip
Hi,

I have a question about using lme and aov for the
following dataset. If I understand correctly, using
aov with an Error term in the formula is equivalent to
using lme with default settings, i.e. both assume a
compound-symmetry correlation structure, and I have
found that equivalency in the past. However, with the
following dataset I got different answers; can anyone
explain what happened here? I have 2 different response
variables, x and y, in the following dataset; with
y I achieved the equivalency between lme and
aov, but with x I got different p-values for the
ANOVA table.

---

x <- c(-0.0649,-0.0923,-0.0623,0.1809,0.0719,0.1017,0.0144,-0.1727,-0.1332,0.0986,0.304,-0.4093,0.2054,0.251,-0.1062,0.3833,
0.0649,0.2908,0.1073,0.0919,0.1167,0.2369,0.306,0.1379)

y <- c(-0.0649,-0.0923,0.32,0.08,0.0719,0.1017,0.05,-0.1727,-0.1332,0.15,0.304,-0.4093,0.2054,0.251,-0.1062,0.3833,0.0649,
0.2908,0.1073,0.0919,0.1167,0.2369,0.306,0.1379)

treat <- as.factor(c(1,1,1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,2,2))
time <- as.factor(c(1,1,1,1,2,2,2,2,3,3,3,3,1,1,1,1,2,2,2,2,3,3,3,3))
sex <- as.factor(c('F','F','M','M','F','F','M','M','F','F','M','M','F','F','M','M','F','F','M','M','F','F','M','M'))
subject <- as.factor(c(rep(1:4,3),rep(5:8,3)))
xx <- cbind(x=data.frame(x),y=y,treat=treat,time=time,sex=sex,subject=subject)

## using x as the dependent variable

xx.lme <- lme(x~treat*sex*time,random=~1|subject,xx)
xx.aov <- aov(x~treat*sex*time+Error(subject),xx)

summary(xx.aov)

Error: subject
          Df   Sum Sq  Mean Sq F value  Pr(>F)  
treat  1 0.210769 0.210769  6.8933 0.05846 .
sex1 0.005775 0.005775  0.1889 0.68627  
treat:sex  1 0.000587 0.000587  0.0192 0.89649  
Residuals  4 0.122304 0.030576  
---
Signif. codes:  0 `***' 0.001 `**' 0.01 `*' 0.05 `.'
0.1 ` ' 1 

Error: Within
               Df  Sum Sq Mean Sq F value Pr(>F)
time2 0.00102 0.00051  0.0109 0.9891
treat:time  2 0.00998 0.00499  0.1066 0.9002
sex:time2 0.02525 0.01263  0.2696 0.7704
treat:sex:time  2 0.03239 0.01619  0.3458 0.7178
Residuals   8 0.37469 0.04684 

anova(xx.lme)
   numDF denDF  F-value p-value
(Intercept)1 8 3.719117  0.0899
treat  1 4 5.089022  0.0871
sex1 4 0.139445  0.7278
time   2 8 0.012365  0.9877
treat:sex  1 4 0.014175  0.9110
treat:time 2 8 0.120538  0.8880
sex:time   2 8 0.304878  0.7454
treat:sex:time 2 8 0.391012  0.6886

## using y as the dependent variable

xx.lme2 <- lme(y~treat*sex*time,random=~1|subject,xx)
xx.aov2 <- aov(y~treat*sex*time+Error(subject),xx)

 summary(xx.aov2)

Error: subject
          Df   Sum Sq  Mean Sq F value Pr(>F)
treat  1 0.147376 0.147376  2.0665 0.2239
sex1 0.000474 0.000474  0.0067 0.9389
treat:sex  1 0.006154 0.006154  0.0863 0.7836
Residuals  4 0.285268 0.071317   

Error: Within
               Df   Sum Sq  Mean Sq F value Pr(>F)
time2 0.009140 0.004570  0.1579 0.8565
treat:time  2 0.012598 0.006299  0.2177 0.8090
sex:time2 0.043132 0.021566  0.7453 0.5049
treat:sex:time  2 0.069733 0.034866  1.2050 0.3488
Residuals   8 0.231480 0.028935   

anova(xx.lme2)
   numDF denDF   F-value p-value
(Intercept)1 8 3.0667809  0.1180
treat  1 4 2.0664919  0.2239
sex1 4 0.0066516  0.9389
time   2 8 0.1579473  0.8565
treat:sex  1 4 0.0862850  0.7836
treat:time 2 8 0.2177028  0.8090
sex:time   2 8 0.7453185  0.5049
treat:sex:time 2 8 1.2049883  0.3488



[R] lme() vs aov(y ~ A*B + Error(aa %in% A + bb %in% B)) [repost]

2003-06-17 Thread Martin Maechler
I've posted the following to R-help on May 15.
It has reproducible R code for real data -- and a real
(academic, i.e. unpaid) consulting background.

I'd be glad for some insight here, mainly not for myself.
In the meantime, we've learned that it is to be expected for
anova(*, "marginal") to be contrast dependent, but we would still
be glad for advice if you have experience.

Thank you in advance,
Martin



Re: [R] lme() vs aov(y ~ A*B + Error(aa %in% A + bb %in% B)) [repost]

2003-06-17 Thread Martin Maechler
 MM == Martin Maechler [EMAIL PROTECTED]
 on Tue, 17 Jun 2003 10:13:44 +0200 writes:

MM I've posted the following to R-help on May 15.
MM It has reproducible R code for real data -- and a real
MM (academic, i.e. unpaid) consulting background.

MM I'd be glad for some insight here, mainly not for myself.
MM In the mean time, we've learned that it is to be expected for
MM anova(*, marginal) to be contrast dependent, but still are
MM glad for advice if you have experience.

MM Thank you in advance,
MM Martin

and indeed, the forwarded message, also a kind of attachment (message/rfc822)
was dropped by mailman's content filtering too...
Here, it's as regular text:

Here is a reproducible (when you're on-net) script for a
relatively simple real data (consulting) situation here.

The data analyst tried to use lme() rather than aov() with an Error() 
term, and asked several questions that I couldn't easily answer and 
that I thought would be interesting for a more general audience in any 
case.

I attach the script as a text/plain file that you should be able to 
run anywhere.  The three questions are summarized at the
beginning of the file.

snip-snip

### LME/aov example {real data!}
### Two crossed fixed effects each with a random factor inside

### Problems :
### 1) need(?) ugly pseudo grouping
### 2) anova() depends on setting of contrasts
### 3) numerical problem "Singular precision matrix .." (should not happen?)

download.file("ftp://stat.ethz.ch/U/maechler/R/ex4.rda", "lme-ex4.rda")
load("lme-ex4.rda") ##- object `dw'
summary(dw)
str(dw)

## For some EDA {and S+ compatibility; for R, I'd use  with(dw, ...)} :
attach(dw)
## TREE: 1:3 at 1, 4:6 at 2 : TREE completely nested in LOCATION
table(TREE, LOCATION)
## and the same with THALLUS in PROV: th 1:5  pr1; 6:7  2; 8:12  3;...
table(THALLUS, PROV)
colSums(table(THALLUS, PROV) > 0)
## pr1 pr2 pr3 pr4 pr5 pr6
##   5   2   5   5   1   5

interaction.plot(PROV, LOCATION, y)
## Hmm.. PROV=5 is a bit peculiar
## well, for one because it has only 3+3 observations :
table(PROV, LOCATION)

plot(y ~ PROV, data = dw)# not so useful
## or (better?):
stripchart(y ~ PROV, vertical=TRUE, method="stack")

if(FALSE)# quite a bit to see
table(PROV, THALLUS, TREE)
all(table(PROV, THALLUS, TREE) %in% c(0,1)) ## TRUE

detach(dw)##- end of EDA -

## Setup:
## - 2 locations (LOCATION) with 3 trees each
## - 6 kinds proveniences (PROV) with 2-5 kinds thallus each
## - L * P are fixed (crossed), each has a random factor nested inside

## ==  Simple Approach is possible
##
## aov Models with Error()
## ---

##- 1  default contrasts -
options(contrasts = c("contr.treatment", "contr.poly"))
aov1 <- aov(y ~ PROV*LOCATION + Error(TREE%in%LOCATION + THALLUS%in%PROV),
data = dw)
summary(aov1)
##- 2  sum contrasts -
options(contrasts = c("contr.sum", "contr.poly"))
aov2 <- aov(y ~ PROV*LOCATION + Error(TREE%in%LOCATION + THALLUS%in%PROV),
data = dw)
## really the important things do NOT depend on the contrasts:
stopifnot(all.equal(summary(aov1),
summary(aov2)))

## For residual analysis :
aov1. <- aov(y ~ PROV*LOCATION + TREE%in%LOCATION + THALLUS%in%PROV, data = dw)
## non-sense model
## -- used only for residuals() , fitted()  which fail for aov1  !!
##  ((this is not nice behavior !))
par(mfrow = c(1,2))
plot(fitted(aov1.), residuals(aov1.)); abline(h = 0, lty = 2)
qqnorm(residuals(aov1.)); qqline(residuals(aov1.))

## model.tables(aov1., type = "means")



### Now try to be modern, use REML and hence
###
### lme models:
### === ---

library(nlme)

## Construct a pseudo grouping factor with just one level
dw$pseudogrp <- factor(c("g1"))
dwgrp <- groupedData(y ~ 1 | pseudogrp, data=dw)

###- 1  default contrasts -

options(contrasts = c("contr.treatment", "contr.poly"))

lme1 <- lme(y ~ LOCATION * PROV , data = dwgrp,
            random = pdBlocked(list(pdIdent(~TREE -1),
                                    pdIdent(~THALLUS -1))))
anova(lme1, type = "marginal")
## quite close to the aov() results above;
##  F value for PROV:LOCATION is the same, but has wrong (?) deg.freedom
##  ((because of the wrong pseudogrp grouping))

if(FALSE)
summary(lme1)
intervals(lme1, which = "var-cov")
ranef(lme1)

trellis.device()
lset(col.whitebg())

## Residual analysis:
par(mfrow = c(1,2))
plot(fitted(lme1), residuals(lme1)); abline(h = 0, lty = 2)
qqnorm(residuals(lme1)); qqline(residuals(lme1))


###- 2  sum contrasts -

options(contrasts = c("contr.sum", "contr.poly"))

lme2 <- lme(y ~ LOCATION * PROV , data = dwgrp,
            random = pdBlocked(list(pdIdent(~TREE -1),
                                    pdIdent(~THALLUS -1))))

## Same model as lme1 ?

## In some sense, yes:
stopifnot(all.equal(residuals(lme1),