[R] R: Re: Differences in output of lme() when introducing interactions

2015-07-23 Thread John Maindonald
Do you have legal advice that suing the university (if that is the right
context) would actually be a fruitful way forward, that it would achieve
anything useful within a reasonable time and without exposing the student
to severe financial risk?

What may work in that context is for students to collectively complain that
important aspects of their training and support are being neglected.
With the rapidity of recent technological change, the issue is widespread.
To an extent, able post-docs and PhDs have to lead the charge in getting
training and support updated and brought into the modern world.


John Maindonald   email: john.maindon...@anu.edu.au

On 22/07/2015, at 22:00, r-help-requ...@r-project.org wrote:

From: li...@dewey.myzen.co.uk
Date: 21-Jul-2015 11.58
To: "angelo.arc...@virgilio.it", bgunter.4...@gmail.com
Cc: r-help@r-project.org
Subject: Re: R: Re: [R] R: Re: Differences in output of lme() when introducing
interactions

Dear Angelo

I suggest you do an online search for marginality, which may help to
explain the relationship between main effects and interactions. As I
said in my original email, this is a complicated subject that we are not
going to retype for you.

If you are doing this as a student I suggest you sue your university for
failing to train you appropriately and if it is part of your employment
I suggest you find a better employer.

On 21/07/2015 10:04, angelo.arc...@virgilio.it wrote:
Dear Bert,
thank you for your feedback. Can you please provide some references
online so I can improve "my ignorance"?
Anyway, please note that it is not true that I know no statistics or
regression at all, and I am convinced that my question may be of interest
to someone else in the future.

That is what forums are for, isn't it? That is why people help each
other, isn't it?

Moreover, don't you think that I would not have asked this R forum if
I had the possibility to ask or pay a statistician?
Don't you think I have already done my best to study and learn before
posting this message? Trust me, I have read several online tutorials on
lme and lmer, and I am confident that I have got the basic concepts.
Still, I have not found the answer to my problem, so if you know it,
can you please give me some suggestions that could help?

I do not have a book from which to learn, and unfortunately I have to
analyze the results soon. Any help? Any to-the-point online reference
that could help me solve this problem?

Thank you in advance

Best regards

Angelo



__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] R: Re: Differences in output of lme() when introducing interactions

2015-07-22 Thread angelo.arc...@virgilio.it
Dear Terry,
I am very grateful to you for such a detailed and helpful answer.
Following your recommendation, I will skip the method presented at
http://www.ats.ucla.edu/stat/r/faq/type3.htm

So far, based on my understanding of R, I have come to the conclusion that
the correct way to test whether there is a relationship between my dependent
variable (the spectral centroid of a sound) and the weight, height, and
weight-height interaction of the participants asked to create those sounds
(in a repeated-measures design) is:


lme_centroid <- lme(Centroid ~ Weight * Height * Shoe_Size, data = My_data,
                    random = ~1 | Subject)

anova.lme(lme_centroid, type = "marginal")


Can anyone please confirm that these formulas are actually correct and give
the significant or non-significant p-values for the main effects and their
interactions? I would prefer to use lme(), not lmer().

I am of course assuming that the model I am using (Centroid ~
Weight*Height*Shoe_Size) is the best fit for my data.
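For anyone wanting to see what type = "marginal" changes relative to the default, here is a minimal self-contained sketch on simulated data (the variables below are invented for illustration; they are not Angelo's My_data):

```r
## Minimal sketch: marginal vs. sequential tests in anova.lme on fake data.
## nlme ships with R; all names below are invented for illustration.
library(nlme)
set.seed(3)
d <- data.frame(
  Subject = factor(rep(1:10, each = 6)),
  x1 = rnorm(60),
  x2 = rnorm(60)
)
## Subject-level random intercept plus a real effect of x1 only
d$y <- 1 + 0.5 * d$x1 + rep(rnorm(10, sd = 0.5), each = 6) + rnorm(60)

fit <- lme(y ~ x1 * x2, data = d, random = ~1 | Subject)

anova(fit, type = "marginal")    # each term tested after all the others
anova(fit, type = "sequential")  # terms tested in the order of the formula
```

With "marginal", each row is tested after adjusting for every other term in the model; with "sequential" (the default), terms are added in formula order, so the two tables generally disagree whenever predictors are correlated.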

Thanks in advance

Angelo




Original message
From: thern...@mayo.edu
Date: 22-Jul-2015 15.15
To: ,
Subject: Re: Differences in output of lme() when introducing interactions

"Type III" is a peculiarity of SAS which has taken root in the wider world.
There are three main questions with respect to it:

1. How to compute it (outside of SAS). There is a trick using contr.treatment
coding that works if the design has no missing factor combinations; your post
has a link to such a description. The SAS documentation is very obtuse, so
almost no one knows how to compute the general case.

2. What is it? It is a population average: the predicted average treatment
effect in a balanced population, one in which every factor combination appears
the same number of times. One way to compute 'type 3' is to create such a data
set, obtain all the predicted values, take the average prediction for
treatment A, the average for treatment B, the average for C, ..., and test
"are these averages the same". The algorithm in #1 above leads to another
explanation, which in my opinion is a false trail.

3. Should you ever use it? No. There is a very strong inverse correlation
between "understands what it really is" and "recommends its use". Stephen
Senn has written very intelligently on the issues.
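The balanced-population recipe in point 2 can be sketched in a few lines of R (a toy illustration on simulated data, not the SAS algorithm and not the poster's model):

```r
## Toy illustration of the "balanced population" view of type III:
## fit a model, predict over a data set in which every factor combination
## appears exactly once, then average the predictions per level.
set.seed(1)
d <- data.frame(
  A = factor(rep(c("a1", "a2"), each = 30)),
  B = factor(rep(c("b1", "b2", "b3"), times = 20))
)
d$y <- rnorm(60, mean = ifelse(d$A == "a2", 2, 0))

fit <- lm(y ~ A * B, data = d)

## Balanced data set: all A x B combinations, once each
grid <- expand.grid(A = levels(d$A), B = levels(d$B))
grid$pred <- predict(fit, newdata = grid)

## Average prediction for each level of A ("are these averages the same?")
tapply(grid$pred, grid$A, mean)
```

The test "are these averages the same" is then a contrast on these per-level means; in an unbalanced design this is exactly where the type III answer departs from the sequential one.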

Terry Therneau


On 07/22/2015 05:00 AM, r-help-requ...@r-project.org wrote:
> Dear Michael,
> thanks a lot. I am studying marginality and came across this post:
>
> http://www.ats.ucla.edu/stat/r/faq/type3.htm
>
> Do you think that the procedure described there is the right one to solve
> my problem?
>
> Would you have any other online resources to suggest, especially ones
> dealing with R?
>
> My department does not have a statistician, so I have to find a solution
> on my own.
>
> Thanks in advance
>
> Angelo



   


Re: [R] R: Re: Differences in output of lme() when introducing interactions

2015-07-20 Thread Bert Gunter
I believe Michael's point is that you need to STOP asking such
questions and START either learning some statistics or working with
someone who already knows some. You should not be doing such analyses
on your own given your present state of statistical ignorance.

Cheers,
Bert


Bert Gunter

"Data is not information. Information is not knowledge. And knowledge
is certainly not wisdom."
   -- Clifford Stoll


On Mon, Jul 20, 2015 at 5:45 PM, angelo.arc...@virgilio.it
 wrote:
> Dear Michael,
> thanks for your answer. Despite it answers to my initial question, it does 
> not help me in finding the solution to my problem unfortunately.
>
> Could you please tell me which analysis of the two models should I trust then?
> My goal is to know whether participants’ choices
>  of the dependent variable are linearly related to their own weight, height, 
> shoe size and
>  the combination of those effects.
> Would the analysis of model 2 be more
> correct than that of model 1? Which of the two analysis should I trust 
> according to my goal?
> What is your recommendation?
>
>
> Thanks in advance
>
> Angelo
>
>
>
>
>
> Messaggio originale
> Da: li...@dewey.myzen.co.uk
> Data: 20-lug-2015 17.56
> A: "angelo.arc...@virgilio.it", 
> 
> Ogg: Re: [R] Differences in output of lme() when introducing interactions
>
> In-line
>
> On 20/07/2015 15:10, angelo.arc...@virgilio.it wrote:
>> Dear List Members,
>>
>>
>>
>> I am searching for correlations between a dependent variable and a
>> factor or a combination of factors in a repeated measure design. So I
>> use lme() function in R. However, I am getting very different results
>> depending on whether I add on the lme formula various factors compared
>> to when only one is present. If a factor is found to be significant,
>> shouldn't remain significant also when more factors are introduced in
>> the model?
>>
>
> The short answer is 'No'.
>
> The long answer is contained in any good book on statistics which you
> really need to have by your side as the long answer is too long to
> include in an email.
>
>>
>> I give an example of the outputs I get using the two models. In the first 
>> model I use one single factor:
>>
>> library(nlme)
>> summary(lme(Mode ~ Weight, data = Gravel_ds, random = ~1 | Subject))
>> Linear mixed-effects model fit by REML
>>   Data: Gravel_ds
>>AIC  BIC   logLik
>>2119.28 2130.154 -1055.64
>>
>> Random effects:
>>   Formula: ~1 | Subject
>>  (Intercept) Residual
>> StdDev:1952.495 2496.424
>>
>> Fixed effects: Mode ~ Weight
>>  Value Std.Error DF   t-value p-value
>> (Intercept) 10308.966 2319.0711 95  4.445299   0.000
>> Weight-99.036   32.3094 17 -3.065233   0.007
>>   Correlation:
>> (Intr)
>> Weight -0.976
>>
>> Standardized Within-Group Residuals:
>>  Min  Q1 Med  Q3 Max
>> -1.74326719 -0.41379593 -0.06508451  0.39578734  2.27406649
>>
>> Number of Observations: 114
>> Number of Groups: 19
>>
>>
>> As you can see the p-value for factor Weight is significant.
>> This is the second model, in which I add various factors for searching their 
>> correlations:
>>
>> library(nlme)
>> summary(lme(Mode ~ Weight*Height*Shoe_Size*BMI, data = Gravel_ds, random = 
>> ~1 | Subject))
>> Linear mixed-effects model fit by REML
>>   Data: Gravel_ds
>> AIC  BIClogLik
>>1975.165 2021.694 -969.5825
>>
>> Random effects:
>>   Formula: ~1 | Subject
>>  (Intercept) Residual
>> StdDev:1.127993 2494.826
>>
>> Fixed effects: Mode ~ Weight * Height * Shoe_Size * BMI
>>  Value Std.Error DFt-value p-value
>> (Intercept)   5115955  10546313 95  0.4850941  0.6287
>> Weight  -13651237   6939242  3 -1.9672518  0.1438
>> Height -18678 53202  3 -0.3510740  0.7487
>> Shoe_Size   93427213737  3  0.4371115  0.6916
>> BMI -13011088   7148969  3 -1.8199949  0.1663
>> Weight:Height   28128 14191  3  1.9820883  0.1418
>> Weight:Shoe_Size   351453186304  3  1.8864467  0.1557
>> Height:Shoe_Size -783  1073  3 -0.7298797  0.5183
>> Weight:BMI  19475 11425  3  1.7045450  0.1868
>> Height:BMI 226512118364  3  1.9136867  0.1516
>> Shoe_Size:BMI  329377190294  3  1.7308827  0.1819
>> Weight:Height:Shoe_Size  -706   371  3 -1.9014817  0.1534
>> Weight:Height:BMI-10963  3 -1.7258742  0.1828
>> Weight:Shoe_Size:BMI -273   201  3 -1.3596421  0.2671
>> Height:Shoe_Size:BMI-5858  3200  3 -1.8306771  0.1646
>> Weight:Height:Shoe_Size:BMI 2 1  3  1.3891782  0.2589
>>   Correlation:
>>  (Intr) Weight Height Sho_Sz BMIWght:H 
>> Wg:S_S Hg:S_S Wg:BMI Hg:BMI S_S:BM Wg:H:S_S W:H:BM W:S_S: H:S_S:
>>

[R] R: Re: Differences in output of lme() when introducing interactions

2015-07-20 Thread angelo.arc...@virgilio.it
Dear Michael,
thanks for your answer. Although it addresses my initial question,
unfortunately it does not help me find the solution to my problem.

Could you please tell me which of the two models' analyses I should trust,
then? My goal is to know whether participants' choices of the dependent
variable are linearly related to their own weight, height, shoe size, and
the combination of those effects.
Would the analysis of model 2 be more correct than that of model 1? Which
of the two analyses should I trust, given my goal?
What is your recommendation?


Thanks in advance

Angelo





Original message
From: li...@dewey.myzen.co.uk
Date: 20-Jul-2015 17.56
To: "angelo.arc...@virgilio.it",
Subject: Re: [R] Differences in output of lme() when introducing interactions

In-line

On 20/07/2015 15:10, angelo.arc...@virgilio.it wrote:
> Dear List Members,
>
>
>
> I am searching for correlations between a dependent variable and a
> factor or a combination of factors in a repeated-measures design, so I
> use the lme() function in R. However, I am getting very different results
> depending on whether the lme formula contains several factors or only
> one. If a factor is found to be significant, shouldn't it remain
> significant when more factors are introduced into the model?
>

The short answer is 'No'.

The long answer is contained in any good book on statistics, which you
really need to have by your side, as the long answer is too long to
include in an email.
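The gist of that long answer can be glimpsed in a toy simulation (invented data, unrelated to the poster's Gravel_ds): a predictor that is clearly significant on its own can stop being significant once a highly correlated covariate enters the model.

```r
## Toy simulation: the two predictors below are almost collinear, so the
## information in "weight" is largely shared with "height". Alone, weight
## is highly significant; jointly, its p-value is far larger.
set.seed(2)
n <- 100
weight <- rnorm(n, mean = 70, sd = 10)
height <- 0.8 * weight + rnorm(n, sd = 1)   # almost collinear with weight
y <- 0.5 * weight + rnorm(n, sd = 5)

p_alone <- summary(lm(y ~ weight))$coefficients["weight", "Pr(>|t|)"]
p_joint <- summary(lm(y ~ weight + height))$coefficients["weight", "Pr(>|t|)"]

p_alone  # tiny: weight is significant by itself
p_joint  # far larger: height carries nearly the same information
```

Nothing is "wrong" with either fit; the two models simply answer different questions (weight's total association versus its contribution after adjusting for height).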

>
> I give an example of the outputs I get using the two models. In the first 
> model I use one single factor:
>
> library(nlme)
> summary(lme(Mode ~ Weight, data = Gravel_ds, random = ~1 | Subject))
> Linear mixed-effects model fit by REML
>   Data: Gravel_ds
>AIC  BIC   logLik
>2119.28 2130.154 -1055.64
>
> Random effects:
>   Formula: ~1 | Subject
>  (Intercept) Residual
> StdDev:    1952.495 2496.424
>
> Fixed effects: Mode ~ Weight
>                 Value Std.Error DF   t-value p-value
> (Intercept) 10308.966 2319.0711 95  4.445299   0.000
> Weight        -99.036   32.3094 17 -3.065233   0.007
>   Correlation:
> (Intr)
> Weight -0.976
>
> Standardized Within-Group Residuals:
>  Min  Q1 Med  Q3 Max
> -1.74326719 -0.41379593 -0.06508451  0.39578734  2.27406649
>
> Number of Observations: 114
> Number of Groups: 19
>
>
> As you can see, the p-value for the factor Weight is significant.
> This is the second model, in which I add various factors to search for
> their correlations:
>
> library(nlme)
> summary(lme(Mode ~ Weight*Height*Shoe_Size*BMI, data = Gravel_ds, random = ~1 | Subject))
> Linear mixed-effects model fit by REML
>   Data: Gravel_ds
> AIC  BIClogLik
>1975.165 2021.694 -969.5825
>
> Random effects:
>   Formula: ~1 | Subject
>  (Intercept) Residual
> StdDev:    1.127993 2494.826
>
> Fixed effects: Mode ~ Weight * Height * Shoe_Size * BMI
>                                 Value Std.Error DF    t-value p-value
> (Intercept)                   5115955  10546313 95  0.4850941  0.6287
> Weight                      -13651237   6939242  3 -1.9672518  0.1438
> Height                         -18678     53202  3 -0.3510740  0.7487
> Shoe_Size                       93427    213737  3  0.4371115  0.6916
> BMI                         -13011088   7148969  3 -1.8199949  0.1663
> Weight:Height                   28128     14191  3  1.9820883  0.1418
> Weight:Shoe_Size               351453    186304  3  1.8864467  0.1557
> Height:Shoe_Size                 -783      1073  3 -0.7298797  0.5183
> Weight:BMI                      19475     11425  3  1.7045450  0.1868
> Height:BMI                     226512    118364  3  1.9136867  0.1516
> Shoe_Size:BMI                  329377    190294  3  1.7308827  0.1819
> Weight:Height:Shoe_Size          -706       371  3 -1.9014817  0.1534
> Weight:Height:BMI                -109        63  3 -1.7258742  0.1828
> Weight:Shoe_Size:BMI             -273       201  3 -1.3596421  0.2671
> Height:Shoe_Size:BMI            -5858      3200  3 -1.8306771  0.1646
> Weight:Height:Shoe_Size:BMI         2         1  3  1.3891782  0.2589
>   Correlation:
>                  (Intr) Weight Height Sho_Sz BMI    Wght:H Wg:S_S Hg:S_S Wg:BMI Hg:BMI S_S:BM Wg:H:S_S W:H:BM W:S_S: H:S_S:
> Weight           -0.895
> Height           -0.996  0.869
> Shoe_Size        -0.930  0.694  0.933
> BMI              -0.911  0.998  0.887  0.720
> Weight:Height     0.894 -1.000 -0.867 -0.692 -0.997
> Weight:Shoe_Size  0.898 -0.997 -0.873 -0.700 -0.999  0.995
> Height:Shoe_Size  0.890 -0.612 -0.904 -0.991 -0.641  0.609  0.619
> Weight:BMI        0.911 -0.976 -0.887 -0.715 -0.972  0.980  0.965  0.637
> Height:BMI        0.900 -1.000 -0.875 -0.703 -0.999  0.999  0.999  0.622  0.973
> Shoe_Size:B