Re: Analysis of covariance

2001-12-19 Thread Elliot Cramer

Bruce Weaver <[EMAIL PROTECTED]> wrote:


: Paul's post reminded me of something I read in Keppel's Design and
: Analysis.  Here's an excerpt from my notes on ANCOVA:


: the analysis of covariance is more precise with correlations greater than
: .6.  Since we rarely obtain correlations of this latter magnitude in the
: behavioral sciences, we will not find a unique advantage in the analysis
: of covariance in most research applications.
I've NEVER seen a pre-post correlation less than .4


:   Keppel (1982, p. 513) also prefers the Treatments X Blocks design
: to ANCOVA on the grounds that the underlying assumptions are less
: stringent:
He's wrong in the random assignment case; the assumptions are essentially
the same.  The ANCOVA estimates are unbiased without any assumptions, and
without assumptions they test exactly the same hypothesis as the simple t
test, the test of change scores, or the treatments x blocks test.



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Analysis of covariance

2001-12-19 Thread Elliot Cramer

Paul R. Swank <[EMAIL PROTECTED]> wrote:
: Some years ago I did a simulation on the pretest-posttest control group
: design looking at three methods of analysis, ANCOVA, repeated measures
: ANOVA, and treatment by block factorial ANOVA (blocking on the pretest
: using a median split).
: I found that with typical sample sizes, the repeated
: measures ANOVA was a bit more powerful than the ANCOVA procedure when the
: correlation between pretest and posttest was fairly high (say .90). As noted

: I tried to
: publish the results at the time but aimed a bit too high and received such a
: scathing review (what kind of idiot would do this kind of study?) that I
: shoved it in a drawer and it has never seen the light of day since.

You did good.

Median splits are always dumb, and a test of the change scores will only
be more powerful than ANCOVA if the regression coefficient is near
1.  Usually the regression coefficient is about the same as the correlation,
since the SDs are likely to be about the same.

Hence ALWAYS use ANCOVA with random assignment to groups
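Cramer's power claim is easy to check by Monte Carlo. The sketch below (Python with NumPy/SciPy; the sample size, effect size, and noise level are illustrative choices, not numbers from the thread) simulates a randomized pre-post design in which post = beta*pre + effect*group + noise, and compares the rejection rates of ANCOVA and the change-score t test. When beta is well below 1, the change scores carry extra pretest variance and ANCOVA wins; when beta is near 1 the two tests are nearly equivalent.

```python
import numpy as np
from scipy import stats

def power(beta, n=40, effect=5.0, sd_e=10.0, reps=2000, seed=0):
    """Monte Carlo power of ANCOVA vs. the change-score t test
    for a randomized two-group pre-post design."""
    rng = np.random.default_rng(seed)
    group = np.repeat([0, 1], n // 2)
    hits_ancova = hits_change = 0
    for _ in range(reps):
        pre = rng.normal(50, 10, n)
        post = beta * pre + effect * group + rng.normal(0, sd_e, n)

        # Change-score analysis: two-sample t test on post - pre.
        change = post - pre
        p_chg = stats.ttest_ind(change[group == 1], change[group == 0]).pvalue

        # ANCOVA: t test on the group coefficient, with pre as covariate.
        X = np.column_stack([np.ones(n), pre, group])
        b = np.linalg.lstsq(X, post, rcond=None)[0]
        resid = post - X @ b
        s2 = resid @ resid / (n - 3)
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[2, 2])
        p_anc = 2 * stats.t.sf(abs(b[2] / se), n - 3)

        hits_ancova += p_anc < 0.05
        hits_change += p_chg < 0.05
    return hits_ancova / reps, hits_change / reps

for beta in (0.4, 1.0):
    print(beta, power(beta))
```

At beta = 0.4 the ANCOVA rejection rate is clearly higher; at beta = 1.0 the two are essentially tied (ANCOVA pays only one degree of freedom), which is exactly the pattern Cramer describes.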






Re: Analysis of covariance

2001-12-19 Thread Elliot Cramer

Morelli Paolo <[EMAIL PROTECTED]> wrote:
: Hi all,
: I have to analyse some clinical data. In particular the analysis is a
: comparison between two groups of the mean change from baseline to endpoint
: of a score.

I hope that your study is randomized; if not, it's not worth worrying
about.  If it is, his analysis is equivalent to ANCOVA on post covarying
pre, and is the only proper analysis.  The true measure of change is a
comparison between the two groups, since the proper question is how the
experimental group compares to what it would have been without the
experimental condition.  ANCOVA simply increases power here.

You should also test for parallelism (homogeneity of regression slopes).






RE: Analysis of covariance

2001-10-02 Thread Bruce Weaver


On 27 Sep 2001, Paul R. Swank wrote:

> Some years ago I did a simulation on the pretest-posttest control group
> design looking at three methods of analysis, ANCOVA, repeated measures
> ANOVA, and treatment by block factorial ANOVA (blocking on the pretest using
> a median split). I found that with typical sample sizes, the repeated
> measures ANOVA was a bit more powerful than the ANCOVA procedure when the
> correlation between pretest and posttest was fairly high (say .90). As noted
> below, this is because the ANCOVA and ANOVA methods are approaching the same
> solution, but ANCOVA loses a degree of freedom estimating the regression
> parameter when the ANOVA doesn't. Of course this effect diminishes as the
> sample size gets larger because the loss of one df matters less. On the
> other hand, the treatment by block design tends to have a bit more power
> when the correlation between pretest and posttest is low (< .30). I tried to
> publish the results at the time but aimed a bit too high and received such a
> scathing review (what kind of idiot would do this kind of study?) that I
> shoved it in a drawer and it has never seen the light of day since. I did the
> study because it seemed at the time that everyone was using this design but
> was unsure of the analysis, and I thought a demonstration would be helpful.
> SO, to make a long story even longer, the ANCOVA seems to be most powerful
> in those circumstances one is likely to run into, but does have somewhat
> rigid assumptions about homogeneity of regression slopes. Of course the
> repeated measures ANOVA indirectly makes the same assumption, but at such
> high correlations this is really a homogeneity of variance issue as well.
> The second thought is for you reviewers out there trying to soothe your own
> egos by dumping on someone else's. Remember, the researcher you squelch
> today might be turned off to research and fail to solve a meaty problem
> tomorrow.
>
> Paul R. Swank, Ph.D.
> Professor
> Developmental Pediatrics
> UT Houston Health Science Center
>

Paul's post reminded me of something I read in Keppel's Design and
Analysis.  Here's an excerpt from my notes on ANCOVA:


Keppel (1982, p. 512) says:

If the choice is between blocking and the analysis of covariance, Feldt
(1958) has shown that blocking is more precise when the correlation
between the covariate and the dependent variable is less than .4, while
the analysis of covariance is more precise with correlations greater than
.6.  Since we rarely obtain correlations of this latter magnitude in the
behavioral sciences, we will not find a unique advantage in the analysis
of covariance in most research applications.

Keppel (1982, p. 513) also prefers the Treatments X Blocks design
to ANCOVA on the grounds that the underlying assumptions are less
stringent:

Both within-subjects designs and analyses of covariance require a number
of specialized statistical assumptions.  With the former, homogeneity of
between treatment differences and the absence of differential carryover
effects are assumptions that are critical for an unambiguous
interpretation of the results of an experiment.  With the latter, the most
stringent is the assumption of homogeneous within-group regression
coefficients.  Both the analysis of covariance and the analysis of
within-subjects designs are sensitive only to the linear relationship
between X and Y, in the first case, and between pairs of treatment
conditions in the second case.  In contrast, the Treatments X Blocks
design is sensitive to any type of relationship between treatments and
blocks--not just linear.  As Winer puts it, the Treatments X Blocks design
"is a function-free regression scheme" (1971, p. 754).  This is a major
advantage of the Treatments X Blocks design.  In short, the Treatments X
Blocks design does not have restrictive assumptions and, for this reason,
is to be preferred for its relative freedom from statistical assumptions
underlying the data analysis.

-- 
Bruce Weaver
E-mail: [EMAIL PROTECTED]
Homepage:   http://www.angelfire.com/wv/bwhomedir/






Re: Analysis of covariance

2001-09-27 Thread Frank E Harrell Jr

I would have to respectfully disagree with Dennis' comment
also.  Having the pre values twice in the model does not
hurt or change anything in interpreting the treatment effect.

BUT I do not like this approach.  It makes the results more
difficult to interpret when you do have a variable in both
places.  As it is mandatory to have the pre measurement as
a separate covariable at any rate, the response variable
I prefer is the follow-up assessment, not the change.
A good discussion is in Stephen Senn's "Statistical Issues
in Drug Development" book (Wiley).  -Frank Harrell



Radford Neal wrote:
> 
> In article <[EMAIL PROTECTED]>,
> Dennis Roberts <[EMAIL PROTECTED]> wrote:
> 
> >the basic idea is to be able to "explain" the post score variance in terms
> >of something ELSE ... that is, for example ... we know that some of the
> >variance in pain is due to one's TOLERANCE for PAIN ... thus, if we can
> >remove the part of pain variance that is due to TOLERANCE FOR pain ... then
> >the leftover variance on pain is a purer measure in its own right ..
> >
> >if you do as suggested ... remove the pre from the post ... say pre pain
> >from post pain ... what is left over? it is not pain anymore but rather,
> >some OTHER variable ... which is not what the purpose of the study was ...
> >to investigate (i assume anyway)
> 
> Well, the idea is that the OTHER variable is the treatment effect,
> whose quantification presumably IS the purpose of the study.  I think
> this is a pretty standard thing to do.
> 
> It seems that the original question was meant to address the more
> technical issue of whether you can include the pre-treatment value as
> an explanatory variable when the response variable is already the
> CHANGE from before treatment to after treatment.  As another poster
> has ably explained, you can, though it's a bit strange and redundant.
> 
>Radford
> 
> 
> Radford M. Neal   [EMAIL PROTECTED]
> Dept. of Statistics and Dept. of Computer Science [EMAIL PROTECTED]
> University of Toronto http://www.cs.utoronto.ca/~radford
> 

-- 
Frank E Harrell Jr  Prof. of Biostatistics & Statistics
Div. of Biostatistics & Epidem. Dept. of Health Evaluation Sciences
U. Virginia School of Medicine  http://hesweb1.med.virginia.edu/biostat





RE: Analysis of covariance

2001-09-27 Thread Paul R. Swank

Some years ago I did a simulation on the pretest-posttest control group
design looking at three methods of analysis, ANCOVA, repeated measures
ANOVA, and treatment by block factorial ANOVA (blocking on the pretest using
a median split). I found that with typical sample sizes, the repeated
measures ANOVA was a bit more powerful than the ANCOVA procedure when the
correlation between pretest and posttest was fairly high (say .90). As noted
below, this is because the ANCOVA and ANOVA methods are approaching the same
solution, but ANCOVA loses a degree of freedom estimating the regression
parameter when the ANOVA doesn't. Of course this effect diminishes as the
sample size gets larger because the loss of one df matters less. On the
other hand, the treatment by block design tends to have a bit more power
when the correlation between pretest and posttest is low (< .30). I tried to
publish the results at the time but aimed a bit too high and received such a
scathing review (what kind of idiot would do this kind of study?) that I
shoved it in a drawer and it has never seen the light of day since. I did the
study because it seemed at the time that everyone was using this design but
was unsure of the analysis, and I thought a demonstration would be helpful.
SO, to make a long story even longer, the ANCOVA seems to be most powerful
in those circumstances one is likely to run into, but does have somewhat
rigid assumptions about homogeneity of regression slopes. Of course the
repeated measures ANOVA indirectly makes the same assumption, but at such
high correlations this is really a homogeneity of variance issue as well.
The second thought is for you reviewers out there trying to soothe your own
egos by dumping on someone else's. Remember, the researcher you squelch
today might be turned off to research and fail to solve a meaty problem
tomorrow.

Paul R. Swank, Ph.D.
Professor
Developmental Pediatrics
UT Houston Health Science Center

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of jim clark
Sent: Thursday, September 27, 2001 7:00 AM
To: [EMAIL PROTECTED]
Subject: Re: Analysis of covariance


Hi

On 26 Sep 2001, Burke Johnson wrote:
> R Pretest   Treatment   Posttest
> R PretestControl   Posttest
> In the social sciences (e.g., see Pedhazur's popular
> regression text), the most popular analysis seems to be to
> run a GLM (this version is often called an ANCOVA), where Y
> is the posttest measure, X1 is the pretest measure, and X2 is
> the treatment variable. Assuming that X1 and X2 do not
> interact, one's estimate of the treatment effect is given by
> B2 (i.e., the partial regression coefficient for the
> treatment variable, which adjusts for pretest differences).

> Another traditionally popular analysis for the design given
> above is to compute a new, gain score variable (posttest
> minus pretest) for all cases and then run a GLM (ANOVA) to
> see if the difference between the gains (which is the
> estimate of the treatment effect) is statistically
> significant.

> The third, and somewhat less popular (?) way to analyze the
> above design is to do a mixed ANOVA model (which is also a
> GLM but it is harder to write out), where Y is the posttest,
> X1 is "time" which is a repeated measures variable (e.g.,
> time is 1 for pretest and 2 for posttest for all cases), and
> X2 is the between group, treatment variable. In this case one
> looks for treatment impact by testing the statistical
> significance of the two-way interaction between the time and
> the treatment variables. Usually, you ask if the difference
> between the means at time two is greater than the difference
> at time one (i.e., you hope that the treatment lines will not
> be parallel)

> Results will vary depending on which of these three
> approaches you use, because each approach estimates the
> counterfactual in a slightly different way. I believe it was
> Reichardt and Mark (in Handbook of Applied Social Research
> Methods) that suggested analyzing your data using more than
> one of these three statistical methods.

Methods 2 and 3 are equivalent to one another.  The F for the
difference between change scores will equal the F for the
interaction.  I believe that one way to think of the difference
between methods 1 and 2/3 is that in 2/3 you "regress" t2 on t1
assuming slope=1 and intercept=0 (i.e., the "predicted" score is
the t1 score), whereas in method 1 you estimate the slope and
intercept from the data.  Presumably it would be possible to
simulate the differences between the two analyses as a function
of the magnitude of the difference between means and the
relationship between t1 and t2.  I don't know if anyone has done
that.

Best wishes
Jim


Re: Analysis of covariance

2001-09-27 Thread jim clark

Hi

On 26 Sep 2001, Burke Johnson wrote:
> R Pretest   Treatment   Posttest 
> R PretestControl   Posttest
> In the social sciences (e.g., see Pedhazur's popular
> regression text), the most popular analysis seems to be to
> run a GLM (this version is often called an ANCOVA), where Y
> is the posttest measure, X1 is the pretest measure, and X2 is
> the treatment variable. Assuming that X1 and X2 do not
> interact, one's estimate of the treatment effect is given by
> B2 (i.e., the partial regression coefficient for the
> treatment variable, which adjusts for pretest differences).

> Another traditionally popular analysis for the design given
> above is to compute a new, gain score variable (posttest
> minus pretest) for all cases and then run a GLM (ANOVA) to
> see if the difference between the gains (which is the
> estimate of the treatment effect) is statistically
> significant.

> The third, and somewhat less popular (?) way to analyze the
> above design is to do a mixed ANOVA model (which is also a
> GLM but it is harder to write out), where Y is the posttest,
> X1 is "time" which is a repeated measures variable (e.g.,
> time is 1 for pretest and 2 for posttest for all cases), and
> X2 is the between group, treatment variable. In this case one
> looks for treatment impact by testing the statistical
> significance of the two-way interaction between the time and
> the treatment variables. Usually, you ask if the difference
> between the means at time two is greater than the difference
> at time one (i.e., you hope that the treatment lines will not
> be parallel)

> Results will vary depending on which of these three
> approaches you use, because each approach estimates the
> counterfactual in a slightly different way. I believe it was
> Reichardt and Mark (in Handbook of Applied Social Research
> Methods) that suggested analyzing your data using more than
> one of these three statistical methods.

Methods 2 and 3 are equivalent to one another.  The F for the
difference between change scores will equal the F for the
interaction.  I believe that one way to think of the difference
between methods 1 and 2/3 is that in 2/3 you "regress" t2 on t1
assuming slope=1 and intercept=0 (i.e., the "predicted" score is
the t1 score), whereas in method 1 you estimate the slope and
intercept from the data.  Presumably it would be possible to
simulate the differences between the two analyses as a function
of the magnitude of the difference between means and the
relationship between t1 and t2.  I don't know if anyone has done
that.
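Jim's framing can be checked directly: the change-score estimate (and, since methods 2 and 3 give the same F, the interaction estimate too) is exactly the method-1 regression with the slope on t1 forced to 1. A small sketch (Python with NumPy; the simulated data and names are illustrative, not from any of the posts):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
pre = rng.normal(100, 15, n)                       # t1 scores
group = np.repeat([0, 1], n // 2)                  # control / treatment
post = 20 + 0.7 * pre + 4 * group + rng.normal(0, 10, n)

# Method 1 (ANCOVA): slope on the pretest estimated from the data.
X = np.column_stack([np.ones(n), pre, group])
b_ancova = np.linalg.lstsq(X, post, rcond=None)[0]

# Methods 2/3: the group difference in change scores ...
change = post - pre
gain_diff = change[group == 1].mean() - change[group == 0].mean()

# ... equals regressing (post - 1*pre) on group, i.e. slope fixed at 1.
X0 = np.column_stack([np.ones(n), group])
b_fixed = np.linalg.lstsq(X0, change, rcond=None)[0]

print(gain_diff, b_fixed[1])    # identical
print(b_ancova[2])              # differs unless the estimated slope is near 1
```

This is the simulation-as-a-function-of-slope comparison Jim proposes: as the estimated slope of t2 on t1 approaches 1, the two analyses converge on the same treatment-effect estimate.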

Best wishes
Jim


James M. Clark  (204) 786-9757
Department of Psychology(204) 774-4134 Fax
University of Winnipeg  4L05D
Winnipeg, Manitoba  R3B 2E9 [EMAIL PROTECTED]
CANADA  http://www.uwinnipeg.ca/~clark







Re: Analysis of covariance

2001-09-26 Thread Dennis Roberts

At 02:26 PM 9/26/01 -0500, Burke Johnson wrote:
> From my understanding, there are three popular ways to analyze the
> following design (let's call it the pretest-posttest control-group design):
>
>R Pretest   Treatment   Posttest
>R PretestControl   Posttest

if random assignment has occurred ... then, we assume and we had better 
find that the means on the pretest are close to being the same ... if we 
don't, then we wonder about random assignment (which creates a mess)

anyway, i digress ...

what i would do is to do a simple t test on the difference in posttest 
means and, if you find something here ... then that means that treatment 
"changed" differentially compared to control

if that happens, why do anything more complicated? has not the answer to 
your main question been found?

now, what if you don't ... then, maybe something a bit more complex is 
appropriate

IMHO

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






Analysis of covariance

2001-09-26 Thread Burke Johnson

From my understanding, there are three popular ways to analyze the following design
(let's call it the pretest-posttest control-group design):

R Pretest   Treatment   Posttest 
R PretestControl   Posttest

In the social sciences (e.g., see Pedhazur's popular regression text), the most 
popular analysis seems to be to run a GLM (this version is often called an ANCOVA), 
where Y is the posttest measure, X1 is the pretest measure, and X2 is the treatment 
variable. Assuming that X1 and X2 do not interact, one's estimate of the treatment 
effect is given by B2 (i.e., the partial regression coefficient for the treatment 
variable, which adjusts for pretest differences). 

Another traditionally popular analysis for the design given above is to compute a new, 
gain score variable (posttest minus pretest) for all cases and then run a GLM (ANOVA) 
to see if the difference between the gains (which is the estimate of the treatment 
effect) is statistically significant. 

The third, and somewhat less popular (?) way to analyze the above design is to do a 
mixed ANOVA model (which is also a GLM but it is harder to write out), where Y is the 
posttest, X1 is "time" which is a  repeated measures variable (e.g., time is 1 for 
pretest and 2 for posttest for all cases), and X2 is the between group, treatment 
variable. In this case one looks for treatment impact by testing the statistical 
significance of the two-way interaction between the time and the treatment variables. 
Usually, you ask if the difference between the means at time two is greater than the 
difference at time one (i.e., you hope that the treatment lines will not be parallel)

Results will vary depending on which of these three approaches you use, because each 
approach estimates the counterfactual in a slightly different way. I believe it was 
Reichardt and Mark (in Handbook of Applied Social Research Methods) that suggested 
analyzing your data using more than one of these three statistical methods. 

I'd be interested in any thoughts you have about these three approaches.

Take care,
Burke Johnson
http://www.coe.usouthal.edu/bset/Faculty/BJohnson/Burke.html






Re: Analysis of covariance

2001-09-25 Thread Radford Neal

In article <[EMAIL PROTECTED]>,
Dennis Roberts <[EMAIL PROTECTED]> wrote:

>the basic idea is to be able to "explain" the post score variance in terms 
>of something ELSE ... that is, for example ... we know that some of the 
>variance in pain is due to one's TOLERANCE for PAIN ... thus, if we can 
>remove the part of pain variance that is due to TOLERANCE FOR pain ... then 
>the leftover variance on pain is a purer measure in its own right ..
>
>if you do as suggested ... remove the pre from the post ... say pre pain 
>from post pain ... what is left over? it is not pain anymore but rather, 
>some OTHER variable ... which is not what the purpose of the study was ... 
>to investigate (i assume anyway)

Well, the idea is that the OTHER variable is the treatment effect,
whose quantification presumably IS the purpose of the study.  I think
this is a pretty standard thing to do.

It seems that the original question was meant to address the more
technical issue of whether you can include the pre-treatment value as
an explanatory variable when the response variable is already the
CHANGE from before treatment to after treatment.  As another poster
has ably explained, you can, though it's a bit strange and redundant.

   Radford


Radford M. Neal   [EMAIL PROTECTED]
Dept. of Statistics and Dept. of Computer Science [EMAIL PROTECTED]
University of Toronto http://www.cs.utoronto.ca/~radford






Re: Analysis of covariance

2001-09-25 Thread Jerry Dallal

Morelli Paolo wrote:
> 
> Hi all,
> I have to analyse some clinical data. In particular the analysis is a
> comparison between two groups of the mean change from baseline to endpoint
> of a score. The statistician who planned the analysis used the ANCOVA on
> the mean change, using as covariate the baseline values of the scores.
> Do you think this analysis is correct?
> I think that in this way we are correcting twice. I think that the right
> analysis is an ANOVA on the mean change.
> Please let me know your opinion
> thanks
> Paolo

It's convoluted, but not wrong.  I do it sometimes because some
researchers, for whatever reason, are more comfortable with that
approach. The research question is usually: if two people have the same
initial value, will their final values be the same except for the effect
of treatment?  (I'm assuming your groups are the result of random
assignment to treatment.  If not, these arguments do not apply and I
leave it to you to read the literature to find out why. I'm quickly
using up my daily allotment of keystrokes!)  This gets you the ANCOVA
model

final = constant + b1 * initial + treatment effect

Change is final - initial, so the model can be rewritten

change = constant + (b1-1)* initial + treatment effect

and the estimated treatment effect is the same. Since the treatment
effect is the same, the analysis is okay, odd as it looks.
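Jerry's algebra can be verified numerically: since change = final - initial, subtracting the initial column from the response shifts that column's coefficient by exactly 1 and leaves the treatment coefficient untouched. A minimal sketch (Python with NumPy; the data-generating values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
initial = rng.normal(50, 10, n)           # baseline score
treat = np.repeat([0, 1], n // 2)         # random assignment assumed
final = 10 + 0.8 * initial + 5 * treat + rng.normal(0, 5, n)
change = final - initial

# One design matrix for both models: intercept, baseline, treatment.
X = np.column_stack([np.ones(n), initial, treat])

b_final = np.linalg.lstsq(X, final, rcond=None)[0]    # final  = c + b1*initial + effect
b_change = np.linalg.lstsq(X, change, rcond=None)[0]  # change = c + (b1-1)*initial + effect

print(b_final[2], b_change[2])     # identical treatment effect
print(b_final[1] - b_change[1])    # exactly 1: b1 vs (b1 - 1)
```

So, odd as it looks, ANCOVA on the change with baseline as covariate and ANCOVA on the final value give the same estimated treatment effect.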





Re: Analysis of covariance

2001-09-25 Thread Dennis Roberts

At 03:19 PM 9/25/01 +, Radford Neal wrote:

>Neither the question nor the response are all that clearly phrased, but
>when I interpret them according to my reading, I don't agree.  For instance,
>if you're measuring pain levels, I don't see anything wrong with measuring
>pain before treatment, randomly assigning patients to treatment and control
>groups, doing a regression for pain level afterwards with the pain level
>before and a treatment/control indicator as explanatory variables, and
>judging the effectiveness of the treatment by looking at the coefficient for
>the treatment/control variable.  Or is the actual proposal something else?

IMHO seems like to remove the variance from post pain ... using pre pain 
variance ... is a no brainer ... since the r between the two pain readings 
will necessarily be high (unless there is something really screwy about the 
data like severe restriction of range on the post measure) ... what has 
been explained in the post pain variance? pain?

the basic idea is to be able to "explain" the post score variance in terms 
of something ELSE ... that is, for example ... we know that some of the 
variance in pain is due to one's TOLERANCE for PAIN ... thus, if we can 
remove the part of pain variance that is due to TOLERANCE FOR pain ... then 
the leftover variance on pain is a purer measure in its own right ..

if you do as suggested ... remove the pre from the post ... say pre pain 
from post pain ... what is left over? it is not pain anymore but rather, 
some OTHER variable ... which is not what the purpose of the study was ... 
to investigate (i assume anyway)

i do most certainly agree with radford that ... random assignment is still 
essential in this design ... unfortunately, far too many folks use ANCOVA 
to somehow make up for the fact that NON random assignment happened and, 
they think ANCOVA will solve that problem ...

it won't




>Radford Neal
>
>
>Radford M. Neal   [EMAIL PROTECTED]
>Dept. of Statistics and Dept. of Computer Science [EMAIL PROTECTED]
>University of Toronto http://www.cs.utoronto.ca/~radford

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






Re: Analysis of covariance

2001-09-25 Thread Radford Neal

Morelli Paolo wrote:

>>I have to analyse some clinical data. In particular the analysis is a
>>comparison between two groups of the mean change baseline to endpoint of a
>>score. The statistician who planned the analysis used the ANCOVA on the mean
>>change, using as covariate the baseline values of the scores.
>>Do you think this analysis is correct?

Dennis Roberts <[EMAIL PROTECTED]> wrote:

>NO! ... this is not a legitimate covariate ... a pre measure of the same 
>thing you are measuring later as evidence of effectiveness

Neither the question nor the response are all that clearly phrased, but
when I interpret them according to my reading, I don't agree.  For instance,
if you're measuring pain levels, I don't see anything wrong with measuring
pain before treatment, randomly assigning patients to treatment and control
groups, doing a regression for pain level afterwards with the pain level 
before and a treatment/control indicator as explanatory variables, and 
judging the effectiveness of the treatment by looking at the coefficient for
the treatment/control variable.  Or is the actual proposal something else?

   Radford Neal


Radford M. Neal   [EMAIL PROTECTED]
Dept. of Statistics and Dept. of Computer Science [EMAIL PROTECTED]
University of Toronto http://www.cs.utoronto.ca/~radford






Re: Analysis of covariance

2001-09-25 Thread Joe Ward

Paolo --

Here comes my usual response to messages similar to yours:

Following the use of Regression/Linear Models:

1. State your research question in "NATURAL LANGUAGE" not
in terms of a "canned statistical name" that may or may not
be relevant to your question.

2. Create an ASSUMED MODEL that allows you to translate your
"NATURAL LANGUAGE" questions into RESTRICTIONS on your ASSUMED MODEL.

3. Impose the restrictions on your ASSUMED MODEL to obtain your RESTRICTED
MODEL and then you have the essentials to test your
hypotheses.

If this procedure is IDENTICAL to someone's COVARIANCE ANALYSIS then
you might want to call yours a COVARIANCE ANALYSIS.

-- Joe


*** Joe H. Ward,  Jr.
*** 167 East Arrowhead Dr.
*** San Antonio, TX 78228-2402
*** Phone: 210-433-6575
*** Fax:   210-433-2828
*** Email: [EMAIL PROTECTED]
*** http://www.northside.isd.tenet.edu/healthww/biostatistics/wardindex.html
*** ---
*** Health Careers High School
*** 4646 Hamilton-Wolfe
*** San Antonio, TX 78229
*

- Original Message -
From: "Morelli Paolo" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, September 25, 2001 5:26 AM
Subject: Analysis of covariance


> Hi all,
> I have to analyse some clinical data. In particular the analysis is a
> comparison between two groups of the mean change baseline to endpoint of a
> score. The statistician who planned the analysis used the ANCOVA on the
> mean change, using as covariate the baseline values of the scores.
> Do you think this analysis is correct?
> I think that in this way we are correcting twice. I think that the right
> analysis is an ANOVA on the mean change.
> Please let me know your opinion
> thanks
> Paolo





Re: Analysis of covariance

2001-09-25 Thread Dr. John Ambrose

If you are using ANCOVA  then the base score is the covariate and the final 
score the criterion. ANCOVA is generally preferred to ANOVA on gain scores.

John Ambrose
University of the Virgin Islands
St. Thomas VI 00802

At 10:26 AM 9/25/01 +, Morelli Paolo wrote:
>Hi all,
>I have to analyse some clinical data. In particular the analysis is a
>comparison between two groups of the mean change from baseline to endpoint of a
>score. The statistician who planned the analysis used the ANCOVA on the mean
>change, using as covariate the baseline values of the scores.
>Do you think this analysis is correct?
>I think that in this way we are correcting twice. I think that the right
>analysis is an ANOVA on the mean change.
>Please let me know your opinion
>thanks
>Paolo







Re: Analysis of covariance

2001-09-25 Thread Dennis Roberts

At 10:26 AM 9/25/01 +, Morelli Paolo wrote:
>Hi all,
>I have to analyse some clinical data. In particular the analysis is a
>comparison between two groups of the mean change from baseline to endpoint of a
>score. The statistician who planned the analysis used the ANCOVA on the mean
>change, using as covariate the baseline values of the scores.
>Do you think this analysis is correct?

NO! ... this is not a legitimate covariate ... a pre measure of the same 
thing you are measuring later as evidence of effectiveness

the notion of a covariate is to have previously collected data ... on a 
variable that rationally should explain some of the variance in the 
criterion ... and the idea is to "remove" that part of the criterion 
variance that can be accounted for by the co-linearity with the covariate

in situations where the treatment effect is likely to be small ... 
especially if error variance is large ... using an appropriate covariate 
(assuming of course that Ss were randomly assigned to the different 
conditions) is a good way to reduce the error term and hence, increase your 
chances for finding "significance" (if that is your goal)

>I think that in this way we are correcting twice. I think that the right
>analysis is an ANOVA on the mean change.
>Please let me know your opinion
>thanks
>Paolo

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






Analysis of covariance

2001-09-25 Thread Morelli Paolo

Hi all,
I have to analyse some clinical data. In particular the analysis is a
comparison between two groups of the mean change from baseline to endpoint of a
score. The statistician who planned the analysis used the ANCOVA on the mean
change, using as covariate the baseline values of the scores.
Do you think this analysis is correct?
I think that in this way we are correcting twice. I think that the right
analysis is an ANOVA on the mean change.
Please let me know your opinion
thanks
Paolo



