universally applicable.
Similarly --
On Thu, 13 Sep 2001 18:17:54 -0500, jim clark <[EMAIL PROTECTED]>
wrote:
> Hi
>
> I found the Rosenthal reference that addresses the following
> point:
>
> On 13 Sep 2001, Herman Rubin wrote:
> > The effect size is NOT small, or it would not save more than a very
> > small number of lives.
I remember I read somewhere about different effect size measures
and now I found the spot: A book by Michael Oakes, U. of Sussex,
"Statistical Inference" 1990. The measures were (xbar-ybar)/s,
Proportion misclassified, r squared (biserial corr) and w squared
(which I think means t
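For concreteness, here is a minimal Python sketch (hypothetical summary
numbers, equal group sizes assumed) of how a few of the measures listed
above relate to one another for a two-independent-group comparison:

from math import sqrt, erf

def phi(x):                       # standard normal CDF
    return 0.5 * (1 + erf(x / sqrt(2)))

n1 = n2 = 20                      # hypothetical group sizes
m1, m2 = 26.1, 21.7               # hypothetical group means
s = 4.0                           # hypothetical pooled SD

d = (m1 - m2) / s                 # (xbar - ybar)/s
t = d / sqrt(1/n1 + 1/n2)         # the corresponding two-sample t
df = n1 + n2 - 2
r2 = t**2 / (t**2 + df)           # r squared (squared point-biserial correlation)
omega2 = (t**2 - 1) / (t**2 + n1 + n2 - 1)   # omega squared
p_misclass = phi(-d / 2)          # proportion misclassified, cutting at the midpoint between means
print(d, r2, omega2, p_misclass)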
Mike Granaas wrote:
> I think that we might agree: I would say that studies need a clear a
> priori rationale (theoretical or empirical) prior to being conducted. It
> is only in that context that effect sizes can become meaningful. If a
Even then standardized effect sizes may not be very helpful
Dennis Roberts <[EMAIL PROTECTED]> wrote:
: given a simple effect size calculation ... some mean difference compared to
: that is ... can we not get both NS or sig results ... when calculated
: effect sizes are small, medium, or large?
: if that is true ... then what benefit is there to look at significance AT ALL?
Hi
I found the Rosenthal reference that addresses the following
point:
On 13 Sep 2001, Herman Rubin wrote:
> The effect size is NOT small, or it would not save more
> than a very small number of lives. If it were small,
> considering the dangers of aspirin, it would not be used
Hi
On 13 Sep 2001, Herman Rubin wrote:
> jim clark <[EMAIL PROTECTED]> wrote:
> >Or consider a study with a small effect size that is significant.
> >The fact that the effect is significant indicates that some
> >non-chance effect is present and it m
jim clark wrote:
>
>
> Sometimes I think that people are looking for some "magic
> bullet" in statistics (i.e., significance, effect size,
> whatever) that is going to avoid all of the problems and
> misinterpretations that arise from existing practices. I think
On Thu, 13 Sep 2001, Paul R. Swank wrote in part:
> Dennis said
>
> other than being able to say that the experimental group ... ON AVERAGE ...
> had a mean that was about 1.11 times (control group sd units) larger than
> the control group mean, which is purely DESCRIPTIVE ... what can you say
then what benefit is there to look at
>> significance AT ALL
...
>What your table shows is that _both_ dimensions are informative.
>That is, you cannot derive effect size from significance, nor
>significance from effect size.
This has to be
me "magic
>bullet" in statistics (i.e., significance, effect size,
>whatever) that is going to avoid all of the problems and
>misinterpretations that arise from existing practices. I think
>that is a naive belief and that we need to teach how to use all
>of the tool
Dennis said
other than being able to say that the experimental group ... ON AVERAGE ...
had a mean that was about 1.11 times (control group sd units) larger than
the control group mean, which is purely DESCRIPTIVE ... what can you say
that is important?
However, can you say even that unless it
It
is only in that context that effect sizes can become meaningful. If a
study was done "just 'cause" then we will frequently not be able to make
sense of the effect size measures.
Michael
>
> if that ONE piece of information were insisted upon ... then all of us
> would be i
u cont
Estimate for difference: 4.40
95% CI for difference: (1.04, 7.76)
T-Test of difference = 0 (vs not =): T-Value = 2.69  P-Value = 0.012  <<<<< p value
MTB > let k1=(26.13-21.73)/3.95
MTB > prin k1
Data Display
K1        1.11392   <<<< simple effect size calculation
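The same calculation in Python, just to make explicit what the session above
is doing (the group sizes are not shown in the output, so only the
descriptive ratio is reproduced):

diff = 26.13 - 21.73     # mean difference reported in the session
sd_control = 3.95        # control-group SD used as the standardizer
d = diff / sd_control    # "simple effect size": difference in control-SD units
print(round(d, 5))       # 1.11392, matching K1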
At 02:33 PM 9/13/01 +0100, Thom Baguley wrote:
>Rolf Dalin wrote:
> > Yes it would be the same debate. No matter how small the p-value it
> > gives very little information about the effect size or its practical
> > importance.
>
>Neither do standardized effect sizes
over, your original question was "then what benefit is there
> > to look at significance AT ALL?" which implied to me that your
> > view was that significance was not important and that effect
> > size conveyed all that was needed.
>
> When using the information con
Rolf Dalin wrote:
> Yes it would be the same debate. No matter how small the p-value it
> gives very little information about the effect size or its practical
> importance.
Neither do standardized effect sizes
Hi, this is about Jim Clark's reply to dennis roberts.
> On 12 Sep 2001, dennis roberts wrote:
> > At 07:23 PM 9/12/01 -0500, jim clark wrote:
> > >What your table shows is that _both_ dimensions are informative.
> > >That is, you cannot derive effect size from significance, nor
> > >significance from effect size.
Hi
On 12 Sep 2001, dennis roberts wrote:
> At 07:23 PM 9/12/01 -0500, jim clark wrote:
> >What your table shows is that _both_ dimensions are informative.
> >That is, you cannot derive effect size from significance, nor
> >significance from effect size. To illustr
At 07:23 PM 9/12/01 -0500, jim clark wrote:
>Hi
>
>
>What your table shows is that _both_ dimensions are informative.
>That is, you cannot derive effect size from significance, nor
>significance from effect size. To illustrate why you need both,
>consider a study with sma
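A small Python sketch of that point (hypothetical numbers throughout): the
same standardized difference can come out significant or not depending only
on sample size, so p and d carry different information.

from math import sqrt
from scipy import stats

def p_from_d(d, n_per_group):
    # two-sided p for a two-equal-group design with standardized difference d
    t = d / sqrt(2 / n_per_group)
    df = 2 * n_per_group - 2
    return 2 * stats.t.sf(abs(t), df)

print(p_from_d(0.2, 20))    # small effect, small n  -> not significant (~0.53)
print(p_from_d(0.2, 500))   # same small effect, big n -> significant (~0.002)
print(p_from_d(0.8, 10))    # large effect, small n  -> borderline (~0.09)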
Hi
On 12 Sep 2001, Dennis Roberts wrote:
> given a simple effect size calculation ... some mean difference compared to
> some pooled group or group standard deviation ... is it not possible to
> obtain the following combinations (assuming some significance test is done)
At 04:04 PM 9/12/01 -0400, you wrote:
if that is true ... then what benefit is there
to look at significance AT ALL
To get published, get tenure, and avoid having to live in a cardboard box
in the park. Ha ha!
Lise
given a simple effect size calculation ... some mean difference compared to
some pooled group or group standard deviation ... is it not possible to
obtain the following combinations (assuming some significance test is done)
effect size
small
> My question is: how
> do you compute the effect size of each of the simple main effects? I
> was wondering if I could simply do a paired sample t-test of each of
> the significant simple effects and then compute d from the
> t-statistic.
Paired t-tests are the usual followup for repeated measures,
and
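A minimal Python sketch of the conversion the poster asks about (hypothetical
data): for a paired t-test on n subjects, t = mean difference / (SD of
differences / sqrt(n)), so the effect size in difference-score units is
d_z = t / sqrt(n). Whether to standardize by the SD of the differences or by
an original-units SD is a separate choice.

from math import sqrt
from scipy import stats

a = [12.1, 9.8, 11.4, 10.9, 13.0, 9.5, 12.7, 11.1]   # condition A (hypothetical)
b = [10.4, 9.1, 10.8, 10.0, 11.9, 9.9, 11.5, 10.2]   # condition B (hypothetical)
n = len(a)

t, p = stats.ttest_rel(a, b)    # paired t-test
d_z = t / sqrt(n)               # effect size in difference-score units
print(t, p, d_z)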
Hi,
I'm running a 2 x 5 fully-within ANOVA design, where A has 2 levels
and B has 5 levels. After finding a significant interaction, I looked
at the simple main effect of A at each level of B. My question is: how
do you compute the effect size of each of the simple main effects? I
was wondering if I could simply do a paired sample t-test of each of the
significant simple effects and then compute d from the t-statistic.
Melady Preece writes:
>Question 2: Can an effect size of .29 (or .33) be considered
>clinically significant?
With all due respect, clinical significance can only be assessed by
clinicians (i.e., subject matter experts). The doctors and nurses I work
with want me to tell them what a clin
On Sun, 15 Jul 2001, Melady Preece wrote:
> I have done a paired t-test on a measure of self-esteem before and
> after a six-week group intervention.
>
> There is a significant difference (in the right direction!) between
> the means using a paired t-test, p=.009. The effect size is .29 if I
> divide by the standard deviation of the pre-test.
I have done a paired t-test on a measure of self-esteem before and after a
six-week group intervention.
There is a significant difference (in the right direction!) between the
means using a paired t-test, p=.009. The effect size is .29 if I divide by
the standard deviation of the pre-test mean
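A short Python sketch (hypothetical scores, not the poster's data) of why the
pre/post effect size depends on the standardizer chosen: dividing the mean
change by the pre-test SD gives one value, dividing by the SD of the change
scores gives another, often larger when pre and post are highly correlated.

import statistics as st

pre  = [31, 28, 35, 30, 27, 33, 29, 32, 26, 30]   # hypothetical pre-test scores
post = [33, 30, 36, 33, 28, 35, 30, 34, 28, 33]   # hypothetical post-test scores
change = [q - p for p, q in zip(pre, post)]

mean_change = st.mean(change)
d_pre = mean_change / st.stdev(pre)       # standardized by the pre-test SD
d_z   = mean_change / st.stdev(change)    # standardized by the SD of change scores
print(d_pre, d_z)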
to be analyzed, the
> >"effect size" is magnified by averaging. That is, if you can change
> >an average by .01, that fraction is a lot bigger fraction of the
> >between-Subject variance (of averages) than it is of the
> >between-trial variance.
Will >
> Effect si
>
>I think I've resolved this question with a colleague. We likened the
>heritability of a given trait, for example, jump height, to the
>relationship between that trait and some other explanatory variable,
>such as leg length. The R^2 for leg length explaining jump height
>might be 0.36. No
Rich, thanks for those comments. I have a few remarks in reply.
>If you have a criterion (reaction time, etc.) where you average dozens
>or hundreds of observations to make a point to be analyzed, the
>"effect size" is magnified by averaging. That is, if you can change
using Cohen's scale of effect magnitudes (<0.1 =
> trivial, 0.1-0.3 = small, 0.3-0.5 = moderate, >0.5 = large). Thus, a
> variance explained of 0.01 (1%) is actually a small but non-trivial
> effect, because it is equivalent to an effect size of 0.1.
"Cohen's scale&
large). Thus, a
variance explained of 0.01 (1%) is actually a small but non-trivial
effect, because it is equivalent to an effect size of 0.1.
So my question is this: should we take the square root of
heritability to get an idea of the contribution of inheritance to a
particular trait?
Supple
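A one-line numeric illustration of the square-root point, in Python, using
the figures quoted above: variance explained (R^2, or a heritability
coefficient) and the corresponding correlation-scale effect size r differ by
a square root.

from math import sqrt

r2_leg_length = 0.36          # variance explained from the jump-height example
print(sqrt(r2_leg_length))    # 0.6 on the correlation (r) scale

r2_small = 0.01               # 1% variance explained
print(sqrt(r2_small))         # 0.1, "small but non-trivial" on the r scale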
since t = (M(1) - M(2)) / (S * sqrt(1/n(1) + 1/n(2)))
and d = (M(1) - M(2)) / S,
Doesn't that make d / sqrt(1/n(1) + 1/n(2)) = t ?
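A quick numeric check of that identity (hypothetical data; pooled-SD d and
the usual equal-variance two-sample t):

from math import sqrt
from scipy import stats
import statistics as st

x = [5.1, 6.3, 4.8, 5.9, 6.1, 5.5, 4.9, 6.0]   # hypothetical group 1
y = [4.2, 5.0, 4.6, 4.1, 5.3, 4.4, 4.8, 4.5]   # hypothetical group 2
n1, n2 = len(x), len(y)

sp = sqrt(((n1 - 1) * st.variance(x) + (n2 - 1) * st.variance(y)) / (n1 + n2 - 2))
d = (st.mean(x) - st.mean(y)) / sp             # pooled-SD standardized difference

t_from_d = d / sqrt(1 / n1 + 1 / n2)
t_direct, _ = stats.ttest_ind(x, y)            # equal-variance t by default
print(t_from_d, t_direct)                      # the two agree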
At 05:51 PM 4/19/00 -0400, you wrote:
>is there a standard error ... for an effect size?
>
>as an example ... say you were looking at differences
An effect size can be expressed in terms of the coefficients of the
underlying regression model for the experimental design being used. The
standard errors for an effect can therefore be obtained from the standard
errors of the coefficients.
You need to be careful about the relationship between
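A minimal Python sketch of that idea for the simplest case, a two-group
design coded with a 0/1 dummy (hypothetical data): the regression slope is
the raw mean difference and its standard error is the usual SE of a
difference in means. Dividing both by the pooled SD gives a standardized
effect and an approximate SE for it; note this treats the pooled SD as if it
were known, so the standardized SE is only approximate.

from math import sqrt
import statistics as st

treat   = [26.8, 24.1, 27.9, 25.5, 28.2, 26.0]   # hypothetical treatment scores
control = [22.4, 21.0, 23.8, 20.9, 22.7, 21.6]   # hypothetical control scores
n1, n2 = len(treat), len(control)

coef = st.mean(treat) - st.mean(control)         # slope for the 0/1 dummy
sp = sqrt(((n1 - 1) * st.variance(treat) + (n2 - 1) * st.variance(control))
          / (n1 + n2 - 2))
se_coef = sp * sqrt(1 / n1 + 1 / n2)             # SE of the coefficient

d = coef / sp                                    # standardized effect
se_d_approx = se_coef / sp                       # crude SE on the standardized scale
print(coef, se_coef, d, se_d_approx)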
is there a standard error ... for an effect size?
as an example ... say you were looking at differences between means between
control and treatment ... and, the effect size came out to be ... for sake
of argument ... .3 ... in favor of the treatment
is there (in this case) some standard error
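One standard large-sample approximation used in meta-analysis for the
standard error of a standardized mean difference d is
se(d) ~= sqrt((n1 + n2)/(n1 * n2) + d^2 / (2 * (n1 + n2))). Sketched in
Python below; the group sizes are assumptions, since none are given in the
post.

from math import sqrt

def se_d(d, n1, n2):
    # large-sample approximation to the SE of a standardized mean difference
    return sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))

d = 0.3
print(se_d(d, 30, 30))     # ~0.26 with 30 per group (assumed sizes)
print(se_d(d, 100, 100))   # shrinks as the groups grow (~0.14)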
some good examples before going
for it, over the Odds Ratio.
> 3. A third, less important issue, was raised in response to point 2. If
> effect size measures that are resistant to skew are more desirable, is there
> one that could be applied to both dichotomous and quantitative criteria? If
value of .31 (medium-sized) transformed to an r
value of .07.
The discussion that followed focused on which is the "better" effect size
for understanding the usefulness of these predictors. Some of the key
points raised:
1. r is more useful here for several reasons:
a. It is generally