Hi, this is about Jim Clark's reply to Dennis Roberts.

> On 12 Sep 2001, dennis roberts wrote:
> > At 07:23 PM 9/12/01 -0500, jim clark wrote:
> > >What your table shows is that _both_ dimensions are informative.
> > >That is, you cannot derive effect size from significance, nor
> > >significance from effect size.  To illustrate why you need both,
> > >consider a study with small n that happened to get a large effect
> > >that was not significant.  The large effect should be "ignored"
> > >as being due to chance.  Only having the effect size would likely
> > >lead to the error of treating it as real (i.e., non-chance).
> > 
> > or, another way to view it is that neither of the dimensions is very
> > informative
> 
> I'm not sure how "both informative" gets translated into "neither
> very informative."  Seems like a perverse way of thinking to me.  
> Moreover, your original question was "then what benefit is there
> to look at significance AT ALL?" which implied to me that your
> view was that significance was not important and that effect
> size conveyed all that was needed.

Of the information conveyed by p-values, effect size measures, and 
decisions about some null hypothesis, in my opinion there's only one 
place to look: effect size measures given with confidence intervals 
are informative. Significance alone gives you no clue as to whether 
an effect is of any practical importance in the real-world 
situation.
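
To make that concrete, here is a minimal sketch in Python (NumPy 
only) that reports a standardized effect size (Cohen's d) together 
with a percentile-bootstrap 95% CI. The groups and scores are 
invented for illustration:

    import numpy as np

    rng = np.random.default_rng(1)

    def cohens_d(x, y):
        # standardized mean difference using the pooled SD
        nx, ny = len(x), len(y)
        sp = np.sqrt(((nx - 1) * x.var(ddof=1)
                      + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2))
        return (x.mean() - y.mean()) / sp

    # hypothetical scores on some arbitrary scale
    x = rng.normal(52, 10, size=40)   # "treatment" group
    y = rng.normal(47, 10, size=40)   # "control" group

    # percentile bootstrap for a 95% CI on d
    boot = [cohens_d(rng.choice(x, len(x)), rng.choice(y, len(y)))
            for _ in range(5000)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"d = {cohens_d(x, y):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")

A report like that carries both the size of the effect and its 
precision, which a bare verdict of "significant" cannot.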

>...

> > the distinction between significant or not ... is based on an arbitrary
> > cutoff point ... which has on one side ... the notion that the null
> > seems as though it might be tenable ... and the other side ... the
> > notion that the null does not seem to be tenable ... but this is not an
> > either/or deal ... it is only a matter of degree
> 
> It was your table, but the debate would be the same if you put
> multiple rows with p values along the dimension.  That is, what
> is the relative importance of significance (or p value) and
> effect size.
 
Yes, it would be the same debate. No matter how small the p-value, 
it gives very little information about the effect size or its 
practical importance.
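
For instance, a huge sample can make a practically negligible effect 
wildly "significant". A small simulation sketch in Python (assuming 
scipy is available; the true effect of 0.05 SD is chosen to be 
trivial):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    n = 200_000                          # very large groups
    x = rng.normal(0.05, 1.0, size=n)    # true effect: a trivial 0.05 SD
    y = rng.normal(0.00, 1.0, size=n)

    t, p = stats.ttest_ind(x, y)
    print(f"p = {p:.1e}")                # astronomically small
    print(f"difference = {x.mean() - y.mean():.3f} SD")  # still about 0.05

The p-value shrinks without bound as n grows, while the effect stays 
as unimportant as ever.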

When your data are on an arbitrary scale, not meters or US dollars, 
say a scale you have constructed from multiple items, how do you 
define effect size? How can differences between means be interpreted 
informatively?
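
One common, though only partial, device is a unit-free measure such 
as the probability of superiority, P(a random score from group A 
exceeds one from group B). A minimal sketch in Python, with invented 
multi-item sum scores:

    import numpy as np

    rng = np.random.default_rng(2)

    # hypothetical sum scores from a multi-item instrument
    a = rng.integers(10, 41, size=30)
    b = rng.integers(8, 39, size=30)

    # probability of superiority: P(random A-score > random B-score),
    # counting ties as one half; 0.5 means no difference
    gt = (a[:, None] > b[None, :]).mean()
    eq = (a[:, None] == b[None, :]).mean()
    print(f"P(A > B) = {gt + 0.5 * eq:.2f}")

Whether such a number matters in practice still has to be judged 
against the substantive context.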

Cheers! /Rolf Dalin
**************************************************
Rolf Dalin
Department of Information Technology and Media
Mid Sweden University
S-870 51 SUNDSVALL
Sweden
Phone: 060 148690, international: +46 60 148690
Fax: 060 148970, international: +46 60 148970
Mobile: 0705 947896, international: +46 70 5947896
Home: 060 21933, international: +46 60 21933
mailto:[EMAIL PROTECTED]
http://www.itk.mh.se/~roldal/
**************************************************


