At 03:04 PM 10/9/01 -0700, Dale Glaser wrote:

>Dennis......yes, the effect size index may be arbitrary, but for argument's 
>sake, say I have a measure of 'self-esteem', a 10 item measure (each item 
>a 5-pt. Likert scale) that has a range of 10-50;  sample1 has a 95% CI of 
>[23, 27] whereas a comparison sample2 has a CI of [22, 29].  Thus, by 
>maintaining the CI in its own unit of measurement, we can observe that 
>there is more error/a wider interval for sample2 than for sample1 (for now 
>assuming equal 'n' for each sample).
>However, it is problematic, given the inherent subjectivity of measuring 
>self-esteem, to say what counts as too wide an interval for this type of 
>phenomenon.

i did not know that CIs could tell you this ... under any circumstance ... 
and i don't see that standardizing them will solve this problem ...

supposedly, CIs tell you something about the parameter values ... and 
nothing else ... i don't think it is within the capacity of ANY statistic 
... to tell you if some CI is too wide or too narrow ... WE have to judge 
that ... given what we, in our heads, consider too much error ... or what 
we are willing to tolerate as the precision of our estimates
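for concreteness, here is a minimal sketch (python, toy numbers of my own invention, not Dale's actual data) of how an interval and its width come out of sample data ... the width is the thing WE then have to judge:

```python
import statistics as stats

# toy self-esteem scores on the 10-50 scale (made-up numbers)
scores = [24, 26, 23, 28, 25, 27, 22, 26, 24, 25]

n = len(scores)
mean = stats.mean(scores)
se = stats.stdev(scores) / n ** 0.5   # standard error of the mean
margin = 1.96 * se                    # large-sample 95% half-width
                                      # (a t value would be more exact for small n)
lo, hi = mean - margin, mean + margin

print(f"95% CI: [{lo:.1f}, {hi:.1f}], width = {hi - lo:.1f}")
# the CI estimates the population mean ... whether that width is
# "too wide" is our judgment, not the statistic's
```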

>  How do we know, especially with self-report measures, where indeed the 
> scaling may be arbitrary, if the margin of error is of concern?  It would 
> seem that by standardizing the CI, as Karl suggests, we may be able 
> to get a better grasp of the dimensions of error.......at least I know 
> the difference between .25 SD and 1.00 SD in terms of 
> magnitude..........or is this just a stretch?!!!

you do this ahead of time ... BEFORE data are collected ... perhaps with 
some pilot work as a guide to what sds you might get ... and then you 
design it so you try to work withIN some margin of error ...
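that kind of preplanning can be sketched as a back-of-the-envelope calculation ... assuming a pilot-based guess at the sd, simple random sampling, and the large-sample z of 1.96 (all hypothetical numbers, nobody's actual study):

```python
import math

def n_for_margin(sd, margin, z=1.96):
    """smallest n so the CI for a mean has half-width <= margin,
    given a pilot-study guess at the sd ... assumes simple random
    sampling and the large-sample 95% z value"""
    return math.ceil((z * sd / margin) ** 2)

# say pilot work suggests the sd is about 6 on the 10-50 scale;
# to keep the margin of error to +/- 1 scale point:
print(n_for_margin(sd=6, margin=1))   # needs n = 139
```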

i think the underlying problem here is trying to make sense of things AFTER 
the fact ... without sufficient PREplanning to achieve some approximate 
desired result

after the fact musings will not solve what should have been dealt with 
ahead of time ... and certainly, IMHO of course, standardizing things won't 
solve this either

karl was offering regular CIs (and effect sizes) and standardized CIs (or 
effect sizes) as an alternative for those not liking null hypothesis testing 
... but, to me, these are two different issues ...

i think that CIs and/or effect sizes are inherently more useful than ANY 
null hypothesis test ... again, IMHO ... thus, bringing null hypothesis 
testing into this discussion seems not to be of value ...

of course, i suppose that debating whether to standardize effect 
sizes and/or CIs ... is a legitimate matter to deal with ... even though i 
am not convinced that standardizing "these things" will really gain you 
anything of value

we might draw some parallel between covariance and correlation ... where 
putting the linear relationship measure on a 'standardized' dimension IS 
useful ... so that the boundaries have some fixed limits (-1 to +1) ... 
which covariances do not ... but i am not sure that the analog for effect 
sizes and/or CIs ... is equally beneficial


>Dale N. Glaser, Ph.D.
>Pacific Science & Engineering Group
>6310 Greenwich Drive; Suite 200
>San Diego, CA 92122
>Phone: (858) 535-1661 Fax: (858) 535-1665
>http://www.pacific-science.com
>
>-----Original Message-----
>From: dennis roberts [mailto:[EMAIL PROTECTED]]
>Sent: Tuesday, October 09, 2001 1:52 PM
>To: Wuensch, Karl L; edstat (E-mail)
>Subject: Re: Standardized Confidence Intervals

dennis roberts, penn state university
educational psychology, 8148632401
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
                  http://jse.stat.ncsu.edu/
=================================================================
