Sorry for the delayed response.  I would like to respond just to
clarify some things in my own mind more than anything else.  Your
advice is very helpful, and some of the content from my statistics
class taken years ago is coming back to me now.

Rich Ulrich <[EMAIL PROTECTED]> wrote in message 
news:<[EMAIL PROTECTED]>...
> On 26 Jun 2002 20:46:50 -0700, [EMAIL PROTECTED] (David Emery)
> wrote:
> 
> > Hello,
> > 
> > I've implemented my first statistical research study for college, which
> > attempts to determine subjects' levels of interest (ordinal) compared
> > to levels of attributes (continuous) during an educational
> > presentation.  Due to nonlinear relationships, I used ANOVA instead of
> > multiple regression.  
> 
> 'Ordinal'  works out as pretty close to 'interval' for a lot of
> purposes, especially whenever you start with a scale with
> only a few points.   You could consider using your scores
> of (integers, 1-to-5)  for correlation.  On the other hand,
> there is the really odd result that you write, where var#4
> show the (relatively) huge difference, and the groups are
> badly out of order.  (Unless that is a typographical error 
> where 3.09  was 2.09.)

This is an interesting result but not a typographical error.
Participants (53, I see now, not 54) in an educational presentation
provided an interest rating every 20 seconds, for 48 repetitions
each.  X4 is the instructional complexity of the material.  From my
novice perspective, the results seem to indicate that most of the
population is either very interested or very uninterested in
complexity.  So, as far as complexity goes, it looks like
participants either "love it or hate it."  If the "no opinion"
response is not taken too seriously, the results indicate that
interest increases as complexity increases until complexity peaks,
at which point interest diverges.
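Since I admitted below that I'm not entirely sure what the F-ratio is, here is my understanding in code: it is the between-group mean square divided by the within-group mean square, so well-separated group means (like the X4 extremes) push it up.  The numbers are made up for illustration, not from my data.

```python
# Minimal one-way ANOVA F-ratio: between-group mean square over
# within-group mean square.  Illustrative numbers only, not my data.

def f_ratio(groups):
    """F = MS_between / MS_within for a list of groups of scores."""
    all_scores = [x for g in groups for x in g]
    n, k = len(all_scores), len(groups)
    grand_mean = sum(all_scores) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Groups with well-separated means give a large F-ratio:
print(f_ratio([[1, 2, 3], [4, 5, 6]]))  # 13.5
# Groups with identical means give F = 0:
print(f_ratio([[1, 2, 3], [1, 3, 2]]))  # 0.0
```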

Back to your first point, I think that treating interest ratings as
ordinal variables is better than treating them as interval variables.
There is no reason to assume the interest ratings are evenly spaced,
as I believe an interval variable would imply.  Treating the interest
level as ordinal also sidesteps the theoretical question of how to
measure interest accurately: all the scale claims is that one
linguistic response level is higher or lower than another.
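If I follow your correlation suggestion, the ordinal-versus-interval choice corresponds to Spearman's rho (correlation on ranks, using only order) versus Pearson's r (treating the 1-to-5 scores as interval).  A sketch with made-up numbers, assuming no tied ratings for simplicity:

```python
# Pearson r treats scores as interval; Spearman's rho is Pearson r
# computed on ranks, so it uses only the ordering.  Toy data; the
# rank helper assumes no ties.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(xs, ys):
    return pearson(ranks(xs), ranks(ys))

# A monotone but nonlinear relationship: rho = 1, while r < 1.
x = [1, 2, 3, 4]
y = [1, 4, 9, 16]
print(spearman(x, y))  # 1.0
print(pearson(x, y) < 1.0)  # True
```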


> 
> >                        Sometimes the simplest answers are the hardest
> > ones for me to find, so I was wondering if anyone could give me a hint
> > about how I would further present the precision of my findings,
> > including sampling error... in this case I had 54 subjects with about
> > 48 repeated measures each.
> > 
> > The data I obtained is below, and to be honest, I am not entirely sure
> > what the F-ratio is.  For X2 and X4, p=0.0, which I believe indicates
> > more significance, but I'm not entirely sure why that is the case
> > either.  
> 
> What p=0.0  denotes is merely shorthand for "p < .0005" ...
> or whatever limit of precision was used elsewhere.
> And that signifies that you don't expect results that extreme
> to happen by chance unless you do a WHOLE LOT of trials.
>  [ ... ]
> >                          Interest Level
> > Attribute      -2        -1         0        1        2      F-Ratio
>  [ ... ]
> > X4/10          3.20      2.62      2.69     2.24     3.09    27.17
> 
> The 48 repetitions are hardly supposed to be independent
> trials and independent tests of separate, important 
> hypotheses.  Before you start correcting for 
> 'multiple tests', you want to reduce the multiplicity to
> something less than a handful.
> 
> What to do?  Select out the important ones (a-priori:
> meaning, before considering 'tests.')
> 
> Select out the 'reliable' ones  based on correlation
> with other measures.

The way I understand this is that I can test for reliability by
selecting a few of the iterations.  I did this by selecting each
interval ending in 7 or 9, meaning 7, 9, 17, 19, ..., 39, 47.  (I'm
somewhat confused about why the statistician would knowingly select
the 'reliable' ones instead of choosing them at random.)  This reduced
the number of observations from 2491 to 496.  Maybe this is still
more than a handful of observations, but I had to select
representative observations throughout the presentation to account
for the linear increase in X4 (instructional complexity) as the
presentation progressed.  I got p<0.05 for X2 and X4
with both the full and partial datasets.
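For the record, the selection rule I used amounts to keeping the repetitions whose interval number ends in 7 or 9, out of the 48 per participant:

```python
# Repetitions kept per participant: interval numbers ending in 7 or 9,
# drawn from the 48 repeated measures.

KEPT = [i for i in range(1, 49) if i % 10 in (7, 9)]
print(KEPT)  # [7, 9, 17, 19, 27, 29, 37, 39, 47]
print(len(KEPT))  # 9
```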

> Create one or a few 'total-scores'  or averages
> where the logical dimension seems united.
> 
> Create one or a few 'composite scores'  where
> the logical dimensions are several, but you can argue
> for an underlying similarity based on intercorrelations.


Since my results from the preselected reduced data set matched up very
well with the full data, does that mean that "the logical dimension
seems united," as you say?  I'm also uncomfortable with the word
'correlation,' because I understand it as coming from multiple
regression, which I'm not using here, though I know it is common in
conjoint analysis (either that or ANOVA) when there are relationships
that work well with CA.
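My reading of your composite-score suggestion, in code: first check that the items intercorrelate, then average them per subject.  The numbers here are toy values, not from my study:

```python
# Sketch of the 'composite score' idea: verify the items move together
# (intercorrelation), then average them per subject.  Toy numbers only.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def composite(rows):
    """Per-subject mean across item scores (rows = subjects x items)."""
    return [sum(row) / len(row) for row in rows]

# Two items, three subjects; the items track each other perfectly here,
# which is the kind of intercorrelation that justifies combining them.
subjects = [[1, 2], [2, 3], [3, 4]]
item_a = [row[0] for row in subjects]
item_b = [row[1] for row in subjects]
print(pearson(item_a, item_b))  # 1.0
print(composite(subjects))  # [1.5, 2.5, 3.5]
```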

My paper is essentially finished now, and your advice has helped me
wrap things up.  If you would like a full copy, let me know and I'll
provide a link to it.


Dave
=================================================================
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at:
.                  http://jse.stat.ncsu.edu/                    .
=================================================================
