It seems to me that there is a relationship between the question and the scale on
which someone answers that question so that, as Deborah has noted, not all Likert
scales (or their variations) can be used for all questions.  One way to avoid
some of the problems about which I have heard TIPsters talk is to provide the
"anchor" in the question (Marc touched on a similar issue briefly when he mentioned
the use of the midpoint in the scale as an anchor).  In Deborah's example, could the
question state "relative to other games that you have seen, how gory is 'Doom'?"
Either way, we are simply trying to find an anchor against which we can compare
subjects' responses.  One way is to attempt to encourage people to interpret the
Likert scale in a similar way, and the other, it seems to me, is to put the anchor
in the question itself.  Still, I'm not sure that the problem is solved by
"standardizing" either the interpretation of the question or the interpretation of
the scale.

On a related topic (to which I think Marc also alluded when he talked about the
noise that we might get, even within groups), if we ever reach a point where we can
get subjects to interpret a question exactly the same as each other (or at least
convince ourselves that they are close enough), and we get them to interpret the
scale exactly the same as each other, are we removing much of the variation that is
of most interest to psychologists?  In other words, if we could COMPLETELY solve the
"problem" of individual interpretations of a scale (or of a question), would we be
studying people, or "modified" people?  I understand that some amount of standard
interpretation is necessary (we wouldn't want one person to be evaluating
temperature when asked to assess how "hot" a food was, another the spice level, and
a third the attractiveness of it), but might we be approaching a sort of modern
Wundtian introspection where we train people to give us particular types of
responses?  Just a thought.
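
To make the within-group noise idea concrete, here is a quick simulation sketch
in Python.  The 1-7 scale, the normal distributions, and every number below are
my own made-up assumptions, purely for illustration; they are not taken from
Marc's study or Deborah's:

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# What we actually want to measure: everyone drawn from the same distribution.
true_attitude = rng.normal(4.0, 1.0, n)

# Idiosyncratic "anchor": each person shifts the scale a little differently.
personal_anchor = rng.normal(0.0, 0.8, n)

def to_likert(x):
    # Map a continuous judgment onto a 1-7 Likert response.
    return np.clip(np.rint(x), 1, 7)

raw = to_likert(true_attitude + personal_anchor)  # idiosyncratic interpretation
uniform = to_likert(true_attitude)                # everyone uses the scale identically

print("observed variance, idiosyncratic anchors :", round(float(raw.var()), 2))
print("observed variance, uniform interpretation:", round(float(uniform.var()), 2))

With these made-up numbers, the observed variance is noticeably larger when each
person brings a different anchor to the scale; collapsing that difference is
exactly the "standardizing" I was wondering about above.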

By the way, research on this issue from Gallup (I am told that they still argue
about this intensely over there) suggests that there are negligible (if any)
benefits to having more than 5 points on a Likert scale.  The research was internal
and was only described to me by a peer who works there.  I would be most happy to
see any further published research on the topic if anyone comes across it.

Cheers!

Steve

Deborah Briihl wrote:

> One clever way that I have seen that helps fix this problem is the way that
> Linda Bartoshuk uses to measure taste perception. Instead of the standard 9
> point scale from 1 to 9, she uses what she calls the Green Labeled
> Magnitude Scale. For example, if measuring bitterness, the scale ranges
> from nothing to very strong to strongest imaginable sensation. While I'm
> not sure how easy it would be to use in all situations that use the Likert
> scale, it could be adapted to a variety of measures. Using the computer
> game example: "the most gory game I have ever seen" or, for computers, "the
> most frustrating situation I have been in" - you get the picture. And,
> instead of numbers, the scale is marked on a line like so (sorry if this wraps)
>
> /_/____/______/______/______/_____________________________________/
> (labels, left to right: nothing, barely detectable, weak, moderate,
> strong, very strong, strongest imaginable sensation)
>
> I am trying out this scale this semester with a student who is interested
> in perception of spicy foods. We knew that we would get ceiling effects
> using a standard scale (one of our hot sauces is VERY hot), so we are
> trying out this one.
>
> At 05:51 PM 10/24/00 -0500, G. Marc Turner wrote:
> >On #1, I was taught LIE-kert as an undergrad (and my mother learned it this
> >way in her grad work) but LICK-ert as a grad student. After further
> >investigation, Ken's statement is correct as best I can tell. It should be
> >LICK-ert. (And hey, some of my professors in grad school knew him, and so I
> >trust their pronunciation of his name.)
> >
> >On #2, again I'm going to agree with what I think Ken is getting at. The
> >big question is one of instrumentation. Are the two groups using the scale
> >in the same way? My feeling is that when a participant approaches a scale
> >like this they form an idea in their mind that represents the mid-point.
> >They then use this imaginary mid-point to determine how they respond. Not
> >only could there be differences in interpretation between groups, there
> >could be lots of variation within a group... and hence lots of noise and
> >error in our measurements.
> >
> >On a semi-related note, when I finally finish my dissertation I'm hoping to
> >revive some work on computer literacy I did a couple of years ago.
> >Basically, I was in the process of developing a new measure of computer
> >literacy and one of the things we looked at in the development was the
> >issue of gender differences. Basically, we kept hearing claims that "males
> >are more computer literate than females." Well, on the self-report portions
> >of our instrument, which used a Likert scale, there was a difference
> >between the genders. BUT, on the knowledge/application portion where
> >participants had to actually perform some tasks...or at least demonstrate
> >some knowledge about how to perform a task... there was NO difference.
> >(Okay, the average scores between males and females differed by less than
> >half a point on a scale of 0-50 so there was a "difference" but not a
> >meaningful one.)
> >
> >Basically, it looked like one of two things was happening:
> >
> >1) Females were less confident in their abilities to use a computer despite
> >being equally capable (which appeared to be the case given the manner in
> >which questions were asked), or
> >
> >2) Females interpreted and used the response scale differently than males
> >did, which brings us back to the point Ken was making (I think).
> >
> >This was a side project I did on a whim in grad school so I never got to
> >really look at things as much as I would have liked...
> >
> >Okay, back to working on the dissertation....
> >- Marc
> >G. Marc Turner, MEd
> >Lecturer & Head of Computer Operations
> >Department of Psychology
> >Southwest Texas State University
> >San Marcos, TX  78666
> >phone: (512)245-2526
> >email: [EMAIL PROTECTED]
>
> Deb
>
> Dr. Deborah S. Briihl
> Dept. of Psychology and Counseling
> Valdosta State University
> Valdosta, GA 31698
> (229) 333-5994
> [EMAIL PROTECTED]
>
> Well I know these voices must be my soul...
> Rhyme and Reason - DMB

--
****************************************************
Steve Vanden Avond, Ph.D.
Assistant Professor of Psychology
Department of Social Science and History
Silver Lake College
2406 Alverno Drive
Manitowoc, WI  54220
Voice:  (920) 686-6227
e-mail: [EMAIL PROTECTED]

**********************************************************
               http://www.sl.edu/socscience/Default.htm

