Three approaches come to mind.
 (1) If your error response is only an indicator variable (e.g., 1 =
respondent erred, 0 = respondent correctly identified the alphabet), and
if your sample size is large enough, you can compare the proportion of
errors in the 0.7-m group with the proportion in the 2.5-m group.
 If your error response is multinomial (as it might be if there are more
than two alphabets that might be (mis)identified), you can make the same
comparison but you can also pursue a similar analysis to identify the
relative likelihood of different misidentifications.  (Hard to give
anything like a hypothetical example, because I don't really know what
you mean by "alphabet" in this context, let alone how many there might
be to choose from.)
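 To make approach (1) concrete, here is a sketch of a two-proportion
z-test in Python.  All counts are invented purely for illustration; the
function itself is the standard pooled-proportion test.

```python
# Hypothetical sketch: comparing error proportions at the two viewing
# distances.  All counts below are invented for illustration.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(err1, n1, err2, n2):
    """Two-sided two-proportion z-test on error rates."""
    p1, p2 = err1 / n1, err2 / n2
    p_pool = (err1 + err2) / (n1 + n2)            # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return z, p_value

# e.g. 12 errors in 90 trials at 0.7 m vs. 27 errors in 90 trials at 2.5 m
z, p = two_proportion_z(12, 90, 2, 90) if False else two_proportion_z(12, 90, 27, 90)
```

 (A chi-squared test on the 2x2 table of errors by distance would give an
equivalent answer; the z form just makes the direction of the difference
explicit.)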
 (2)  You could measure the response time and analyze that.  I've been
given to understand by folks who have carried out similar research that
response time is often a more sensitive measure than incidence of error,
and that response time is often different between correct and erroneous
responses.  (This would not prevent you from imposing a time limit as
well, but the time limit might result in the distribution of some
response times being right-truncated.)
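 A minimal sketch of the response-time comparison in approach (2),
using Welch's t statistic so the two distances need not have equal
variances.  The times below are invented; in real data, any response
that ran into the 15-second limit would be right-truncated and would
need more careful treatment than this.

```python
# Hypothetical sketch: comparing mean response times (seconds) at the
# two viewing distances.  Times are invented for illustration.
from math import sqrt
from statistics import mean, variance

def welch_t(x, y):
    """Welch's t statistic for two samples with possibly unequal variance."""
    vx, vy = variance(x) / len(x), variance(y) / len(y)
    return (mean(x) - mean(y)) / sqrt(vx + vy)

near = [1.8, 2.1, 1.9, 2.4, 2.0, 2.2]   # 0.7 m (hypothetical)
far  = [2.9, 3.4, 3.1, 2.8, 3.6, 3.0]   # 2.5 m (hypothetical)
t = welch_t(near, far)                   # negative: near is faster
```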
 (3)  If you have more than two alphabets, you might want to analyze
their, what shall I say, mutual confusability.  To illustrate, suppose
there are three alphabets:  A, B, C.  With data you could produce a 3x3
matrix of incidences:  the number of times A was reported when A was
presented (a correct response), the number of times B was reported when
A was presented, and so on.  The principal diagonal of the matrix
contains the incidences of correct responses;  the other entries
indicate errors.  Incidences can readily be turned into proportions (of
row, column, or grand totals), which would permit you to address such
questions as which alphabets are most likely to be wrongly invoked when
alphabet A is presented, or which alphabets are most likely to have been
presented when A is the response.
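 The confusion matrix of approach (3) might be tabulated like this; the
counts are invented for illustration.  Row proportions answer "given
that A was presented, what was reported?"; column proportions answer
the converse, "given that A was the response, what was presented?".

```python
# Hypothetical 3x3 confusion matrix.  Rows = alphabet presented,
# columns = alphabet reported; all counts are invented.
counts = {
    "A": {"A": 40, "B": 6,  "C": 4},
    "B": {"A": 5,  "B": 42, "C": 3},
    "C": {"A": 2,  "B": 7,  "C": 41},
}

# Row proportions: diagonal entries are the correct-response rates.
row_props = {
    presented: {reported: n / sum(row.values()) for reported, n in row.items()}
    for presented, row in counts.items()
}

# Column proportions: how often each alphabet was actually presented,
# given a particular response.
col_totals = {r: sum(counts[p][r] for p in counts) for r in counts}
col_props = {
    reported: {presented: counts[presented][reported] / col_totals[reported]
               for presented in counts}
    for reported in counts
}
```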

On Sat, 7 Feb 2004, DG wrote:

> For my thesis I am planning a 3X3X3 within subjects experiment. The
> experimental task requires subjects to count the number of times a
> particular alphabet appears on the monitor and enter it using a
> keyboard (depending on the number of times the alphabet appears). I
> have two dependent variables - error in response and number of
> subtasks completed (ranging from 1 to 5) in a time limit of 15
> seconds.  My research hypothesis is that performance (speed and
> accuracy) will be better when the monitor is at a certain distance
> (0.7m) as compared to when it is at another distance (2.5 m).
>
> For the error in response dependent variable will it be nominal data?
> If it is nominal data then how can I verify the hypothesis. What test
> would I use?
>
> Thanks
> DG

 -----------------------------------------------------------------------
 Donald F. Burrill                                         [EMAIL PROTECTED]
 56 Sebbins Pond Drive, Bedford, NH 03110                 (603) 626-0816