The state of UCD and the overall usefulness of design testing are
fascinating topics, but I'd like to return to the original topic
of usability testing, sample size and statistical significance,
because I think it is relevant in these times of tight research
budgets.

Research methods like usability testing are not quantitative or
qualitative in and of themselves. It's the manner in which the
data is collected and analyzed that makes the results either
quantitative or qualitative. You can have quantitative usability
testing or user interviews, and you can have qualitative surveys.
(More on this at: http://www.virtualfloorspace.com/?p=22)

The companies I work with would find it financially impractical to
undertake a statistically valid usability test, because of the
resources required to operationalize the concept of usability into
quantifiable variables that can be consistently and reliably
measured, and to engage a sample large enough to reach a satisfactory
confidence interval. A company like Microsoft, on the other hand, with
products that last for many years in a consistent form, and millions
of users performing repetitive operations, could get value from
quantitative usability testing.
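As a rough illustration of why a statistically valid study gets expensive: the back-of-the-envelope calculation below (a sketch only, assuming a simple normal-approximation confidence interval for a binary measure such as task success rate) shows how quickly the required sample grows as the confidence interval tightens.

```python
import math

def sample_size_for_proportion(expected_rate=0.5, margin=0.05, z=1.96):
    """Participants needed to estimate a task success rate to within
    +/- margin at ~95% confidence (z = 1.96), using the normal
    approximation n = z^2 * p * (1 - p) / margin^2.
    expected_rate = 0.5 is the worst case (largest n)."""
    return math.ceil(z**2 * expected_rate * (1 - expected_rate) / margin**2)

# Worst-case rate, +/- 5 percentage points at 95% confidence:
print(sample_size_for_proportion())              # 385 participants
# Even loosening to +/- 10 percentage points still needs:
print(sample_size_for_proportion(margin=0.10))   # 97 participants
```

Numbers like these, multiplied across every metric and user segment, are a long way from the five-to-eight participants of a typical qualitative round, which is the financial impracticality described above.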

The web sites I conduct usability testing for are large scale
e-commerce sites. They are trying to do something different and new
with every major release, and the usability of the site design will
have a dramatic impact on the bottom line. So they agree to user
testing at reasonable intervals to discover challenges that people
who know nothing about web site design may have, people who are in
their underwear at 2 a.m. buying a pair of shoes online or a new
appliance to replace one that broke down. 

It's possible that genius designers are so in tune with their
customers that they don't need to run their designs at
successive stages of fidelity by a sample of customers to gain a
better understanding of how those customers will interpret and respond
to new interactive features, the kinds of supporting content they need,
the points in the process where they are likely to stop and consult
discussion boards or chat, and so on. But I haven't met these
designers yet.

In qualitative research, regardless of data collection method, sample
selection and size are always part science and part art. The science
part uses an understanding of different types of samples for
qualitative research and how to ensure that you are seeing a broad
enough range of people based on their variance along key dimensions
relevant to the site you are testing. A good source for this type of
information is Qualitative Evaluation Methods, by Michael Patton. 

The art is that an experienced design researcher can estimate the
variability they are likely to see for a given system and set of user
segments, and balance that with the research goals and budget to
designate a sample size that is likely to result in enough repetition
to give the team confidence in the results. To publish a paper about
this number of participants and have people apply it to their
projects without understanding the impact of different design
variables, different goals, different user segment characteristics,
etc., is to sell your audience a bill of defective goods. 

Paul Bryan
Usography (www.usography.com)
Linked In: http://www.linkedin.com/in/uxexperts





Posted from the new ixda.org
http://www.ixda.org/discuss?post=46278

