Hopefully I'm not dragging the conversation back into the mud, but I
understand where both sides are coming from regarding #5. I have seen
fundamental changes in products that took them from dying in the
market to leading examples in the field, thanks to user observations
and usability tests. I have also seen projects weighed down by
nonstop focus groups and groupthink, as well as poorly run usability
tests that led to misguided solutions.

That said, I think each of your points is valid and addressable.
I've heard each of these from clients, and I wouldn't mind running
through the conversation here to make sure my thoughts are in order.

1. Most products are fairly simple and most of the testing can be
done in house.

- Fair enough. I have mostly been working with large systems and
evolving ecosystems of products and services lately, but even these
can usually be broken down into their simpler components.

It seems that doing testing in house (assuming this means only having
other people in the company look at it) would mean that everyone
shares the same biases. Unless you are your own client, you probably
won't uncover any new revelations, aside from bug reports. That is
useful, but it's more QA than UX.

2. Most usability tests are not even close to reflecting any
realistic version of the environment your product will end up in.

- Great point. Also easily solvable. Do your usability tests in the
environment your product will end up in. Go to your users and test on
their systems, in their environment. Have a conversation with them. It
does take a little more time, but you will get much more useful
information than pulling people into a little fluorescent
interrogation room and grilling them with questions about why they
clicked -there- while you stand over their shoulder. (No offense to
anyone who has really enjoyed doing this.)

3. The mistakes that you might find are not going to be those that
will determine the success of your company.

- I've seen very poorly run usability tests that prove this (one in
particular involved a woman who repeatedly asked, "This makes you
feel like X, right?"). However, I think you may be looking at a
different question than I am.

Usability testing, for me, isn't about finding mistakes. It's about
uncovering opportunities for success you may not have thought of. I
start very early in the design process, when only the initial
concepts are thought through, most likely sketched out with pen and
paper or modeled out of foamcore. This makes sure we are addressing
the right issues with the concepts. Finding mistakes comes much later
in the process.

4. Many usability tests consist of at most 10 people, which is simply
not a high enough number to base any decisions on. The single best
solution is to start simple and make sure you can measure how people
use your product. If people are having problems you will find out
soon enough, and you will find out where it matters.

- I think this may be the biggest discrepancy. Qualitative research
(user observations, usability testing) and quantitative research
(focus groups, QA) are very different things and serve very different
purposes.

Qual is not meant to have statistical significance. There is simply
too much information, and you only run sessions until themes start
emerging; after ten or so rounds they become very obvious. This
belongs much earlier, in the strategy phase, and goes much deeper
than Quant. You get inside people's lives and really figure out what
would be useful and how what you are developing will fit into their
lives. Yes, you ask questions about the product, but what they say
doesn't matter as much as how they say it.
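
For what it's worth, there is a simple model behind that "ten or so"
number. This isn't from our thread, but Nielsen and Landauer's
classic problem-discovery model estimates the share of issues you've
seen after n sessions as

  P(n) = 1 - (1 - L)^n

where L is the average chance that any single session surfaces a
given issue. With their reported average of L = 0.31, five sessions
surface roughly 85% of the issues and ten roughly 97%. That is why
themes stop being new well before you reach survey-sized samples.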

At the risk of sounding trite by quoting Steve Jobs: "You can't just
ask a customer what they want and try to give that to them. By the
time you get it built, they'll want something new." This is where the
experience of a designer/researcher really shows. They know what to
listen for, how to ask questions that indirectly reveal the
customer's feelings, and how to convey those findings later.

Quant is a very different animal. This happens much later in the
process for me, when everything is nearly built. Everything is on
track, and you have very specific questions that you need answers to.
This is when you want statistical significance. People don't
typically look forward to surveys as a highlight of their day, and
they will only give you the quickest and most basic answers off the
top of their head. This is not a dialog or a conversation; it is
feedback.
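
A rough rule of thumb here (again mine, not from the thread): for a
simple yes/no survey question, the 95% margin of error is at most
about 1/sqrt(n). So 100 responses gives you roughly plus or minus
10%, and you need around 400 to get to plus or minus 5%. That is the
scale where Quant starts paying off, and it's a scale Qual was never
meant to reach.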

Doing Quant too early in the process tends to give useless feedback
on something that is changing anyway. Doing Qual too late in the
process tends to unearth major issues that will only frustrate
everyone who has invested so much in the product. Doing Qual early on
to make sure you build the right thing, then Quant later to make sure
the details are right, then QA at the end to catch bugs has been a
successful pattern for me.

I have seen many examples that prove your points, and typically they
only show that the wrong kind of testing is being done, or that it is
being done poorly. I have lost track of the number of companies that,
when asked whether they had done user interviews, replied "Of course,
we do focus groups all the time. Want to see the charts?"

At which point I smile and start a conversation about what they have
done so far, the different methods we are going to use, and what
kinds of results they should expect. So far, people seem happy when
we uncover new things or put their findings in a new light.

Chris Dame
http://theusabilityofthings.com

