On 21 Mar 2009, at 12:15, William Brall wrote:
[snip]
Collecting Data is a big part of IxD, and like any field with a
science background, that data need not be collected a second time for
the same problem.
[snip]

Actually, it does.

Scientists re-collect data and retest things - they have to. That's what separates the theories that last from those that don't: replication protects against misinterpretation, experimental error, bias, fraud, etc.

[snip]
The reason the 41 blue test is bad, is any one of us would have told
google the right choice for FREE!

... but the 41-shades-of-blue thing came out of a designer picking a colour that he (and the rest of the team) thought best - and then...

   "a product manager tested a different color with users and
    found they were more likely to click on the toolbar if it
    was painted a greener shade." (http://tinyurl.com/acs4jp)

So the choice made for free - in this particular instance, for this particular context and goal - was not the best one.

Now - I don't know the full context of that testing. I might disagree with how the test was run, or what was being tested, or the goal that the business wanted to achieve.

I'd also hope that I would be open to the idea that I might be wrong - and be willing to look for things to learn to make me a better designer. Maybe by investigating options with some more tests :-)

If it were me, the information that the colour I thought would perform better actually performed worse would fascinate me. I'd want to figure that out. Wouldn't you?

Because we are informed by other, older, tests. And a healthy spoon
full of our own observations.

... and sometimes we're wrong.

We are all arguing the same things. Testing is good, when it isn't
moronic. Using what we have learned already is the design of IxD and
is only good if informed by good data. Which we mostly are. Not
perfect, but test data isn't perfect either, and we are a hell of a
lot cheaper than testing everything must be.


I know I am wrong on occasion. Often even :-)

Given the number of times I've helped a company deal with the usability disaster a design agency left them with, I'm pretty sure lots of other folk are wrong on occasion too. Few people knowingly put out bad work.

That's why I pay attention to the results of the usability tests, play with A/B testing, look at the logs, etc. when it comes to my designs. I use that feedback to make me better at design. Because when the mistake is mine it's really hard to figure out which tests are moronic and which aren't.

The nature of the problem means I'm not going to know where I'm making a bad decision. If I did know - I wouldn't think the test was moronic!
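(An aside for anyone who wants to play with the numbers themselves: the arithmetic behind a basic A/B comparison is small enough to fit in a few lines. Here's a rough sketch of a two-proportion z-test in Python - the function name and the click counts are mine, invented purely for illustration, and have nothing to do with the actual Google figures.)

    import math

    def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
        # Click-through rate for each variant
        p_a = clicks_a / n_a
        p_b = clicks_b / n_b
        # Pooled rate, assuming there's no real difference between A and B
        p = (clicks_a + clicks_b) / (n_a + n_b)
        se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
        return p_a, p_b, (p_b - p_a) / se

    # Invented numbers - NOT real data from the Google test
    p_a, p_b, z = two_proportion_z(420, 10000, 470, 10000)
    print(f"A: {p_a:.2%}  B: {p_b:.2%}  z: {z:.2f}")
    # With these made-up counts z comes out around 1.7 - below the
    # conventional 1.96 cut-off, so a difference that looks real to
    # the eye may still just be noise.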

We are all arguing the same thing though. I'm in complete agreement that bad testing can act as a crutch and a route to bad decisions. Unfortunately, folk making decisions based on their own expertise and experience are sometimes wrong as well.

When it comes to design decisions I think the right response is "Trust, but Verify".

I think what everybody is arguing about is where that trust/verify line lives. Which is something that is going to depend on the context you're working in - no?

Cheers,

Adrian
--
delicious.com/adrianh - twitter.com/adrianh - adri...@quietstars.com

________________________________________________________________
Welcome to the Interaction Design Association (IxDA)!
To post to this list ....... disc...@ixda.org
Unsubscribe ................ http://www.ixda.org/unsubscribe
List Guidelines ............ http://www.ixda.org/guidelines
List Help .................. http://www.ixda.org/help
