Jared,

Loved your post about getting the team to observe users instead of doing 
traditional usability tests. (Loved the way you presented the same thoughts at 
the IA Summit in Miami last year, too, btw!)

Let's say we're developing a new version of an existing service. Based on the 
insights from your research, what do you think would be the best strategy?

Should we stick with letting the team watch users use the existing version, 
and thus, over the course of the project, collect richer and richer real-world 
experience, trusting that the team's design skills will provide good solutions 
for the new version?

Or should we have users try our gradually evolving prototypes of the new 
product, session after session?

BTW, do the successful teams require their team members to document their 
observations of users? Or is it more efficient to let them use this input and 
these insights in an informal way?


Jonas Söderström
senior information architect
Sweden

--------------------------------------------------------
For the lesson lies in learning and by teaching I'll be taught
for there's nothing hidden anywhere, it's all there to be sought
                                        - Keith Reid
---------------------------------------------------------



At 15.01 -0400 09-10-04, Jared Spool wrote:
>On Oct 4, 2009, at 5:40 AM, James Page wrote:
>
>>The issue I have with testing with just a few users is that it can exclude a 
>>significant issue.
>
>James,
>
>I think that's the major flaw in your thinking. You're trying to use usability 
>testing primarily for issue detection, and it's a very inefficient tool for 
>that.
>
>>Nielsen makes a claim that his useit site might look awful, but that it is 
>>readable, which is not the case for me. I am dyslexic, and I find Nielsen's 
>>useit website hard going, because he uses very wide column widths.
>
>I too am dyslexic, but the column widths aren't the big issue I have with 
>Jakob's site. The big issue I have is his content.
>
>>By only using a few people for user research in one location, are you not 
>>excluding a significant number of your site's audience?
>
>Yes.
>
>Which is why using usability testing as a sole source for issue detection will 
>inevitably fail.
>
>There's no way you could put together a cost-effective study (even with 
>super-duper remote testing applications) that would include participants 
>representing every possible variance found in humans.
>
>By trying to use usability testing in this way, you're creating a 
>final-inspection mentality, which Deming and the world of statistical quality 
>control have taught us (since the 40s) is the most expensive and least 
>reliable way of ensuring high quality. Issues will be missed and users will be 
>less satisfied with this approach.
>
>Instead, a better approach is to prevent usability problems from being built 
>into the design in the first place. Jakob shouldn't need to conduct usability 
>tests to discover that wide columns could be a problem for people with reading 
>disabilities. In fact, those of us who've paid attention to the research on 
>effective publishing practices have known for a long time that narrower 
>columns are better.
>
>Larger sample sizes, even when the testing is dirt cheap, are too expensive 
>for finding problems like this. We need to shift away from the mentality that 
>usability testing is a quality control technique.
>
>Because of this, we've found in our research that teams get the most value 
>from usability testing (along with the other user research techniques) when 
>they use it to inform their design process. By getting exposure to the users, 
>teams can make informed decisions about their design. The more exposure, the 
>better the outcomes of the designs.
>
>To research this, we studied teams building a variety of online experiences. 
>We looked for correlations between those teams' user research practices and 
>how effective the teams were at producing great designs. We looked at the 
>range of techniques they employed, whether they hired experienced researchers, 
>how many studies they ran, how frequently they ran them, and about 15 other 
>related variables.
>
>We found that many of the variables, including the nature of the studies (lab 
>versus field, for example) and the number of study participants, did not 
>correlate with better designs.
>
>More importantly, we found that two key variables did correlate substantially 
>with better designs: the number of hours of exposure each team member had to 
>primary observation, and the frequency of that observation.
>
>This led us to start recommending that teams get every team member exposed to 
>as many hours of observing users as possible throughout the design process. 
>The minimum we're recommending is 2 hours of observation every 6 weeks. The 
>best teams have their team members observing users for several hours every 
>week or so.
>
>Based on our research, we can confidently predict that having each team member 
>watch two users for two hours every 3 weeks will result in a substantially 
>better design than hiring the world's most experienced user researchers to 
>conduct a 90-participant study that none of the team members observe.
>
>So, the number of participants in the study is a red herring. The real value 
>is in the number of hours each team member is exposed to users.
>
>That's my opinion, and it's worth what you paid for it.
>
>Jared
>
>p.s. Is Webnographer an unmoderated remote usability testing tool? It occurred 
>to me this morning that it would be great to combine unmoderated remote 
>usability testing with eye tracking. Then we could throw out all the data in a 
>single step, instead of having to ignore it piecemeal. A huge step forward in 
>efficiency, I would think.
>
>Jared M. Spool
>User Interface Engineering
>510 Turnpike St., Suite 102, North Andover, MA 01845
>e: jsp...@uie.com p: +1 978 327 5561
>http://uie.com  Blog: http://uie.com/brainsparks  Twitter: @jmspool
>
>

