@Jared,

Are we talking about the same Deming who said "In God we trust, all others
bring data"? :-)

Measurement is a very important part of Deming's *PLAN*, *DO*, *CHECK*, and
*ACT* cycle. Remote usability testing can play a critical role in the Check
part of the cycle. There is nothing final about it; it should be continuous.


As Deming said, "*Statistical thinking and statistical methods are to
Japanese production workers, foremen, and all the way through the company, a
second language.*"

I think where you are confused is that Deming did not believe in raw
targets.

I think our argument boils down to whether you believe that people's
behavior is homogeneous across the world. From the studies that we have
carried out, people behave differently in different cultures. People using
different machine configurations, screen sizes, and operating systems also
behave differently. Macintosh users 'Think Different'. Most of our clients
use traditional usability testing as well as remote; if our remote method
were not useful, they would not use us again and again. Both your methods
and ours help clear the fog of understanding the user: ours remotely and
yours locally.
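
Incidentally, the standard problem-discovery model (Nielsen and Landauer's
1 - (1 - p)^n) makes the same point: a problem that only affects a small
slice of a heterogeneous audience has a small p, so a handful of users in
one location will very likely miss it. A quick Python sketch, with purely
illustrative p values:

    # Expected share of problems found by n test users, where each user
    # hits a given problem with probability p (Nielsen & Landauer, 1993).
    def share_found(p, n):
        return 1 - (1 - p) ** n

    # 0.31 is Nielsen's oft-cited average; the smaller values stand in for
    # problems that only affect a minority of a heterogeneous audience.
    for p in (0.31, 0.10, 0.01):
        print(p, [round(share_found(p, n), 2) for n in (5, 15, 50)])
    # p = 0.31: 5 users already find ~84% of such problems.
    # p = 0.01: even 50 users find only ~39%.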

You may have used a remote tool years ago and been disappointed, but we
would not have gone to the trouble and hard work of developing our own tool
<http://www.webnographer.com> if we felt that the current tools were
answering the questions that the remote method makes possible.

@Dana
There is a talk in Brighton in the UK on how remote usability can be used at
every stage of the design process. Come along; it may change your thinking.
http://uxbrighton.org.uk/event-remote-user-research-a-360%C2%B0-degree-view/

On A/B testing versus remote testing: both are good, but they answer
different questions. I will blog about the differences and send you a link.
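
To make the difference concrete in the meantime (the function and all the
numbers below are mine, purely illustrative): an A/B test answers "which
variant performs better" with a simple comparison of conversion rates, such
as a two-proportion z-test, while remote usability testing is what tells
you *why* users behave that way. A minimal Python sketch:

    from math import sqrt
    from statistics import NormalDist

    def ab_z_test(conv_a, n_a, conv_b, n_b):
        """Two-proportion z-test: (z, two-sided p) for H0: equal rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        return z, 2 * (1 - NormalDist().cdf(abs(z)))

    # Hypothetical even split: A converts 180 of 2000 visitors, B 240 of 2000.
    z, p = ab_z_test(180, 2000, 240, 2000)
    print("z = %.2f, p = %.4f" % (z, p))  # which design wins, but never why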


All the best

James
blog.feralabs.com

2009/10/4 Jared Spool <jsp...@uie.com>

>
> On Oct 4, 2009, at 5:40 AM, James Page wrote:
>
>> The issue I have with testing with just a few users is that it can exclude
>> a significant issue.
>>
>
> James,
>
> I think that's the major flaw in your thinking. You're trying to use
> usability testing primarily for issue detection and it's a very inefficient
> tool for that.
>
>> Nielsen makes a claim that his useit site might look awful, but that it is
>> readable, which is not the case for me. I am dyslexic, and I find
>> Nielsen's useit website hard going, because he uses very wide column widths.
>>
>
> I too am dyslexic, but the column widths aren't the big issue I have with
> Jakob's site. The big issue I have is his content.
>
>> By only using a few people for user research in one location, are you not
>> excluding a significant number of your site's audience?
>>
>
> Yes.
>
> Which is why using usability testing as a sole source for issue detection
> will inevitably fail.
>
> There's no way you could put together a cost-effective study (even with
> super-duper remote testing applications) that would include participants
> representing every possible variance found in humans.
>
> By trying to use usability testing in this way, you're creating a
> final-inspection mentality, which Deming and the world of statistical
> quality control have taught us (since the '40s) is the most expensive and
> least reliable way of ensuring high quality. Issues will be missed and
> users will be less satisfied using this approach.
>
> Instead, a better approach is to prevent the usability problems from being
> built into the design in the first place. Jakob shouldn't need to conduct
> usability tests to discover that longer column widths could be a problem
> for people with reading disabilities. In fact, those of us who've paid
> attention to the research on effective publishing practices have known for a
> long time that shorter columns are better.
>
> Larger sample sizes, even when the testing is dirt cheap, are too expensive
> for finding problems like this. We need to shift away from the mentality
> that usability testing is a quality control technique.
>
> Because of this, we've found in our research that teams get the most value
> from usability testing (along with the other user research techniques) when they
> use it to inform their design process. By getting exposure to the users, the
> teams can make informed decisions about their design. The more exposure, the
> better the outcomes of the designs.
>
> To research this, we studied teams building a variety of online
> experiences. We looked for correlations between those teams' user research
> practices and how effective the team was at producing great designs. We
> looked at the range of techniques they employed, whether they hired
> experienced researchers, how many studies they ran, how frequent the
> studies were, and about 15 other related variables.
>
> We found that many of the variables, including the nature of the studies
> (lab versus field, for example) and the number of study participants, did
> not correlate with better designs.
>
> More importantly, we found that 2 key variables did correlate substantially
> with better designs: the % of hours of exposure each team member had to
> primary observation and the frequency of primary observation.
>
> This led us to start recommending that teams get every team member exposed
> to as many hours of observing users as possible throughout the design process.
> The minimum we're recommending is 2 hours of observation every 6 weeks. The
> best teams have their team members observing users for several hours every
> week or so.
>
> Based on our research, we can confidently predict that having each team
> member watch two users for two hours every 3 weeks will result in a
> substantially better design than hiring the world's most experienced user
> researchers to conduct a 90-participant study that none of the team members
> observe.
>
> So, the number of participants in the study is a red herring. The real value
> is the number of hours each team member is exposed to users.
>
> That's my opinion, and it's worth what you paid for it.
>
> Jared
>
> p.s. Is Webnographer an unmoderated remote usability testing tool? It
> occurred to me this morning that it would be great to combine unmoderated
> remote usability testing with eye tracking. Then we could throw out all the
> data in a single step, instead of having to ignore it piecemeal. A huge step
> forward in efficiency, I would think.
>
> Jared M. Spool
> User Interface Engineering
> 510 Turnpike St., Suite 102, North Andover, MA 01845
> e: jsp...@uie.com p: +1 978 327 5561
> http://uie.com  Blog: http://uie.com/brainsparks  Twitter: @jmspool
>
>
>