This can be mitigated, however, by asking three questions for each data point: basically the same question three times, worded differently.
Then you can cross-check the validity of the data input from there.
Sure, not as great as field work, but it will get the job done safely.
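To make the cross-check concrete, here is a minimal sketch in Python of the "same question three ways" idea. It assumes answers on a 1-5 Likert scale and uses hypothetical item IDs and an arbitrary agreement threshold; none of these names come from the thread.

    # Sketch only: flag data points where a respondent's three reworded
    # answers disagree too much. Scale, threshold, and IDs are assumptions.

    def consistent(scores, max_spread=1):
        """True if the three answers agree within max_spread scale points."""
        return max(scores) - min(scores) <= max_spread

    def validate_respondent(answers, question_sets):
        """answers: dict item_id -> score for one respondent.
        question_sets: dict data_point -> the three item_ids asking it.
        Returns the data points whose answers conflict."""
        flagged = set()
        for data_point, item_ids in question_sets.items():
            scores = [answers[i] for i in item_ids]
            if not consistent(scores):
                flagged.add(data_point)
        return flagged

    # Example: "trust" is asked three different ways (q2, q7, q11).
    question_sets = {"trust": ["q2", "q7", "q11"],
                     "ease_of_use": ["q4", "q9", "q13"]}
    respondent = {"q2": 4, "q7": 5, "q11": 2, "q4": 3, "q9": 3, "q13": 4}

    print(validate_respondent(respondent, question_sets))  # {'trust'}

Flagged data points can then be discarded or followed up on, which is the "cross calculate the validity" step described above.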
On Nov 26, 2009, at 2:04 PM, Jared Spool wrote:
On Nov 25, 2009, at 1:12 PM, Elizabeth Kell wrote:
Does anyone have lessons learned they are
willing to share from their own attempts at Mulder-style quant work,
in particular in crafting and deploying surveys? :)
I would *highly recommend* you not try to do this with surveys. It's
very easy to create a survey where the participant actually answers
a different question than the one you think you're asking. When that
happens, you can't trust any of the data you've collected, because
you're not doing an apples-to-apples comparison.
Instead, I recommend discussion-based interviews. This is time-consuming, but produces far more reliable data.
Jared
Jared M. Spool
User Interface Engineering
510 Turnpike St., Suite 102, North Andover, MA 01845
e: jsp...@uie.com p: +1 978 327 5561
http://uie.com Blog: http://uie.com/brainsparks Twitter: @jmspool
________________________________________________________________
Welcome to the Interaction Design Association (IxDA)!
To post to this list ....... disc...@ixda.org
Unsubscribe ................ http://www.ixda.org/unsubscribe
List Guidelines ............ http://www.ixda.org/guidelines
List Help .................. http://www.ixda.org/help