Thanks a lot Thomas and John for your suggestions.

To clarify my setting: I have two new tools, each aimed at teaching a
different topic (general programming for the first, sorting algorithms for
the second), and I want to compare using each tool against NOT using it.

Do you have any references to similar evaluations?

Stefano


Quoting John Daughtry <j...@daughtryhome.com>:

I would suggest taking a more holistic view of the design space. Rather than
asking which tool is best, you may be better served by seeking to
empirically describe and explain the underlying trade-offs. In what ways
does option 1 help, hinder, and undermine learning? In what ways does
option 2 help, hinder, and undermine learning? In all likelihood there are
answers to all six questions.

John
--------------------------------------------------
Associate Research Engineer
The Applied Research Laboratory
Penn State University
daugh...@psu.edu



On Tue, Mar 1, 2011 at 7:08 AM, Thomas Green <green...@ntlworld.com> wrote:

Depending on your aims, you might want to measure transfer to other
problems: that is, do participants who used tool A for the sorting task
then do better when tackling a new problem, possibly with a different tool,
than participants who used tool B?

You might also want to look at memory and savings: how do the participants
manage two months later? Occasionally cognitive tasks like yours show no
effect at the time but produce measurable differences when the same people
do the same tasks later.

Pretty hard to create a truly fair test, but things to think about are
controlling for practice and order effects, which should be easy, and
controlling for experimenter expectation effects. The hardest thing to
balance for is sometimes the training period: people using a new tool have
to learn about it, and that gives them practice effects that the controls
might not get. Sometimes people create a dummy task for the control
condition to avoid that problem; or you can compare different versions of
the tools, with differing features.
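
One standard way to handle the order and practice issue is to counterbalance
the order of the conditions across participants. A rough sketch of what that
could look like in Python (participant IDs, group size and condition names
are invented purely for illustration):

    # Counterbalanced random assignment: half the participants do the
    # "tool" condition first, half do the "no tool" condition first, so
    # practice and order effects are balanced across the two orders.
    import random

    participants = [f"P{i:02d}" for i in range(1, 21)]  # hypothetical IDs
    random.shuffle(participants)

    orders = [("tool", "no tool"), ("no tool", "tool")]
    assignment = {p: orders[i % 2] for i, p in enumerate(participants)}

    for p, order in sorted(assignment.items()):
        print(p, "->", " then ".join(order))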

I suggest you try to avoid the simple A vs B design and instead look for a
design where you can predict a trend: find A, B, C such that your theory
says A > B > C. The statistical power is much better.
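
For instance, a predicted ordering can be tested with a single linear trend
(planned contrast) rather than several pairwise comparisons. A minimal Python
sketch of one common way to do this, with invented scores:

    import numpy as np
    from scipy import stats

    # Invented post-test scores for three conditions predicted to order A > B > C
    scores = {
        "A": np.array([78, 82, 75, 90, 85]),
        "B": np.array([70, 74, 68, 80, 77]),
        "C": np.array([62, 66, 60, 71, 69]),
    }

    # Code the predicted ordering numerically (C=0, B=1, A=2) and test the
    # linear trend across all observations with a single regression.
    x = np.concatenate([np.full(len(v), code)
                        for code, v in zip([2, 1, 0], scores.values())])
    y = np.concatenate(list(scores.values()))

    result = stats.linregress(x, y)
    print(f"slope={result.slope:.2f}, p={result.pvalue:.4f}")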

Don't forget to talk to the people afterwards and get their opinions.
Sometimes you can find they weren't playing the same game that you were.

Good luck

Thomas Green




On 1 Mar 2011, at 11:20, Stefano Federici wrote:

Dear Colleagues,
I need to plan an evaluation of the improvements brought by the use of
specific software tools when learning the basic concepts of computer
programming (sequence, loops, variables, arrays, etc.) and the specific
topic of sorting algorithms.

What are the best practices for the necessary steps? I guess the steps
should be: selection of the test group, test of initial skills, partition of
the test group into smaller homogeneous groups, delivery of learning
materials with or without the tools, test of final skills, and comparative
analysis.
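
For the comparative analysis step I imagine something along these lines: a
rough Python sketch (with made-up numbers) comparing pre/post learning gains
between a group taught with a tool and a group taught without it:

    import numpy as np
    from scipy import stats

    # Made-up pre/post test scores for the group taught with the tool
    tool_pre  = np.array([40, 35, 50, 45, 38, 42])
    tool_post = np.array([72, 60, 78, 74, 65, 70])

    # Made-up pre/post test scores for the group taught without the tool
    ctrl_pre  = np.array([42, 36, 48, 44, 40, 39])
    ctrl_post = np.array([60, 52, 66, 63, 58, 55])

    tool_gain = tool_post - tool_pre
    ctrl_gain = ctrl_post - ctrl_pre

    # Welch's t-test on the gain scores (does not assume equal variances)
    res = stats.ttest_ind(tool_gain, ctrl_gain, equal_var=False)
    print(f"mean gain tool={tool_gain.mean():.1f}, "
          f"control={ctrl_gain.mean():.1f}, "
          f"t={res.statistic:.2f}, p={res.pvalue:.4f}")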

What am I supposed to do to perform a fair test?

Any help or reference is welcome.

Best Regards



Stefano Federici
-------------------------------------------------
Università degli Studi di Cagliari
Facoltà di Scienze della Formazione
Dipartimento di Scienze Pedagogiche e Filosofiche
Via Is Mirrionis 1, 09123 Cagliari, Italia
-------------------------------------------------
Cell: +39 349 818 1955 Tel.: +39 070 675 7815
Fax: +39 070 675 7113


