At 4:31 PM -0400 11/8/04, [EMAIL PROTECTED] wrote:
Perhaps another educational use of Rev-based products would be exploratory
learning... then assessed, perhaps, by the dreaded multiple-choice questions

Judy

Exactly! While assessment can drive learning, there is more to teaching and learning than tests ;-)


I use simulations that I've written with Rev first to let (university) students conduct experiments and learn from the results, and then to have them design and conduct their own experiments to answer questions. The learning objectives of the two stages are different, but obviously synergistic. At the moment most use of the simulations is under supervised conditions, but I am planning a kit of simulations that students will use as part of self-directed projects. I am also toying with the idea of requiring students to produce reports that can be distributed to the class as learning resources; having to teach something is a terrific incentive for learning it first!
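
For anyone curious what such a simulation amounts to in Rev, the core can be remarkably small. A minimal sketch of the idea (the Hill-equation model, the EC50 value, and the field name are invented for illustration, not lifted from my actual stacks):

    on runExperiment
      ask "Agonist concentration (nM)?"
      if it is not a number then exit runExperiment
      put it into theDose
      -- Hill-type response plus noise, so repeated runs at the same
      -- dose scatter the way real experimental replicates do
      put 100 * theDose / (theDose + 50) into theResponse  -- EC50 of 50 nM
      put theResponse + random(10) - 5 into theResponse
      put theDose & tab & theResponse & return after field "Results"
    end runExperiment

Students vary the dose, collect the noisy responses, and work out the underlying relationship for themselves.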

On Wed, 11 Aug 2004, Richard Gaskin wrote:

 Marian Petrides wrote:

 > Not only in teaching programming but in designing custom educational
 > courseware. Who wants the student to have ONLY simple multiple-guess
 > questions to work with?
 >
 > Life doesn't come with a series of four exclusive-or questions tattooed
 > across it, so why give students this unrealistic view of the real world,
 > when a little work in Rev will permit far more challenging interactivity?

 Agreed wholeheartedly.  Education-related work was the largest single
 set of tasks folks did with HyperCard, and for all the tools that have
 come out since, there remains an unaddressed gap which may be an ideal
 focus for DreamCard.

 But moving beyond simple question models like multiple choice is
 difficult.  The AICC courseware interoperability standard describes
 almost a dozen question models, but most are variants of "choose one",
 "choose many", "closest match", etc., sometimes enlivened by using
 drag-and-drop as the mechanism for applying the answer, but not
 substantially different from a simple multiple-choice question in
 terms of truly assessing what has been learned.
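
To illustrate that point: whatever widgetry dresses it up, nearly every one of those models reduces to the same scoring test, comparing the set of selected items against a key. A minimal sketch in Transcript (function and variable names invented for illustration):

    function scoreChoice theSelected, theKey
      -- theSelected and theKey are comma-delimited lists of item labels;
      -- "choose one" is just "choose many" with one-item lists: the
      -- answer is right when the selected set equals the keyed set
      if the number of items of theSelected is not the number of items of theKey then return false
      repeat for each item theItem in theSelected
        if theItem is not among the items of theKey then return false
      end repeat
      return true
    end scoreChoice

Drag-and-drop, radio buttons, checkboxes: the interface changes, but the test does not.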

 The challenge is to find more open-ended question models which can still
 be assessed by the computer.  For example, the most open-ended question
 is an essay, but I sure don't want to write the routine that scores
 essays. :)
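
One middle ground that pairs well with simulations: ask for a quantitative estimate that the student can only reach by designing and running a sensible experiment, then score the estimate against a tolerance rather than an exact key. A minimal sketch in Transcript (the names and numbers are invented for illustration):

    function scoreEstimate theAnswer, theTarget, theTolerance
      -- accept any numeric answer within the stated tolerance of the
      -- target, e.g. an EC50 estimated from simulated dose-response data
      if theAnswer is not a number then return false
      return abs(theAnswer - theTarget) <= theTolerance
    end scoreEstimate

The question stays open-ended, since any experimental design that yields a defensible estimate is accepted, while the scoring stays trivially computable.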

 What sorts of enhanced question models do you think would be ideal for
 computer-based learning?

 --
   Richard Gaskin
   Fourth World Media Corporation

-- Michael J. Lew

Senior Lecturer
Department of Pharmacology
The University of Melbourne
Parkville 3010
Victoria
Australia

Phone +613 8344 8304

**
New email address: [EMAIL PROTECTED]
**
