damon bryant wrote:
> As you got more items correct 
> you got harder questions. In contrast, if you initially got questions 
> incorrect, you would have received easier questions....
In the 70s there was research on such systems (keeping people at 80%
correct is a great rule-of-thumb goal).  See the work done at Stanford's
Institute for Mathematical Studies in the Social Sciences.  At IMSSS
we did lots of this kind of work.  We generally broke the skills into
strands (separate concepts), and kept track of the student's performance
in each strand separately (try it; it helps).  BIP (Basic Instructional
Program) was an ONR (Office of Naval Research)-sponsored system that
tried to teach programming in BASIC.  The BIP model (and often the
"standard" IMSSS model) was to score every task in each strand, and find
the "best" task for the student based on his current position.
For arithmetic, we actually generated problems based on the different
desired strand properties; nobody was clever enough to generate
programming problems, so for those we simply consulted our DB.  We
taught how to do proofs in Logic and Set Theory using some of these
techniques.
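
For the arithmetic side, a generator along these lines (again a sketch
under my own assumptions, not the IMSSS generator) would take the
desired strand properties -- say, operand digit count and whether a
carry is required -- and produce a matching problem:

    import random

    def make_addition_problem(digits=2, require_carry=True):
        """Return (a, b, answer) with the requested digits per operand."""
        lo, hi = 10 ** (digits - 1), 10 ** digits - 1
        while True:
            a, b = random.randint(lo, hi), random.randint(lo, hi)
            has_carry = any(
                (a // 10 ** i % 10) + (b // 10 ** i % 10) >= 10
                for i in range(digits)
            )
            if has_carry == require_carry:
                return a, b, a + b

    print(make_addition_problem(digits=2, require_carry=True))
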
Names to look for on papers in the 70s-80s include Patrick Suppes (head
of one side of IMSSS), Richard Atkinson (head of the other side),
Barbara Searle, Avron Barr, and Marian Beard.  These are not the only
people who worked there, just the ones I recall; they should help you
find the research publications (try Google Scholar).


A follow-on for some of this work is:
     http://www-epgy.stanford.edu/

I worked there "back in the day" and was quite proud to be a part of
some of that work.

--Scott David Daniels
[EMAIL PROTECTED]
