On 7 Dec 2001 14:24:17 -0800, [EMAIL PROTECTED] (Dennis Roberts) wrote:
> At 08:08 PM 12/7/01 +, J. Williams wrote:
> >On 6 Dec 2001 11:34:20 -0800, [EMAIL PROTECTED] (Dennis Roberts) wrote:
> >
> > >if anything, selectivity has decreased at some of these top schools due to
> > >the fact that given their extremely high tuition ...
At 08:08 PM 12/7/01 +, J. Williams wrote:
>On 6 Dec 2001 11:34:20 -0800, [EMAIL PROTECTED] (Dennis Roberts) wrote:
>
> >if anything, selectivity has decreased at some of these top schools due to
> >the fact that given their extremely high tuition ...
i was just saying that IF anything had ha
On 6 Dec 2001 11:34:20 -0800, [EMAIL PROTECTED] (Dennis Roberts) wrote:
>generally speaking, it is kind of difficult to muster sufficient evidence
>that the amount of grade inflation that is observed ... within and across
>schools or colleges ... is due to an increase in student ability
>
>i find it difficult to believe that the average ability at a place like harvard
generally speaking, it is kind of difficult to muster sufficient evidence
that the amount of grade inflation that is observed ... within and across
schools or colleges ... is due to an increase in student ability
i find it difficult to believe that the average ability at a place like
harvard
Just in case someone is interested in the Harvard instance
that I mentioned -- while you might get the article from a newsstand
or a friend --
On Sun, 02 Dec 2001 19:19:38 -0500, Rich Ulrich <[EMAIL PROTECTED]>
wrote:
[ ... ]
>
> Now, in the NY Times, just a week or two ago. The
> dean of und
On Sun, 02 Dec 2001 19:19:38 -0500, Rich Ulrich <[EMAIL PROTECTED]>
wrote:
>With the curve, and low, low averages, you do notice
>that a single *good* performance can outweigh several
>poor ones. So that is good.
>
It is good, but conversely having several "high" scores even with low,
low averages
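A quick numeric sketch of Rich Ulrich's point (the scores, the class mean and the SD below are invented, purely for illustration): when grading is curved through standardized scores, one strong exam can pull a student's average z above zero even against several weak ones.

raw = [95, 52, 48, 50]                 # one very good exam, three poor ones (hypothetical)
mean, sd = 55, 15                      # hypothetical class mean and standard deviation
z = [(x - mean) / sd for x in raw]
print([round(v, 2) for v in z])        # [2.67, -0.2, -0.47, -0.33]
print(round(sum(z) / len(z), 2))       # 0.42 -- the average z is still above the class average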
- I guess I am commenting on the statistical perspective,
at least, to start with.
On Fri, 23 Nov 2001 16:22:46 GMT, "L.C." <[EMAIL PROTECTED]>
wrote:
> The question got me thinking about this problem as a
> multiple comparison problem. Exam scores are typically
> sums of problem scores. The p
In article <[EMAIL PROTECTED]>,
jim clark <[EMAIL PROTECTED]> wrote:
>Hi
>On 25 Nov 2001, Herman Rubin wrote:
>> If it is a good test, ability should predominate, and there is
>> absolutely no reason for ability to even have close to a normal
>> distribution. If one has two groups with different normal
>> distributions, combining them will never get normality.
In article <[EMAIL PROTECTED]>,
Thom Baguley <[EMAIL PROTECTED]> wrote:
>Donald Burrill wrote:
>> On Fri, 23 Nov 2001, L.C. wrote:
>> > The question got me thinking about this problem as a
>> > multiple comparison problem. Exam scores are typically
>> > sums of problem scores. The problem sco
Hi
On 28 Nov 2001, Dennis Roberts wrote:
> At 01:35 PM 11/28/01 -0600, jim clark wrote:
> >The distribution of grades will depend on the distribution of
> >difficulties of the items, one of the elements examined by
> >psychometrists in the development of professional-quality
> >assessments.
> u
At 01:35 PM 11/28/01 -0600, jim clark wrote:
>Hi
>
>On Tue, 27 Nov 2001, Thom Baguley wrote:
> > I'd argue that they probably aren't that independent. If I ask three
> > questions all involving simple algebra and a student doesn't
> > understand simple algebra they'll probably get all three wrong.
Hi
On Tue, 27 Nov 2001, Thom Baguley wrote:
> I'd argue that they probably aren't that independent. If I ask three
> questions all involving simple algebra and a student doesn't
> understand simple algebra they'll probably get all three wrong. In
> my experience most statistics exams are better r
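Thom Baguley's non-independence point is easy to see in a toy simulation (the pass rates, the number of items and the single-skill assumption below are all my own, just for illustration): when every item hinges on one shared skill, the item scores are strongly correlated and the exam total tends to pile up in two clumps instead of smoothing into a bell shape.

import random
random.seed(1)

def exam_total(n_items=10, p_skill=0.6):
    # One hypothetical latent skill gates every item on the exam.
    has_skill = random.random() < p_skill
    p_correct = 0.85 if has_skill else 0.25
    return sum(random.random() < p_correct for _ in range(n_items))

totals = [exam_total() for _ in range(10_000)]
for score in range(11):                # crude text histogram of the totals
    print(f"{score:2d} {'#' * (totals.count(score) // 100)}")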
Hi
On 25 Nov 2001, Herman Rubin wrote:
> If it is a good test, ability should predominate, and there is
> absolutely no reason for ability to even have close to a normal
> distribution. If one has two groups with different normal
> distributions, combining them will never get normality.
I think
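Herman Rubin's claim about combining groups can be checked with a small simulation (the 50/50 weights, the component means and the sample size are my own choices, only to illustrate): a half-and-half mixture of N(0,1) and N(3,1) is bimodal with clearly negative excess kurtosis, so the combined group is not normal.

import random, statistics
random.seed(0)

sample = [random.gauss(0, 1) if random.random() < 0.5 else random.gauss(3, 1)
          for _ in range(50_000)]
m = statistics.fmean(sample)
s = statistics.pstdev(sample)
excess_kurtosis = statistics.fmean([((x - m) / s) ** 4 for x in sample]) - 3
print(round(m, 2), round(s, 2), round(excess_kurtosis, 2))   # roughly 1.5, 1.8, -0.95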
On Tue, 27 Nov 2001, Thom Baguley wrote in part:
> Donald Burrill wrote:
> >
> > On Fri, 23 Nov 2001, L.C. wrote:
> >
> > > The question got me thinking about this problem as a
> > > multiple comparison problem. Exam scores are typically
> > > sums of problem scores. The problem scores may be
Donald Burrill wrote:
>
> On Fri, 23 Nov 2001, L.C. wrote:
>
> > The question got me thinking about this problem as a
> > multiple comparison problem. Exam scores are typically
> > sums of problem scores. The problem scores may be
> > thought of as random variables. By the central limit theor
> > distributions, but no. They diagnose by Z scores (thereby
> > defining their own prevalences :) and assert that they are
> > discovering diseases, and not punishing unusual people.
> >
> > Best Regards,
> > -Larry (And they get to testify in court) C.
>
> Hmm. This thread started o
>> defining their own prevalences :) and assert that they are
>> discovering diseases, and not punishing unusual people.
Anyone who converts data to normality, or even standardizes
variances, is using statistics as pure ritual.
There is often a justification for using procedures based
In article <[EMAIL PROTECTED]>,
L.C. <[EMAIL PROTECTED]> wrote:
>The question got me thinking about this problem as a
>multiple comparison problem. Exam scores are typically
>sums of problem scores. The problem scores may be
>thought of as random variables. By the central limit theorem,
>the distribution
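A minimal sketch of the central-limit intuition in L.C.'s post (the per-item difficulties are invented, and the items are treated as independent, which is exactly the assumption questioned elsewhere in the thread): the exam total is a sum of item scores, and with several roughly independent items of varying difficulty the total drifts toward a bell shape even though each individual item score is far from normal.

import random
random.seed(2)

difficulties = [0.3, 0.4, 0.5, 0.5, 0.6, 0.6, 0.7, 0.7, 0.8, 0.9]   # hypothetical P(correct) per item

def total_score():
    return sum(random.random() < p for p in difficulties)

totals = [total_score() for _ in range(10_000)]
for score in range(len(difficulties) + 1):     # crude text histogram of the totals
    print(f"{score:2d} {'#' * (totals.count(score) // 50)}")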
> continuous/discrete
> distributions, but no. They diagnose by Z scores (thereby
> defining their own prevalences :) and assert that they are
> discovering diseases, and not punishing unusual people.
>
> Best Regards,
> -Larry (And they get to testify in court) C.
Hmm. This thread star
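Larry's "defining their own prevalences" aside has a simple arithmetic core (the cutoffs below are only illustrative): if "abnormal" means "beyond a z cutoff", then the fraction flagged is fixed by the cutoff alone, whatever the actual condition of the people being scored.

from statistics import NormalDist      # standard-normal tail areas

for cutoff in (1.0, 1.645, 2.0):
    flagged = 1 - NormalDist().cdf(cutoff)
    print(f"z > {cutoff}: about {flagged:.1%} of a normal population is flagged")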
ost:
>> >
>> >For most mathematics / statistics examinations, the "answer" to a
>> >question is the
>> >*process* by which the student obtains the incidental final number or
>> >result.
>> >The result itself is most often just not that important to evaluating students' understanding or knowledge of the subject.
In article <[EMAIL PROTECTED]>,
dennis roberts <[EMAIL PROTECTED]> wrote:
>i would also like to again make a push for correct answers rising to
>approximately the same level of importance as process ... we just cannot take
>lightly the fact that when someone gets the wrong answer, saying that thi
the general problems evaluating students are how much time do you have for
(say) exams, what can be reasonably expected that students will be able to
do with that amount of time, what content can you examine on, and ... what
sort of formats do you opt for with your exams
in statistics
At 02:45 PM 11/18/01 -0700, Roy St Laurent wrote:
>Comments interspersed below...
>
>
>Sure, I wouldn't give a student full credit if their process was correct but
>their
>final result was wrong. But an answer that shows me they know the process
>but have the wrong final result is worth MUCH, MUC
> >For most mathematics / statistics examinations, the "answer" to a
> >question is the
> >*process* by which the student obtains the incidental final number or
> >result.
> >The result itself is most often just not that important to evaluating
> >students'
> >understanding or knowledge of the subject.
>>For most mathematics / statistics examinations, the "answer" to a
>>question is the
>>*process* by which the student obtains the incidental final number or
>>result.
>>The result itself is most often just not that important to evaluating
>>students'
>>understanding or knowledge of the subject.
> >For most mathematics / statistics examinations, the "answer" to a
> >question is the
> >*process* by which the student obtains the incidental final number or
> >result.
> >The result itself is most often just not that important to evaluating
> >students'
> >understanding or knowledge of the subject
>*process* by which the student obtains the incidental final number or
>result.
>The result itself is most often just not that important to evaluating
>students'
>understanding or knowledge of the subject. And therefore an unsupported
>
>or lucky answer is worth nothing.
the
The result itself is most often just not that important to evaluating
students'
understanding or knowledge of the subject. And therefore an unsupported
or lucky answer is worth nothing.
Stan Brown wrote:
> Jerry Dallal <[EMAIL PROTECTED]> wrote in sci.stat.edu:
> >Problem: Divide 95 by 19.
dennis roberts wrote:
> would we give full credit for 87/18 = 7/1 ... 8's cancel?
>
> >Full marks. As Napoleon used to ask, "Is he lucky?" :) He/she deserves it!
> >
> > --
> >John Kane
> >The Rideau Lakes, Ontario Canada
> >
Of course not. No sign of inspired luck just lou
would we give full credit for 87/18 = 7/1 ... 8's cancel?
>Full marks. As Napoleon used to ask, "Is he lucky?" :) He/she deserves it!
>
> --
>John Kane
>The Rideau Lakes, Ontario Canada
Jerry Dallal <[EMAIL PROTECTED]> wrote in sci.stat.edu:
>Problem: Divide 95 by 19.
>
>Student writes 95/19, 9's cancel, leaving 5/1 = 5 .
>How much credit do you award?
Perfect example!
--
Stan Brown, Oak Road Systems, Cortland County, New York, USA
http://oa
Jerry Dallal wrote:
> John Kane wrote:
>
> > Very true and I was being deliberately provocative. However I still cannot
> > see penalizing someone for getting the right answer no matter how arrived
> > at.
>
> Problem: Divide 95 by 19.
>
> Student writes 95/19, 9's cancel, leaving 5/1 = 5 .
Thom Baguley wrote:
>
> Alan McLean wrote:
> > This describes a BAD closed book exam. It also describes a bad open book
> > exam.
>
> Not entirely. I have found that many students still worry about such
> things regardless of the information they have about the exam.
>
> > A good one-hour exa
In article <[EMAIL PROTECTED]>, Carl Lee <[EMAIL PROTECTED]> wrote:
>Using introductory statistics as an example, concepts are built in a certain
>sequence. If students get lost at a certain stage, they will have difficulty
>connecting the later concepts together. Therefore, it is crucial to test
In article <[EMAIL PROTECTED]>,
Alan McLean <[EMAIL PROTECTED]> wrote:
>Herman Rubin wrote:
>> In article <[EMAIL PROTECTED]>,
>> Thom Baguley <[EMAIL PROTECTED]> wrote:
>> >Glen wrote:
>> >> As a student I *always* preferred closed book exams. If I know the
>> >> material I don't need the book,
the problem with any exam ... given in any format ... is the extent to
which you can INFER what the examinee knows or does not know from their
responses
in the case of recognition tests ... where precreated answers are given and
you make a choice ... it is very difficult to infer anything BUT
Alan McLean wrote:
> This describes a BAD closed book exam. It also describes a bad open book
> exam.
Not entirely. I have found that many students still worry about such
things regardless of the information they have about the exam.
> A good one-hour exam would have
> > three, or at most four
Herman Rubin wrote:
> >Yes. Also, closed book exams tend to be easier because the range of
> >questions is more restricted. I have found them a way to avoid
> >students spending most of their time memorizing near-useless material.
>
> On the contrary, closed book exams emphasize memorizing
> near
Students also confuse histograms with time series graphs. They describe
a graph as, for example, 'starting low, increasing then decreasing
again'. It's easy enough to see how they get this approach from their
school maths. It's much more difficult to get them to see a histogram as
rather more like
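A tiny illustration of the histogram-versus-time-series distinction (the data are made up): the same ten numbers read very differently as a sequence over time and as a frequency distribution.

from collections import Counter

data = [3, 7, 7, 8, 2, 7, 3, 8, 9, 3]      # hypothetical observations
print(list(enumerate(data)))               # time-series view: value against position
print(sorted(Counter(data).items()))       # histogram view: value against how often it occurs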
Using introductory statistics as an example, concepts are built in a certain
sequence. If students get lost at a certain stage, they will have difficulty
connecting the later concepts together. Therefore, it is crucial to test the
understanding of the connection (or relationship) among related concepts.
On Wed, 14 Nov 2001, Alan McLean wrote in part:
> Herman Rubin wrote:
> >
> > A good exam would be one which someone who has merely
> > memorized the book would fail, and one who understands
> > the concepts but has forgotten all the formulas would
> > do extremely well on.
>
> Since to underst
Herman Rubin wrote:
>
> In article <[EMAIL PROTECTED]>,
> Thom Baguley <[EMAIL PROTECTED]> wrote:
> >Glen wrote:
> >> As a student I *always* preferred closed book exams. If I know the
> >> material I don't need the book, and if I don't know the material,
> >> the book isn't going to help in the
John Kane wrote:
> Very true and I was being deliberately provocative. However I still cannot
> see penalizing someone for getting the right answer no matter how arrived
> at.
Problem: Divide 95 by 19.
Student writes 95/19, 9's cancel, leaving 5/1 = 5 .
How much credit do you award?
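For what it's worth, the reason Jerry Dallal's 95/19 is such a nice trap is that it is one of the rare anomalous cancellations: striking out the shared 9 happens to leave the value unchanged, while 87/18 does not survive the same trick. A small check (the helper below is mine, only for illustration):

from fractions import Fraction

def lucky_cancel(numer, denom):
    # True if crossing out the numerator's tens digit against the
    # denominator's units digit leaves the fraction's value unchanged.
    n1, n2 = divmod(numer, 10)   # 95 -> 9, 5
    d1, d2 = divmod(denom, 10)   # 19 -> 1, 9
    return n1 == d2 and Fraction(numer, denom) == Fraction(n2, d1)

print(lucky_cancel(95, 19))   # True:  95/19 == 5/1 == 5
print(lucky_cancel(87, 18))   # False: 87/18 != 7/1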
===
>apply correct statistical techniques correctly in the real world.
>How do we know? How can we do a better job of evaluating students
>than merely setting and marking written timed exams?
We can make part of the exam a take-home exam. We can
allow calculators, and in the near future we are likely to
be able to allow co
In article <[EMAIL PROTECTED]>,
Thom Baguley <[EMAIL PROTECTED]> wrote:
>Glen wrote:
>> As a student I *always* preferred closed book exams. If I know the
>> material I don't need the book, and if I don't know the material,
>> the book isn't going to help in the exam enough anyway. For open
>Yes
Herman Rubin wrote:
> In article <[EMAIL PROTECTED]>,
> John Kane <[EMAIL PROTECTED]> wrote:
> >Herman Rubin wrote:
>
> >> In article <[EMAIL PROTECTED]>,
> >> John Kane <[EMAIL PROTECTED]> wrote:
> >> >Stan Brown wrote:
>
> >> >> Herman Rubin <[EMAIL PROTECTED]> wrote in sci.stat.edu:
> >> >>
.
>
> And this brings me back to why I actually posted my query: how do we
> evaluate students who do poorly on exams but may in fact be able to
> do well in real-world situations where they must use the material?
> Saying it another way, students AA and BB both answered questions
>
Glen wrote:
> As a student I *always* preferred closed book exams. If I know the
> material I don't need the book, and if I don't know the material,
> the book isn't going to help in the exam enough anyway. For open
Yes. Also, closed book exams tend to be easier because the range of
questions is more restricted. I have found them a way to avoid
students spending most of their time memorizing near-useless material.
(or both) may be quite likely to
apply correct statistical techniques correctly in the real world.
How do we know? How can we do a better job of evaluating students
than merely setting and marking written timed exams?
--
Stan Brown, Oak Road Systems, Cortland County, New York, USA
In article <[EMAIL PROTECTED]>,
John Kane <[EMAIL PROTECTED]> wrote:
>Herman Rubin wrote:
>> In article <[EMAIL PROTECTED]>,
>> John Kane <[EMAIL PROTECTED]> wrote:
>> >Stan Brown wrote:
>> >> Herman Rubin <[EMAIL PROTECTED]> wrote in sci.stat.edu:
>> >> >Test for understanding, not for imitation of robots.
Stan Brown wrote:
> Herman Rubin <[EMAIL PROTECTED]> wrote in sci.stat.edu:
> >Test for understanding, not for imitation of robots. Give
> >a few multi-part problems, and be sure to give partial credit.
>
> Excellent advice. I do (try to) test for understanding, by posing
> problems in real-worl
In article <[EMAIL PROTECTED]>,
Gus Gassmann <[EMAIL PROTECTED]> wrote:
>"J. Williams" wrote:
>> When I taught undergraduate statistics in a previous lifetime, I would
>> distribute copies of the mid-term and final examinations minus the
>> data sets one week prior. Students could study the act
I had to comment on the thread. I've been involved in teaching since 1958
and have taught at many levels (maybe too many). I tried the open book
approach and believed at one time it was a good method but I always wondered
if it really was the best way to go. I tried take-home exams but was
Jerry Dallal <[EMAIL PROTECTED]> wrote in sci.stat.edu:
>I *do*
>allow one sheet of notes (both sides) for each exam. They're
>cumulative. At any exam, students may bring the sheets for all
>previous exams plus a new one for the current exam.
>
>Students report learning as much if not more from preparing what they call "cheat sheets" (I refer to them as "reference notes") than from any other class activity.
Herman Rubin <[EMAIL PROTECTED]> wrote in sci.stat.edu:
>Test for understanding, not for imitation of robots. Give
>a few multi-part problems, and be sure to give partial credit.
Excellent advice. I do (try to) test for understanding, by posing
problems in real-world terms and seeing if the stu
Gus Gassmann <[EMAIL PROTECTED]> wrote in sci.stat.edu:
>I much prefer Herman Rubin's suggestion
>of open book, open notes. The problem I have encountered quite
>frequently, however, is that many students don't bother to study,
>because they "can always look it up during the exam". This creates
>e
J. Williams wrote in
sci.stat.edu:
>When I taught undergraduate statistics in a previous lifetime, I would
>distribute copies of the mid-term and final examinations minus the
>data sets one week prior. Students could study the actual exam
>together, apart, or however best fit their mode.
This
Jerry Dallal <[EMAIL PROTECTED]> wrote in message
news:<[EMAIL PROTECTED]>...
> Students report learning as much if not more from preparing what
> they call "cheat sheets" (I refer to them as "reference notes") than
> from any other class activity. I had one PhD student tell me last
> year that