I sometimes teach two sections of the same class.  I always like to compare the stats 
for them to see if my instruction is the same. 

My college offers developmental studies in both lecture and computer-mediated formats.  I was 
curious, when I taught a lecture class and a computer-mediated class one semester, whether the 
stats would match up (since the computer does most of the instruction, not me).  The stats 
didn't match at the end of the semester.  It was the same course and material, but 
different delivery methods.  I only have the data for the courses I taught, but my 
college is collecting the data for all classes over a few years so a comparison can be 
made.

For the same two sections (that I teach) that are both lecture, the stats are closer, but 
never the same.  Student learning is definitely multivariate.  :-)  And I have learned 
to "read" the students to see how to approach the teaching.  Some classes laugh at my 
jokes; others don't.  If the jokes aren't getting responses by the second week, then I 
throw the rest of them out the window for that class.  This semester I have two stat 
classes that are totally different.  One class is really easygoing and the other is 
serious.  So I change my teaching style to match, and the stats for the first test were 
really close.  I'm looking forward to the stats on the second test.  I find it isn't 
always how students learn but how flexible our teaching style is. 
:-)


SR Chandler
Mathematics Faculty
TCC - Moss Campus
[EMAIL PROTECTED]
http://onlinelearning.tcc.vccs.edu/faculty/tcchans/
--------------
"Mathematics is the alphabet with which God has written the universe." -- Galileo 
Galilei (1564-1642)


>>> [EMAIL PROTECTED] 10/02/01 06:45AM >>>

edstat-digest        Tuesday, October 2 2001        Volume 2000 : Number 520
Date: Mon, 01 Oct 2001 14:33:53 -0300
From: Gus Gassmann <[EMAIL PROTECTED]>
Subject: Re: They look different; are they really?

Stan Brown wrote:

> Another instructor and I gave the same exam to our sections of a
> course. Here's a summary of the results:
>
> Section A: n=20, mean=56.1, median=52.5, standard dev=20.1
> Section B: n=23, mean=73.0, median=70.0, standard dev=21.6
>
> Now, they certainly _look_ different. (If it's of any value I can
> post the 20+23 raw data.) If I treat them as samples of two
> populations -- which I'm not at all sure is valid -- I can compute
> 90% confidence intervals as follows:
>
> Class A: 48.3 < mu < 63.8
> Class B: 65.4 < mu < 80.9
>
> As I say, I have major qualms about whether this computation means
> anything. So let me pose my question: given the two sets of results
> shown earlier, _is_ there a valid statistical method to say whether
> one class really is learning the subject better than the other, and
> by how much?

Before you jump out of a window, you should ask yourself if there
is any reason to suspect that the samples should be homogeneous
(assuming equal learning). Remember that the students are often
self-selected into the sections, and the reasons for selecting one
section over the other may well be correlated with learning styles
and/or scholastic achievements.
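
[For what it's worth: with that caveat about self-selection firmly in mind, the
usual formal comparison for the two summaries Stan posted would be a two-sample
t-test.  Below is a minimal sketch in Python, using Welch's version (which does
not assume equal variances) computed directly from the posted summary
statistics; this is an illustration, not something from the original thread.]

```python
import math

# Summary statistics as posted in the thread.
n_a, mean_a, sd_a = 20, 56.1, 20.1   # Section A
n_b, mean_b, sd_b = 23, 73.0, 21.6   # Section B

# Welch's t statistic: difference in means over its estimated standard error.
se = math.sqrt(sd_a**2 / n_a + sd_b**2 / n_b)
t = (mean_a - mean_b) / se

# Welch-Satterthwaite approximation for the degrees of freedom.
num = (sd_a**2 / n_a + sd_b**2 / n_b) ** 2
den = ((sd_a**2 / n_a) ** 2 / (n_a - 1)
       + (sd_b**2 / n_b) ** 2 / (n_b - 1))
df = num / den

print(f"t = {t:.3f}, df = {df:.1f}")
```

A |t| near 2.7 on about 40 degrees of freedom would be conventionally
"significant" -- but, as noted above, that only tells you the section means
differ, not that the teaching caused the difference.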

-------------------------------------------------------

gus gassmann          ([EMAIL PROTECTED])

"When in doubt, travel."


Remove NOSPAM in the reply-to address




=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
                  http://jse.stat.ncsu.edu/
=================================================================

