Hi

On 24 Apr 2001, Mark W. Humphries wrote:
> I concur. As I mentioned at the start of this thread, I am "self-learning"
> statistics from books. I have difficulty telling what is being taught as
> necessary theoretical 'scaffolding' or 'superseded procedures', and what one
> would actually apply in a realistic case. I would love a textbook which
> walks through a realistic analysis step by step, while providing the
> 'theoretical scaffolding' as insets within this flow. It's frustrating to
> read 50 pages only to find that 'one never actually does it this way'.

My gut feeling is that this would be a terribly confusing way to
_teach_ anything.  Students would start with a (relatively)
advanced procedure and at various points would have to be taken
aside for lessons on sampling distributions, probability, and the
like, and then somehow be brought back to the flow of the current
lesson.
There is a logic to the way that statistics is developed in most
intro texts (although some people might not agree with that logic
in the absence of a direct empirical test of its efficacy).  It
would be an interesting study of course, and not that difficult
to set up with some hypertext-like instruction.  Students could
be led through the material in a hierarchical manner or entered
at some upper level with recursive links to foundational
material.  We might find some kind of interaction, with better
students doing OK by either procedure (and perhaps preferring the
latter) and weaker students doing OK by the hierarchical
procedure but not by the unstructured (for want of a better word)
method.  At least, that is my prediction.

Start of Dennis's comments (I believe)

> the problem with all these details is that ... the quality of data we get
> and the methods we use to get it ... PALE^2 in comparison to what such
> methods might tell us IF everything were clean
> 
> DATA ARE NOT CLEAN!
> 
> but, we prefer it seems to emphasize all this minutiae .. rather than spend
> much much more time on formulating clear questions to ask and, designing
> good ways to develop measures and collect good data

I for one was not saying anything at all about how much time was
spent on various topics.  And it seems likely to me that more
effective methods of instruction (for whatever) leave more time
for other material, and not less.

> we pay NO attention to whether some measure we use provides us with
> reliable data ...
> 
> the lack of random assignment in even the simplest of experimental designs
> ... seems to cause barely a whimper

Speak for yourself.  How can you know what else is done in a
class from a narrow discussion of how best to teach one
particular component?

> we pound statistical significance into the ground when, it has such LIMITED
> application

I think that reading the scientific literature would disabuse one
of the notion that statistical significance has limited
application.  My students tell me that learning about statistical
inference greatly increases their capacity to read the primary
literature.  Perhaps it is different in your discipline.

> but yet, we get in a tizzy (me too i guess) and fight tooth and nail over
> such silly things as should we start the discussion of hypothesis testing
> for a mean with z or t? WHO CARES? ... the difference is trivial at best

Perhaps the people who don't care shouldn't get involved in the
discussion.  Again, you seem to be drawing some pretty broad
inferences from a discussion of one topic on a list that is
dedicated to teaching statistics.
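For what it is worth, the numerical difference under discussion is
easy to show.  A minimal sketch in Python (using scipy, and assuming
a two-sided test at alpha = .05) comparing the z critical value with
t critical values at a few sample sizes:

```python
from scipy.stats import norm, t

alpha = 0.05
z_crit = norm.ppf(1 - alpha / 2)            # two-sided z critical value
print(f"z:          {z_crit:.3f}")          # about 1.960

for n in (10, 30, 100):
    t_crit = t.ppf(1 - alpha / 2, df=n - 1)
    print(f"t (n={n:3d}): {t_crit:.3f}")    # shrinks toward z as n grows
```

By n = 30 the two critical values differ by less than 0.09, which is
the sense in which the z-versus-t choice is numerically trivial for
moderate samples; whether it matters pedagogically is, of course, the
separate question being argued here.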

> in the overall process of research and gathering data ... the process of
> analysis is the LEAST important aspect of it ... let's face it ... errors
> that are made in papers/articles/research projects are rarely caused by
> faulty analysis applications ... though sure, now and then screw ups do
> happen ...

Perhaps that is because students learned those techniques well.  
Nor are matters of statistical analysis independent of good
research design.  A number of aspects of design follow from an
understanding of statistical tests, such as: the importance of
sample size, minimizing noise in the study (e.g., standard
testing procedures, homogeneous samples), and having a
sufficiently powerful manipulation of the predictor variable.
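To make the sample-size point concrete with one hypothetical
calculation: for a one-sample z-test of a mean, power follows
directly from the sampling distribution, so the n needed for
adequate power can be worked out before any data are collected.  A
sketch in plain Python (standard library only), assuming a two-sided
test at alpha = .05 and a standardized effect size d:

```python
from math import sqrt
from statistics import NormalDist

def z_test_power(d, n, alpha=0.05):
    """Approximate power of a two-sided one-sample z-test for a
    standardized effect size d and sample size n (the negligible
    far-tail rejection region is ignored)."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return 1 - NormalDist().cdf(z_crit - d * sqrt(n))

# A medium effect (d = 0.5) needs roughly n = 32 for 80% power:
print(round(z_test_power(0.5, 32), 2))   # -> 0.81
```

This is exactly the kind of design decision that presupposes an
understanding of the statistical test itself.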

> the biggest (by a light year) problem is bad data ... collected in a bad
> way ... hoping to chase answers to bad questions ... or highly overrated
> and/or unimportant questions
> 
> NO analysis will salvage these problems ... and to worry and agonize over z
> or t ... and a hundred other such things is putting too much weight on the
> wrong things
> 
> AND ALL IN ONE COURSE TOO! (as some advisors are hoping is all that their
> students will EVER have to take!)

Then it would seem that your argument should be with the people
in your area who have this naive expectation.  In psychology,
undergraduate students will get a number of courses on data
analysis and research methods, depending in part on whether they
are majors or honours students.  So I have the luxury of
focussing on statistics, for example, knowing that other courses
will build on that foundation (e.g., talking about reliability,
experimental design, ...).  Of course, students in statistics
will get an even deeper exposure.

Best wishes
Jim

============================================================================
James M. Clark                          (204) 786-9757
Department of Psychology                (204) 774-4134 Fax
University of Winnipeg                  4L05D
Winnipeg, Manitoba  R3B 2E9             [EMAIL PROTECTED]
CANADA                                  http://www.uwinnipeg.ca/~clark
============================================================================



=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
                  http://jse.stat.ncsu.edu/
=================================================================
