On 24 Apr 2001, Mark W. Humphries wrote:
>> I concur. As I mentioned at the start of this thread, I am
>> "self-learning" statistics from books. I have difficulty telling what
>> is being taught as necessary theoretical 'scaffolding' or 'superseded
>> procedures', and what one would actually apply in a realistic case. I
>> would love a textbook which walks through a realistic analysis step by
>> step, while providing the 'theoretical scaffolding' as insets within
>> this flow. It's frustrating to read 50 pages only to find that 'one
>> never actually does it this way'.

>Jim Clark responded:
>My gut feeling is that this would be a terribly confusing way to
>_teach_ anything.  Students would be started with a (relatively)
>advanced procedure and at various points have to be taken aside
>for lessons on sampling distributions, probability, whatever, and
>then brought back somehow to the flow of the current lesson.
>There is a logic to the way that statistics is developed in most
>intro texts (although some people might not agree with that logic
>in the absence of a direct empirical test of its efficacy).  It
>would be an interesting study of course, and not that difficult
>to set up with some hypertext-like instruction.  Students could
>be led through the material in a hierarchical manner or entered
>at some upper level with recursive links to foundational
>material.  We might find some kind of interaction, with better
>students doing Ok by either procedure (and perhaps preferring the
>latter) and weaker students doing Ok by the hierarchical
>procedure but not the unstructured (for want of a better word)
>method.  At least, that is my prediction.

[snip]

You're likely right. Currently, as I learn each new concept or statistical
procedure, I test my understanding by writing small snippets of code (in
awk, would you believe). I get perplexed when I come across descriptions
which seem heuristic rather than algorithmic. For example, I just started
the chapter on the analysis of category data. The description of the
chi-squared statistic ends with "The approximation is very good provided
all expected cell frequencies are 5 or greater. This is a conservative
rule, and even smaller expected frequencies have resulted in good
approximations." Such a statement makes me wonder whether modern
statistical methods actually use this particular approximation-cum-heuristic,
or whether there is a more 'definite' algorithm.
Am I learning 'real world' statistics, or a sanitized textbook version? And
how can I tell? :)
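
For what it's worth, the kind of snippet I mean is easy to show in Python
(my awk versions look much the same). Below is a rough sketch, not any
textbook's official algorithm: it computes Pearson's chi-squared statistic
for a made-up contingency table and then applies the book's "expected
frequency >= 5" rule of thumb to flag cells where the approximation might
be suspect. The table values are invented for illustration.

```python
def chi_squared(table):
    """Pearson's chi-squared statistic for a contingency table.

    Returns (statistic, expected) where expected[i][j] is the
    expected cell frequency under independence: row_i * col_j / total.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    expected = [[rt * ct / total for ct in col_totals] for rt in row_totals]
    # Sum of (observed - expected)^2 / expected over all cells.
    stat = sum((o - e) ** 2 / e
               for obs_row, exp_row in zip(table, expected)
               for o, e in zip(obs_row, exp_row))
    return stat, expected

# Invented 2x3 table of observed counts, purely for illustration.
observed = [[10, 20, 30],
            [ 6, 15, 19]]
stat, expected = chi_squared(observed)
print(f"chi-squared = {stat:.4f}")

# The textbook's rule of thumb: the chi-squared approximation to the
# sampling distribution may be poor in cells whose expected frequency
# is below 5.
small = [(i, j) for i, row in enumerate(expected)
                for j, e in enumerate(row) if e < 5]
print("cells with expected frequency < 5:", small or "none")
```

When a cell does fall below 5, my understanding is that the usual advice is
to pool categories or switch to an exact test rather than trust the
approximation, which is exactly the heuristic/algorithmic boundary that
puzzles me.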

Cheers,
 Mark



=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
                  http://jse.stat.ncsu.edu/
=================================================================
