At 12:16 PM 3/22/01 -0700, Harold W Kerster wrote:
> Maybe the most common mistake is omission of graphic eye-balling.

Another common error is drawing inferences from graphs!  (re: P. Swank's comments below)

In particular, I think that using graphs to check normality based on small samples is
just as questionable as formal tests.  I have an exercise that I use with my classes to
illustrate this: I randomly generate nine data sets of n=10 observations from a normal
population and make a histogram for each set.  I give the nine graphs to the students
and tell them that they represent samples from nine different populations.  The students
are then asked to identify which of the nine populations are normal.  Of the 70 students
I have tried this with so far, only two have seen through my ploy and correctly picked
all nine.  The rest have selected no more than two of the nine as coming from normal
populations.  Even my faculty colleagues have been tricked!
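
For anyone who wants to reproduce the exercise, here is a rough sketch in Python
(numpy and matplotlib here are just one possible setup for illustration; any stats
package will do):

    # Nine samples of n=10, all drawn from the SAME normal population,
    # shown as histograms for students to judge by eye.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(2001)      # fixed seed, purely for reproducibility
    fig, axes = plt.subplots(3, 3, figsize=(9, 9))

    for i, ax in enumerate(axes.flat, start=1):
        sample = rng.normal(loc=50, scale=10, size=10)  # mean and SD are arbitrary
        ax.hist(sample, bins=5)                         # few bins, since n is small
        ax.set_title(f"Population {i}")

    fig.suptitle("Which of these samples come from normal populations?")
    fig.tight_layout()
    plt.show()

With only ten observations per panel, the histograms come out lumpy or skewed often
enough that most viewers are convinced several of the "populations" cannot be normal.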

Rich Einsporn
U. of Akron


> >On Thu, 22 Mar 2001, Paul Swank wrote:
> >
> >> I couldn't help wanting to add my own 2 cents to the discussion about
> >> statistical errors because I have always thought that people put too much
> >> faith in formal tests of assumptions. When the tests of assumptions are
> >> most sensitive to violations is when they are of less concern, when the
> >> sample size is large. When the ramifications of violating assumptions are
> >> greatest, when samples are small, the tests have no power to detect
> >> violations. There is no substitute for examining your data. If the data
> >> are badly skewed, you don't need a normality test to tell you that, a
> >> simple histogram will do it.
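
Paul's point about power is easy to demonstrate with a small simulation.  The sketch
below uses Python with scipy's Shapiro-Wilk test; the choice of test, software, and
the mildly non-normal t(5) population are all just assumptions for illustration:

    # Rough power check: how often does Shapiro-Wilk reject normality for
    # mildly heavy-tailed data (t with 5 df) at various sample sizes?
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    reps = 1000
    alpha = 0.05

    for n in (10, 30, 100, 1000):
        rejections = 0
        for _ in range(reps):
            sample = rng.standard_t(df=5, size=n)   # non-normal, but not obviously so
            w, p = stats.shapiro(sample)
            if p < alpha:
                rejections += 1
        print(f"n = {n:4d}: normality rejected in {rejections / reps:.0%} of samples")

In runs like this, the rejection rate at n = 10 tends to sit not far above the nominal
5% and climbs steadily as n grows, which is exactly Paul's point: the test speaks up
mainly when the violation matters least.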


