Dear Yetta,
 
Thanks for the comments, and I agree with you. There is a functional relationship between sample size and statistical power: power increases as n increases. It's true that it is hard to define how powerful is "too powerful". Some people suggest using a lower significance level for large n, but this raises another question: how low (e.g., 0.0000001) is low enough? Others suggest not using the p-value at all, as mentioned in the summary.
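
Just to make the point concrete, here is a quick sketch in Python/SciPy (my own illustration, not from the summary; the mildly skewed data and the choice of D'Agostino's K^2 test are only assumptions for the example):

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

for n in (50, 5_000, 500_000):
    # Nearly normal data: standard normal plus a small exponential component
    # (skewness is only about 0.05).
    x = rng.normal(size=n) + 0.3 * rng.exponential(size=n)
    stat, p = stats.normaltest(x)  # D'Agostino-Pearson K^2 normality test
    print(f"n = {n:7d}   p-value = {p:.2e}")

# Typically the smaller samples pass the test comfortably, while the largest
# sample rejects normality decisively, even though the departure from
# normality is practically trivial.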
 
It is also a question how serious the consequences are if the data set does not follow a normal distribution. Statisticians can provide us with artificial examples showing how serious it can be, but in the real world it may not matter much if it is only a minor departure. Some people even say that classical statistical methods cannot be used at all, because our samples are not independent due to spatial autocorrelation. Well, perhaps I have gone too far, but it is an interesting topic. (Geo)statisticians may have better comments.
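
For what it's worth, a rough simulation along these lines (again just a sketch of my own, with an arbitrarily chosen mildly skewed population) suggests the ordinary t-test is not badly hurt by a minor departure from normality:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps, alpha = 30, 20_000, 0.05

# Mildly skewed population with true mean 0: a gamma(10) variable shifted to
# zero mean (skewness about 0.63).
X = rng.gamma(shape=10.0, size=(reps, n)) - 10.0

# Two-sided one-sample t-test of H0: mean = 0 in every replicate row.
_, p = stats.ttest_1samp(X, popmean=0.0, axis=1)

print(f"Empirical type I error: {np.mean(p < alpha):.3f} (nominal {alpha})")

# In runs like this the empirical rejection rate usually lands within a
# percentage point or so of the nominal 0.05. Gross non-normality, or samples
# that are spatially correlated rather than independent, are another matter.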
 
By the way, I may not post another summary. If anyone would like to share ideas, please copy the list in your reply.
 
Cheers,
 
Chaosheng
 
 
----- Original Message -----
From: "zij" <[EMAIL PROTECTED]>
To: "ai-geostats" <[EMAIL PROTECTED]>; "Chaosheng Zhang" <[EMAIL PROTECTED]>
Sent: Monday, August 11, 2003 7:13 PM
Subject: RE: AI-GEOSTATS: Summary: Large sample size and normal distribution

> Hi,
>
> I'm not sure I agree with the idea that a test can be too powerful.  This is a
> common argument in simulation experiments, that because you can do an infinite
> number of replicate simulations, somehow the differences detected are not
> real.  In fact, the differences are real.  They may not be biologically (or
> geologically or whatever field you are in) significant, but they are still
> real.  That is why it is better to decide first on the magnitude of difference
> that you consider significant.  Now, in the case of deviation from normality,
> I suppose you wouldn't have much intuition about what is significant, but the
> relevant question is what is the effect of small deviations from normality on
> your test or conclusions of your analysis?  These kinds of studies are out
> there in the statistical literature for many tests (t-tests, etc.) -- I'm not
> sure how much has been done to look at the robustness of geostatistical
> analyses, but there are probably some studies (does anyone know?). I would not
> opt for a less-powerful test just to justify an assumption - that's, like,
> unethical or something.
>
> Yetta
>
>
>
