At 05:38 PM 10/17/00 -0700, David Heiser wrote:
The 5% is a historical artifact, the result of statistics being invented
before electronic computers were.
an artifact is some anomaly of the data ... but how could 5% be considered
an artifact DUE to the lack of electronic computers?
Members,
I am currently drafting a proposal for an Applied Statistics minor at my
college, Northland College, a private liberal arts/environmental college in
northern Wisconsin. I teach in a mathematics department of four but am the
primary professor of our statistics curriculum. As such, it
Many posters to this thread have used the phrase "practical
significance". I find it only confuses things. Just so all of us are
clear on what we're talking about, might we restrict ourselves to
the terms "statistical significance" and "practical importance"?
San wrote:
When we analyze data to decide whether the difference in means between
two populations is nonzero, which method is generally better: a
hypothesis test or a confidence interval?
Confidence interval.
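A minimal Python sketch (data invented for illustration) of why the
interval is the more informative answer: the 95% CI gives the same
reject/don't-reject decision as the two-sided t-test, and also shows
how large the difference plausibly is.

import numpy as np
from scipy import stats

a = np.array([5.1, 4.8, 5.6, 5.0, 5.3])   # invented sample 1
b = np.array([4.2, 4.5, 4.0, 4.7, 4.4])   # invented sample 2

t, p = stats.ttest_ind(a, b)               # pooled-variance t-test
n1, n2 = len(a), len(b)
diff = a.mean() - b.mean()
sp2 = ((n1 - 1)*a.var(ddof=1) + (n2 - 1)*b.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1/n1 + 1/n2))
tcrit = stats.t.ppf(0.975, n1 + n2 - 2)
print(f"p = {p:.4f}; 95% CI: ({diff - tcrit*se:.2f}, {diff + tcrit*se:.2f})")
# p < .05 exactly when the CI excludes 0, but the CI also shows magnitude.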
Bruce Weaver wrote:
1. There is at least one discipline out there in which a bunch of
Bonferroni t-tests ARE known as the LSD approach.
2. The authors are in error.
Comments, anyone?
Very odd. I'd lean to 2. I've only ever come across LSD as t-tests
following an omnibus ANOVA.
Robert J. MacG. Dawson wrote:
Well, yes, I am assuming that it's enforceable. Any evidence that it
isn't, apart from the fact that a lot of people would _like_ it not to
be? Myself included, but cussing at a busted straight won't fill it.
I'm not assuming that US law applies, though
In article [EMAIL PROTECTED],
Thom Baguley [EMAIL PROTECTED] wrote:
You can get important significant effects, unimportant significant
effects, important non-significant effects and unimportant
non-significant effects.
I'll go for three out of four of these. But "important non-significant
--- Radford Neal wrote:
In article [EMAIL PROTECTED],
Thom Baguley [EMAIL PROTECTED] wrote:
You can get important significant effects, unimportant significant
effects, important non-significant effects and unimportant
non-significant effects.
I'll go for three out of four of these. But
"Richard M. Barton" wrote:
--- Radford Neal wrote:
In article [EMAIL PROTECTED],
Thom Baguley [EMAIL PROTECTED] wrote:
You can get important significant effects, unimportant significant
effects, important non-significant effects and unimportant
non-significant effects.
I'll go
In article 8sill5$gvf$[EMAIL PROTECTED],
[EMAIL PROTECTED] wrote:
In article [EMAIL PROTECTED],
[EMAIL PROTECTED] (Robert J. MacG. Dawson) wrote:
Fair enough: but I would argue that the right question is rarely "if
there were no effect whatsoever,
In article [EMAIL PROTECTED],
Jerry Dallal [EMAIL PROTECTED] wrote:
Many posters to this thread have used the phrase "practical
significance". I find it only confuses things. Just so all of us are
clear on what we're talking about, might we restrict ourselves to
the terms "statistical
In article [EMAIL PROTECTED],
[EMAIL PROTECTED] (dennis roberts) wrote:
thus, the idea is that 5% and/or 1% were "chosen" due to the tables
that were available, and not some logical reasoning for these values?
i don't see any logic to the notion that 5% and/or 1% ... have any special
nor
Thom Baguley [EMAIL PROTECTED] wrote:
You can get important significant effects, unimportant significant
effects, important non-significant effects and unimportant
non-significant effects.
Radford Neal wrote:
I'll go for three out of four of these. But "important non-significant
In article [EMAIL PROTECTED],
Thom Baguley [EMAIL PROTECTED] wrote:
Robert J. MacG. Dawson wrote:
[EMAIL PROTECTED] wrote:
In article [EMAIL PROTECTED],
Jerry Dallal [EMAIL PROTECTED] wrote:
(1) statistical significance usually is unrelated to practical
importance.
I don't
In article [EMAIL PROTECTED],
Jerry Dallal [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
I said before, I don't think this can be seen as a problem with
hypothesis testing; but it is a matter for hypothesis *testers*.
Nothing wrong with this, but it might be a good time to review
As a statistically ignorant engineer I have a problem in establishing an
Upper limit based on a restricted set of data samples. The problem is
described below:

An infinite, normally distributed population is sampled and N samples
are collected with mean xbar and variance s^2.

What Upper
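The question is truncated above, but if what is wanted is a one-sided
upper tolerance limit (a bound xbar + k*s that covers a proportion p of
the normal population with confidence gamma), the exact factor k comes
from the noncentral t distribution. A Python sketch under that
assumption (the function name and example numbers are mine):

import numpy as np
from scipy import stats

def upper_tolerance_limit(xbar, s, N, p=0.95, gamma=0.95):
    """Upper bound covering a fraction p of a normal population,
    with confidence gamma, from N samples with mean xbar and sd s."""
    zp = stats.norm.ppf(p)                        # population quantile
    k = stats.nct.ppf(gamma, df=N - 1, nc=zp * np.sqrt(N)) / np.sqrt(N)
    return xbar + k * s

# e.g. N = 10 samples with xbar = 100, s = 5 (k is about 2.91 here):
print(upper_tolerance_limit(100.0, 5.0, 10))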
Radford Neal wrote:
I'll go for three out of four of these. But "important non-significant
effects"?
Perhaps what Thom means is that getting nonsignificant effects can be an
important finding. If the research was conducted under conditions for which
power would be great (say 95%) even for the
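A sketch of the power calculation behind this argument, using
statsmodels (effect size and group size are illustrative): if a study
has 95% power to detect the smallest effect anyone would care about,
a non-significant result is itself informative.

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
power = analysis.power(effect_size=0.5,    # smallest effect of interest (d)
                       nobs1=105, ratio=1.0, alpha=0.05,
                       alternative='two-sided')
print(f"power = {power:.3f}")              # about 0.95 with 105 per group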
The origins of the silly .05 criterion of statistical significance are
discussed in the article:
Cowles, M., & Davis, C. (1982). On the origins of the .05 level of
statistical significance. American Psychologist, 37, 553-558.
I suggest that we not use the phrase "LSD" to describe the "protected t
test," or "Fisher's procedure" (the procedure that requires having first
obtained a significant omnibus ANOVA effect). After all, one can compute a
"least significant difference" (between means to be "significant" at an
I've often been called upon to do a t-test with 5 animals in one group
and 4 animals in the other. The power is abysmally low and rarely do I
get a p less than 0.05. One of the difficulties that medical researchers
have is with the notion of power and concomitant sample size. I make it
a point of
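A quick illustration (numbers mine, via statsmodels) of just how low
the power is with groups of 5 and 4, even for a large effect:

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Power of a two-sided .05 test with n1 = 5, n2 = 4 and Cohen's d = 1.0:
print(analysis.power(effect_size=1.0, nobs1=5, ratio=4/5, alpha=0.05))
# ... well under 0.5.

# Per-group size needed for 80% power at d = 1.0 (roughly 17):
print(analysis.solve_power(effect_size=1.0, power=0.8, alpha=0.05, ratio=1.0))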