Herman Rubin wrote:
> and until recently,
> scientists believed that their models could be exactly right.
but, as you wrote in another context
--
3 Oct 1998 08:07:23 -0500;
Message-ID: 6v57ib$[EMAIL PROTECTED]
"Normality is rarely a tenable hypothesis. Its usefulness as a means
of der
In article <[EMAIL PROTECTED]>,
Robert J. MacG. Dawson <[EMAIL PROTECTED]> wrote:
>[EMAIL PROTECTED] wrote (in part):
> > I'm saying that the entire concept of practical significance is not only
> > subjective, but limited to the extent of current knowledge. You may
> > regard a 0.01% effect at th
On Thu, 19 Oct 2000 [EMAIL PROTECTED] wrote:
> In article <[EMAIL PROTECTED]>,
> Peter Lewycky <[EMAIL PROTECTED]> wrote:
> > I've often been called upon to do a t-test with 5 animals in one
> > group and 4 animals in the other. The power is abysmally low and
> > rarely do I get a p less than
Thom Baguley wrote:
>
> Robert J. MacG. Dawson wrote:
>
> > [EMAIL PROTECTED] wrote:
> > >
> > > In article <[EMAIL PROTECTED]>,
> > > Jerry Dallal <[EMAIL PROTECTED]> wrote:
> > >
> > > > (1) statistical significance usually is unrelated to practice
> > > > importance.
> > >
> > > I don't thi
[EMAIL PROTECTED] wrote:
>
> In article <[EMAIL PROTECTED]>,
> Jerry Dallal <[EMAIL PROTECTED]> wrote:
> > [EMAIL PROTECTED] wrote:
> > >
> > > I
> > > said before, I don't think this can be seen as a problem with hypothesis
> > > testing; but it is a matter for hypothesis *testers*.
> >
> > No
>This has got to be one of the funniest things I have read on a stats
>newsgroup. I'm sure it's not really meant to be funny, but the thought
>of truckloads upon truckloads of rats arriving to satisfy power
>requirements puts a highly amusing spin on the whole thing. :)
>I am stifling an insane cac
I've often been called upon to do a t-test with 5 animals in one group
and 4 animals in the other. The power is abysmally low and rarely do I
get a p less than 0.05. One of the difficulties that medical researchers
have is with the notion of power and concomitant sample size. I make it
a point of c
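To put a number on "abysmally low": here is a minimal sketch (mine, not
Peter's; the effect sizes are illustrative assumptions) that computes the
power of a two-sided two-sample t-test with n1 = 5 and n2 = 4 from the
noncentral t distribution. Even very large standardized effects are
detected well under half the time.

import numpy as np
from scipy import stats

def t_test_power(d, n1, n2, alpha=0.05):
    """Power of a two-sided two-sample t-test for standardized effect d."""
    df = n1 + n2 - 2
    nc = d * np.sqrt(n1 * n2 / (n1 + n2))        # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # probability of rejection under the noncentral-t alternative
    return (1 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)

for d in (0.5, 1.0, 1.5):
    print(f"standardized effect d = {d}: power = {t_test_power(d, 5, 4):.2f}")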
In article <[EMAIL PROTECTED]>,
Jerry Dallal <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] wrote:
> >
> > I
> > said before, I don't think this can be seen as a problem with hypothesis
> > testing; but it is a matter for hypothesis *testers*.
>
> Nothing wrong with this, but it might be a good ti
In article <[EMAIL PROTECTED]>,
Thom Baguley <[EMAIL PROTECTED]> wrote:
>Robert J. MacG. Dawson wrote:
>> [EMAIL PROTECTED] wrote:
>> > In article <[EMAIL PROTECTED]>,
>> > Jerry Dallal <[EMAIL PROTECTED]> wrote:
>> > > (1) statistical significance usually is unrelated to practice
>> > > imp
Thom Baguley <[EMAIL PROTECTED]> wrote:
>> You can get important significant effects, unimportant significant
>> effects, important non-significant effects and unimportant
>> non-significant effects.
Radford Neal wrote:
>I'll go for three out of four of these. But "important non-significant
In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (dennis roberts) wrote:
>
> thus, the idea is that 5% and/or 1% were "chosen" due to the tables that
> were available and not, some logical reasoning for these values?
>
> i don't see any logic to the notion that 5% and/or 1% ... have any
> specia
In article <[EMAIL PROTECTED]>,
Jerry Dallal <[EMAIL PROTECTED]> wrote:
>Many posters to this thread have used the phrase "practical
>significance". I find it only confuses things. Just so all of us are
>clear on what we're talking about, might we restrict ourselves to
>the terms "statistical
In article <8sill5$gvf$[EMAIL PROTECTED]>,
<[EMAIL PROTECTED]> wrote:
>In article <[EMAIL PROTECTED]>,
> [EMAIL PROTECTED] (Robert J. MacG. Dawson) wrote:
>> Fair enough: but I would argue that the right question is rarely "if
>> there were no effect
"Richard M. Barton" wrote:
>
> --- Radford Neal wrote:
> In article <[EMAIL PROTECTED]>,
> Thom Baguley <[EMAIL PROTECTED]> wrote:
>
> > You can get important significant effects, unimportant significant
> > effects, important non-significant effects and unimportant
> > non-significant effect
In article <[EMAIL PROTECTED]>,
Thom Baguley <[EMAIL PROTECTED]> wrote:
> You can get important significant effects, unimportant significant
> effects, important non-significant effects and unimportant
> non-significant effects.
I'll go for three out of four of these. But "important non-signif
Robert J. MacG. Dawson wrote:
> [EMAIL PROTECTED] wrote:
> >
> > In article <[EMAIL PROTECTED]>,
> > Jerry Dallal <[EMAIL PROTECTED]> wrote:
> >
> > > (1) statistical significance usually is unrelated to practice
> > > importance.
> >
> > I don't think so. I can think of many examples in which
[EMAIL PROTECTED] wrote:
>
> I
> said before, I don't think this can be seen as a problem with hypothesis
> testing; but it is a matter for hypothesis *testers*.
Nothing wrong with this, but it might be a good time to review the
question that started this thread, namely,
"What are the limitations
Many posters to this thread have used the phrase "practical
significance". I find it only confuses things. Just so all of us are
clear on what we're talking about, might we restrict ourselves to
the terms "statistical significance" and "practical importance"?
[EMAIL PROTECTED] wrote (in part):
> I'm saying that the entire concept of practical significance is not only
> subjective, but limited to the extent of current knowledge. You may
> regard a 0.01% effect at this point in time as a trivial and (virtually)
> artifactual byproduct of hypothesis te
At 05:38 PM 10/17/00 -0700, David Heiser wrote:
>The 5% is a historical artifact, the result of statistics being invented
>before electronic computers were invented.
an artifact is some anomaly of the data ... but, how could 5% be considered
an artifact DUE to the lack of electronic computers?
- Original Message -
From: <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, October 16, 2000 4:24 PM
Subject: Re: questions on hypothesis
> In article <[EMAIL PROTECTED]>,
> > Chris: That's not what Jerry means. What he's saying is that if
In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (Robert J. MacG. Dawson) wrote:
>
>
> > Wrt your example, it seems that the decision you are making about
> > practical importance is purely subjective.
>
> What exactly do you mean by this? Are you saying that _my_
> example is purely s
In article <[EMAIL PROTECTED]>,
dennis roberts <[EMAIL PROTECTED]> wrote:
>At 10:06 PM 10/16/00 +, Peter Lewycky wrote:
>>It happens all the time in medicine. If I can show a p value of 0.05 or
>>less the researchers are delighted. Whenever I can't produce a p of 0.05
>>or less they start looking
[EMAIL PROTECTED] wrote:
>
> In article <[EMAIL PROTECTED]>,
> > Chris: That's not what Jerry means. What he's saying is that if
> > your sample size is large enough, a difference may be statistically
> > significant (a term which has a very precise meaning, especially to
> > the Apostles
At 10:06 PM 10/16/00 +, Peter Lewycky wrote:
>It happens all the time in medicine. If I can show a p value of 0.05 or
>less the researchers are delighted. Whenever I can't produce a p of 0.05
>or less they start looking for another statistician and will even
>withhold a paper from publication.
In article <[EMAIL PROTECTED]>,
> Chris: That's not what Jerry means. What he's saying is that if
> your sample size is large enough, a difference may be statistically
> significant (a term which has a very precise meaning, especially to
> the Apostles of the Holy 5%) but not large enough to
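Chris's point is easy to demonstrate. Below is a minimal simulation (mine,
not from the thread; the sample size and the 0.01-SD true difference are
arbitrary assumptions) in which an effect far too small to matter in
practice is nonetheless overwhelmingly significant:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000_000
a = rng.normal(0.00, 1.0, n)   # control group
b = rng.normal(0.01, 1.0, n)   # true effect: 0.01 SD, practically trivial
t, p = stats.ttest_ind(a, b)
print(f"observed difference = {b.mean() - a.mean():.4f}, p = {p:.1e}")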
It happens all the time in medicine. If I can show a p value of 0.05 or
less the researchers are delighted. Whenever I can't produce a p of 0.05
or less they start looking for another statistician and will even
withhold a paper from publication.
"Simon, Steve, PhD" wrote:
>
> In a post to EDSTAT-L
In a post to EDSTAT-L, you wrote:
>I believe you will find that most researchers in the sciences
>accept the p-value as religion. In the report of the recent
>British study on Type 2 diabetes, there was an effect which
>was stated as "unimportant" because the p-value was .052.
Do you have a cit
[EMAIL PROTECTED] wrote:
>
> In article <[EMAIL PROTECTED]>,
> Jerry Dallal <[EMAIL PROTECTED]> wrote:
>
> > (1) statistical significance usually is unrelated to practice
> > importance.
>
> I don't think so. I can think of many examples in which statistical
> inference plays an invaluable
On Sat, 14 Oct 2000 01:56:32 GMT, [EMAIL PROTECTED]
wrote:
< snip >
> > (2) absence of evidence is not evidence of absence
>
> Everyone who has done elementary statistics is aware of this edict. But
> what if your power is very high and/or you have very large N? I have
> always found it surpri
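The poster's intuition can be made concrete: with a very large N, a
non-significant result comes with a confidence interval pinned tightly
around zero, which is about as close to "evidence of absence" as
frequentist output gets. A sketch (the sample size and data are my
assumptions):

import numpy as np

rng = np.random.default_rng(1)
n = 500_000
a = rng.normal(0.0, 1.0, n)
b = rng.normal(0.0, 1.0, n)    # no true difference at all
diff = b.mean() - a.mean()
se = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
print(f"95% CI for the difference: "
      f"[{diff - 1.96 * se:.4f}, {diff + 1.96 * se:.4f}]")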
- Original Message -
From: Ting Ting <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, October 13, 2000 10:57 PM
Subject: Re: questions on hypothesis
> >
> > A good example of a simple situation for which exact P values are
> > unavailable is the B
On Sat, 14 Oct 2000 [EMAIL PROTECTED] wrote, inter alia:
> I *would* argue that without some method to determine the likelihood of
> a difference b/w two conditions you have no chance of determining
> practical importance at all.
But hypothesis testing procedures do not establish any such likel
In article <[EMAIL PROTECTED]>,
San <[EMAIL PROTECTED]> wrote:
>Would there be some cases in which the p-value is so difficult to find
>that it's nearly impossible? Is this a kind of limitation of hypothesis
>testing using p-values? Is there any substitute for the p-value?
>Thanks for your reply.
In article <8s8egf$n5f$[EMAIL PROTECTED]>,
<[EMAIL PROTECTED]> wrote:
>In article <[EMAIL PROTECTED]>,
> Jerry Dallal <[EMAIL PROTECTED]> wrote:
>> (1) statistical significance usually is unrelated to practice
>> importance.
>I don't think so. I can think of many examples in which statistical
In article <[EMAIL PROTECTED]>,
Ting Ting <[EMAIL PROTECTED]> wrote:
>> A good example of a simple situation for which exact P values are
>> unavailable is the Behrens-Fisher problem (testing the equality of
>> normal means from normal populations with unequal variances). Some
>> might say we h
Gene Gallagher wrote:
> Can someone recommend a good book on the history of statistics,
> especially one focusing on Fisher's accomplishments. Fisher's
> contributions and prickly personality are dealt with tangentially
> in Provine's wonderful biography of Sewall Wright.
> Surely, Fisher has
>
> A good example of a simple situation for which exact P values are
> unavailable is the Behrens-Fisher problem (testing the equality of
> normal means from normal populations with unequal variances). Some
> might say we have approximate solutions that are good enough.
>
would you please give some
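For what it's worth, the "approximate solutions that are good enough"
usually meant here include Welch's t-test, which replaces the unknown
exact null distribution with a t distribution on estimated (Satterthwaite)
degrees of freedom. A hedged sketch with made-up data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(10.0, 1.0, 12)   # smaller variance
y = rng.normal(10.8, 4.0, 9)    # much larger variance: Behrens-Fisher setting
t, p = stats.ttest_ind(x, y, equal_var=False)   # Welch/Satterthwaite approx.
print(f"Welch t = {t:.2f}, approximate p = {p:.3f}")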
In article <[EMAIL PROTECTED]>,
Jerry Dallal <[EMAIL PROTECTED]> wrote:
> (1) statistical significance usually is unrelated to practice
> importance.
I don't think so. I can think of many examples in which statistical
inference plays an invaluable role in practical applications and
instrumenta
> As to Observational studies --
>
> http://www.cnr.colostate.edu/~anderson/thompson1.html
>
> This is a short article and long bibliography. The title is direct:
> "326 Articles/Books Questioning the Indiscriminate Use of
> Statistical Hypothesis Tests in Observational Studies"
> (Compiled by
San wrote:
>
> Would there be some cases in which the p-value is so difficult to find
> that it's nearly impossible?
I'm tempted to say "not under a randomization model" but, yes, there
are many problems for which P values are not readily available.
Perhaps P values are unavailable for *most* pr
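What "under a randomization model" buys you is illustrated below (my
sketch, not Dallal's code; the data are made up): a permutation test gets
its P value by re-randomizing the group labels, so no distributional form
is assumed, only the randomization itself.

import numpy as np

rng = np.random.default_rng(3)
x = np.array([4.1, 5.2, 3.9, 4.8, 5.0])   # hypothetical group A (n = 5)
y = np.array([5.9, 6.3, 5.1, 6.0])        # hypothetical group B (n = 4)
obs = abs(y.mean() - x.mean())            # observed mean difference
pooled = np.concatenate([x, y])
n_perm = 100_000
hits = 0
for _ in range(n_perm):
    rng.shuffle(pooled)                   # re-randomize the nine labels
    hits += abs(pooled[5:].mean() - pooled[:5].mean()) >= obs
print(f"two-sided permutation p = {hits / n_perm:.4f}")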
On Thu, 12 Oct 2000, dennis roberts wrote in part:
> one nice full issue of a journal about this general topic of
> hull hypothesis testing ...
Dealing with problems in naval architecture, one presumes?
-- Don.
Would there be some cases in which the p-value is so difficult to find
that it's nearly impossible? Is this a kind of limitation of hypothesis
testing using p-values? Is there any substitute for the p-value?
Thanks for your reply.
Jerry Dallal wrote:
>
> I wrote:
>
> > (1) statistical significan
one nice full issue of a journal about this general topic of hull
hypothesis testing that i came across recently is:
Research in the Schools, Vol 5, Number 2, Fall 1998 ...
you could contact jim mclean at ... jmclean@ etsu.edu ... and inquire about
obtaining a copy
we are in the process of co
I wrote:
> (1) statistical significance usually is unrelated to practice
> importance.
I meant to type "practical importance".
< also posted to sci.stat.math, sci.stat.consult where separate
versions of the same question were posted. >
On Wed, 11 Oct 2000 23:25:05 +0800, San <[EMAIL PROTECTED]>
wrote:
> What are the limitations of hypothesis testing using significance tests
> based on p-values?
>
> Can someone suggest
What are the limitations of hypothesis testing using significance tests
based on p-values?
Can someone suggest where I can find some reference books related to
the topics above?
thank you
San wrote:
>
> What are the limitations of hypothesis testing using significance tests
> based on p-values?
>
(1) statistical significance usually is unrelated to practice
importance.
(2) absence of evidence is not evidence of absence
http://www.bmj.com/cgi/content/full/311/7003/485