Jerry Dallal <[EMAIL PROTECTED]> wrote in sci.stat.edu:
>"Robert J. MacG. Dawson" wrote:
>
>> > > But I don't see why either the advertiser or the consumer advocate
>> > > would, or should, do a two-tailed test.
>>
>> The idea is that the "product" of these tests is a p-value to be used
>> in support of an argument. The evidence for the proposal is not made any
On Sat, 1 Dec 2001 08:20:45 -0500, [EMAIL PROTECTED] (Stan Brown)
wrote:
> [cc'd to previous poster]
>
> Rich Ulrich <[EMAIL PROTECTED]> wrote in sci.stat.edu:
> >I think I could not blame students for floundering about on this one.
> >
> >On Thu, 29 Nov 2001 14:39:35 -0500, [EMAIL PROTECTED] (Stan Brown)
> >wrote:
> >> "The manufacturer of a patent medicine claims that it is 90%

At 08:29 AM 12/1/01 -0500, Stan Brown wrote:
>How I would analyze this claim is that, when the advertiser says
>"90% of people will be helped", that means 90% or more. Surely if we
>did a large controlled study and found 93% were helped, we would not
>turn around and say the advertiser was wrong.
But I don't see why either the advertiser or the consumer advocate
would, or should, do a two-tailed test. Alan McLean seemed to agree
that both would be one-tailed, if I understand him correctly.
> (1) The "consumer advocate's test": we want a definite result that
>makes the manufacturer look bad, so H0 is the manufacturer's claim.
Alan McLean <[EMAIL PROTECTED]> wrote in
sci.stat.edu:
>Stan, in practical terms, the conclusion 'fail to reject the null' is
>simply not true. You do in reality 'accept the null'. The catch is that
>this is, in the research situation, a tentative acceptance - you
>recognise that you may be wrong
Hi
On Thu, 29 Nov 2001, Stan Brown wrote:
> But -- and in retrospect I should have seen it coming -- some
> students framed the hypotheses so that the alternative hypothesis
> was "the drug is effective as claimed." They had
> Ho: p <= .9; Ha: p > .9; p-value = .9908.
I think I could not blame students for floundering about on this one.
On Thu, 29 Nov 2001 14:39:35 -0500, [EMAIL PROTECTED] (Stan Brown)
wrote:
> On a quiz, I set the following problem to my statistics class:
>
> "The manufacturer of a patent medicine claims that it is 90%
> effective(*) in re
As the exact 90% value has prior probability 0, this is not a problem.
H0 is actually the original claim; and the hoped-for outcome is to
reject it because the number of successes is too large. The
manufacturer is not entitled to do a 1-tailed test just to shrink the
reported p-value.
>> was "the drug is effective as claimed." They had
>> Ho: p <= .9; Ha: p > .9; p-value = .9908.
>
>I don't understand where they get the .9908 from.
x=170, n=200, p'=.85, Ha: p>.9, alpha=.01
z = -2.357
On TI-83, normalcdf(-2.357,1E99) = .9908
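For the record, the TI-83 arithmetic above can be reproduced in a few lines (my own sketch, not from the thread; standard library only):

```python
# One-sample z-test for a proportion, matching the numbers in the post:
# x = 170 successes out of n = 200, null value p0 = .9.
from math import sqrt
from statistics import NormalDist

x, n, p0 = 170, 200, 0.90
p_hat = x / n                                   # .85
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)      # -2.357

lower_tail = NormalDist().cdf(z)                # Ha: p < .9  ->  .0092
upper_tail = 1 - NormalDist().cdf(z)            # Ha: p > .9  ->  .9908
# upper_tail is the TI-83's normalcdf(-2.357, 1E99)
```

The two one-tailed p-values necessarily sum to 1, which is why the students' Ha: p > .9 framing returned .9908 rather than .0092.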
tistics/, problem 10.6.)
I believe a one-tailed test, not a two-tailed test, is appropriate.
(It would be silly to test for "effectiveness differs from 90%" since
no one would object if the medicine helps more than 90% of
patients.)

Framing the alternative hypothesis as "the manufacturer's claim is
not legitimate" gives
Ho: p >= .9; Ha: p < .9; p-value = .0092
on a one-tailed z-test. Therefore we reject H0.
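The .0092 is a normal-approximation p-value. As a cross-check (my own sketch, not part of the thread), the exact binomial lower tail can be computed with the standard library and lands in the same region:

```python
# Exact one-tailed binomial p-value for Ho: p >= .9 vs Ha: p < .9,
# i.e. P(X <= 170) when X ~ Binomial(200, .9).  Sketch only; the thread
# itself used the normal approximation (z = -2.357).
from math import comb

n, p0, x = 200, 0.90, 170
p_exact = sum(comb(n, k) * p0**k * (1 - p0)**(n - k) for k in range(x + 1))
# p_exact is of the same order as the approximate .0092
```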
Hi
On 2 Nov 2001, Donald Burrill wrote:
> On Fri, 2 Nov 2001, jim clark wrote:
> > I would hate to ressurect a debate from sometime in the past
> > year, but the chi-squared is a non-directional (commonly referred
> > to as two-tailed) test, although it is true that you only
> > consider one end of the distribution.
At 05:06 PM 11/2/01 -0500, Wuensch, Karl L wrote:
> Dennis wrote: "it is NOT correct to say that the p value (as
>traditionally calculated) represents the probability of finding a result
>LIKE WE FOUND ... if the null were true? that p would be ½ of what is
>calculated."
Jones and Tukey ("A sensible formulation of the significance test,"
Psychological Methods, 2000)
 N Mean StDev SE Mean
exp 20 30.80 5.20 1.2
cont 20 27.84 3.95 0.88

Difference = mu exp - mu cont
Estimate for difference: 2.95
95% CI for difference: (-0.01, 5.92)
T-Test of difference = 0 (vs not =): T-Value = 2.02 P-Value = 0.051 DF = 35

for 35 df ... minitab finds the areas beyond -2.02 and +2.02 ... adds them
together ... and this value in the present case is .051

now, traditionally, we would
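Assuming Minitab ran the unpooled (Welch) two-sample t here, which the DF = 35 suggests, the printed values can be sanity-checked from the summary statistics alone (my own sketch, not from the thread):

```python
# Welch two-sample t statistic and degrees of freedom from the summary
# table above (exp: n=20, mean=30.80, sd=5.20; cont: n=20, mean=27.84,
# sd=3.95).  The displayed means are rounded, so t comes out a hair
# above the printed 2.02.
from math import sqrt

n1, m1, s1 = 20, 30.80, 5.20   # exp
n2, m2, s2 = 20, 27.84, 3.95   # cont

v1, v2 = s1**2 / n1, s2**2 / n2
t = (m1 - m2) / sqrt(v1 + v2)                            # about 2.02
df = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
# Welch-Satterthwaite df is about 35.4; Minitab truncates to 35
```

The two-tailed P-Value = 0.051 is then the area beyond ±t in a t distribution with that df, which is exactly the "adds them together" step described above.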
My opinion, FWIW:
The answer to your question in a strict fashion, assuming the experiment is
well designed, depends to a large extent on your "a priori" null hypothesis
and how you performed the statistical test.
In this case, presuming that you used a two-sided p value an
It seems to me that any well-designed experiment, by definition, leaves only
two reasonable explanations for favorable results: the desired effect and
chance. The low p-value (nearly) eliminates chance.
Jonathan Fry
SPSS Inc
let's say that you do a simple (well executed) 2 group study ...
treatment/control ... and, are interested in the mean difference ... and
find that a simple t test shows a p value (with mean in favor of treatment)
of .009
while it generally seems to be held that such a p value would suggest that
our null
Herman Rubin wrote and I marked up:
> There is no way to use the present p-value by itself correctly
> with additional data.
I think the "with" is a typographical error and that "without" was
intended. I comment only because I like it and plan to use a
modified
>If only we could replace the p value with a probability that the true
>effect is negative (or has the opposite sign to the observed effect). The
>easiest way would be to insist on one-tailed tests for everything. Then
>the p value would mean exactly that. An example of t
On 2 Feb 2001 01:12:59 -0800, [EMAIL PROTECTED] (Will Hopkins) wrote:
>I've been involved in off-list discussion with Duncan Murdoch. At one
>stage there I was about to retire in disgrace. But sighs of relief... his
>objection is Bayesian.
Just to clarify, I don't think this is a valid summary.
I believe that this is true in more generality than the question
studied there. In the ESP problem above, detecting even a few parts
in a thousand would require on the order of a million observations,
so one can "get away" with it. But this is not the case with fixing
a p value.
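A back-of-envelope version of that sample-size claim (my own sketch with assumed numbers, not from the thread: a guessing base rate p0 = .20, a true rate of .201, i.e. one part in a thousand better, one-sided alpha = .05, 80% power):

```python
# Standard sample-size formula for a one-sided test of a proportion:
# n = ((z_alpha*sqrt(p0*q0) + z_beta*sqrt(p1*q1)) / (p1 - p0))**2
from math import sqrt
from statistics import NormalDist

p0, p1 = 0.20, 0.201          # assumed guessing rate vs true ESP rate
alpha, power = 0.05, 0.80     # assumed design values

z_a = NormalDist().inv_cdf(1 - alpha)   # about 1.645
z_b = NormalDist().inv_cdf(power)       # about 0.842

n = ((z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))) / (p1 - p0))**2
# n comes out on the order of a million observations, as claimed
```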
I've been involved in off-list discussion with Duncan Murdoch. At one
stage there I was about to retire in disgrace. But sighs of relief... his
objection is Bayesian. OK. The p value is a device to put in a
publication to communicate something about precision of an estimate of an
effect.
Will Hopkins wrote:
>
> I accept that there are unusual cases where the null hypothesis has a
> finite probability of being be true, but I still can't see the point in
> hypothesizing a null, not in biomedical disciplines, anyway.
>
> If only we could replace the p value with a probability that the true
> effect is negative (or has the opposite sign to the observed effect).
Bruce Weaver wrote:
> Suppose you were conducting a test with someone who claimed to have ESP,
> such that they were able to predict accurately which card would be turned
> up next from a well-shuffled deck of cards. The null hypothesis, I think,
> would be that the person does not have ESP.
On 30 Jan 2001, Will Hopkins wrote:
-- >8 ---
> I haven't followed this thread closely, but I would like to state the
> only valid and useful interpretation of the p value that I know. If
> you observe a positive effect, then p/2 is the probability that the
> true value of the effect is negative.
>>.02, or 0.008) that this statement is false.
>>Have I not said the same thing? As p gets small, we are more
>>confident that the null hypothesis is not valid.
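Hopkins' interpretation is, in effect, Bayesian: under a flat prior on the true effect, the one-tailed p equals the posterior probability that the effect has the opposite sign. A quick Monte Carlo check (my own sketch with assumed toy numbers, not from the thread: one observation x ~ N(mu, 1), observed x = 1.5):

```python
# Among simulated (mu, x) pairs with mu drawn from a wide flat prior and
# x landing near the observed value, the fraction with mu < 0 should
# match the one-tailed p-value P(Z > 1.5) -- i.e. Hopkins' p/2.
import random
from statistics import NormalDist

random.seed(1)
x_obs, halfwidth = 1.5, 0.05
hits = negatives = 0
for _ in range(1_000_000):
    mu = random.uniform(-10, 10)    # flat prior over a wide range
    x = random.gauss(mu, 1)         # one noisy observation of mu
    if abs(x - x_obs) < halfwidth:
        hits += 1
        negatives += (mu < 0)

frac = negatives / hits
one_tailed_p = 1 - NormalDist().cdf(x_obs)   # about .067
# frac and one_tailed_p agree up to Monte Carlo error
```

With an informative prior (Murdoch's objection, as I read it) the equality breaks down, which is why the claim is contested in the thread.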
such as 'fairness' or 'simplicity' or 'uphold the status
quo'.) The statistical criterion is (usually) the p value. Finally, the
privilege extends to requiring the non-privileged model to perform
'significantly' better (in the usual everyday sense of 'substantially').
about p values, I'd be
> interested in any comments on the following:
>
> I find one of the hardest aspects of teaching statistical inference to
> students is the linguistic contortions that can arise in moving from a
> strict formal definition of what an obtained p value means.