Jerry Dallal <[EMAIL PROTECTED]> wrote in sci.stat.edu:
>A confidence interval is an interval generated by a process that has
>the property that the resulting interval will contain the
>parameter(s) of interest in the specified proportion of cases.
and in a different article in the thread:
>Confidence is not probability.
Okay, can you 'splain me something, please?
A 95% CI for the mean is generated by a process such that the
resulting interval will contain the true mu in 95% of cases, right?
That means that if you took some number NS of random samples and
computed the NS corresponding 95% CIs, (about) .95*NS of them would
contain mu and the other (about) .05*NS would not, right?
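
To make sure I'm reading the coverage claim right, here's the quick
simulation sketch I'd show my class. (Python; the normal population,
the known sigma, the sample size, and the seed are all just my
illustrative assumptions, not anything from your posts.)

import numpy as np

rng = np.random.default_rng(0)       # seeded so the run is reproducible
mu, sigma = 10.0, 2.0                # assumed "true" population parameters
n, NS = 25, 100_000                  # sample size; number of repeated samples
z = 1.959964                         # two-sided 95% normal critical value
half_width = z * sigma / np.sqrt(n)  # known-sigma z-interval half-width

hits = 0
for _ in range(NS):
    xbar = rng.normal(mu, sigma, n).mean()
    if xbar - half_width <= mu <= xbar + half_width:
        hits += 1

print(hits / NS)   # comes out very close to 0.95, as the claim says
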
Here's the part I don't understand: if we agree (as I think we do)
that 95% of all possible random samples lead to a 95% CI that
contains the true value of mu, how can it possibly be incorrect (as
you and others seem to say) to state that the probability that any
_one_ particular 95% CI contains mu is 95%? Perhaps I'm very stupid,
but I just don't see how the two statements are not equivalent.
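
To make the _one_-CI case concrete, here is the same sketch reduced
to a single sample (same illustrative mu, sigma, and n as above):

import numpy as np

rng = np.random.default_rng(1)       # one particular random sample
mu, sigma, n = 10.0, 2.0, 25         # same illustrative assumptions
z = 1.959964
half_width = z * sigma / np.sqrt(n)

xbar = rng.normal(mu, sigma, n).mean()
lo, hi = xbar - half_width, xbar + half_width
# This one realized interval either does or does not contain mu.
print((lo, hi), lo <= mu <= hi)
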
And since I'm supposed to be teaching this stuff, I'd really like to
understand the flaw in my thinking.
--
Stan Brown, Oak Road Systems, Cortland County, New York, USA
http://OakRoadSystems.com/
"My theory was a perfectly good one. The facts were misleading."
-- /The Lady Vanishes/ (1938)