What is a confidence interval?

2001-09-26 Thread Warren

Hi,
I've been teaching an introductory stats course for several years.
I always learn something from my students...hope they learn too.
One thing I've learned is that confidence intervals are very tough
for them.  They can compute them, but they don't really grasp why, or what the interval means.

Of course, we talk about confidence interval construction and I try
to explain the usual "95% of all intervals so constructed will in the
long run include the parameter...blah, blah".  I've looked at the
Bayesian interpretation also but find this a bit hard for beginning
students.

So, what is your best way to explain a CI?  How do you explain it
without using some esoteric discussion of probability?

Now, here's another question.  Suppose I roll 2 dice and
find the mean of the pips on the upturned faces.  You can compute
a sample standard deviation, but if you roll 2 alike the s.d. is 0.
So, you cannot compute a sensible CI from such a sample.  How would
you explain this?
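
(As a rough illustration of how often that degenerate case comes up, here is a small Python sketch, assuming fair six-sided dice and purely illustrative names like n_trials: it rolls two dice many times and counts how often the sample s.d. is exactly 0, which is just the chance of doubles, about 1/6.)

# Rough sketch: roll two fair dice repeatedly and see how often the
# sample s.d. of the two pips is zero (i.e., the dice match), in which
# case a t-based CI from that n = 2 "sample" degenerates to a point.
import random
from statistics import stdev

random.seed(1)           # illustrative seed, for reproducibility
n_trials = 100_000       # illustrative number of repetitions
zero_sd = 0

for _ in range(n_trials):
    pips = [random.randint(1, 6), random.randint(1, 6)]
    if stdev(pips) == 0:             # happens exactly when the dice match
        zero_sd += 1

print("fraction of samples with s.d. = 0:", zero_sd / n_trials)   # about 1/6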

Thanks,

Warren





Re: What is a confidence interval?

2001-09-26 Thread Dennis Roberts

as a start, you could relate everyday examples where the notion of CI seems 
to make sense

A. you observe a friend in terms of his/her lateness when planning to meet 
you somewhere ... over time, you take 'samples' of late values ... in a 
sense you have means ... and then you form a rubric like ... for sam ... if 
we plan on meeting at noon ... you can expect him at noon + or - 10 minutes 
... you won't always be right but, maybe about 95% of the time you will?

B. from real estate ads in a community, looking at sunday newspapers, you 
find that several samples of average house prices for a 3 bedroom, 2 bath 
place are certain values ... so, again, this is like having a bunch of means 
... then, if someone asks you (visitor) about average prices of a 3 bedroom, 
2 bath house ... you might say ... 134,000 +/- 21,000 ... of course, you 
won't always be right but perhaps about 95% of the time?

but, more specifically, there are a number of things you can do

1. students certainly have to know something about sampling error ... and 
the notion of a sampling distribution

2. they have to realize that when taking a sample, say using the sample 
mean, that the mean they get could fall anywhere within that sampling 
distribution

3. if we know something about #1 AND, we have a sample mean ... then, #1 
sets sort of a limit on how far away the truth can be GIVEN that sample 
mean or statistic ...

4. thus, we use the statistic (i.e., the sample mean) and add and subtract some 
error (based on #1) ... in such a way that we will be correct (in saying 
that the parameter falls within the CI) some % of the time ... say, 95%?
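
To make step 4 concrete, here is a minimal Python sketch of the recipe (statistic plus or minus a multiple of the standard error) for a mean with sigma treated as known; the function name z_interval, the 1.96 multiplier, and the sample values are just illustrative choices.

# Minimal sketch of step 4: sample mean +/- (multiplier) * (standard error),
# assuming sigma is known and the sampling distribution of the mean is
# approximately normal.
import math
from statistics import mean

def z_interval(sample, sigma, z=1.96):
    """Return (lower, upper) for a 95%-style CI for the population mean."""
    xbar = mean(sample)
    se = sigma / math.sqrt(len(sample))    # standard error of the mean
    return xbar - z * se, xbar + z * se

# Illustrative sample of 9 SAT-M scores, with sigma taken as 100.
scores = [520, 480, 560, 450, 510, 495, 530, 470, 505]
print(z_interval(scores, sigma=100))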

it is easy to show this via simulation ... minitab for example can help you 
do this

here is an example ... let's say we are taking samples of size 100 from a 
population of SAT M scores ... where we assume the mu is 500 and sigma is 
100 (so the standard error of the mean is 100/sqrt(100) = 10) ... i will take 
1000 SRS samples ... and summarize the results of building 1000 CIs

MTB > rand 1000 c1-c100; <<< made 1000 rows and 100 columns ... each ROW will be a sample of n = 100
SUBC> norm 500 100. <<< sampled from a population with mu = 500 and sigma = 100
MTB > rmean c1-c100 c101 <<< got the means of the 1000 samples and put them in c101
MTB > name c101='sampmean'
MTB > let c102=c101-2*10 <<< lower point of the 95% CI (2 standard errors below the mean; SE = 100/sqrt(100) = 10)
MTB > let c103=c101+2*10 <<< upper point of the 95% CI (2 standard errors above the mean)
MTB > name c102='lowerpt' c103='upperpt'
MTB > let c104=(c102 lt 500) and (c103 gt 500) <<< evaluates whether each interval captures 500 or not
MTB > sum c104

Sum of C104

Sum of C104 = 954.00    954 of the 1000 intervals captured 500
MTB > let k1=954/1000
MTB > prin k1

Data Display

K1    0.954000   <<< pretty close to 95%
MTB > prin c102 c103 c104 <<< the first 10 of the 1000 intervals are shown below

Data Display


  Row   lowerpt   upperpt   C104

1   477.365   517.365  1
2   500.448   540.448  0  <<< here is one that missed 500 ...the 
other 9 captured 500
3   480.304   520.304  1
4   480.457   520.457  1
5   485.006   525.006  1
6   479.585   519.585  1
7   480.382   520.382  1
8   481.189   521.189  1
9   486.166   526.166  1
   10   494.388   534.388  1
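
For readers without Minitab, a rough Python equivalent of the same demonstration is sketched below (the seed and counts are illustrative, and the exact numbers will differ from the Minitab run): draw 1000 samples of size 100 from a normal population with mu = 500 and sigma = 100, build mean +/- 2 SE intervals, and count how many capture 500.

# Rough Python equivalent of the Minitab demonstration above: repeatedly
# draw samples of size 100 from N(mu=500, sigma=100), build mean +/- 2*SE
# intervals, and count how many capture mu.
import random
from statistics import mean

random.seed(2)                       # illustrative seed
mu, sigma, n, reps = 500, 100, 100, 1000
se = sigma / n ** 0.5                # 100 / sqrt(100) = 10

hits = 0
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = mean(sample)
    lower, upper = xbar - 2 * se, xbar + 2 * se
    if lower < mu < upper:
        hits += 1

print(hits, "of", reps, "intervals captured", mu)   # typically around 950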





_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






Re: What is a confidence interval?

2001-09-26 Thread Jon Cryer

Dennis:

Example A is a mistaken interpretation of a confidence interval for a mean.
Unfortunately, this is a very common misinterpretation.
What you have described in Example A is a _prediction_ interval for
an individual observation. Prediction intervals rarely get taught except
(maybe) in the context of a regression model.

Jon
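
One way to see the distinction numerically: under the usual normal model, a CI for the mean uses s/sqrt(n), while a prediction interval for one future observation uses s*sqrt(1 + 1/n), so the prediction interval is much wider. A rough Python sketch follows (the "minutes late" data are made up, and the t multiplier comes from scipy.stats).

# Sketch of the distinction: a confidence interval for the MEAN lateness
# vs. a prediction interval for the NEXT single arrival (normal model).
import math
from statistics import mean, stdev
from scipy.stats import t

late = [12, 5, 9, 15, 7, 11, 3, 14, 8, 10]   # made-up "minutes late" data
n = len(late)
xbar, s = mean(late), stdev(late)
tcrit = t.ppf(0.975, n - 1)

ci = (xbar - tcrit * s / math.sqrt(n),            # interval for the mean
      xbar + tcrit * s / math.sqrt(n))
pi = (xbar - tcrit * s * math.sqrt(1 + 1 / n),    # interval for one new observation
      xbar + tcrit * s * math.sqrt(1 + 1 / n))

print("95% CI for mean lateness:", ci)
print("95% PI for next arrival: ", pi)            # noticeably wider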

At 03:11 PM 9/26/01 -0400, you wrote:
>as a start, you could relate everyday examples where the notion of CI seems 
>to make sense
>
>A. you observe a friend in terms of his/her lateness when planning to meet 
>you somewhere ... over time, you take 'samples' of late values ... in a 
>sense you have means ... and then you form a rubric like ... for sam ... if 
>we plan on meeting at noon ... you can expect him at noon + or - 10 minutes 
>... you won't always be right but, maybe about 95% of the time you will?
>
>B. from real estate ads in a community, looking at sunday newspapers, you 
>find that several samples of average house prices for a 3 bedroom, 2 bath 
>place are certain values ... so, again, this is like having a bunch of means 
>... then, if someone asks you (visitor) about average prices of a 3 bedroom, 
>2 bath house ... you might say ... 134,000 +/- 21,000 ... of course, you 
>won't always be right but perhaps about 95% of the time?
---
Jon Cryer, Professor Emeritus                 www.stat.uiowa.edu/~jcryer
Dept. of Statistics and Actuarial Science     office 319-335-0819
The University of Iowa                        home   319-351-4639
Iowa City, IA 52242                           FAX    319-335-3017
---

"It ain't so much the things we don't know that get us into trouble. 
It's the things we do know that just ain't so." --Artemus Ward 



Re: What is a confidence interval?

2001-10-01 Thread Jerry Dallal

Ronald Bloom wrote:
> 
> Jerry Dallal <[EMAIL PROTECTED]> wrote:
> > John Jackson wrote:
> >>
> >> this is the second time I have seen this word used: "frequentist"?
> 
> > Since Radford Neal has already given an excellent explanation,
> > let me add...
> 
> > A roulette wheel comes up with a red number 10 times in a row. When
> > deciding how to place his/her next bet...
> 
> > The person on the street bets black, "Because it's got to come up
> > eventually."
> 
> > The frequentist doesn't care, "Because red and black occur at random
> > with equal chances and past history doesn't matter."
> 
> > The Bayesian bets red, "Because there's something strange going on
> > here!"
> 
>   *I'd* bet The person on the street *also* bets red, "Because it
> looks like there's something strange going on!"

10?  Did I type 10?  The "10" key is so close to the "5" key.  I
meant "5".  Sorry for the confusion!  :-)





Re: What is a confidence interval?

2001-10-30 Thread Donald Burrill

In reviewing some not-yet-deleted email, I came across this one, and have 
no record of its error(s) having been corrected.

On Sat, 29 Sep 2001, John Jackson wrote:

> How do describe the data that does not reside in the area
> described by the confidence interval?
> 
> For example, you have a two tailed situation, with a left tail of .1, a 
> middle of .8 and a right tail of .1, the confidence interval for the 
> middle is 90%.

Well, no.  You describe an 80% C.I., not a 90% C.I.

> Is it correct to say with respect to a value falling outside of the 
> interval in the right tail:
> 
> For any random inverval selected, there is a .05% probability that the 
> sample will NOT yield an interval that yields the parameter being 
> estimated and additonally such interval will not include any values in 
> area represented by the left tail. 

If you're still referring to the 80% C.I. introduced above, ".05% 
probability" is not applicable.  [Not even if you had stated it 
correctly, either as ".05 probability" or as "5% probability".  ;-) ]

> Can you make different statements about the left and right tail?

Not for the case you have described.  Had you chosen to compute an 
asymmetric C.I. (perfectly possible in theory, hardly ever done, so far 
as I am aware, in practice) it would be otherwise.
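
For anyone curious what such an asymmetric interval could look like, here is a small Python sketch for a normal mean with sigma known: split the 20% miss probability unequally (5% in the left tail, 15% in the right) and the interval still has 80% coverage, it just sits off-center. All numbers and names here are illustrative.

# Sketch: an 80% CI for a normal mean with known sigma, with the 20% miss
# probability split unequally: 5% in the left tail, 15% in the right tail.
import math
from statistics import NormalDist

def asymmetric_ci(xbar, sigma, n, alpha_left=0.05, alpha_right=0.15):
    z = NormalDist()
    se = sigma / math.sqrt(n)
    lower = xbar - z.inv_cdf(1 - alpha_left) * se     # longer arm on the left
    upper = xbar + z.inv_cdf(1 - alpha_right) * se    # shorter arm on the right
    return lower, upper

# Made-up numbers: xbar = 503, sigma = 100, n = 100.
print(asymmetric_ci(503, 100, 100))    # still 80% coverage, just off-center
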
-- DFB.
 
 Donald F. Burrill [EMAIL PROTECTED]
 184 Nashua Road, Bedford, NH 03110  603-471-7128






Re: What is a confidence interval?

2001-09-28 Thread Herman Rubin

In article <[EMAIL PROTECTED]>,
Radford Neal <[EMAIL PROTECTED]> wrote:
>In article ,
>John Jackson <[EMAIL PROTECTED]> wrote:

>>this is the second time I have seen this word used: "frequentist"? What does
>>it mean?

>It's the philosophy of statistics that holds that probability can
>meaningfully be applied only to repeatable phenomena, and that the
>meaning of a probability is the frequency with which something happens
>in the long run, when the phenomenon is repeated.  This rules out
>using probability to describe uncertainty about a parameter value,
>such as the mass of the hydrogen atom, since there's just one true
>value for the parameter, not a sequence of values.

>The frequentist view is currently the dominant one, especially in
>undergraduate statistics courses.  The alternative Bayesian philosophy
>holds the contrary view that probability can (and should) be used to
>describe uncertainty even about things that can't conceivably be
>regarded as coming from a sequence of repetitions.

>Confidence intervals are a frequentist concept.  Only in the Bayesian
>framework can one say things like, "There's a 95% chance that the
>parameter mu is in the interval (5.4, 7.1)".  That, however, is how
>people would like to interpret confidence intervals.  You can't
>interpret them that way, though, if you're abiding by the orthodox
>frequentist philosophy.

>   Radford Neal

There is another approach, which in my opinion is the only one that
makes sense for "physical" probability: that such probability simply
exists, and behaves the way probability is supposed to behave.  One
cannot actually conduct "independent trials with the same probability
of success"; so the theorem that, if one could do this, the relative
frequencies would converge almost surely to the true probability is at
most a justification, not a reasonable definition or characterization.

However, this still gives nothing beyond the result that
confidence intervals will contain the parameter with the
specified probability BEFORE the analysis of the data.
After the analysis, only the Bayesian approach allows the
type of statement most people make.
-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
[EMAIL PROTECTED] Phone: (765)494-6054   FAX: (765)494-0558





Re: What is a confidence interval?

2001-09-28 Thread Jerry Dallal

Dennis Roberts wrote:
> 
> At 01:23 AM 9/28/01 +, Radford Neal wrote:
> 
> radford makes a nice quick summary of the basic differences between
> bayesian and frequentist positions, which is helpful. these distinctions
> are important IF one is seriously studying statistical ideas
> 
> personally, i think that trying to make these distinction for introductory
> students however is a waste of time ... these are things for "majors" in
> statistics or "statisticians" to discuss and battle over

There is a difference between making the distinction and pointing
out that there is one.

Marilyn vos Savant had a relevant item in her "Ask Marilyn" column
of September 8, 2001.



Q: Is there any simple way for the average person to grasp the
theory of relativity?

A:  In my opinion, no. This is not a reflection on our intelligence
but rather on the extent of our learning. The general theory of
relativity expands the time and space proposals of the special
theory of relativity from the areas of electric and magnetic
phenomena to all physical phenomena, with emphasis on gravity.

Without being highly educated in physics, we can only read
summaries of the theory, accept the points on faith and then
successfully repeat what we've learned to others. But the theory of
relativity is not unique in this regard. All of us are capable of
understanding far more than we do; we just don't have the time to
educate ourselves in every field.

---

Students in intro courses are like that.  They don't yet have the
background to appreciate the difference between confidence and
probability, but if we don't at least point out that a difference
exists, they'll *never* learn it.





Re: What is a confidence interval?

2001-09-28 Thread Herman Rubin

In article <[EMAIL PROTECTED]>,
Dennis Roberts <[EMAIL PROTECTED]> wrote:
>At 01:23 AM 9/28/01 +, Radford Neal wrote:


>radford makes a nice quick summary of the basic differences between 
>bayesian and frequentist positions, which is helpful. these distinctions 
>are important IF one is seriously studying statistical ideas

>personally, i think that trying to make these distinction for introductory 
>students however is a waste of time ... these are things for "majors" in 
>statistics or "statisticians" to discuss and battle over

I disagree.  Otherwise, the student is introduced to what
is pure ritual, which is the way these things are used in
practice.  When a medical paper states that something is
not important because the p value is .052, it is clear that
their statistical understanding leaves far too much to be
desired.  Also, if they say that something is very
important because it is significant at the .001 level, the
same holds.  How much damage has been done by the use of
significance testing, confidence intervals, etc., by 
government agencies and journals in medicine, psychology,
education, etc.?
-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
[EMAIL PROTECTED] Phone: (765)494-6054   FAX: (765)494-0558





Re: What is a confidence interval?

2001-09-28 Thread Herman Rubin

In article <[EMAIL PROTECTED]>,
David Heiser <[EMAIL PROTECTED]> wrote:


>-Original Message-
>From: [EMAIL PROTECTED]
>[mailto:[EMAIL PROTECTED]]On Behalf Of Gordon D. Pusch
>Sent: Thursday, September 27, 2001 7:33 PM
>To: [EMAIL PROTECTED]
>Subject: Re: What is a confidence interval?


>"John Jackson" <[EMAIL PROTECTED]> writes:

>> this is the second time I have seen this word used: "frequentist"?
>> What does it mean?

>``Frequentist'' is the term used by Bayesians to describe partisans of
>Fisher et al's revisionist edict that ``probability'' shall be declared
>to be semantically equivalent to ``frequency of events'' in some mythical
>ensemble. Bayesians instead hold to the original Laplace-Bernoulli concept
>that probability is a measure of one's degree of confidence in an
>hypothesis,
>whereas the frequency of occurance of an outcome in a set of trials is a
>totally independent concept that does not even live in the same space as
>a probability.

See my other posting about the idea of a "real world"
probability, which is neither.

>-- Gordon D. Pusch

>I disagee with Pusch.

>Bayesians have a way of modifying definitions to support their arguments.

>Bayesians are those people who have to invent loss functions in order to
>make a decision.

Anybody needs something like this to decide what action to 
take.  If one action is marginally better than another, and
the information does not otherwise give strong reasons for
using it, why bother?  Decisions are actions.

The "behaviorist Bayes" approach to decision making under
uncertainty, only using self-consistency (coherence) to
compare actions, comes up with the utility (which comes form
coherence) of the action in ignorance of the state of nature
has to be a positive linear function of the utilities knowing
the state of nature; in other words, the "prior" for action
is just a weighting function.  If the utilities are
multiplied by a function of the state of nature, and the
weights divided, the same evaluation of the action results.
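
A tiny numerical illustration of that last point, with entirely made-up utilities and weights: multiply the utilities by any positive function of the state and divide the weights by the same function, and every action's weighted evaluation is unchanged.

# Tiny illustration: the "prior" acts only as a weighting function.
# Multiply utilities by c(state) and divide the weights by c(state):
# every action's weighted evaluation is unchanged.

states = ["s1", "s2"]
utility = {                     # made-up utilities u[action][state]
    "act_A": {"s1": 10.0, "s2": 2.0},
    "act_B": {"s1": 4.0,  "s2": 6.0},
}
weight = {"s1": 0.3, "s2": 0.7}          # made-up weighting function
c = {"s1": 5.0, "s2": 0.5}               # arbitrary positive function of the state

def value(u, w):
    return {a: sum(w[s] * u[a][s] for s in states) for a in u}

rescaled_u = {a: {s: c[s] * utility[a][s] for s in states} for a in utility}
rescaled_w = {s: weight[s] / c[s] for s in states}

print(value(utility, weight))            # original evaluations
print(value(rescaled_u, rescaled_w))     # identical evaluations
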
-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
[EMAIL PROTECTED] Phone: (765)494-6054   FAX: (765)494-0558





Re: What is a confidence interval?

2001-09-28 Thread Ronald Bloom

Jerry Dallal <[EMAIL PROTECTED]> wrote:
> John Jackson wrote:
>> 
>> this is the second time I have seen this word used: "frequentist"? 

> Since Radford Neal has already given an excellent explanation,
> let me add...

> A roulette wheel comes up with a red number 10 times in a row. When
> deciding how to place his/her next bet...

> The person on the street bets black, "Because it's got to come up
> eventually."

> The frequentist doesn't care, "Because red and black occur at random
> with equal chances and past history doesn't matter."

> The Bayesian bets red, "Because there's something strange going on
> here!"


  *I'd* bet The person on the street *also* bets red, "Because it 
looks like there's something strange going on!"





Re: What is a confidence interval?

2001-09-26 Thread William B. Ware

Have you tried simulations, with something like Resampling Stats or
Minitab?

WBW


On 26 Sep 2001, Warren wrote:

> Hi,
> I've been teaching an introductory stats course for several years.
> I always learn something from my students...hope they learn too.
> One thing I've learned is that confidence intervals are very tough
> for them.  They can compute them, but why?
> 
> Of course, we talk about confidence interval construction and I try
> to explain the usual "95% of all intervals so constructed will in the
> long run include the parameter...blah, blah".  I've looked at the
> Bayesian interpretation also but find this a bit hard for beginning
> students.
> 
> So, what is your best way to explain a CI?  How do you explain it
> without using some esoteric discussion of probability?
> 
> Now, here's another question.  If I roll 2 dice and
> find the mean of the pips on the upturned faces.  You can compute
> sample standard deviations, but if you roll 2 alike the s.d. is 0.
> So, you cannot compute a CI based on these samples.  How would
> you explain?
> 
> Thanks,
> 
> Warren
> 
> 






Re: What is a confidence interval?

2001-09-26 Thread Radford Neal

In article <[EMAIL PROTECTED]>,
Dennis Roberts <[EMAIL PROTECTED]> wrote:

>as a start, you could relate everyday examples where the notion of CI seems 
>to make sense
>
>A. you observe a friend in terms of his/her lateness when planning to meet 
>you somewhere ... over time, you take 'samples' of late values ... in a 
>sense you have means ... and then you form a rubric like ... for sam ... if 
>we plan on meeting at noon ... you can expect him at noon + or - 10 minutes 
>... you won't always be right but, maybe about 95% of the time you will?
>
>B. from real estate ads in a community, looking at sunday newspapers, you 
>find that several samples of average house prices for a 3 bedroom, 2 bath 
>place are certain values ... so, again, this is like have a bunch of means 
>... then, if someone asks you (visitor) about average prices of a bedroom, 
>2 bath house ... you might say ... 134,000 +/- 21,000 ... of course, you 
>won't always be right but  perhaps about 95% of the time?

These examples are NOT analogous to confidence intervals.  In both
examples, a distribution of values is inferred from a sample, and
based on this distribution, a PROBABILITY statement is made concerning
a future observation.  But a confidence interval is NOT a probability
statement concerning the unknown parameter.  In the frequentist
statistical framework in which confidence intervals exist,
probability statements about unknown parameters are not considered to
be meaningful.

   Radford Neal


Radford M. Neal   [EMAIL PROTECTED]
Dept. of Statistics and Dept. of Computer Science [EMAIL PROTECTED]
University of Toronto http://www.cs.utoronto.ca/~radford






Re: What is a confidence interval?

2001-09-26 Thread dennis roberts

some people are sure picky ...

given the context in which the original post was made ... it seems like the 
audience that the poster was hoping to be able to talk to about CIs was not 
very likely to understand them very well ... thus, it is not unreasonable 
to proffer examples to get one into having some sense of the notion

the examples below ... were only meant to portray ... the idea that 
observations have error ... and, over time and over samples ... one gets 
some idea about what the size of that error might be ... thus, when 
projecting about behavior ... we have a tool to know a bit about some 
underlying true value ... say the parameter for a person ... by using AN 
observation, and factoring in the error you have observed over time or 
samples ...

in essence, CIs are + and - around some observation where ... you 
conjecture within some range what the "truth" might be ... and, if you have 
evidence about size of error ... then, these CIs can say something about 
the parameter (again, within some range) in face of only seeing a limited 
sample of behavior

At 09:30 PM 9/26/01 +, Radford Neal wrote:
>In article <[EMAIL PROTECTED]>,
>Dennis Roberts <[EMAIL PROTECTED]> wrote:
>
> >as a start, you could relate everyday examples where the notion of CI seems
> >to make sense
> >
> >A. you observe a friend in terms of his/her lateness when planning to meet
> >you somewhere ... over time, you take 'samples' of late values ... in a
> >sense you have means ... and then you form a rubric like ... for sam ... if
> >we plan on meeting at noon ... you can expect him at noon + or - 10 minutes
> >... you won't always be right but, maybe about 95% of the time you will?
> >
> >B. from real estate ads in a community, looking at sunday newspapers, you
> >find that several samples of average house prices for a 3 bedroom, 2 bath
> >place are certain values ... so, again, this is like have a bunch of means
> >... then, if someone asks you (visitor) about average prices of a bedroom,
> >2 bath house ... you might say ... 134,000 +/- 21,000 ... of course, you
> >won't always be right but  perhaps about 95% of the time?
>
>These examples are NOT analogous to confidence intervals.  In both
>examples, a distribution of values is inferred from a sample, and
>based on this distribution, a PROBABILITY statement is made concerning
>a future observation.  But a confidence interval is NOT a probability
>statement concerning the unknown parameter.  In the frequentist
>statistical framework in which confidence intervals exists,
>probability statements about unknown parameters are not considered to
>be meaningful.

you are clearly misinterpreting, for whatever purpose, what i have said

i certainly have NOT said that a CI is a probability statement about any 
specific parameter or, being able to attach some probability value to some 
certain value as BEING the parameter

the p or confidence associated with CIs only makes sense in terms of 
dumping all possible CIs into a hat ... and, asking  what is the 
probability of pulling one out at random that captures the parameter 
(whatever the parameter might be) ...

the example i gave with some minitab work clearly showed that ... and made 
no other interpretation about p values in connection with CIs

perhaps some of you who seem to object so much to things i offer ... might 
offer some posts of your own in response to requests from those seeking 
help ... to make sure that they get the right message ...


>Radford Neal
>
>
>Radford M. Neal   [EMAIL PROTECTED]
>Dept. of Statistics and Dept. of Computer Science [EMAIL PROTECTED]
>University of Toronto http://www.cs.utoronto.ca/~radford
>
>
>

==
dennis roberts, penn state university
educational psychology, 8148632401
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






Re: What is a confidence interval?

2001-09-29 Thread dennis roberts

At 02:16 AM 9/29/01 +, John Jackson wrote:

>For any random inverval selected, there is a .05% probability that the
>sample will NOT yield an interval that yields the parameter being estimated
>and additonally such interval will not include any values in area
>represented by the left tail.  Can you make different statements about the
>left and right tail?

unless CIs work differently than i think ... about 1/2 the time the CI will 
miss to the right ... and 1/2 the time they will miss to the left ... thus, 
what if we labelled EACH CI with a tag called HIT ... or MISSleft ... or 
MISSright ... for 95% CIs ... the p of grabbing a CI that is HIT from all 
possible is about .95 ... the p for getting MISSleft PLUS MISSright is 
about .05 ... thus, about 1/2 of the .05 will be MISSleft and about 1/2 of 
the .05 will be MISSright

so, i don't see that you can say anything differentially important about 
one end or the other
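
A quick simulation in the same spirit as the earlier Minitab example can check this: the Python sketch below (illustrative seed and sizes) tags each 95% interval as a HIT, a miss sitting entirely below mu, or a miss sitting entirely above mu; the two kinds of misses come out roughly equal.

# Sketch: tag each 95% CI as HIT, MISSlow (whole interval below mu), or
# MISShigh (whole interval above mu); for a symmetric interval for the
# mean the two kinds of misses should be about equally common.
import random
from statistics import mean
from collections import Counter

random.seed(3)                          # illustrative seed
mu, sigma, n, reps = 500, 100, 100, 10_000
se = sigma / n ** 0.5
tags = Counter()

for _ in range(reps):
    xbar = mean(random.gauss(mu, sigma) for _ in range(n))
    lower, upper = xbar - 1.96 * se, xbar + 1.96 * se
    if upper < mu:
        tags["MISSlow"] += 1            # interval missed, sitting below mu
    elif lower > mu:
        tags["MISShigh"] += 1           # interval missed, sitting above mu
    else:
        tags["HIT"] += 1

print(tags)   # roughly 95% HIT, with the ~5% of misses split about evenly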




>"Michael F." <[EMAIL PROTECTED]> wrote in message
>[EMAIL PROTECTED]">news:[EMAIL PROTECTED]...
> > (Warren) wrote in message:
> >
> > > So, what is your best way to explain a CI?  How do you explain it
> > > without using some esoteric discussion of probability?
> >
> > I prefer to focus on the reliability of the estimate and say it is:
> >
> > "A range of values for an estimate that reflect its unreliability and
> > which contain the parameter of interest 95% of the time in the long run."
>
>
>
>

==
dennis roberts, penn state university
educational psychology, 8148632401
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






Re: What is a confidence interval?

2001-09-29 Thread John Jackson

Great explanation

"dennis roberts" <[EMAIL PROTECTED]> wrote in message
[EMAIL PROTECTED]">news:[EMAIL PROTECTED]...
> At 02:16 AM 9/29/01 +, John Jackson wrote:
>
> >For any random inverval selected, there is a .05% probability that the
> >sample will NOT yield an interval that yields the parameter being
estimated
> >and additonally such interval will not include any values in area
> >represented by the left tail.  Can you make different statements about
the
> >left and right tail?
>
> unless CIs work differently than i think ... about 1/2 the time the CI
will
> miss to the right ... and 1/2 the time they will miss to the left ...
thus,
> what if we labelled EACH CI with a tag called HIT ... or MISSleft ... or
> MISSright ... for 95% CIs ... the p of grabbing a CI that is HIT from all
> possible is about .95 ... the p for getting MISSleft PLUS MISSright is
> about .05 ... thus, about 1/2 of the .05 will be MISSleft and about 1/2 of
> the .05 will be MISSright
>
> so, i don't see that you can say anything differentially important about
> one end or the other
>
>
>
>
> >"Michael F." <[EMAIL PROTECTED]> wrote in message
> >[EMAIL PROTECTED]">news:[EMAIL PROTECTED]...
> > > (Warren) wrote in message:
> > >
> > > > So, what is your best way to explain a CI?  How do you explain it
> > > > without using some esoteric discussion of probability?
> > >
> > > I prefer to focus on the reliability of the estimate and say it is:
> > >
> > > "A range of values for an estimate that reflect its unreliability and
> > > which contain the parameter of interest 95% of the time in the long
run."
> >
> >
> >
> >
> >=
> >Instructions for joining and leaving this list and remarks about
> >the problem of INAPPROPRIATE MESSAGES are available at
> >   http://jse.stat.ncsu.edu/
> >=
>
> ==
> dennis roberts, penn state university
> educational psychology, 8148632401
> http://roberts.ed.psu.edu/users/droberts/drober~1.htm
>
>
>
> =
> Instructions for joining and leaving this list and remarks about
> the problem of INAPPROPRIATE MESSAGES are available at
>   http://jse.stat.ncsu.edu/
> =







Re: What is a confidence interval?

2001-09-27 Thread Konrad Halupka

Dennis Roberts wrote:

> as a start, you could relate everyday examples where the notion of CI 
> seems to make sense
> 
> A. you observe a friend in terms of his/her lateness when planning to 
> meet you somewhere ... over time, you take 'samples' of late values ... 
> in a sense you have means ... and then you form a rubric like ... for 
> sam ... if we plan on meeting at noon ... you can expect him at noon + 
> or - 10 minutes ... you won't always be right but, maybe about 95% of 
> the time you will?
> 
> B. from real estate ads in a community, looking at sunday newspapers, 
> you find that several samples of average house prices for a 3 bedroom, 2 
> bath place are certain values ... so, again, this is like have a bunch 
> of means ... then, if someone asks you (visitor) about average prices of 
> a bedroom, 2 bath house ... you might say ... 134,000 +/- 21,000 ... of 
> course, you won't always be right but  perhaps about 95% of the time?
> 

I suppose that in such situations most people would prefer to know a
+/- 2 SD range instead of a 95% CI.
k






Re: What is a confidence interval?

2001-09-27 Thread Dennis Roberts

At 07:33 AM 9/27/01 -0700, Warren wrote:


>Now, we take our sample mean and s.d. and we compute a CI.  We know
>we can't say anything about a probability for this single CI...it
>either
>contains the mean or it doesn't.  So, what DOES a CI tell us?  Does it
>really give you a range of values where you think the parameter is?


most disciplines have various models for how they describe/explain/predict 
events and behaviors

in each of these, there are assumptions ... i don't see how we can get 
around that ... one must start from SOME point of reference (of course, 
some models make us start from much more stringent starting points than 
others)

to me, in statistics, particularly of the inferential type, the biggest 
assumption that we make that is suspect is the one of SRS ... taking random 
samples ...

however, if we did NOT make some assumption about the data 
being  representative of the overall population ... which SRSing helps to 
ensure ... what can we do? what inferences could we possibly make?

in the case of CIs ... no, you are not sure at all that the range you got 
in your CI encompasses the parameter but, what are the odds that it does 
NOT? generally, fairly small. (well, all bets are off if you like to build 
25% CIs!) so, under these conditions, is it not reasonably assured that the 
parameter IS inside there someplace? this does not pinpoint WHERE within it 
is but, it does tend to eliminate from the long number line on which the CI 
rests ... what values do NOT seem to be too feasible for the parameter

unfortunately, if you are interested in knowing something about some 
parameter and, have no way to identify each and every population element 
and "measure" it (of course, even then, how do you know that your measure 
is "pure"?)... you are necessarily left with making this inference based on 
the data you have ... i don't see any way out of this bind ... OTHER than 
trying as best possible to take a good sample ... of decent size (to reduce 
sampling error) ... and then trusting the results that you find

if there is another way, i would certainly like to know it






_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






Re: What is a confidence interval?

2001-09-27 Thread Michael F.

(Warren) wrote in message:
 
> So, what is your best way to explain a CI?  How do you explain it
> without using some esoteric discussion of probability?

I prefer to focus on the reliability of the estimate and say it is:

"A range of values for an estimate that reflect its unreliability and 
which contain the parameter of interest 95% of the time in the long run."





Re: What is a confidence interval?

2001-09-27 Thread Jerry Dallal

Dennis Roberts wrote:

> in the case of CIs ... no, you are not sure at all that the range you got
> in your CI encompasses the parameter but, what are the odds that it does
> NOT? generally, fairly small. 

You're slipping into Bayesian territory...  I would say the answer
to your question is, "It depends", but more important, it doesn't
really matter. (It might beforehand when we are studying its
properties, but not afterwards when we've made the decision to use
it in practice.) If it did, we should be following Radford Neal's
suggestion that we all become Bayesians.





Re: What is a confidence interval?

2001-09-27 Thread John Jackson

this is the second time I have seen this word used: "frequentist"? What does
it mean?


"Radford Neal" <[EMAIL PROTECTED]> wrote in message
[EMAIL PROTECTED]">news:[EMAIL PROTECTED]...
> In article <[EMAIL PROTECTED]>,
> Dennis Roberts <[EMAIL PROTECTED]> wrote:
>
> >as a start, you could relate everyday examples where the notion of CI
seems
> >to make sense
> >
> >A. you observe a friend in terms of his/her lateness when planning to
meet
> >you somewhere ... over time, you take 'samples' of late values ... in a
> >sense you have means ... and then you form a rubric like ... for sam ...
if
> >we plan on meeting at noon ... you can expect him at noon + or - 10
minutes
> >... you won't always be right but, maybe about 95% of the time you will?
> >
> >B. from real estate ads in a community, looking at sunday newspapers, you
> >find that several samples of average house prices for a 3 bedroom, 2 bath
> >place are certain values ... so, again, this is like have a bunch of
means
> >... then, if someone asks you (visitor) about average prices of a
bedroom,
> >2 bath house ... you might say ... 134,000 +/- 21,000 ... of course, you
> >won't always be right but  perhaps about 95% of the time?
>
> These examples are NOT analogous to confidence intervals.  In both
> examples, a distribution of values is inferred from a sample, and
> based on this distribution, a PROBABILITY statement is made concerning
> a future observation.  But a confidence interval is NOT a probability
> statement concerning the unknown parameter.  In the frequentist
> statistical framework in which confidence intervals exists,
> probability statements about unknown parameters are not considered to
> be meaningful.
>
>Radford Neal
>
> --
--
> Radford M. Neal
[EMAIL PROTECTED]
> Dept. of Statistics and Dept. of Computer Science
[EMAIL PROTECTED]
> University of Toronto
http://www.cs.utoronto.ca/~radford
> --
--







Re: What is a confidence interval?

2001-09-27 Thread Radford Neal

In article ,
John Jackson <[EMAIL PROTECTED]> wrote:

>this is the second time I have seen this word used: "frequentist"? What does
>it mean?

It's the philosophy of statistics that holds that probability can
meaningfully be applied only to repeatable phenomena, and that the
meaning of a probability is the frequency with which something happens
in the long run, when the phenomenon is repeated.  This rules out
using probability to describe uncertainty about a parameter value,
such as the mass of the hydrogen atom, since there's just one true
value for the parameter, not a sequence of values.

The frequentist view is currently the dominant one, especially in
undergraduate statistics courses.  The alternative Bayesian philosophy
holds the contrary view that probability can (and should) be used to
describe uncertainty even about things that can't conceivably be
regarded as coming from a sequence of repetitions.

Confidence intervals are a frequentist concept.  Only in the Bayesian
framework can one say things like, "There's a 95% chance that the
parameter mu is in the interval (5.4, 7.1)".  That, however, is how
people would like to interpret confidence intervals.  You can't
interpret them that way, though, if you're abiding by the orthodox
frequentist philosophy.

   Radford Neal


Radford M. Neal   [EMAIL PROTECTED]
Dept. of Statistics and Dept. of Computer Science [EMAIL PROTECTED]
University of Toronto http://www.cs.utoronto.ca/~radford






Re: What is a confidence interval?

2001-09-27 Thread Gordon D. Pusch

"John Jackson" <[EMAIL PROTECTED]> writes:

> this is the second time I have seen this word used: "frequentist"? 
> What does it mean?

``Frequentist'' is the term used by Bayesians to describe partisans of
Fisher et al's revisionist edict that ``probability'' shall be declared 
to be semantically equivalent to ``frequency of events'' in some mythical
ensemble. Bayesians instead hold to the original Laplace-Bernoulli concept
that probability is a measure of one's degree of confidence in an hypothesis,
whereas the frequency of occurrence of an outcome in a set of trials is a
totally independent concept that does not even live in the same space as 
a probability.


-- Gordon D. Pusch   

perl -e '$_ = "gdpusch\@NO.xnet.SPAM.com\n"; s/NO\.//; s/SPAM\.//; print;'






RE: What is a confidence interval?

2001-09-27 Thread David Heiser



-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Gordon D. Pusch
Sent: Thursday, September 27, 2001 7:33 PM
To: [EMAIL PROTECTED]
Subject: Re: What is a confidence interval?


"John Jackson" <[EMAIL PROTECTED]> writes:

> this is the second time I have seen this word used: "frequentist"?
> What does it mean?

``Frequentist'' is the term used by Bayesians to describe partisans of
Fisher et al's revisionist edict that ``probability'' shall be declared
to be semantically equivalent to ``frequency of events'' in some mythical
ensemble. Bayesians instead hold to the original Laplace-Bernoulli concept
that probability is a measure of one's degree of confidence in an
hypothesis,
whereas the frequency of occurance of an outcome in a set of trials is a
totally independent concept that does not even live in the same space as
a probability.


-- Gordon D. Pusch
--
I disagree with Pusch.

Bayesians have a way of modifying definitions to support their arguments.

Bayesians are those people who have to invent loss functions in order to
make a decision.

A frequentist defines the concept of probability in terms of gaming,
where the probability is defined as the ratio of the number of times
an event occurs (such as a one showing on a die) to the total number
of trials, as the number of repeats (identically distributed
independent random events) becomes very, very large. This was very
difficult to define mathematically, since what constitutes "a
repetition" could not be adequately defined.
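
A one-minute illustration of that limiting relative frequency idea (fair die assumed, roll counts illustrative): as the number of rolls grows, the observed proportion of ones settles near 1/6.

# Illustration of probability as a limiting relative frequency:
# the observed proportion of "ones" in repeated die rolls approaches 1/6.
import random

random.seed(4)
for k in (10, 100, 1_000, 10_000, 100_000):
    rolls = [random.randint(1, 6) for _ in range(k)]
    print(k, "rolls -> proportion of ones:", rolls.count(1) / k)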

Von Mises is usually taken as the main source of this concept.

There is a fundamental problem of defining probability, without involving
"circular" references. The terms identically distributed and independent
(random) events depend on the term "equi-probable", and then we are right
back at square 1. The definition of "random" involves something that
can't be defined, except by saying that the next random event can't be
predicted. When a 2 keeps coming up by "chance" with my die, then what?

Bayesians say it is just a matter of belief, whatever that is. This leaves
probability undefined, as a mathematical property with values between 0 and
1.

Whether there is such a real thing as zero probability or a probability of
1 or not, for values between 0 and 1, statisticians have to resort to a
frequentist viewpoint in order to establish limiting values as the
"number of repetitions" approaches infinity.

This is why it is so hard to teach statistics. It all depends on
the student's internal understanding of what probability means. If you
are comfortable with belief, then fine. Now tell me what the difference
is between a p value of 0.05 and 0.06 in real world terms?

If my study has a lot of "sizzle" and has important ramifications
about what we believe about our universe, a p value may not be
important. After all, an early proof of Einstein's theory of relativity was
based on a pretty sloppy single observation of starlight deflected
by the sun during an eclipse.

Fisher in his reflective later life, took great pains to avoid making
a hard and fast decision based on probability values. He always said that it
was up to the investigator to determine whether a p value of 0.06 meant
that there was an improbable chance that random events could have
determined the outcome of his experiment, not the publication editor.

Nowadays it is determined by the stupid peer review system. Also by editors that
are looking hard at the best way to determine belief in the claims of the
experimenter when they haven't the foggiest idea of what the investigation
was about, and corporate profits or the status quo is the most important
issue. This was very probably the situation in England in the 1950's
which pushed Fisher to go to Australia.

Joseph F. Lucke said this in a recent post:
--
I saw the same show on Nova.  Flower had a different definition of
randomness than we now use.  We now define randomness as (probabilistic)
independence, but that was not always the case.  In the 1930s or so, the
mathematician-philosopher-statistician von Mises developed a theory of
probability based on frequencies.  This was not the Kolmogorov version in
which the axioms are interpreted as frequencies, but an axiomatic system
derived from the properties of repeated events.  Von Mises introduced the
notion of a "collective" or sequence of potentially infinitely repeatable
events.  Probability was defined as the limiting relative frequency in this
collective. One of his axioms was that the events within the collective were
random.   But because he had not yet developed the concept of independence
in hi

Re: What is a confidence interval?

2001-09-28 Thread Dennis Roberts

At 01:23 AM 9/28/01 +, Radford Neal wrote:


radford makes a nice quick summary of the basic differences between 
bayesian and frequentist positions, which is helpful. these distinctions 
are important IF one is seriously studying statistical ideas

personally, i think that trying to make these distinctions for introductory 
students however is a waste of time ... these are things for "majors" in 
statistics or "statisticians" to discuss and battle over

in reference to a CI, the critical issue is CAN it be said that ... in the 
long run, there is a certain probability of producing CIs (using some CI 
construction procedure) that ... contain the parameter value ... that is, 
how FREQUENTLY we expect the CIs to contain the true value ... well, yes we can

THAT is the important idea and, i think that trying (for the sake of 
edification of the intro student) to defend it or reject it according to 
being proper bayesian/frequentist or improper ... is totally irrelevant to 
the basic concept

but, that is just my opinion



>In article ,
>John Jackson <[EMAIL PROTECTED]> wrote:
>
> >this is the second time I have seen this word used: "frequentist"? What does
> >it mean?
>
>It's the philosophy of statistics that holds that probability can
>meaningfully be applied only to repeatable phenomena, and that the
>meaning of a probability is the frequency with which something happens
>in the long run, when the phenomenon is repeated.  This rules out
>using probability to describe uncertainty about a parameter value,
>such as the mass of the hydrogen atom, since there's just one true
>value for the parameter, not a sequence of values.

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






Re: What is a confidence interval?

2001-09-28 Thread Jerry Dallal

John Jackson wrote:
> 
> this is the second time I have seen this word used: "frequentist"? 

Since Radford Neal has already given an excellent explanation,
let me add...

A roulette wheel comes up with a red number 10 times in a row. When
deciding how to place his/her next bet...

The person on the street bets black, "Because it's got to come up
eventually."

The frequentist doesn't care, "Because red and black occur at random
with equal chances and past history doesn't matter."

The Bayesian bets red, "Because there's something strange going on
here!"





Re: What is a confidence interval?

2001-09-28 Thread Radford Neal

In article <[EMAIL PROTECTED]>,
Dennis Roberts <[EMAIL PROTECTED]> wrote:

>in reference to a CI, the critical issue is CAN it be said that ... in the 
>long run, there is a certain probability of producing CIs (using some CI 
>construction procedure) that ... contain the parameter value ... that is, 
>how FREQUENTLY we expect the CIs to contain the true value ... well, yes we can
>
>THAT is the important idea and, i think that if we try (for the sake of 
>edification of the intro student)to defend it or reject it according to 
>being proper bayesian/frequentist or improper ... is totally irrelevant to 
>the basic concept

THAT is indeed the important idea for understanding what a frequentist
C. I. really is.  Unfortunately, it is NOT the property that is important 
to anyone who is actually using a confidence interval, who OF COURSE is 
interested in whether the particular confidence interval they obtained 
contains the true parameter value.  I emphasize the OF COURSE because it
seems that some frequentists have managed to contort their thinking to
the point where they actually think that the long run coverage probability
of the C. I. is what users are interested in, in defiance of all common 
sense.  More commonly, though, the tendency is to just give the Bayesian
interpretation, even though it is not justified.

This is not just an academic point.  If you tell someone that the 95% C. I.
obtained has the Bayesian interpretation, when it isn't actually the result
of a Bayesian procedure, they may well decide that even though they had 
previously thought that the parameter value was outside this interval, they
must have been wrong, since the statistician says there's a 95% chance the
parameter is in the C. I.  This is all wrong.  There are two more-or-less
right approaches, which are:

  1) Use a frequentist C. I., while understanding that the parameter does 
 NOT necessarily have a 95% chance of being in the interval you obtained.
 You have to informally weigh in your mind whether it is more likely that
 the parameter is inside the interval, or that this is one of the 5%
 of the intervals that don't contain the true parameter value.  There's
 no mathematical justification for deciding either way.

  2) Use a Bayesian procedure instead, which will of course include specification
 of a prior distribution for the parameter.  Then you can find an interval
 for which you can indeed say that the parameter has a 95% chance of lying
 inside.  (Or you might just look at the whole posterior distribution.)

If (1) sounds like a rather convoluted way of getting to a final situation in 
which you make a subjective judgement, then maybe you would prefer (2), in which 
the subjectivity is explicit in the prior distribution.
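
To make options (1) and (2) concrete, here is a rough sketch for a normal mean with sigma treated as known and a conjugate normal prior; the data, the prior parameters m0 and tau0, and the variable names are purely illustrative assumptions.

# Sketch: frequentist 95% CI vs. Bayesian 95% credible interval for a
# normal mean with sigma known and a conjugate normal prior.
# Prior parameters and data below are purely illustrative.
import math
from statistics import mean, NormalDist

z = NormalDist().inv_cdf(0.975)

data = [5.9, 6.4, 7.1, 6.8, 5.5, 6.2, 7.0, 6.6]   # made-up observations
sigma = 0.8                                        # assumed known
n, xbar = len(data), mean(data)

# (1) frequentist CI: xbar +/- z * sigma/sqrt(n)
se = sigma / math.sqrt(n)
ci = (xbar - z * se, xbar + z * se)

# (2) Bayesian credible interval with prior mu ~ N(m0, tau0^2)
m0, tau0 = 6.0, 2.0
post_prec = 1 / tau0**2 + n / sigma**2
post_mean = (m0 / tau0**2 + n * xbar / sigma**2) / post_prec
post_sd = math.sqrt(1 / post_prec)
cred = (post_mean - z * post_sd, post_mean + z * post_sd)

print("95% confidence interval:", ci)
print("95% credible interval:  ", cred)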

   Radford Neal


Radford M. Neal   [EMAIL PROTECTED]
Dept. of Statistics and Dept. of Computer Science [EMAIL PROTECTED]
University of Toronto http://www.cs.utoronto.ca/~radford






Re: What is a confidence interval?

2001-09-28 Thread John Jackson

How do you describe the data that does not reside in the area
described by the confidence interval?

For example, you have a two tailed situation, with a left tail of .1, a
middle of .8 and a right tail of .1, the confidence interval for the middle
is 90%.

Is it correct to say with respect to a value falling outside of the interval
in the right tail:

For any random interval selected, there is a .05% probability that the
sample will NOT yield an interval that yields the parameter being estimated
and additionally such interval will not include any values in the area
represented by the left tail.  Can you make different statements about the
left and right tail?



"Michael F." <[EMAIL PROTECTED]> wrote in message
[EMAIL PROTECTED]">news:[EMAIL PROTECTED]...
> (Warren) wrote in message:
>
> > So, what is your best way to explain a CI?  How do you explain it
> > without using some esoteric discussion of probability?
>
> I prefer to focus on the reliability of the estimate and say it is:
>
> "A range of values for an estimate that reflect its unreliability and
> which contain the parameter of interest 95% of the time in the long run."



