Re: E as a % of a standard deviation

2001-09-26 Thread Glen Barnett


John Jackson <[EMAIL PROTECTED]> wrote in message
news:MGns7.49824$[EMAIL PROTECTED]...
> re: the formula:
>
>   n   = (Z?/e)2

This formula hasn't come over at all well.  Please note that newsgroups
work in ascii. What's it supposed to look like? What's it a formula for?

> could you express E as a  % of a standard deviation .

What's E? The above formula doesn't have a (capital) E.

What is Z? n? e?

> In other words does a .02 error translate into .02/1 standard deviations,
> assuming you are dealing w/a normal distribution?

? How does this relate to the formula above?

Glen



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: What is a confidence interval?

2001-09-26 Thread dennis roberts

some people are sure picky ...

given the context in which the original post was made ... it seems like the 
audience that the poster was hoping to be able to talk to about CIs was not 
very likely to understand them very well ... thus, it is not unreasonable 
to proffer examples to get one into having some sense of the notion

the examples below ... were only meant to portray ... the idea that 
observations have error ... and, over time and over samples ... one gets 
some idea about what the size of that error might be ... thus, when 
projecting about behavior ... we have a tool to know a bit about some 
underlying value ... say the parameter for a person ... by using AN 
observation, and factoring in the error you have observed over time or 
samples ...

in essence, CIs are + and - around some observation where ... you 
conjecture within some range what the "truth" might be ... and, if you have 
evidence about size of error ... then, these CIs can say something about 
the parameter (again, within some range) in face of only seeing a limited 
sample of behavior

At 09:30 PM 9/26/01 +, Radford Neal wrote:
>In article <[EMAIL PROTECTED]>,
>Dennis Roberts <[EMAIL PROTECTED]> wrote:
>
> >as a start, you could relate everyday examples where the notion of CI seems
> >to make sense
> >
> >A. you observe a friend in terms of his/her lateness when planning to meet
> >you somewhere ... over time, you take 'samples' of late values ... in a
> >sense you have means ... and then you form a rubric like ... for sam ... if
> >we plan on meeting at noon ... you can expect him at noon + or - 10 minutes
> >... you won't always be right but, maybe about 95% of the time you will?
> >
> >B. from real estate ads in a community, looking at sunday newspapers, you
> >find that several samples of average house prices for a 3 bedroom, 2 bath
> >place are certain values ... so, again, this is like having a bunch of means
> >... then, if someone asks you (visitor) about average prices of a 3 bedroom,
> >2 bath house ... you might say ... 134,000 +/- 21,000 ... of course, you
> >won't always be right but perhaps about 95% of the time?
>
>These examples are NOT analogous to confidence intervals.  In both
>examples, a distribution of values is inferred from a sample, and
>based on this distribution, a PROBABILITY statement is made concerning
>a future observation.  But a confidence interval is NOT a probability
>statement concerning the unknown parameter.  In the frequentist
>statistical framework in which confidence intervals exist,
>probability statements about unknown parameters are not considered to
>be meaningful.

you are clearly misinterpreting, for whatever purpose, what i have said

i certainly have NOT said that a CI is a probability statement about any 
specific parameter or, being able to attach some probability value to some 
certain value as BEING the parameter

the p or confidence associated with CIs only makes sense in terms of 
dumping all possible CIs into a hat ... and, asking  what is the 
probability of pulling one out at random that captures the parameter 
(whatever the parameter might be) ...

the example i gave with some minitab work clearly showed that ... and made 
no other interpretation about p values in connection with CIs

perhaps some of you who seem to object so much to things i offer ... might 
offer some posts of your own in response to requests from those seeking 
help ... to make sure that they get the right message ...


>Radford Neal
>
>
>Radford M. Neal   [EMAIL PROTECTED]
>Dept. of Statistics and Dept. of Computer Science [EMAIL PROTECTED]
>University of Toronto http://www.cs.utoronto.ca/~radford
>
>
>

==
dennis roberts, penn state university
educational psychology, 8148632401
http://roberts.ed.psu.edu/users/droberts/drober~1.htm









Re: What is a confidence interval?

2001-09-26 Thread Radford Neal

In article <[EMAIL PROTECTED]>,
Dennis Roberts <[EMAIL PROTECTED]> wrote:

>as a start, you could relate everyday examples where the notion of CI seems 
>to make sense
>
>A. you observe a friend in terms of his/her lateness when planning to meet 
>you somewhere ... over time, you take 'samples' of late values ... in a 
>sense you have means ... and then you form a rubric like ... for sam ... if 
>we plan on meeting at noon ... you can expect him at noon + or - 10 minutes 
>... you won't always be right but, maybe about 95% of the time you will?
>
>B. from real estate ads in a community, looking at sunday newspapers, you 
>find that several samples of average house prices for a 3 bedroom, 2 bath 
>place are certain values ... so, again, this is like having a bunch of means 
>... then, if someone asks you (visitor) about average prices of a 3 bedroom, 
>2 bath house ... you might say ... 134,000 +/- 21,000 ... of course, you 
>won't always be right but perhaps about 95% of the time?

These examples are NOT analogous to confidence intervals.  In both
examples, a distribution of values is inferred from a sample, and
based on this distribution, a PROBABILITY statement is made concerning
a future observation.  But a confidence interval is NOT a probability
statement concerning the unknown parameter.  In the frequentist
statistical framework in which confidence intervals exist,
probability statements about unknown parameters are not considered to
be meaningful.

   Radford Neal


Radford M. Neal   [EMAIL PROTECTED]
Dept. of Statistics and Dept. of Computer Science [EMAIL PROTECTED]
University of Toronto http://www.cs.utoronto.ca/~radford






Re: What is a confidence interval?

2001-09-26 Thread William B. Ware

Have you tried simulations?  with something like Resampling Stats or
Minitab?

WBW


On 26 Sep 2001, Warren wrote:

> Hi,
> I've been teaching an introductory stats course for several years.
> I always learn something from my students...hope they learn too.
> One thing I've learned is that confidence intervals are very tough
> for them.  They can compute them, but why?
> 
> Of course, we talk about confidence interval construction and I try
> to explain the usual "95% of all intervals so constructed will in the
> long run include the parameter...blah, blah".  I've looked at the
> Bayesian interpretation also but find this a bit hard for beginning
> students.
> 
> So, what is your best way to explain a CI?  How do you explain it
> without using some esoteric discussion of probability?
> 
> Now, here's another question.  Suppose I roll 2 dice and
> find the mean of the pips on the upturned faces.  You can compute
> a sample standard deviation, but if you roll 2 alike the s.d. is 0.
> So, you cannot compute a CI based on such samples.  How would
> you explain that?
> 
> Thanks,
> 
> Warren
> 
> 
> 






Re: Analysis of covariance

2001-09-26 Thread Dennis Roberts

At 02:26 PM 9/26/01 -0500, Burke Johnson wrote:
> From my understanding, there are three popular ways to analyze the 
> following design (let's call it the pretest-posttest control-group design):
>
>R Pretest   Treatment   Posttest
>R PretestControl   Posttest

if random assignment has occurred ... then, we assume and we had better 
find that the means on the pretest are close to being the same ... if we 
don't, then we wonder about random assignment (which creates a mess)

anyway, i digress ...

what i would do is to do a simple t test on the difference in posttest 
means and, if you find something here ... then that means that treatment 
"changed" differentially compared to control

if that happens, why do anything more complicated? has not the answer to 
your main question been found?

now, what if you don't ... then, maybe something a bit more complex is 
appropriate

IMHO
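the posttest-only comparison suggested above is just a two-sample t test ... a minimal sketch in Python (hand-rolled Welch t statistic; the posttest scores are invented purely for illustration):

```python
import math

def welch_t(a, b):
    """Two-sample t statistic (Welch), comparing the means of two groups."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se = math.sqrt(va / len(a) + vb / len(b))          # SE of the mean difference
    return (ma - mb) / se

treatment = [78, 82, 75, 90, 85, 88]   # hypothetical posttest scores
control   = [70, 74, 69, 77, 72, 75]
t = welch_t(treatment, control)        # a large |t| suggests a treatment effect
```

a statistics package would also give the degrees of freedom and p value; the point here is only that the posttest comparison reduces to one familiar statistic.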

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






Re: E as a % of a standard deviation

2001-09-26 Thread Dennis Roberts

At 04:49 PM 9/26/01 +, John Jackson wrote:
>re: the formula:
>
>   n   = (Z?/e)2
>
>
>could you express E as a  % of a standard deviation .
>
>In other words does a .02 error translate into .02/1 standard deviations,
>assuming you are dealing w/a normal distribution?


well, let's see ... e is the margin of error ... using the formula for a CI 
for a population mean ..

   X bar +/- z * stan error of the mean

so, the margin of error or e ... is z * standard error of the mean

now, let's assume that we stick to 95% CIs ... so the z will be about 2 ... 
that leaves us with the standard error of the mean ... or, sigma / sqrt n

let's say that we were estimating SAT M scores and assumed a sigma of about 
100 and were taking a sample size of n=100 (to make my figuring simple) ... 
this would give us a standard error of 100/10 = 10 so, the margin of error 
would be:

   e = 2 * 10 or about 20

so, 20/100 = .2 ... that is, the e or margin of error is about .2 of the 
population sd

if we had used a sample size of 400 ... then the standard error would have 
been: 100/20 = 5

and our e or margin of error would be 2 * 5 = 10

so, the margin of error is now 10/100 or .1 of a sigma unit OR 1/2 the size 
it was before

but, i don't see what you have accomplished by doing this ... rather than 
just reporting the margin of error ... 10 versus 20 ... which is also 1/2 
the size

since z * stan error is really score UNITS ... and, the way you've done it ... 
.2 or .1 would represent fractions of sigma ... which still amounts to 
score UNITS ... i don't think anything new has been done ... certainly, no 
new information has been created
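the arithmetic above is easy to verify in a few lines (a sketch in Python; z is rounded to 2 for a 95% interval, as in the post):

```python
# Margin of error e = z * sigma / sqrt(n), expressed as a fraction of sigma.
import math

def moe_fraction_of_sigma(sigma, n, z=2.0):
    """Return (margin of error, margin of error as a fraction of sigma)."""
    e = z * sigma / math.sqrt(n)
    return e, e / sigma

e100, frac100 = moe_fraction_of_sigma(sigma=100, n=100)  # e = 20, fraction = 0.2
e400, frac400 = moe_fraction_of_sigma(sigma=100, n=400)  # e = 10, fraction = 0.1
```

quadrupling n halves both the margin of error and its expression as a fraction of sigma, which is the point made above: the rescaling adds no new information.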








_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






Re: What is a confidence interval?

2001-09-26 Thread Jon Cryer

Dennis:

Example A is a mistaken interpretation of a confidence interval for a mean.
Unfortunately, this is a very common misinterpretation.
What you have described in Example A is a _prediction_ interval for
an individual observation. Prediction intervals rarely get taught except
(maybe) in the context of a regression model.
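The numerical contrast makes the distinction concrete. Under normal theory, a 95% interval for the *mean* has half-width z*s/sqrt(n), while a 95% interval for the *next observation* has half-width roughly z*s*sqrt(1 + 1/n) (a sketch with arbitrary illustrative numbers, using z rather than t):

```python
import math

z, s, n = 1.96, 10.0, 25

ci_half = z * s / math.sqrt(n)          # half-width of the CI for the mean
pi_half = z * s * math.sqrt(1 + 1 / n)  # half-width of the prediction interval

# The CI shrinks toward 0 as n grows, but the prediction interval stays
# wide: it must cover the spread of individual observations, not the mean.
```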

Jon

At 03:11 PM 9/26/01 -0400, you wrote:
>as a start, you could relate everyday examples where the notion of CI seems 
>to make sense
>
>A. you observe a friend in terms of his/her lateness when planning to meet 
>you somewhere ... over time, you take 'samples' of late values ... in a 
>sense you have means ... and then you form a rubric like ... for sam ... if 
>we plan on meeting at noon ... you can expect him at noon + or - 10 minutes 
>... you won't always be right but, maybe about 95% of the time you will?
>
>B. from real estate ads in a community, looking at sunday newspapers, you 
>find that several samples of average house prices for a 3 bedroom, 2 bath 
>place are certain values ... so, again, this is like having a bunch of means 
>... then, if someone asks you (visitor) about average prices of a 3 bedroom, 
>2 bath house ... you might say ... 134,000 +/- 21,000 ... of course, you 
>won't always be right but perhaps about 95% of the time?
>
>but, more specifically, there are a number of things you can do
>
>1. students certainly have to know something about sampling error ... and 
>the notion of a sampling distribution
>
>2. they have to realize that when taking a sample, say using the sample 
>mean, that the mean they get could fall anywhere within that sampling 
>distribution
>
>3. if we know something about #1 AND, we have a sample mean ... then, #1 
>sets sort of a limit on how far away the truth can be GIVEN that sample 
>mean or statistic ...
>
>4. thus, we use the statistics (ie, sample mean) and add and subtract some 
>error (based on #1) ... in such a way that we will be correct (in saying 
>that the parameter will fall within the CI) some % of the time ... say, 95%?
>
>it is easy to show this via simulation ... minitab for example can help you 
>do this
>
>here is an example ... let's say we are taking samples of size 100 from a 
>population of SAT M scores ... where we assume the mu is 500 and sigma is 
>100 ... i will take a 1000 SRS samples ... and summarize the results of 
>building 1000 CIs
>
>MTB > rand 1000 c1-c100; <<< made 1000 rows ... and 100 columns ... each 
>ROW will be a sample
>SUBC> norm 500 100. <<< sampled from population with mu = 500 and sigma = 100
>MTB > rmean c1-c100 c101 <<< got means for 1000 samples and put in c101
>MTB > name c101='sampmean'
>MTB > let c102=c101-2*10 <<< found lower point of 95% CI
>MTB > let c103=c101+2*10 <<< found upper point of 95% CI
>MTB > name c102='lowerpt' c103='upperpt'
>MTB > let c104=(c102 lt 500) and (c103 gt 500)  <<< this evaluates if the 
>intervals capture 500 or not
>MTB > sum c104
>
>Sum of C104
>
>Sum of C104 = 954.00 <<< 954 of the 1000 intervals captured 500
>MTB > let k1=954/1000
>MTB > prin k1
>
>Data Display
>
>K1    0.954000 <<< pretty close to 95%
>MTB > prin c102 c103 c104 <<<  a few of the 1000 intervals are shown below
>
>Data Display
>
>
>  Row   lowerpt   upperpt   C104
>
>1   477.365   517.365  1
>2   500.448   540.448  0  <<< here is one that missed 500 ...the 
>other 9 captured 500
>3   480.304   520.304  1
>4   480.457   520.457  1
>5   485.006   525.006  1
>6   479.585   519.585  1
>7   480.382   520.382  1
>8   481.189   521.189  1
>9   486.166   526.166  1
>   10   494.388   534.388  1
>
>
>
>
>
>_
>dennis roberts, educational psychology, penn state university
>208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
>http://roberts.ed.psu.edu/users/droberts/drober~1.htm
>
>
>
>
 ___
--- |   \
Jon Cryer, Professor Emeritus  ( )
Dept. of Statistics  www.stat.uiowa.edu/~jcryer \\_University
 and Actuarial Science   office 319-335-0819 \ *   \of Iowa
The University of Iowa   home   319-351-4639  \/Hawkeyes
Iowa City, IA 52242  FAX319-335-3017   |__ )
---   V

"It ain't so much the things we don't know that get us into trouble. 
It's the things we do know that just ain't so." --Artemus Ward 



Analysis of covariance

2001-09-26 Thread Burke Johnson

From my understanding, there are three popular ways to analyze the following design
(let's call it the pretest-posttest control-group design):

R Pretest   Treatment   Posttest 
R PretestControl   Posttest

In the social sciences (e.g., see Pedhazur's popular regression text), the most 
popular analysis seems to be to run a GLM (this version is often called an ANCOVA), 
where Y is the posttest measure, X1 is the pretest measure, and X2 is the treatment 
variable. Assuming that X1 and X2 do not interact, one's estimate of the treatment 
effect is given by B2 (i.e., the partial regression coefficient for the treatment 
variable, which adjusts for pretest differences). 

Another traditionally popular analysis for the design given above is to compute a new, 
gain score variable (posttest minus pretest) for all cases and then run a GLM (ANOVA) 
to see if the difference between the gains (which is the estimate of the treatment 
effect) is statistically significant. 

The third, and somewhat less popular (?) way to analyze the above design is to do a 
mixed ANOVA model (which is also a GLM but it is harder to write out), where Y is the 
posttest, X1 is "time" which is a  repeated measures variable (e.g., time is 1 for 
pretest and 2 for posttest for all cases), and X2 is the between group, treatment 
variable. In this case one looks for treatment impact by testing the statistical 
significance of the two-way interaction between the time and the treatment variables. 
Usually, you ask if the difference between the means at time two is greater than the 
difference at time one (i.e., you hope that the treatment lines will not be parallel).

Results will vary depending on which of these three approaches you use, because each 
approach estimates the counterfactual in a slightly different way. I believe it was 
Reichardt and Mark (in Handbook of Applied Social Research Methods) that suggested 
analyzing your data using more than one of these three statistical methods. 
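As a toy illustration of the second (gain-score) approach, here is a sketch in Python with simulated data; the true treatment effect is set to 5, and the difference in mean gains between groups recovers it (all numbers are invented for illustration):

```python
import random

random.seed(42)
N, TRUE_EFFECT = 500, 5.0

def gains(effect):
    """Simulate posttest-minus-pretest gains for one group of N cases."""
    out = []
    for _ in range(N):
        pre = random.gauss(50, 10)
        post = pre + random.gauss(0, 5) + effect  # posttest = pretest + noise + effect
        out.append(post - pre)
    return out

treat_gain = sum(gains(TRUE_EFFECT)) / N
ctrl_gain = sum(gains(0.0)) / N
estimate = treat_gain - ctrl_gain   # gain-score estimate of the treatment effect
```

With these simulated data the gain-score estimate lands near the true effect of 5; the ANCOVA and mixed-ANOVA estimates would generally differ somewhat, as noted above, because each estimates the counterfactual differently.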

I'd be interested in any thoughts you have about these three approaches.

Take care,
Burke Johnson
http://www.coe.usouthal.edu/bset/Faculty/BJohnson/Burke.html






Re: What is a confidence interval?

2001-09-26 Thread Dennis Roberts

as a start, you could relate everyday examples where the notion of CI seems 
to make sense

A. you observe a friend in terms of his/her lateness when planning to meet 
you somewhere ... over time, you take 'samples' of late values ... in a 
sense you have means ... and then you form a rubric like ... for sam ... if 
we plan on meeting at noon ... you can expect him at noon + or - 10 minutes 
... you won't always be right but, maybe about 95% of the time you will?

B. from real estate ads in a community, looking at sunday newspapers, you 
find that several samples of average house prices for a 3 bedroom, 2 bath 
place are certain values ... so, again, this is like having a bunch of means 
... then, if someone asks you (visitor) about average prices of a 3 bedroom, 
2 bath house ... you might say ... 134,000 +/- 21,000 ... of course, you 
won't always be right but perhaps about 95% of the time?

but, more specifically, there are a number of things you can do

1. students certainly have to know something about sampling error ... and 
the notion of a sampling distribution

2. they have to realize that when taking a sample, say using the sample 
mean, that the mean they get could fall anywhere within that sampling 
distribution

3. if we know something about #1 AND, we have a sample mean ... then, #1 
sets sort of a limit on how far away the truth can be GIVEN that sample 
mean or statistic ...

4. thus, we use the statistics (ie, sample mean) and add and subtract some 
error (based on #1) ... in such a way that we will be correct (in saying 
that the parameter will fall within the CI) some % of the time ... say, 95%?

it is easy to show this via simulation ... minitab for example can help you 
do this

here is an example ... let's say we are taking samples of size 100 from a 
population of SAT M scores ... where we assume the mu is 500 and sigma is 
100 ... i will take a 1000 SRS samples ... and summarize the results of 
building 1000 CIs

MTB > rand 1000 c1-c100; <<< made 1000 rows ... and 100 columns ... each 
ROW will be a sample
SUBC> norm 500 100. <<< sampled from population with mu = 500 and sigma = 100
MTB > rmean c1-c100 c101 <<< got means for 1000 samples and put in c101
MTB > name c101='sampmean'
MTB > let c102=c101-2*10 <<< found lower point of 95% CI
MTB > let c103=c101+2*10 <<< found upper point of 95% CI
MTB > name c102='lowerpt' c103='upperpt'
MTB > let c104=(c102 lt 500) and (c103 gt 500)  <<< this evaluates if the 
intervals capture 500 or not
MTB > sum c104

Sum of C104

Sum of C104 = 954.00 <<< 954 of the 1000 intervals captured 500
MTB > let k1=954/1000
MTB > prin k1

Data Display

K1    0.954000 <<< pretty close to 95%
MTB > prin c102 c103 c104 <<<  a few of the 1000 intervals are shown below

Data Display


  Row   lowerpt   upperpt   C104

1   477.365   517.365  1
2   500.448   540.448  0  <<< here is one that missed 500 ...the 
other 9 captured 500
3   480.304   520.304  1
4   480.457   520.457  1
5   485.006   525.006  1
6   479.585   519.585  1
7   480.382   520.382  1
8   481.189   521.189  1
9   486.166   526.166  1
   10   494.388   534.388  1
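for readers without minitab, the same simulation can be sketched in Python (stdlib only; z is rounded to 2 and the standard error is 100/sqrt(100) = 10, as in the minitab run above):

```python
import random

random.seed(2001)
MU, SIGMA, N, REPS = 500, 100, 100, 1000
SE = SIGMA / N ** 0.5                   # standard error of the mean = 10

hits = 0
for _ in range(REPS):
    # one SRS of size N from the SAT-M population, reduced to its mean
    xbar = sum(random.gauss(MU, SIGMA) for _ in range(N)) / N
    lower, upper = xbar - 2 * SE, xbar + 2 * SE   # 95% CI around the sample mean
    if lower < MU < upper:
        hits += 1

coverage = hits / REPS                  # should land near 0.95
```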





_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






Re: error estimate as fraction of standard deviation

2001-09-26 Thread John Jackson

Thanks for the formula, but I was really interested in knowing what % of a
standard deviation corresponds to E.

In other words does a .02 error translate into .02/1 standard deviations?


"Graeme Byrne" <[EMAIL PROTECTED]> wrote in message
news:9orn26$m80$[EMAIL PROTECTED]...
> This sounds like homework, but I will answer anyway.
>
> Anyway assume the normal approximation to the binomial can be used (is this
> reasonable?) then the formula for estimating sample sizes based on a given
> confidence level and a given maximum error is
>
> n = (z/e)^2 * p*(1-p)
>
> where z = the z-scores associated with the given confidence level (90% in
> this case)
>   p = the proportion of successes (bruised apples in this case)
>   e = the maximum error (4% in this case)
>
> Your problem is you don't know p since that is what you are trying to
> estimate. Ask yourself what value of p will make n as large as possible and
> then you can use this "worst case" solution.
>
> Another solution would be to estimate (roughly) the maximum value of p (10%,
> 20% ...) and use it to find n. Whatever you do, you should read up on
> "Calculating sample sizes for estimating proportions" in any good basic
> statistics text.
>
> GB
>
>
>
> "@Home" <[EMAIL PROTECTED]> wrote in message
> news:b25s7.46471$[EMAIL PROTECTED]...
> > If you have a confidence level of 90% and an error estimate of 4% and don't
> > know the std deviation, is there a way to express the error estimate as a
> > fraction of a std deviation?
> >
> >
> >
>
>
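The "worst case" mentioned above is p = 0.5, since p(1-p) peaks there. A quick sketch in Python using the thread's numbers (90% confidence, so z is about 1.645, and a 4% maximum error; those two inputs are the only assumptions):

```python
import math

z, e = 1.645, 0.04      # 90% confidence level, 4% maximum error
p = 0.5                 # p*(1-p) is maximized at p = 0.5, the worst case

n = math.ceil((z / e) ** 2 * p * (1 - p))   # required sample size, rounded up
```

so a survey planner who knows nothing about p would need a sample in the low 400s; any rough prior bound on p away from 0.5 shrinks that.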







E as a % of a standard deviation

2001-09-26 Thread John Jackson

re: the formula:

  n   = (Z?/e)2


could you express E as a  % of a standard deviation .

In other words does a .02 error translate into .02/1 standard deviations,
assuming you are dealing w/a normal distribution?







What is a confidence interval?

2001-09-26 Thread Warren

Hi,
I've been teaching an introductory stats course for several years.
I always learn something from my students...hope they learn too.
One thing I've learned is that confidence intervals are very tough
for them.  They can compute them, but why?

Of course, we talk about confidence interval construction and I try
to explain the usual "95% of all intervals so constructed will in the
long run include the parameter...blah, blah".  I've looked at the
Bayesian interpretation also but find this a bit hard for beginning
students.

So, what is your best way to explain a CI?  How do you explain it
without using some esoteric discussion of probability?

Now, here's another question.  Suppose I roll 2 dice and
find the mean of the pips on the upturned faces.  You can compute
a sample standard deviation, but if you roll 2 alike the s.d. is 0.
So, you cannot compute a CI based on such samples.  How would
you explain that?
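To quantify how often that degenerate case occurs, one can enumerate all 36 equally likely ordered rolls of two dice (a stdlib Python sketch):

```python
from statistics import stdev
from itertools import product

rolls = list(product(range(1, 7), repeat=2))      # all 36 ordered rolls
zero_sd = [r for r in rolls if stdev(r) == 0]     # doubles: sample s.d. is 0

# 6 of the 36 rolls (1 in 6) are doubles, so a CI built from the sample
# s.d. of two dice is undefined about 17% of the time.
```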

Thanks,

Warren





Looking for Presenters at 2002 International Symposium on Forecasting in Dublin 6/23-6/26

2001-09-26 Thread Tom Reilly

I am looking for interested parties to present next June in the field
of "Intermittent Demand" Forecasting.

Of course, if you have an interest in another area I will be glad to
pass your name on for consideration.  If you are interested, send me
an e-mail at [EMAIL PROTECTED] with a synopsis of what your
45-minute speech would focus on.

This conference is more of an academic group, but applications of
methodologies are also welcome.

See http://www.isf2002.org/ for more info.  The conference is
6/23-6/26.  They always have well organized functions with good eats
and tours.

Tom Reilly





likelihood ratio chi-square: L^2

2001-09-26 Thread Nathaniel

Dear Debater,

I've got two models which are nested (or hierarchical), and likelihood
ratio chi-square (L^2) values for them, with degrees of freedom:

model 1:  L^2 = 21.93, df = 16
model 2:  L^2 = 22.13, df = 18

I read that it's possible to settle which model is better (because they are
hierarchical). In a book it was written: "we conclude that the fit is
acceptable if the increase in the L^2 is small relative to its degrees of
freedom - model 2 is better".
My question is: how small should the increase in L^2 be, relative to the
increase in degrees of freedom, in order to say that model 2 fits better?
Could you give me assistance or recommend books or a website addressing my
problem?
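One standard way to calibrate "small relative to its degrees of freedom" is to treat the difference in L^2 as a chi-square statistic with df equal to the difference in df: here 22.13 - 21.93 = 0.20 on 18 - 16 = 2 df. For df = 2 the chi-square upper-tail probability has a closed form, exp(-x/2), so a stdlib Python check suffices (for other df you would use a chi-square table or a stats library):

```python
import math

l2 = (21.93, 22.13)    # L^2 for model 1 and model 2
df = (16, 18)

delta_l2 = l2[1] - l2[0]   # 0.20
delta_df = df[1] - df[0]   # 2

# Chi-square upper-tail probability for df = 2: P(X > x) = exp(-x/2)
p_value = math.exp(-delta_l2 / 2)

# p is about 0.90, far above any usual cutoff: the increase in L^2 is
# negligible, so the more parsimonious model 2 is preferred.
```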


I'd appreciate it.
Nathaniel






