Re: zero variance in a pair of ANOVA means

2000-12-13 Thread William B. Ware

Keeping in mind that it's a textbook, I suspect that the authors were just
trying to keep the number of numbers small.  Having all replicates within a
cell take the same value is rather rare in practice.

However, the greater question appears to be that of violating the
assumption of homogeneity of variance.  ANOVA is robust against such
violations when the cell sizes are equal...
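
For anyone who wants to try this kind of check numerically, here is a minimal
Python sketch (hypothetical numbers, not the textbook's data; scipy assumed
available) that runs Levene's test across the cells of a 3x3 layout, two of
which have zero within-cell variance.  Three replicates per cell are used here
so the statistic is well defined.

from scipy import stats

# Hypothetical 3x3 design, three replicates per cell; two cells have
# identical replicates (zero within-cell variance).
cells = [
    [12.1, 13.4, 12.7], [11.8, 12.9, 12.2], [14.0, 14.0, 14.0],
    [15.2, 16.1, 15.6], [15.0, 15.0, 15.0], [16.3, 17.2, 16.8],
    [18.4, 19.1, 18.6], [17.9, 18.8, 18.3], [19.5, 20.3, 19.9],
]

# Classical (mean-centred) Levene test for homogeneity of variance.
stat, p = stats.levene(*cells, center="mean")
print("Levene W =", round(stat, 3), " p =", round(p, 4))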

WBW

__
William B. Ware, Professor and Chair   Educational Psychology,
CB# 3500   Measurement, and Evaluation
University of North Carolina PHONE  (919)-962-7848
Chapel Hill, NC  27599-3500  FAX:   (919)-962-1533
http://www.unc.edu/~wbware/  EMAIL: [EMAIL PROTECTED]
__

On Wed, 13 Dec 2000, Gene Gallagher wrote:

 The textbook I'm using this semester presents a 2-factor ANOVA problem
 (3 levels of each factor) in which two of the 9 groups have zero
 variance (identical observations for two replicates).  Levene's test
 indicates significant departure from homoscedasticity (this may not be
 known to the authors of the text who provide the solution as if there
 were no problems with homogeneity of variance).  Is there ever a case
 when you can trust the ANOVA results despite violations of
 homoscedasticity like this?  Obviously, no transformation is appropriate
 and the non-parametric ANOVAs aren't good at handling interaction
 effects (at least not Friedman).
 
 --
 Eugene D. Gallagher
 ECOS, UMASS/Boston
 
 



Re: Multivariable regression

2000-12-13 Thread Mu Mu

Dear junk,
Such a task is indeed easily accomplished in Excel. You just have to
remember that it's a spreadsheet, and there are certain ways that
spreadsheets operate, and that's the key to the solution. I would tell you
more, but it would be horrible if by doing so I facilitated a student's
cheating.
Identify yourself, sir/madam, and if yours is a legitimate request, I'll
write more.
ZT
- Original Message -
From: junk [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, December 13, 2000 1:29 AM
Subject: Multivariable regression


 Can anyone direct me to an Excel wizard or, if none is available, a formula
 to do the following.  Note I do not have access to Maple, Matlab,
 Mathematica or any other statistical or engineering software.

 What I am trying to do is come up with a simple method of doing a
 multivariable (multidimensional?) least squares approximation.

 How could I easily create:
 Y = (a1*A^4 + b1*A^3 + ... + e1*A + f1) + (a2*B^4 + ... + f2) + (a3*C^4 + ... + f3)


 Note that basic regression will not work in the real problem; I need
 multiple exponents.


 from the data below:


 A      B       C          Y
 paint  engine  amenities  price
 0.1  1  6$1000
 0.1125  1200
 0.1413  1150
 0.1625  1200
 0.2  14  3000
 0.201  45  2700
 0.3  2  3  4000
 0.334  7  3500
 0.351  4  6000
 0.4  2  6  4650
 0.413  4  4400
 0.4243  4750
 0.4415  5360
 0.4513  7500
 0.6  33  7400
 0.8  25  7700
 0.872  6  8500
 0.883  5  9000



 Basically, I would like to create a formula for a price estimator based on
 three input variables and one output (price).  I am hoping to do something
 quick and simple in Excel.

 Any help would be appreciated.

 thanks,

 Nathan









Re: zero variance in a pair of ANOVA means

2000-12-13 Thread dennis roberts

though you have not indicated the kind of data you are referring to ... nor 
treatments, etc. ... if the ns are decent in each group ... i would 
seriously question the design ... or data collection process ... IF you had 
NO within group variance AT all ... in ANY group ...

when you collect data in a design like you refer to, you have to ask 
yourself: how is it possible that i can "test" a within group ... with some 
data collection instrument ... and have each and every value in the group 
be identical?

THAT i think is a more serious problem

At 12:24 AM 12/13/00 +, Gene Gallagher wrote:
The textbook I'm using this semester presents a 2-factor ANOVA problem
(3 levels of each factor) in which two of the 9 groups have zero
variance (identical observations for two replicates).  Levene's test
indicates significant departure from homoscedasticity (this may not be
known to the authors of the text who provide the solution as if there
were no problems with homogeneity of variance).  Is there ever a case
when you can trust the ANOVA results despite violations of
homoscedasticity like this?  Obviously, no transformation is appropriate
and the non-parametric ANOVAs aren't good at handling interaction
effects (at least not Friedman).

--
Eugene D. Gallagher
ECOS, UMASS/Boston


Sent via Deja.com
http://www.deja.com/


=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
   http://jse.stat.ncsu.edu/
=


=
dennis roberts, educational psychology
penn state university, 208 cedar building
university park, pa USA 16802 ... AC 8148632401
[EMAIL PROTECTED] ... http://roberts.ed.psu.edu/users/droberts/drober~1.htm




=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Radon-Nikodym derivative?

2000-12-13 Thread Gökhan

Hi!
In the book "Hidden Markov Models" by Elliott, Aggoun, and Moore, the
Radon-Nikodym derivative is used extensively.
Can someone point me to literature where this theorem is clearly
defined and explained?
Thanks in advance.
Gökhan



--


Gökhan Bakır
Institute of Robotics and Mechatronics
German National Research Institute for Aero and Space
82234 Oberpfaffenhofen
Tel: + 49-8153 - 28 2440
ICQ : 82040497
www.fastray.de







Review a sample issue of the Journal of Applied Spectroscopy (eu24)

2000-12-13 Thread Steve C. Franklin

*
To have your name removed from this mailing list, 
please  ADD the word REMOVE to 
the subject heading  and return this email to -
[EMAIL PROTECTED] 
We apologize if we have caused you any inconvenience.
*

Franklins International provides academics, scientists and research
workers with an opportunity to receive, without obligation, free
up-to-date information on new books, journals, online databases, CDROMS etc
- published by the world's leading information publishers.

Below you will find information on the Journal of Applied Spectroscopy
published by the Society of Applied Spectroscopy (USA).

We will be happy to send you - without obligation - the address on the
Internet (URL) where you will be able to see the table of contents of the
latest issues and abstracts of the articles AND/OR you may request a free
sample copy of the journal in print form.

In order to receive the URL or a free sample issue, please complete the
form below and return this email - in full - to us.

Thanking you in advance,

Franklins

Applied Spectroscopy is a peer-reviewed, international journal of
spectroscopy and the official publication of the Society for Applied
Spectroscopy. Content includes scientific articles covering new research
results and novel applications in the areas of atomic and molecular
spectroscopy. Applied Spectroscopy is a leading scientific journal focusing
on all areas of spectroscopy with many articles looking at the interface
between various fields. 

For more than 50 years, Applied Spectroscopy has been providing the 
scientific community with cutting edge research papers by many of the top 
spectroscopists in the world.  The research that is published in this journal 
impacts applications in analytical chemistry, materials science, 
biotechnology, and chemical characterization. The quality of this 
internationally recognized, peer-reviewed journal is top notch.  Statistics 
from SCI's Journal Citation Reports for the most recent years available show 
Applied Spectroscopy produced impact factors of 1.848 and 1.917 for 1997 and 
1998 respectively.  This factor is the number of times that recent articles 
in the journal were cited during the year in question.  Additionally, SCI 
ranks Applied Spectroscopy #2 in the world for journals in the Instruments 
and Instrumentation Subject category for both 1997 and 1998.

*
To:   FRANKLINS - [PLEASE FILL IN ALL THE FIELDS IN THE FORM]

I would like to receive, without obligation:

[  ] the URL of this journal on the Internet.

OR

[   ] a  FREE sample issue of this journal in print form.

OR  

[  ] both the above - i.e. a sample issue in print form and the URL

The following keywords describe my specific fields of interest:
1.
2.
3.
4.

PLEASE SEND MY FREE SAMPLE COPY TO
Name: 
Position:
Dept.
University/College/Institution: 
Address:
City
State/Zip
Country:
Telephone:
Fax: . 



Steve Franklin
PubText International
POB 54
Gan Yavne 70800
Israel.

http://go.to/pubtext





Re: Multivariable regression

2000-12-13 Thread David Wilkinson

In Excel, have a column for Y as the dependent variable and 12 columns
for A..A^4, B..B^4, C..C^4 as the independent variables. Run the
regression tool and it will give you the result you want.

However, when I tried it on the data below the process did not work, as
the matrix was singular. In any case 13 parameters is a lot to obtain
from only 18 points and is nearly deterministic rather than least
squares, so I tried just A, B and C as independent variables. This showed
that Y did not depend on B but mainly on A, with a small effect from C.
The correlation was 90%.

A second run with A, A^2, B, B^2, C and C^2 showed Y a function of A
and A^2 only, with a correlation of 95%.
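
For anyone without Excel, the same fit can be reproduced with numpy's
least-squares solver; a minimal sketch follows.  The A, B, C and price values
are placeholders (the table in the quoted post below lost its column
separators in transit), so substitute the real columns.  lstsq also reports
the rank of the design matrix, which shows directly when the 13-term model is
singular.

import numpy as np

# Placeholder data: substitute the real A (paint), B (engine),
# C (amenities) and Y (price) columns.
A = np.array([0.1, 0.2, 0.3, 0.4, 0.6, 0.8])
B = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 2.0])
C = np.array([6.0, 4.0, 3.0, 6.0, 3.0, 5.0])
y = np.array([1000.0, 3000.0, 4000.0, 4650.0, 7400.0, 7700.0])

# Design matrix: intercept plus powers 1..4 of each predictor (13 columns),
# i.e. the same columns one would lay out in an Excel worksheet.
X = np.column_stack(
    [np.ones_like(A)]
    + [A**k for k in range(1, 5)]
    + [B**k for k in range(1, 5)]
    + [C**k for k in range(1, 5)]
)

# lstsq returns a minimum-norm solution even when X is rank-deficient,
# which is the singularity problem described above.
coef, _, rank, _ = np.linalg.lstsq(X, y, rcond=None)
print("rank of X:", rank, "of", X.shape[1], "columns")
print("coefficients:", np.round(coef, 2))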

In article 91715h$sbv$[EMAIL PROTECTED], junk [EMAIL PROTECTED]
writes
Can anyone direct me to an Excel wizard or, if none is available, a formula
to do the following.  Note I do not have access to Maple, Matlab,
Mathematica or any other statistical or engineering software.

What I am trying to do is come up with a simple method of doing a
multivariable (multidimensional?) least squares approximation.

How could I easily create:
Y = (a1*A^4 + b1*A^3 + ... + e1*A + f1) + (a2*B^4 + ... + f2) + (a3*C^4 + ... + f3)


Note that basic regression will not work in the real problem; I need
multiple exponents.


from the data below:


A      B       C          Y
paint  engine  amenities  price
0.1  1  6$1000
0.1125  1200
0.1413  1150
0.1625  1200
0.2  14  3000
0.201  45  2700
0.3  2  3  4000
0.334  7  3500
0.351  4  6000
0.4  2  6  4650
0.413  4  4400
0.4243  4750
0.4415  5360
0.4513  7500
0.6  33  7400
0.8  25  7700
0.872  6  8500
0.883  5  9000



Basically, I would like to create a formula for a price estimator based on
three input variables and one output (price).  I am hoping to do something
quick and simple in Excel.

Any help would be appreciated.

thanks,

Nathan





-- 
David Wilkinson





urgent problem (statistics for management)

2000-12-13 Thread Jan

I have some difficulties with the following problem
(I need the solution urgently for tomorrow):

Production levels for Giles Fashion vary greatly according to consumer
acceptance of the latest styles. Therefore, the company's
weekly orders of wool cloth are difficult
to predict in advance. On the basis of 5 years data, the following
probability distribution for the company's weekly demand for wool
has been computed:

Amount of wool (lb)   Probability
2500                  0.30
3500                  0.45
4500                  0.20
5500                  0.05

From these data, the raw-materials purchaser computed the
expected number of pounds required. Recently, she noticed
that the company's sales were lower in the last year than in years
before.
Extrapolating, she observed that the company will be lucky
if its weekly demand averages 2,500 this year.

(a) What was the expected weekly demand for wool based
on the distribution from past data?

(b) If each pound of wool generates $5 in revenue and costs $4 to
purchase, ship, and handle, how much would Giles Fashion stand
to gain or lose each week if it orders wool based on the past
expected value and company's demand is only 2,500?

(End of the text of the problem.)

Possible solution (in my opinion):

I.
(a) I think this is obvious: if X denotes the company's weekly demand for wool
(lb), then the expected weekly demand for wool based on the
distribution from past data is E(X) =
0.3*2500 + 0.45*3500 + 0.20*4500 + 0.05*5500 = 3500. Am I right?

(b)
Actually, I am not sure what the company's weekly demand for
wool in the past data (the probability distribution table) means.
Is it the amount of wool the company bought weekly,
or the amount of wool the company sold (in its products)
weekly?
The last sentence distinguishes between the
company's orders ("it orders wool based...") and the company's demand
("and company's demand is only 2,500").
(I think, but I am not sure, that it actually means the company's weekly
demand for wool.)
So in my opinion the company's weekly demand for wool means
the amount of wool the company sold (in its products) weekly.
Am I right?

I am not sure what the last sentence means.
Does it mean that the company orders
3,500 lb of wool weekly (it orders wool based on the past
expected value, and the past expected value = 3,500 from (a))
and sells 2,500 lb weekly in its products
(and the company's demand is only 2,500)?
If so, the solution seems to be:
the company should expect to gain weekly 2500*$1 - 1000*$4 = -$1500,
so in fact it should expect to lose $1,500 weekly.
--

Am I right?
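
Before the alternative reading below, here is a quick numerical check of the
two computations above, a minimal Python sketch; the quantities and prices are
the ones given in the problem statement.

demand = [2500, 3500, 4500, 5500]
prob   = [0.30, 0.45, 0.20, 0.05]

# (a) expected weekly demand
expected_lb = sum(d * p for d, p in zip(demand, prob))
print("Expected weekly demand:", expected_lb, "lb")      # 3500.0

# (b) order 3500 lb at $4/lb, sell only 2500 lb at $5/lb
gain = 2500 * 5 - expected_lb * 4
print("Weekly gain (negative = loss):", gain)            # -1500.0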

Maybe I should consider that the company's weekly demand
is 2,500 lb but its orders are:

Amount of wool (lb)   Probability
2500                  0.30
3500                  0.45
4500                  0.20
5500                  0.05

(Loss | Orders=2500 )   0$  -1500$  ...
probability 0.30 0.45

E(Loss | Orders=2500 ) = 0*0.3+(-1500)*0.45+ ...


Please somebody correct me if I am wrong.

Jan






Re: rough translation of: Prognose des BSP anhand der Cobb-Douglas-Produktionsfunktion

2000-12-13 Thread Jeff Rasmussen

Katja,

I understand a little German. I am not familiar with the Cobb-Douglas production function.
Here is a first attempt at a translation. I would ask that someone describe the Cobb-Douglas production function in English; then I can give a better translation.


to the list,

My German is rather poor, but below is a rough translation of the question.  However, I'm not familiar with the Cobb-Douglas function, and don't know what BSP means.  If someone can explain it, I can probably give a better translation.

Katja writes:  "I need to predict, as mentioned already in the Subject line, the BSP from the Cobb-Douglas function.  The function requires work and capitalization (?) as input numbers.  So my question is: what are the economic indices of these numbers, and what data can I use for these numbers."
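
For what it is worth, BSP is presumably Bruttosozialprodukt, i.e. gross
national product, and the Cobb-Douglas production function is usually written
Y = A * L^alpha * K^beta, with output Y, labor L, and capital K.  It is
typically fitted in its log-linear form, ln Y = ln A + alpha*ln L + beta*ln K.
A minimal Python sketch with made-up numbers (substitute real GNP, labor, and
capital-stock series):

import numpy as np

# Made-up annual series; substitute real GNP (Y), labor (L) and
# capital stock (K) data.
Y = np.array([100.0, 108.0, 118.0, 126.0, 137.0])
L = np.array([50.0, 51.0, 53.0, 54.0, 56.0])
K = np.array([200.0, 210.0, 225.0, 235.0, 250.0])

# Fit ln Y = ln A + alpha*ln L + beta*ln K by ordinary least squares.
X = np.column_stack([np.ones_like(Y), np.log(L), np.log(K)])
coef, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
lnA, alpha, beta = coef
print("A =", round(np.exp(lnA), 3), " alpha =", round(alpha, 3),
      " beta =", round(beta, 3))

# Predicted output from the fitted function:
Y_hat = np.exp(lnA) * L**alpha * K**beta
print("fitted Y:", np.round(Y_hat, 1))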


JR



   /\
*||*
ox*=||=*xo
||
Jeff Rasmussen, PhD
"Welcome Home to Symynet"
Symynet http://www.symynet.com
Graphic Design
Website Development
Eastern Philosophies Software
Quantitative Instructional Software





Re: urgent problem (statistics for management)

2000-12-13 Thread Jon Cryer

This is quite a silly problem. No wonder statistics (for business)
gets so little respect. This is time series or process data--not a random
sample
from some fixed population. There is no information about the stability
of the process over time. Very few business processes are stable over five
years.
Why can't we teach meaningful statistics?

Jon Cryer

At 05:14 PM 12/13/00 +0100, you wrote:
I have some difficulties with the following problem
(I need the solution urgently for tomorrow):

Production levels for Giles Fashion vary greatly according to consumer
acceptance of the latest styles. Therefore, the company's
weekly orders of wool cloth are difficult
to predict in advance. On the basis of 5 years data, the following
probability distribution for the company's weekly demand for wool
has been computed:

Amount of wool (lb)   Probability
2500                  0.30
3500                  0.45
4500                  0.20
5500                  0.05

From these data, the raw-materials purchaser computed the
expected number of pounds required. Recently, she noticed
that the company's sales were lower in the last year than in years
before.
Extrapolating, she observed that the company will be lucky
if its weekly demand averages 2,500 this year.

(a) What was the expected weekly demand for wool based
on the distribution from past data?

(b) If each pound of wool generates $5 in revenue and costs $4 to
purchase, ship, and handle, how much would Giles Fashion stand
to gain or lose each week if it orders wool based on the past
expected value and company's demand is only 2,500?

(End of the text of the problem.)

Possible solution (in my opinion):

I.
(a) I think this is obvious: if X denotes the company's weekly demand for wool
(lb), then the expected weekly demand for wool based on the
distribution from past data is E(X) =
0.3*2500 + 0.45*3500 + 0.20*4500 + 0.05*5500 = 3500. Am I right?

(b)
Actually, I am not sure what the company's weekly demand for
wool in the past data (the probability distribution table) means.
Is it the amount of wool the company bought weekly,
or the amount of wool the company sold (in its products)
weekly?
The last sentence distinguishes between the
company's orders ("it orders wool based...") and the company's demand
("and company's demand is only 2,500").
(I think, but I am not sure, that it actually means the company's weekly
demand for wool.)
So in my opinion the company's weekly demand for wool means
the amount of wool the company sold (in its products) weekly.
Am I right?

I am not sure what the last sentence means.
Does it mean that the company orders
3,500 lb of wool weekly (it orders wool based on the past
expected value, and the past expected value = 3,500 from (a))
and sells 2,500 lb weekly in its products
(and the company's demand is only 2,500)?
If so, the solution seems to be:
the company should expect to gain weekly 2500*$1 - 1000*$4 = -$1500,
so in fact it should expect to lose $1,500 weekly.
--

Am I right?

Maybe I should consider that the company's weekly demand
is 2,500 lb but its orders are:

Amount of wool (lb)   Probability
2500                  0.30
3500                  0.45
4500                  0.20
5500                  0.05

(Loss | Orders=2500 )   0$  -1500$  ...
probability 0.30 0.45

E(Loss | Orders=2500 ) = 0*0.3+(-1500)*0.45+ ...


Please somebody correct me if I am wrong.

Jan





 ___
--- |   \
Jon Cryer, Professor [EMAIL PROTECTED]   ( )
Dept. of Statistics  www.stat.uiowa.edu/~jcryer \\_University
 and Actuarial Science   office 319-335-0819 \ *   \of Iowa
The University of Iowa   dept.  319-335-0706  \/Hawkeyes
Iowa City, IA 52242  FAX319-335-3017   |__ )
---   V






coefficient of determination

2000-12-13 Thread bugs6900

I need some immediate help in convincing DOD
people that a low R-squared does not necessarily
mean that a CER computed using the linear
least squares method is a bad predictor for the
data set.

Are there any references, papers, studies,
theories, or agencies that have used a CER
with a low r-squared?  The cutoff for DOD is
0.64, and we have many in the 0.40 - 0.50 range.

Any help would be appreciated ASAP.

Jeff Moore





Re: Radon-Nikodym derivative?

2000-12-13 Thread Herman Rubin

In article [EMAIL PROTECTED],
Gökhan  [EMAIL PROTECTED] wrote:
Hi!
In the book "Hidden Markov Models" by Elliott, Aggoun, and Moore, the
Radon-Nikodym derivative is used extensively.
Can someone point me to literature where this theorem is clearly
defined and explained?
Thanks in advance.
Gökhan

This is one of the most important theorems from measure
theory for statistics.  The theorem states somewhat more than
this: under the conditions where it is possible, one
measure has a "derivative" with respect to another, which
is unique up to sets of measure zero.  One statistical application
is that, under appropriate conditions, there is a
likelihood ratio.
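
For reference, the standard statement (this is the usual textbook form, not a
quotation from the book asked about): if \mu and \nu are \sigma-finite
measures on the same measurable space and \nu is absolutely continuous with
respect to \mu (written \nu \ll \mu), then there is a nonnegative measurable
function f, unique up to \mu-null sets and written f = d\nu/d\mu, such that

    \nu(A) = \int_A \frac{d\nu}{d\mu} \, d\mu   for every measurable set A.

In the likelihood-ratio application, \mu and \nu are the two candidate
probability measures and d\nu/d\mu is the likelihood ratio.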

Any reasonable measure theory book will have this.


-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
[EMAIL PROTECTED] Phone: (765)494-6054   FAX: (765)494-0558





coefficient of determination

2000-12-13 Thread bugs6900

I need immediate assistance in convincing DOD people that a low R-
squared does not necessarily mean that a CER computed using the
linear least squares method is a poor predictor of the data set.
Does anyone know of papers, studies, theories, or agencies that have
documentation of using a CER with a low R-squared?  The cutoff for DOD
is 0.64.  We have many between 0.40 and 0.50.  We feel that the
regression equations describe the data set very well.

Any help would be appreciated ASAP.

Thanks,

Jeff Moore





Re: Florida votes and statistical errors

2000-12-13 Thread P.G.Hamer

[EMAIL PROTECTED] wrote:

 Since the vote difference between Bush and Gore falls within the margin
 of error for the counting process, declaring the winner is
 mathematically indeterminable within any reasonable degree of
 scientific confidence.

 Since we cannot know who has won, the Florida Legislature should use
 their power to honor the will of the people by choosing 25 electors
 that proportionally represent the two candidates based on the popular
 vote.

 This solution is both common-sensical and constitutional.

As has already been said, probably on another thread, deciding that
the vote is `too close to call' requires a judgement just as arbitrary and
contestable as calling a winner.

In general  proportional representation for electors seems a good idea.
[So like many good ideas in voting it will never be generally implemented?]

Peter






Quantiles in Excel

2000-12-13 Thread Alan McLean

Does anyone know the formulas that Excel uses in its QUARTILE and
PERCENTILE functions? I couldn't find them in Help.
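
The rule commonly reported for Excel's PERCENTILE is linear interpolation at
rank 1 + p*(n-1) on the sorted data, with QUARTILE(x, q) equal to
PERCENTILE(x, q/4); treat this as the commonly cited answer rather than an
official specification.  A minimal Python sketch of that rule:

import math

def excel_style_percentile(values, p):
    """Linear interpolation at rank 1 + p*(n-1) on the sorted data
    (the rule commonly reported for Excel's PERCENTILE)."""
    xs = sorted(values)
    rank = p * (len(xs) - 1)        # zero-based fractional rank
    lo = math.floor(rank)
    frac = rank - lo
    if lo + 1 < len(xs):
        return xs[lo] + frac * (xs[lo + 1] - xs[lo])
    return xs[lo]

data = [15, 20, 35, 40, 50]                  # hypothetical numbers
print(excel_style_percentile(data, 0.25))    # 20.0, i.e. the first quartile
print(excel_style_percentile(data, 0.40))    # 29.0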

Thanks in advance,
Alan


-- 
Alan McLean ([EMAIL PROTECTED])
Department of Econometrics and Business Statistics
Monash University, Caulfield Campus, Melbourne
Tel:  +61 03 9903 2102Fax: +61 03 9903 2007





Implied Volatility!

2000-12-13 Thread mot4201

Hi,

Could anybody explain the "implied volatility" of a call or put option?
If we had to plot implied volatility versus strike price, what would
that plot look like (in the case of a call and a put option)? If somebody
has a program for computing implied volatility (S-Plus, Matlab), please
forward it.
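
Briefly: the implied volatility is the value of sigma that makes a pricing
model (usually Black-Scholes) reproduce the quoted market price of the option;
plotted against strike it is typically not flat but shows the familiar
"smile" or skew.  Not S-Plus or Matlab, but here is a minimal Python sketch of
the usual computation, inverting the Black-Scholes call price by root-finding
(all inputs hypothetical; scipy assumed available):

from math import exp, log, sqrt
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    """Sigma that makes the Black-Scholes price match the market price."""
    return brentq(lambda s: bs_call(S, K, T, r, s) - price, 1e-6, 5.0)

# Hypothetical quote: S=100, K=100, one year to expiry, r=5%, price 10.45.
print(round(implied_vol(10.45, S=100, K=100, T=1.0, r=0.05), 4))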

Thank you,

Mark





Re: coefficient of determination

2000-12-13 Thread Rich Ulrich

re:  "Cost estimating relations", I think.

On Wed, 13 Dec 2000 17:55:06 GMT, [EMAIL PROTECTED] wrote:

 I need immediate assistance in convincing DOD people that a low R-
 squared does not necessarily mean that a CER computed using the
 linear least squares method is a poor predictor of the data set.
 Does anyone know of papers, studies, theories, or agencies that have
 documentation of using a CER with a low R-squared?  The cutoff for DOD
 is 0.64.  We have many between 0.40 and 0.50.  We feel that the
 regression equations describe the data set very well.
 

An R-squared depends on the sample (and its range) as well as the
model.   So, R-squared has that as a *problem*.  The raw residuals are
typically going to be invariant in cases where the R-squared is not.
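
A small simulation makes that point concrete (made-up numbers; numpy assumed):
the same linear model and the same residual scatter, but restricting the
predictor's range cuts R-squared sharply while the residual standard deviation
barely moves.

import numpy as np

rng = np.random.default_rng(0)

def fit_stats(x, y):
    """Return (R-squared, residual SD) for a simple least-squares line."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1 - resid.var() / y.var(), resid.std()

x = rng.uniform(0, 10, size=500)
y = 2.0 * x + rng.normal(scale=2.0, size=500)     # true slope 2, noise SD 2

narrow = (x > 4) & (x < 6)                        # restricted predictor range
print(fit_stats(x, y))                  # R^2 near 0.9, residual SD near 2
print(fit_stats(x[narrow], y[narrow]))  # much smaller R^2, residual SD still near 2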

A crudely modeled time series might give you a really high R-squared
while having VERY low ability to predict beyond its lag-equal-one.
Time series has that as a problem.  You can't compare a times-series
R-squared to a cross-section result.

It is interesting to see a number "0.64"  floated, as something
sufficient.  I am only guessing at what is predicted, but does that
imply something about having 50% cost over-runs,  while the CER of .40
can imply 100% cost over-runs? 


 - I used www.google.com to search for  regression  "CER"   and hit
(among other places) the DOD site on Cost Estimating Relationships,
http://www.acq.osd.mil/dp/cpf/pgv1_0/pgv2/pgv2c5.html

The commentary includes these sensible words:

 citation from the site
5.7 - Identifying Issues And Concerns

Questions to Consider in Analysis
1. As you perform price/cost analysis, consider the issues and concerns
identified in this section, whenever you use regression analysis.

 - Does the r2 value indicate a strong relationship between the
independent variable and the dependent variable? 

The value of r2 indicates the percentage of variation in the dependent
variable that is explained by the independent variable. Obviously, you
would prefer an r2 of .96 over an r2 of .10, but there is no magic
cutoff for r2 that indicates that an equation is or is not acceptable
for estimating purposes. However, as the r2 becomes smaller, you
should consider your reliance on any prediction accordingly.

= end of citation.

 - Maybe you can refute them from their site.

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html





Re: Implied Volatility!

2000-12-13 Thread David Rothman

google
altavista
yahoo
excite

take your pick


[EMAIL PROTECTED] wrote in message
news:918nmm$8j2$[EMAIL PROTECTED]...
 Hi,

 Could anybody explain the "implied volatility" of a call or put option?
 If we had to plot implied volatility versus strike price, what would
 that plot look like (in the case of a call and a put option)? If somebody
 has a program for computing implied volatility (S-Plus, Matlab), please
 forward it.

 Thank you,

 Mark

