>
>"In most cases, as in the case of errors of observation, they have a fairly
>definite symmetrical shape and one that approaches with a close degree of
>approximation to the well-known error or probability-curve. A frequency
>curve, which, for practical purposes, can be represented by the error
- Original Message -
From: Michael Granaas <[EMAIL PROTECTED]>
To: EDSTAT list <[EMAIL PROTECTED]>
Sent: Thursday, April 13, 2000 8:23 AM
Subject: Re: Hypothesis testing and magic - episode 2
> In addition to defining the variables some areas do a better job of
> defining and therefore te
> Truth has nothing to do with it. We construct stories of how the universe
> operates - we call these stories 'theories' or 'models'. Significance
> testing is one way in which we choose between stories as to which is
> (probably) more useful in a specified context.
--
> Alan McLean ([EMAIL PROTEC
In article <[EMAIL PROTECTED]>,
Michael Granaas <[EMAIL PROTECTED]> wrote:
>On Thu, 13 Apr 2000, Robert Dawson wrote:
>> Michael Granaas wrote:
>> > If n = 10 and I cannot reject a null of 100 I certainly agree that the
>> > corroboration value is low. But, if n = 100 and I can't reject a null
In article <005101bfa4fb$cc623d00$[EMAIL PROTECTED]>,
David A. Heiser <[EMAIL PROTECTED]> wrote:
>> Except for posterior probability, none of these are tools
>> for the actual problems. And posterior probability is not
>> what is wanted; it is the posterior risk of the procedure.
>> But even thi
In article <8d4fpl$em8$[EMAIL PROTECTED]>,
Jan Souman <[EMAIL PROTECTED]> wrote:
>Does anybody know why the normal distribution is called 'normal'? The most
>plausible explanations I've encountered so far are:
>1. The value of a variable that has a normal distribution is determined by
>many diffe
In article ,
Aaron & Katya <[EMAIL PROTECTED]> wrote:
>If you distribute your mass 1/2 at 0 and 1/2 at 1, the variance is 1/4 (i.e.
>Var[x] = E[x^2] - E[x]^2 = (0^2*1/2 + 1^2*1/2) - (0*1/2 + 1*1/2)^2 = 1/4).
>I believe this is the maximum variance for the interval [0,1]. Other than
It is a fact that most statisticians who write scripts use S-plus.
For precisely the same reasons that most business majors
use Excel, most consulting firms use SAS, most secretaries use
Word, and most PC's run Windows. All very true, and all very
unfortunate.
At 0:15 + 04/14/2000, T.S. Lim
Spot on, Michael.
Michael Granaas wrote:
> On Thu, 13 Apr 2000, dennis roberts wrote:
>
> > At 10:23 AM 4/13/00 -0500, Michael Granaas wrote:
> >
> > >In addition to defining the variables some areas do a better job of
> > >defining and therefore testing their models. The ag example is one wher
In article <8d4f0o$g4$[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...
>
>Hi everybody,
>
>I'm looking for free Matlab programs which perform bootstrap, jackknife &
>cross-validation, for neural networks and regression (MLP).
>Can anybody tell me where I can find them?
>
>Thanks a lot
>--
>
>Frédé
dennis roberts wrote:
> but, if we follow this to some logical conclusion ... this could be
> rephrased as meaning ...
>
> situations where you have essentially complete control over variable
> manipulation = situations where you can establish 'the truth' (in
> terms of the impacts of these
Hi Michael,
This sounds to me like lousy experimental design. Surely the purpose of the
experiment is to distinguish between competing theoretical models?
Michael Granaas wrote:
> But in some areas in psychology you will have a situation where many
> theoretical perspectives predict the same ou
Hi Jan,
I have always understood that the word 'normal' in this context means
'perpendicular'. You might remember calculus exercises in which you were asked
to find 'the equation to the normal to a curve', just after you were asked to
find the equation to the tangent.
The reason why this name ap
Process optimization is essential for gaining and maintaining a competitive
edge. The sequential simplex method may provide the optimal strategy for
discovering the true optimum. An article from the April issue of Chemical
Engineering Progress is available for download/viewing as an Adobe Acrobat
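The article itself isn't reproduced here, but the downhill-simplex idea behind sequential simplex optimization can be illustrated with SciPy's Nelder-Mead implementation (a simplex variant). The quadratic "response" function and its optimum at (2, 3) are made-up stand-ins for a real process response, not anything from the article.

```python
# Minimal sketch: simplex-style optimization via SciPy's Nelder-Mead method.
# The objective is a hypothetical process response with its optimum at (2, 3);
# we minimize the negative-yield analogue (distance from the optimum).
from scipy.optimize import minimize

def response(x):
    # made-up response surface: best operating point at temperature=2, pH=3
    return (x[0] - 2.0) ** 2 + (x[1] - 3.0) ** 2

result = minimize(response, x0=[0.0, 0.0], method="Nelder-Mead")
print(result.x)  # close to [2, 3]
```

The simplex method needs only function evaluations, no derivatives, which is why it suits experimental process optimization where each "evaluation" is a run of the process.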
On Thu, 13 Apr 2000 11:53:05 GMT, Chuck Cleland <[EMAIL PROTECTED]>
wrote:
> I have an ordinal response variable measured at four different times
> as well as a 3 level between subjects factor. I looked at the time
> main effect with the Friedman Two-Way Analysis of Variance by Ranks.
> That
On Thu, 13 Apr 2000, dennis roberts wrote:
> At 10:23 AM 4/13/00 -0500, Michael Granaas wrote:
>
> >In addition to defining the variables some areas do a better job of
> >defining and therefore testing their models. The ag example is one where
> >not only the variables are relatively clear so a
Don, thank you for your response. I think you are on target about "It
rather looks as though the question dealt with the SE of a single (new)
observation at a particular value of (X1, X2, ...)", that value being the
peak (or when the tangent to the curve is horizontal, i.e., at the global
Dale, a quick response before my 2:00 class.
What does your correspondent want the SE _of_?
It rather looks as though the question dealt with the SE of a single
(new) observation at a particular value of (X1, X2, ...). If this
is indeed the question, Draper & Smith deal with it in their secti
At 10:23 AM 4/13/00 -0500, Michael Granaas wrote:
>In addition to defining the variables some areas do a better job of
>defining and therefore testing their models. The ag example is one where
>not only the variables are relatively clear so are the models. That is
>there is one highly plausible
Paul Bernhardt wrote:
> I am not affiliated with the Motley Fool (where this investment strategy
> is touted) nor am I advertising for them. It is just an interesting
> practical problem which raises a question I think many statisticians face,
> how to explain when someone has conducted data mining
At 08:37 AM 4/13/00 -0400, Art Kendall wrote:
>in the "harder to do" sciences it is common to distinguish an experiment
>from a
>quasi-experiment.
>
>Part of the difficulty of these fields is that we can not (or ethically may
>not) manipulate many independent variables. Therefore we lose the opp
On Thu, 13 Apr 2000, Aaron & Katya wrote:
> If you distribute your mass 1/2 at 0 and 1/2 at 1, the variance is 1/4
> (i.e. Var[x] = E[x^2] - E[x]^2 = (0^2*1/2 + 1^2*1/2) - (0*1/2 + 1*1/2)^2 = 1/4).
>
> I believe this is the maximum variance for the interval [0,1]. Other
> than just making an assertion, is there
- Original Message -
From: Wen-Feng Hsiao <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, April 13, 2000 3:06 AM
Subject: linear model or interactive model?
| Dear all,
|
| Suppose I have an aggregation model which is in the following form:
| Y = c1*(X11 * X12) + c2*(X21
In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (Brian E. Smith) wrote:
> I will be teaching Time Series and Forecasting (an MBA course) in the Fall.
> I am looking for an inexpensive software package that is good for
> forecasting. Last year I used Minitab 12 and found it easy-to-use and
>
On Thu, 13 Apr 2000, Alan McLean wrote:
> Some more comments on hypothesis testing:
>
> My impression of the hypothesis test controversy, which seems to exist
> primarily in the areas of psychology, education and the like is that it
> is at least partly a consequence of the sheer difficulty o
On Thu, 13 Apr 2000, Robert Dawson wrote:
> Michael Granaas wrote:
>
>
> > If n = 10 and I cannot reject a null of 100 I certainly agree that the
> > corroboration value is low. But, if n = 100 and I can't reject a null of
> > 100 I am starting to see support for 100 as a correct value. If n
On Thu, 13 Apr 2000, Chuck Cleland wrote:
> Hello:
> I have an ordinal response variable measured at four different times
> as well as a 3 level between subjects factor. I looked at the time
> main effect with the Friedman Two-Way Analysis of Variance by Ranks.
> That effect was statistically
Does anybody know why the normal distribution is called 'normal'? The most
plausible explanations I've encountered so far are:
1. The value of a variable that has a normal distribution is determined by
many different factors, each contributing a small part of the total value.
Because this is the
in the "harder to do" sciences it is common to distinguish an experiment from a
quasi-experiment.
Part of the difficulty of these fields is that we can not (or ethically may
not) manipulate many independent variables. Therefore we lose the opportunity
to assert "et ceteris paribus" "everything e
Hello:
I have an ordinal response variable measured at four different times
as well as a 3 level between subjects factor. I looked at the time
main effect with the Friedman Two-Way Analysis of Variance by Ranks.
That effect was statistically significant and was followed up by
single df compari
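For readers unfamiliar with the test being discussed: the Friedman analysis of the time main effect can be run with `scipy.stats.friedmanchisquare`, one argument per time point. The ordinal scores below are made-up data for 8 subjects at 4 times, not the poster's data; note this sketch covers only the repeated-measures factor, not the 3-level between-subjects factor.

```python
# Sketch of the Friedman two-way ANOVA by ranks for a repeated-measures
# design: hypothetical ordinal scores for 8 subjects at 4 measurement times.
from scipy.stats import friedmanchisquare

time1 = [1, 2, 1, 3, 2, 1, 2, 2]
time2 = [2, 3, 2, 3, 3, 2, 3, 2]
time3 = [3, 3, 3, 4, 3, 3, 3, 3]
time4 = [4, 4, 3, 4, 4, 4, 4, 3]

# Each subject is ranked across the four times; the statistic tests whether
# the mean ranks differ across times.
stat, p = friedmanchisquare(time1, time2, time3, time4)
print(stat, p)
```

A significant result would then be followed up with single-df comparisons between pairs of times, as the poster describes.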
Generally, you can include an interaction (or moderator) term in a linear
model, like
y = b0 + b1 * x1 + b2 * x2 + b3 * x1*x2,
and the model still is linear. If you decide not to include x1 and x2, like
y = b0 + b1 * x1*x2,
you still have a linear model.
BUT: I don't understand the purpose and tec
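To make the point concrete: a model with a product term is still linear in its coefficients, so ordinary least squares applies unchanged. The sketch below uses simulated data with coefficients of my own choosing (b0=1, b1=2, b2=-1, b3=0.5); nothing here comes from the original poster's model.

```python
# Sketch: fitting y = b0 + b1*x1 + b2*x2 + b3*x1*x2 by ordinary least
# squares on simulated data. The product x1*x2 is just another column of
# the design matrix, so the model remains linear in b0..b3.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 1.0 * x2 + 0.5 * x1 * x2 + rng.normal(scale=0.1, size=n)

# Design matrix: intercept, main effects, interaction column.
X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(b)  # close to [1.0, 2.0, -1.0, 0.5]
```

Dropping the main-effect columns, as in y = b0 + b1*x1*x2, just means deleting two columns of X; the fit is still linear least squares.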
Hi everybody,
I'm looking for free Matlab programs which perform bootstrap, jackknife &
cross-validation, for neural networks and regression (MLP).
Can anybody tell me where I can find them?
Thanks a lot
--
Frédéric Martin
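Not the requested Matlab code, but the resampling idea is easy to sketch. Here is a minimal NumPy bootstrap (simulated data, all names my own) estimating the standard error of the sample mean by recomputing the statistic on resamples drawn with replacement.

```python
# Minimal bootstrap sketch: estimate the standard error of the mean by
# resampling the data with replacement and recomputing the mean each time.
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=10.0, scale=2.0, size=100)  # simulated sample

boot_means = np.array([
    np.mean(rng.choice(data, size=data.size, replace=True))
    for _ in range(2000)
])
se_boot = boot_means.std(ddof=1)
print(se_boot)  # should sit near data.std(ddof=1) / sqrt(100), i.e. about 0.2
```

The same loop works for any statistic (a regression coefficient, a network's validation error, ...) by swapping `np.mean` for the statistic of interest, which is why generic bootstrap code is so short in any language, Matlab included.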
ForecastPro or one of the other AutoBox products may fit all of your
criteria, with the possible exception of the very advanced features.
It's meant to be used by MBA & non-statistician types. I believe you can
look them over at www.autobox.com. I've been using ForeCastPro for some
time to predic
Michael Granaas wrote:
> I have in my own mind been using "plausible" to refer to a hypothesis that
> has not been refuted by data. We may certainly find at some point that
> the hypothesis is in fact false, but at the time we propose it it could be
> true. We may even wish it to be false at th
"T.S. Lim" wrote:
> Data Mining = Statistics reborn with a new name.
>
> You ask the wrong crowd. Go to
>
>http://www.kdcentral.com
>
> and subscribe to datamine-l mailing list.
That's debatable. The poster's question has as much to do with regression
to the mean as with modeling, and any
If you distribute your mass 1/2 at 0 and 1/2 at 1, the variance is 1/4 (i.e.
Var[x] = E[x^2] - E[x]^2 = (0^2*1/2 + 1^2*1/2) - (0*1/2 + 1*1/2)^2 = 1/4).
I believe this is the maximum variance for the interval [0,1]. Other than
just making an assertion, is there a way to prove that Var[x] <= 1/4?
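There is a standard two-line proof. Since X is supported on [0,1], x^2 <= x pointwise, so:

```latex
% For X supported on [0,1]: x^2 \le x pointwise, hence E[X^2] \le E[X].
\operatorname{Var}[X] = E[X^2] - (E[X])^2
  \le E[X] - (E[X])^2 = \mu(1-\mu) \le \tfrac{1}{4},
\qquad \text{where } \mu = E[X].
```

Equality requires E[X^2] = E[X], which forces all mass onto {0, 1}, together with mu = 1/2; that is exactly the half-at-0, half-at-1 distribution above, so 1/4 is indeed the maximum.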
==
Dear all,
Suppose I have an aggregation model which is in the following form:
Y = X11 * X12 + X21 * X22.
This model could be thought of as an aggregation of two kinds of knowledge,
namely X1. and X2.. Each kind of knowledge contains two pieces of information
(attributes). For example, X1 contains X11 and X12. N