On Mon, 1 Oct 2001 21:52:08 +0200, "Bernhard Kuster"
<[EMAIL PROTECTED]> wrote:

[me] > 
> > I think I am trying to say, gently, that your basic question doesn't
> > make very good sense to me;  and it did not, to Dennis, either.
> > "Optimal"  is one problematic word.  Another problem is that
> > you seem to ask about all research, in all of the world....  
> > It might be a clever way to attack 'sample size', but I think 
> > that hasn't been done.

BK > 
> Thanks for your advice. I see that the question is probably a little
> overloaded and that "optimal" is not a good term. But isn't there
> something that determines the sample size for all statistical techniques? I
> remember reading, a long time ago, that the sample sizes of all
> statistical techniques are influenced by alpha risk, power and effect size.
> Is this wrong, or is it not applicable to my question?

"Influenced"  is an inadequate word.  
And there is nothing to be "optimal"  when the relations are one-to-one.

Once you have your choice of parameterization, 
there is a strict, 5-dimensional relation.   Here is one
way to describe it:
" a) for a given test 
 b) if there is a specific, assumed N
 c) and (standardized) effect size, then:
 d) using a given alpha (risk, one or two tailed)
 e) will give you a test with power ... (or 1.0 minus beta-error)
[that can be determined as follows ....]"
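The relation (a)-(e) can be sketched in a few lines of code. This is an
illustrative example only, for a hypothetical one-sample z-test with known
variance; the function name and the specific numbers are my assumptions,
not something from the discussion above.

```python
from statistics import NormalDist  # stdlib normal CDF and quantile

def z_test_power(n, effect_size, alpha, two_tailed=True):
    """Power of a one-sample z-test:
    (a) the test, (b) N, (c) standardized effect size d,
    (d) alpha  ->  (e) power, i.e. 1.0 minus beta-error."""
    z = NormalDist()
    crit = z.inv_cdf(1 - alpha / 2) if two_tailed else z.inv_cdf(1 - alpha)
    shift = effect_size * n ** 0.5      # how far the test statistic is shifted
    power = 1 - z.cdf(crit - shift)     # rejection in the near tail
    if two_tailed:
        power += z.cdf(-crit - shift)   # plus rejection in the far tail
    return power

# d = 0.5, N = 64, two-tailed alpha = 0.05
print(z_test_power(64, 0.5, 0.05))
```

Fix any four of the five quantities and the fifth is determined; there is
nothing left over to "optimize."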

To say it another way:  
there is (almost always) a trade-off
between beta and alpha (type 2 and type 1 errors).  
For ANOVA, the factors (b) and (c), above, 
can be encompassed in the "noncentrality parameter", 
which is the product of N times the squared (standardized) distance.  
(Strictly, that is degrees of freedom rather than N.)

A given "noncentrality" fixes a specific set of alpha and beta
values that can be computed, where one error can
be traded off against the other.

The articles and books that get written are about parameterizations
that are useful, or trade-offs that are recommended.  Or designs
that are efficient.  These vary by area -- Jacob Cohen wrote the 
book that is most used in "behavioral science"  (from its title)
and clinical research.  The book is not concerned with designed 
N's of over 1000, or N's under 10; and it has a very particular
orientation towards designs in that middle range, stating a 
judgement on what effects are small, medium or large.  
Okay, he was describing the world that a lot of us work in.

But it is certainly not the only sort of research that exists.


As I read it in your earlier message, you were asking about 
"an expert"  who -- I think -- would be like an expert in doing 
long division.  Not necessary.  Not a good idea.

That is: no one has asked for scientific proof on that level 
for a few hundred years.  A lot of people know what 
"statistical power"  is, and there is no controversy as to 
the shape of the general topic.  So you need a more specific question.

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html


=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
                  http://jse.stat.ncsu.edu/
=================================================================
