[EMAIL PROTECTED] (Peter Hai) wrote in message news:<[EMAIL PROTECTED]>...
> m and n observations were obtained, respectively, from two normal
> distributions that are independent of each other. How can I test the
> null hypothesis that the absolute mean difference [abs(mu1-mu2)]
> plus the variance ratio [Variance1/Variance2] is less than or equal
> to a constant?

Personally, for a complicated situation like this, I'd do a 
permutation test (or, if m and n are really large, a randomization 
test, i.e. one based on a random sample of the permutations rather 
than on all of them).
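
To give a feel for the mechanics, here is a rough sketch in 
Python/NumPy. The statistic follows your description; the sample 
sizes, seed, and number of shuffles are just placeholders. Note 
that this tests the usual "identical distributions" null using 
that statistic; testing against a nonzero constant on the right 
hand side would take more thought.

    import numpy as np

    def perm_test(x, y, stat, n_perm=10_000, seed=None):
        # Monte Carlo permutation test: repeatedly reassign the
        # pooled observations to two groups of the original sizes
        # and see how often the shuffled statistic is at least as
        # large as the observed one.
        rng = np.random.default_rng(seed)
        pooled = np.concatenate([x, y])
        m = len(x)
        observed = stat(x, y)
        hits = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)
            if stat(pooled[:m], pooled[m:]) >= observed:
                hits += 1
        # add-one correction: counts the observed arrangement itself
        return (hits + 1) / (n_perm + 1)

    def t_stat(x, y):
        # the statistic from your question:
        # |mean difference| + variance ratio
        return abs(x.mean() - y.mean()) + x.var(ddof=1) / y.var(ddof=1)

    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, size=30)   # m observations
    y = rng.normal(0.0, 1.0, size=40)   # n observations
    print(perm_test(x, y, t_stat, seed=1))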

However, I have a problem with your hypothesis: its two 
components look to be measured in different units!

The first term is measured in whatever units the data are in;
the second term is dimensionless. 

What units is the constant on the right-hand side in?

For example, if the data are measured in dollars (or feet, say)
and you change to measuring in cents (or inches), the first term 
in your null hypothesis will change and the second term will not!

Even if we could ignore the logical problem of adding quantities
with different dimensions ($1.25 + 2 non-dollars, 2.5 feet + 0.5 
non-feet), the relative impact of the variance ratio on the test 
would still depend on what units you work in.

Except in some pretty special circumstances, that's going
to be a problem.
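
You can see the unit-dependence numerically. A quick sketch (the 
dollar figures here are invented):

    import numpy as np

    def t_stat(x, y):
        # |mean difference| + variance ratio
        return abs(x.mean() - y.mean()) + x.var(ddof=1) / y.var(ddof=1)

    rng = np.random.default_rng(0)
    dollars_x = rng.normal(10.0, 2.0, size=30)
    dollars_y = rng.normal(11.0, 2.5, size=30)

    # the same data in cents: the mean-difference term scales by
    # 100, the variance ratio does not, so the combined statistic
    # (and hence any test based on it) changes
    print(t_stat(dollars_x, dollars_y))
    print(t_stat(100 * dollars_x, 100 * dollars_y))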

Can you give some details of why your null hypothesis
involves things in different dimensions?

Glen