----- Original Message -----
From: "Estimator" <[EMAIL PROTECTED]>
Newsgroups: sci.stat.edu
Sent: Saturday, 22 December 2001 2:47
Subject: How to prove absence of dependence between arguments of function?


>
> I've got a linear function Y = f(x1, x2, x3, x4) of a theoretical
> distribution, where in addition x3 = g1(x1) and x4 = g2(x1, x2).
> I've also got an empirical sample consisting of N sets of these
> values (magnitudes):
> Y1 x11 x12 x13 x14
> ..
> Yn xn1 xn2 xn3 xn4
> Since x3 and x4 depend on x1 and x2, it seems reasonable to
> express x3 and x4 in terms of x1 and x2 respectively and analyse
> Y as a function of x1 and x2 only.

    If x3 = g1(x1), then x3 and x1 can only be independent if the
function g1 is a constant, or at least degenerate with respect to
x1; similarly for x4 = g2(x1, x2).

> But I have a strong belief that in fact all of the arguments
> are independent, or that the dependence is insignificant. How
> can I prove this mathematically using empirical observations?

        You could plot x1 against x3 to convince yourself that there
is no tendency for the points to depart from a random scatter.
Similarly, plot x1 vs x4 and x2 vs x4. But this would not give you
objective grounds to include x3 and x4 in the analysis or to
exclude them from it.
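
    A rough sketch of such plots in Python, assuming the sample is
held in arrays x1..x4 (the names and the simulated stand-in data
below are mine, purely for illustration):

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical stand-in data; substitute the N observed rows.
    rng = np.random.default_rng(0)
    x1 = rng.normal(size=200)
    x2 = rng.normal(size=200)
    x3 = rng.normal(size=200)   # suspected to depend on x1
    x4 = rng.normal(size=200)   # suspected to depend on x1 and x2

    # One scatter plot per suspected dependence; look for any
    # systematic departure from a random scatter.
    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    pairs = [(x1, x3, "x1 vs x3"), (x1, x4, "x1 vs x4"),
             (x2, x4, "x2 vs x4")]
    for ax, (u, v, title) in zip(axes, pairs):
        ax.scatter(u, v, s=10)
        ax.set_title(title)
    plt.tight_layout()
    plt.show()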

    In fact, even if x3 and x4 are uncorrelated with x1 and x2,
your best course would be to retain them in the analysis. Then
you can formally test whether they contribute any explanatory
power with respect to Y.

If the explanatory power of the model is not significantly
improved by including x3 and x4, you have objective evidence to
exclude them from the model.
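
    One concrete way to run that test (a sketch, not the only way:
it assumes an ordinary least squares fit and uses the statsmodels
package, and the simulated data are mine):

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical stand-in data; substitute the N observed rows.
    rng = np.random.default_rng(1)
    N = 200
    x1, x2, x3, x4 = (rng.normal(size=N) for _ in range(4))
    y = 2.0 * x1 - 1.0 * x2 + rng.normal(size=N)

    # Full model with x3, x4 versus reduced model without them.
    X_full = sm.add_constant(np.column_stack([x1, x2, x3, x4]))
    X_reduced = sm.add_constant(np.column_stack([x1, x2]))
    full = sm.OLS(y, X_full).fit()
    reduced = sm.OLS(y, X_reduced).fit()

    # F-test of the nested models: do x3 and x4 add explanatory
    # power beyond x1 and x2?
    f_stat, p_value, df_diff = full.compare_f_test(reduced)
    print(f"F = {f_stat:.3f}, p = {p_value:.4f}, df = {df_diff}")

A large p-value from this comparison is the objective evidence
mentioned above for excluding x3 and x4.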

> Is there any sense in forming the 4x4 (Pearson) correlation
> matrix and proving the insignificance of the correlation
> coefficients between arguments (by Student's t-test, for
> example)?

        No. One reason is that you would have to conduct six
significance tests, and the chance that at least one of them
exceeds the 1-in-20 level purely by chance is too high. In any
event, correlation is only a measure of LINEAR dependence, and
data frequently have more complicated dependencies. There is no
way to prove independence from a data set, in the same sense that
scientific theories are never proved, only disproved. In
particular, failure to reject a null hypothesis is not proof of
its correctness.
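
    To see why six tests are too many: if each test is run at the
1-in-20 (5%) level and the tests were independent, the chance of
at least one spurious rejection is 1 - 0.95^6, roughly 26% (my
arithmetic, added for illustration):

    # Chance that at least one of six independent 5%-level tests
    # rejects purely by chance.
    alpha, m = 0.05, 6
    print(1 - (1 - alpha) ** m)   # ~0.265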

    Dependence does not consist only of linear relationships, and
lack of correlation does not imply independence. For example, if X
is symmetrically distributed on (-1, 1) and Y = X^2, then X and Y
are uncorrelated although functionally related.
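
    A quick numerical check of that example (taking the uniform
distribution on (-1, 1) as one symmetric choice; the code is
mine):

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.uniform(-1.0, 1.0, size=100_000)  # symmetric about 0
    y = x ** 2                                # fully determined by x

    # Sample correlation is ~0 despite the exact functional
    # relationship between x and y.
    print(np.corrcoef(x, y)[0, 1])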

    Hope this helps   Jim Snow
             [EMAIL PROTECTED]



