I would like to add my two cents to the discussion on ESDs and
weightings. 

Paolo wrote:

> Clearly, the ESD on x1 have worsened in this simple case.  This does not
> prove the general case, but there might be a proof for that as well.

This conclusion was reached by applying non-linear least squares to
solve for x1 and x2 in the following:

        I1 = x1 + x2;  I2 = x1 + 2 x2;  I3 = x2,  assuming unit weights.

ESDs squared were calculated as:
        2 and 2/3 for x1 and x2 respectively,
and     5 and 2 for x1 and x2 respectively when I3 is excluded.
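
For anyone who wants to check these numbers, here is a short numpy
sketch of my own (numpy is just my choice for illustration). It builds
the design matrix for the three equations above and inverts the normal
matrix with and without the I3 row; with unit weights the diagonal of
the inverse gives the ESDs squared:

        import numpy as np

        # Rows are (dI/dx1, dI/dx2) for I1 = x1 + x2, I2 = x1 + 2 x2, I3 = x2
        J = np.array([[1.0, 1.0],
                      [1.0, 2.0],
                      [0.0, 1.0]])

        B = np.linalg.inv(J.T @ J)             # inverse of the normal matrix
        print(np.diag(B))                      # [2.0, 0.667]  -> 2 and 2/3

        B2 = np.linalg.inv(J[:2].T @ J[:2])    # I3 excluded
        print(np.diag(B2))                     # [5.0, 2.0]    -> 5 and 2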

This change in ESDs makes perfect sense to me. Excluding I3 lessens the
certainty in determining x2 which in turn lessens the certainty in
determining  x1. My conclusion is that there is nothing wrong with the
ESDs calculated. 

Now, for x1 and x2 to be independent of I3, I3 should not contain any
parameters that also appear in the equations involving x1 and x2; for
example:

        I1 = x1 + x2
        I2 = x1 + 2 x2
        I3 = x3

I3 here is now independent of x1 and x2, and the A and B matrices become:

        A = {{2, 3, 0}, {3, 5, 0}, {0, 0, 1}}
        B = {{5, -3, 0}, {-3, 2, 0}, {0, 0, 1}}

Excluding I3 means excluding x3 as well; the A and B matrices here are:

        A = {{2, 3}, {3, 5}}
        B = {{5, -3}, {-3, 2}}

As you can see, in both cases the ESDs for x1 and x2 are the same.
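
The same kind of check (again a numpy sketch of my own, not part of the
argument itself) reproduces these matrices and shows that the x1, x2
block of B is unchanged when I3 and x3 are dropped:

        import numpy as np

        # I1 = x1 + x2, I2 = x1 + 2 x2, I3 = x3
        J = np.array([[1.0, 1.0, 0.0],
                      [1.0, 2.0, 0.0],
                      [0.0, 0.0, 1.0]])

        A = J.T @ J
        B = np.linalg.inv(A)
        print(A)      # {{2, 3, 0}, {3, 5, 0}, {0, 0, 1}}
        print(B)      # {{5, -3, 0}, {-3, 2, 0}, {0, 0, 1}}

        # Excluding I3, and with it x3:
        B2 = np.linalg.inv(J[:2, :2].T @ J[:2, :2])
        print(B2)     # {{5, -3}, {-3, 2}}  -> same ESDs for x1 and x2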

Some other points I would like to clear up:

> 1) You construct the Aij matrix (also sometimes called the Hessian matrix)
> 
>         Aij=D^2chi^2/DxiDxj (D is the partial derivative sign.  E-mail is
> still too primitive)

The term Hessian matrix refers to the matrix of second-order partial
derivatives obtained when expanding a function in a Taylor series.

Typically Chi^2 is written as:

         Chi^2 = Sum[  w(i)^2 (Io(i) - Ic(i))^2 , i]

        where Io(i) is the observed data point
                Ic(i) is the calculated data point
                w(i)^2 is the weighting term for data point i
                and the summation is over i
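
In code this is a one-liner; a trivial numpy illustration with made-up
numbers, just to fix the notation:

        import numpy as np

        Io = np.array([3.1, 4.9, 2.0])       # hypothetical observed data
        Ic = np.array([3.0, 5.0, 2.1])       # hypothetical calculated data
        w  = np.ones(3)                      # unit weights, so w(i)^2 = 1
        chi2 = np.sum(w**2 * (Io - Ic)**2)   # Sum[ w(i)^2 (Io(i) - Ic(i))^2 , i]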

Ic(i) is expanded as a first-order Taylor approximation about the
current parameter values P, or:

        Ic(i) = Ic(i, P) + Sum[ dIc(i)/dP(p) Del(p) , p]

        where P is the parameter vector
                Del(p) is the change in parameter P(p)
                dIc(i)/dP(p) is the derivative of Ic(i) wrt parameter P(p)
 
Chi^2 becomes

         Chi^2 = Sum[ w(i)^2 (Io(i) - Ic(i, P)
                              - Sum[ dIc(i)/dP(p) Del(p) , p])^2 , i]

Differentiating Chi^2 wrt each parameter P(p) and equating these
equations to zero yields the normal equations, which in matrix form look
like:

        A Del(P) = Y

        where Aij = Sum[ w(k)^2 dIc(k)/dP(i) dIc(k)/dP(j) , k]
        and   Yi  = Sum[ w(k)^2 (Io(k) - Ic(k, P)) dIc(k)/dP(i) , k]
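
As a sketch of how one such iteration might look in numpy (the function
name and layout are my own, and the weights enter as w(i)^2 to match the
Chi^2 written above):

        import numpy as np

        def normal_equations_step(Io, Ic, dIc_dP, w):
            # dIc_dP is the n x m matrix of derivatives dIc(i)/dP(p)
            W = np.diag(w**2)
            A = dIc_dP.T @ W @ dIc_dP          # Aij as above
            Y = dIc_dP.T @ W @ (Io - Ic)       # Yi as above
            return np.linalg.solve(A, Y)       # Del(P)

        # For a linear model one step is exact, e.g. the earlier example
        # with "true" x1 = 1, x2 = 2 and starting values of zero:
        dIc_dP = np.array([[1.0, 1.0], [1.0, 2.0], [0.0, 1.0]])
        Io = np.array([3.0, 5.0, 2.0])         # I1, I2, I3 for x1 = 1, x2 = 2
        print(normal_equations_step(Io, np.zeros(3), dIc_dP, np.ones(3)))  # [1. 2.]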

These equations are then solved for the changes in the parameters
Del(P). This term for Aij is different to the one given by Paolo and is
applicable to single crystal data. It is also applicable to powder data
by replacing i with the 2Th steps and Io(i), Ic(i) with the observed and
calculated intensities as a function of 2Th. Thus I do not know where
the following term given by Paolo came from:

        Aij=D^2chi^2/DxiDxj

Note that it is sometimes useful to expand Chi^2 itself in a Taylor
series, but this is not normally done in the analysis of XRD data.

Alan
