Jon & others,
Well, there is an attempt at this in GSAS - the "diffuse scattering" functions for
fitting these contributions separately from the "background" functions. They come in
three forms related to the Debye equations formulated for glasses. The possibly neat
thing about them is that they separate the diffuse scattering component from the
Bragg component, unlike PDF analysis. As a test of them I can fit neutron TOF
"diffraction" data from fused silica quite nicely. I'm sure others have tried them -
we might all want to hear about their experience.
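For anyone who hasn't looked at them, each term is essentially a Debye-equation
contribution: a damped sin(QR)/(QR) ripple coming from a pair correlation at
distance R. Something along these lines - the parameter names, damping and
normalisation below are only for illustration, not the exact GSAS form:

import numpy as np

def debye_term(q, amplitude, distance, u_damp):
    # One Debye-type diffuse term: sin(Q*R)/(Q*R), damped at high Q.
    # np.sinc(x) = sin(pi*x)/(pi*x), so sinc(qr/pi) evaluates sin(qr)/qr.
    qr = q * distance
    return amplitude * np.sinc(qr / np.pi) * np.exp(-u_damp * q**2)

# Example: a term at roughly the Si-O distance in fused silica (~1.6 A)
q = np.linspace(0.5, 25.0, 500)          # Q in 1/Angstrom
diffuse = debye_term(q, amplitude=1.0, distance=1.6, u_damp=0.005)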
Bob Von Dreele

________________________________

From: Jon Wright [mailto:[EMAIL PROTECTED]]
Sent: Sun 8/22/2004 6:13 AM
To: [EMAIL PROTECTED]




>Well, that is an old chestnut that Cooper and Rollet used to oppose to
>Rietveld refinement. I think Rollet eventually agreed that Rietveld was
>the better method. Has Bill really gone back on that ?
> 
>
The difference between the two approaches is just an interchange of the
order of summations within a Rietveld program. Differences in esds
should only arise through differences in accumulated rounding errors,
assuming you don't apply any fudge factors. Since most people do apply
fudge factors, the argument is really about which fudge factor you
should apply. I will only comment that the conventional Rietveld
approach (multiply the covariance matrix by chi^2) is often poor.
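For concreteness, that conventional fudge amounts to something like the
following (a sketch for illustration, not any particular program's code):

import numpy as np

def inflated_esds(covariance, weighted_residuals, n_obs, n_params):
    # Scale the least-squares covariance matrix by reduced chi^2
    # before quoting esds - the usual Rietveld "fudge factor".
    chi2_red = np.sum(weighted_residuals**2) / (n_obs - n_params)
    return np.sqrt(np.diag(covariance) * chi2_red)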

As for the PDF "versus" Rietveld - you should get smaller esds on
thermal factors if you were to write a program which treats the
background as part of the crystal structure and has no arbitrary
degrees of freedom in modelling the background. This is simply because
you add in more data points that are normally treated as "background"
but which should help to determine the thermal parameters via the
diffuse scattering.

So, provided you were to remove the arbitrary background from the
Rietveld program and compute the diffuse scattering, the methods ought
to be equivalent. Something like DIFFAX does this already for a subset
of structures, but I think without refinement. The real difficulty lies
in how to visualise the disordered component, decide what it is, and
improve the fit - hence the use of the PDF. Although no one appears to
have written such a program, there does not seem to be any fundamental
reason why it is not possible (compute the PDF to whatever Q limit you
like, then transform the PDF and derivatives into reciprocal space).
Biologists already manage to do this in order to use an FFT for
refinement of large crystal structures!
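To make that reciprocal-space comparison concrete, here is a minimal
sketch of pushing a model reduced PDF G(r) back to F(Q) = Q[S(Q)-1] on
whatever Q range was actually measured (normalisation conventions vary
between codes, so treat the details as illustrative):

import numpy as np

def model_to_fq(r, g_model, q):
    # Sine-transform a model G(r) to F(Q) = Q[S(Q)-1] on the measured
    # Q grid: F(Q) = integral_0^rmax G(r) sin(Q*r) dr, as a simple sum.
    dr = r[1] - r[0]
    return np.array([np.sum(g_model * np.sin(qi * r)) * dr for qi in q])

# Usage: build G(r) from the structural model on a fine r grid, then
# compare model_to_fq(r, g_model, q_measured) directly with the data.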

In practice a large percentage of the beamtime for these experiments at
the synchrotron is used to measure data at very high Q, which visually
has relatively little information content - just so that a Fourier
transform can be used to get the PDF. This is silly! The model can
always be Fourier transformed up to an arbitrary Q limit and then
compared with whatever range of data you have. For things like InGaAs
the diffuse scattering bumps should occur mainly on the length scale of
the two actual bond distances. Wiggles on shorter length scales are
going to be more and more dominated by the thermal motion of the atoms,
and so don't really add as much to the picture (other than allowing an
experimentalist to get some sleep!).

In effect it is like the difference between measuring single crystal
data to the high Q limit, computing an origin-removed Patterson
function, and then refining against that Patterson as raw data. No one
does the latter, as you can trivially avoid the truncation effects by
doing the refinement in reciprocal space. The question then is whether
it is worth using up most of your beamtime to measure the way something
tends toward a constant value very, very precisely. Could the PDF still
be reconstructed via maximum entropy techniques from a restricted range
of data, to help in designing the model? Currently the PDF approach
beats crystallographic refinement by modelling the diffuse scattering.
As soon as there is a Rietveld program which can model this too, one
might expect these experiments to become more straightforward away from
the ToF source.

I'd be grateful if someone could correct me and show that most of the
information is at the very high Q values. Visually these data contain
very little compared to the oscillations at lower Q, and they seem to
become progressively less interesting the further you go, as there is a
larger and larger "random" component due to thermal motion. Measuring
this just so you can do one transform of the data, instead of
transforms of the computed PDF and its derivatives, seems like a
dubious use of resources. Since the ToF instruments get these data
whether they like it or not, the one-transform approach is entirely
sensible there. For x-rays and CW neutrons, it seems there is a
Rietveld program out there waiting to be modified.

August is still with us, Happy Silly Season!

Jon




