>For the purists: just redo the calculation starting from different points
>and you can evaluate the error in the distribution using a
>Monte Carlo-like approach...

Leoni,

Your error estimation procedure sounds a lot like the "bootstrap" method,
which I think has now gained credibility; see:

http://www.amazon.com/exec/obidos/tg/detail/-/0412042312/002-2676968-3712052?v=glance
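
In case a concrete sketch helps: the classic bootstrap resamples the data with
replacement and refits each resample, which is subtly different from restarting
the fit from different points. A toy Python illustration only (fit_distribution
is a hypothetical stand-in for whatever whole-pattern refinement is actually
used):

import numpy as np

def bootstrap_errors(data, fit_distribution, n_boot=200, rng=None):
    """Resample the data with replacement, refit each resample, and take
    the scatter of the fitted parameters as the uncertainty estimate."""
    rng = np.random.default_rng(rng)
    data = np.asarray(data)
    estimates = []
    for _ in range(n_boot):
        resample = rng.choice(data, size=data.size, replace=True)
        estimates.append(fit_distribution(resample))  # e.g. returns (<R>, c)
    estimates = np.asarray(estimates)
    return estimates.mean(axis=0), estimates.std(axis=0, ddof=1)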

alan



-----Original Message-----
From: Matteo Leoni [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 18, 2005 2:00 PM
To: rietveld_l@ill.fr


hello Nicolae,

> Not only arithmetic; I think it is clear that both <R> and c were refined
> in a whole-pattern least-squares fit. A private program, not a popular
> Rietveld program, because no one has implemented the size profile caused
> by the lognormal distribution.

not sure no one did... we have been working with that kind of profile at
least since 2000 (published in 2001: Acta Cryst. A57, 204), without the need
for any approximation through Voigts or pseudo-Voigts. Using an FFT and
some math tricks you can compute the "true" profile for a distribution of
crystallites in almost the same time it takes to calculate a Voigt curve, so
why use any approximate function at all?
I think this agrees with what Alan just pointed out (well, 5000 profiles
per second if you do not include any hkl-dependent broadening that has to
be calculated for each of them, and perhaps for each subcomponent...
otherwise the speed drops; but yes, a few ms per profile is the current
speed for my WPPM code, which implements all of this within the WPPM
frame).
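
To make "FFT and some math tricks" concrete, here is a minimal numerical
sketch under my own simplifying assumptions (spherical crystallites, a
lognormal diameter distribution with arithmetic mean D_mean and relative
variance c, no instrumental or strain terms); it illustrates the idea only
and is not the WPPM code:

import numpy as np

def size_profile_lognormal(D_mean, c, L_max=200.0, n=1024):
    """Size-broadened profile for spheres with a lognormal diameter
    distribution: volume-weighted average of the single-sphere Fourier
    (common-volume) coefficients, followed by a cosine transform."""
    # lognormal parameters from the arithmetic mean and relative variance c
    sigma2 = np.log(1.0 + c)
    mu = np.log(D_mean) - 0.5 * sigma2

    L = np.linspace(0.0, L_max, n)                # Fourier length (units of D)
    D = np.linspace(1e-3, 10.0 * D_mean, 2000)    # diameter grid
    g = np.exp(-(np.log(D) - mu) ** 2 / (2.0 * sigma2)) \
        / (D * np.sqrt(2.0 * np.pi * sigma2))

    # single-sphere coefficients 1 - 3/2 (L/D) + 1/2 (L/D)^3, zero for L >= D
    # (the clipped ratio x = min(L, D)/D gives exactly 0 at and beyond L = D)
    x = np.minimum.outer(L, D) / D
    A_single = 1.0 - 1.5 * x + 0.5 * x ** 3

    # volume-weighted average over the distribution (uniform D grid)
    w = g * D ** 3
    A = (A_single * w).sum(axis=1) / w.sum()

    # cosine transform -> intensity versus the reciprocal-space variable s
    s = np.fft.rfftfreq(n, d=L[1] - L[0])
    I = np.real(np.fft.rfft(A)) * (L[1] - L[0])
    return s, I / I[0]

Roughly speaking, the other broadening sources then multiply A(L) before the
single transform (convolution theorem), which is why the cost stays close to
that of one profile.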

> > But the most important disadvantage is the necessity to choose the
> > exact type of size distribution. For Sample 1 (which, obviously, has a
> > certain distribution with certain <R> and c) you got quite different
> > values of <R> and c for the lognormal and gamma models, but the values
> > of Dv and Da were nearly the same. Don't you feel that the Dv and Da
> > values "contain" more reliable information about <R> and c than those
> > elaborate approximations described in chapter 6?
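
(For reference, under my own assumptions of spherical crystallites and a
lognormal diameter distribution with arithmetic mean <D> = 2<R> and relative
variance c, the moments obey
\langle D^n \rangle = \langle D \rangle^n (1+c)^{n(n-1)/2}, so

D_A = \frac{2}{3}\frac{\langle D^3\rangle}{\langle D^2\rangle}
    = \frac{2}{3}\langle D\rangle(1+c)^2 ,
\qquad
D_V = \frac{3}{4}\frac{\langle D^4\rangle}{\langle D^3\rangle}
    = \frac{3}{4}\langle D\rangle(1+c)^3 .

Da and Dv are just low-order moment ratios, which is why an assumed lognormal
and an assumed gamma shape can return different <R> and c and yet nearly the
same Da and Dv.)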
> 
> Well, this is a general feature of the least-squares method. In least
> squares you must first choose a parametrised model for whatever you wish
> to fit. Do you know any other possibility with least squares than to
> choose the model a priori? Without a model there is only deconvolution,
> and even there, if you wish a "stable" solution you must use a
> deconvolution method that requires a "prior, starting model" (I presume
> you followed the dissertation of Nick Armstrong on this theme).

also in this case it has been shown possible to obtain a distribution
without any prior information on its functional shape (J. Appl. Cryst.
(2004), 37, 629) and without resorting to the MaxEnt treatment.
I'm currently using it without much trouble for the analysis of
nanostructured materials... the advantages with respect to MaxEnt are the
speed and the fact that it can coexist with other broadening models (still
not possible with MaxEnt, and I have yet to see a specimen where strain
broadening is absent), and it can also recover a polydisperse distribution
if one is present... I just need to test it against MaxEnt (if data are
kindly provided to do so).
For the purists: just redo the calculation starting from different points
and you can evaluate the error in the distribution using a
Monte Carlo-like approach...
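
Something along these lines (a toy sketch only; refine stands for whatever
whole-pattern refinement is actually used, and the perturbation scale is an
arbitrary choice of mine); unlike the bootstrap, it perturbs the starting
point rather than resampling the data:

import numpy as np

def restart_errors(refine, p_start, n_restarts=50, spread=0.2, rng=None):
    """Re-run the refinement from randomly perturbed starting points and
    take the scatter of the converged parameters as an error estimate."""
    rng = np.random.default_rng(rng)
    p_start = np.asarray(p_start, dtype=float)
    results = []
    for _ in range(n_restarts):
        p0 = p_start * (1.0 + spread * rng.standard_normal(p_start.size))
        results.append(refine(p0))            # converged parameter vector
    results = np.asarray(results)
    return results.mean(axis=0), results.std(axis=0, ddof=1)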

As for the TCH-pV, well, it is no more than a pV with the Scherrer trend
(the 1/cos term) and the differential of Bragg's law (the tan term) plugged
in. This means it is OK as long as you consider a Williamson-Hall plot a
good quantitative estimator of size and strain (IMHO).
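
Written out (standard textbook forms, just to make the remark explicit):

\beta_{\mathrm{size}} = \frac{K\lambda}{\langle D\rangle\cos\theta} ,
\qquad
\beta_{\mathrm{strain}} = 4\varepsilon\tan\theta ,
\qquad\Rightarrow\qquad
\beta\cos\theta = \frac{K\lambda}{\langle D\rangle} + 4\varepsilon\sin\theta ,

i.e. exactly the Williamson-Hall construction; the TCH-pV just folds the same
1/cos and tan trends into its Gaussian and Lorentzian width parameters (e.g.
H_L = X\tan\theta + Y/\cos\theta in common implementations).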

Mat

PS I fully agree with Alan on the continuous requests for journals, but I
bet the other Alan (the deus ex machina of the mailing list) should warn
the members somehow...

----------------------------------------------
Matteo Leoni
Department of Materials Engineering
and Industrial Technologies 
University of Trento
38050 Mesiano (TN)
ITALY



