RE: [EXT] Re: [External] Re: Step-like basline

2023-09-04 Thread alancoelho
Yes Frank, the macro is working with V5 of TOPAS-Academic; TOPAS should be the 
same.

Cheers

Alan

 

-Original Message-
From: Frank Girgsdies  
Sent: Monday, September 4, 2023 7:09 PM
To: alancoe...@bigpond.com
Cc: rietveld_l@ill.fr
Subject: Re: [EXT] Re: [External] Re: Step-like basline

 

Dear Alan,

 

Is this updated macro backward compatible with TOPAS version 5?

 

If yes, I would edit my topas.inc and try it.

 

Thanks and cheers,

Frank

 

RE: [EXT] Re: [External] Re: Step-like basline

2023-09-04 Thread alancoelho
Hi Kurt

 

TOPAS models absorption edges well. Page 158 of the Technical Reference at
http://www.topas-academic.net/Technical_Reference.pdf shows the result. 

 

The macro in question modifies the emission profile, which is how I think
it should be modelled:

 

macro Absorption_Edge_Correction_Eqn(& edge, & a_white, & b_white, & a_erf,
& edge_extra)
   {
      modify_peak_eqn =
         (
            Get(current_peak) +
            a_white Exp(- b_white (Get(current_peak_x) - edge)^2) / Tan(Th)
            ' Not necessary
            ' If(Get(current_peak_x) < edge, 1/Get(current_peak_x)^3, 1 )
         )
         (edge_extra + 0.5 (1 + Erf_Approx( a_erf (Get(current_peak_x) - edge))) );
   }

 

If someone spots a problem then I would be happy to know. The line is
commented out because it makes little difference to the result.
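
For readers without TOPAS, a rough Python transliteration of the same
correction (a sketch only; the names x, peak and the angle th in radians are
choices made here for illustration, not TOPAS names, and TOPAS's Erf_Approx
is replaced by scipy's erf):

import numpy as np
from scipy.special import erf

def absorption_edge_correction(x, peak, th, edge, a_white, b_white, a_erf,
                               edge_extra):
    # 'white line' Gaussian centred on the edge, damped by 1/Tan(Th)
    white_line = a_white * np.exp(-b_white * (x - edge) ** 2) / np.tan(th)
    # smeared step: ~edge_extra below the edge, ~edge_extra + 1 above it
    step = edge_extra + 0.5 * (1.0 + erf(a_erf * (x - edge)))
    return (peak + white_line) * step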

 

Cheers

Alan

 

From: rietveld_l-requ...@ill.fr  On Behalf Of
Kurt Leinenweber
Sent: Monday, September 4, 2023 2:53 AM
To: Thomas Gegan ; Bish, David L ;
Shay Tirosh ; Fernando Igoa 
Cc: Rietveld List (rietveld_l@ill.fr) 
Subject: RE: [EXT] Re: [External] Re: Step-like basline

 

Hi,  Are these things modeled in Rietveld programs, by chance?  It seems
like a lot of baggage to put in a refinement but if it makes the results
better…

 

Kurt

 

From: rietveld_l-requ...@ill.fr <rietveld_l-requ...@ill.fr> On Behalf Of
Thomas Gegan
Sent: Sunday, September 3, 2023 9:16 AM
To: Bish, David L <b...@indiana.edu>; Shay Tirosh <stiro...@gmail.com>;
Fernando Igoa <fer.igoa.1...@gmail.com>
Cc: Rietveld List (rietveld_l@ill.fr) <rietveld_l@ill.fr>
Subject: RE: [EXT] Re: [External] Re: Step-like basline

 

I agree with a Ni absorption edge, possibly with a Kβ peak around 38° 2θ.

 

Tom Gegan
Chemist III

 

Phone: +1 732 205-5111, Email: tom.ge...@basf.com
Postal Address: BASF Corporation, 25 Middlesex Essex Turnpike, 08830 Iselin,
United States



 

From: rietveld_l-requ...@ill.fr <rietveld_l-requ...@ill.fr> On Behalf Of
Bish, David L
Sent: Sunday, September 3, 2023 7:08 AM
To: Shay Tirosh <stiro...@gmail.com>; Fernando Igoa <fer.igoa.1...@gmail.com>
Cc: Rietveld List (rietveld_l@ill.fr) <rietveld_l@ill.fr>
Subject: [EXT] Re: [External] Re: Step-like basline

 





Hello Shay,

I think it is probably related to "tube tails". You can read about this in
the literature (e.g., on the BGMN web site) and you can model it in some
Rietveld software such as Topas. You don't normally notice this but it
becomes apparent with higher-intensity peaks.

 

Regards,

Dave

  _  

From: rietveld_l-requ...@ill.fr <rietveld_l-requ...@ill.fr> on behalf of
Fernando Igoa <fer.igoa.1...@gmail.com>
Sent: Sunday, September 3, 2023 3:06 AM
To: Shay Tirosh <stiro...@gmail.com>
Cc: Rietveld List (rietveld_l@ill.fr) <rietveld_l@ill.fr>
Subject: [External] Re: Step-like basline

 


 

Hey Shay, 

 

Are you using a motorized slit during the measurement? These may open up
abruptly to compensate for the angular dependence of the footprint and thus
generate an abrupt increase in the intensity. 

 

Hope it helps :)

 

On Sun, Sep 3, 2023, 8:50 AM Shay Tirosh <stiro...@gmail.com> wrote:

Dear Rietvelders 

I am attaching a zoom-in on a diffraction profile. 

My question is what is the origin of the step-like profile next to a very
large reflection peak?

Is it a sample preparation problem?

Is it part of the baseline? 



Thanks

Shay

++
Please do NOT attach files to the whole list <alan.he...@neutronoptics.com>
Send commands to <lists...@ill.fr> eg: HELP as the subject with no body text
The Rietveld_L list archive is on http://www.mail-archive.com/rietveld_l@ill.fr/
++


TOPAS-Academic V7

2020-05-11 Thread alancoelho
Hi All

 

Just letting everyone know that TOPAS-Academic Version 7 is now available.

 

See http://www.topas-academic.net/ where New7.PDF and the
Technical_Reference.PDF can be downloaded.

 

The Bruker-AXS version won't be far behind.

 

Cheers

Alan

 




RE: Software re-binned PD data

2019-09-26 Thread alancoelho
Hi Tony

 

>May I ask: is this re-binned data from the measurement software considered
"raw data" or "treated data"?

 

I'm not sure what is meant by treated data. Almost all neutron data and
synchrotron data with area detectors are "treated data".

 

If the detector has a slit width in the equatorial plane that is 0.03
degrees 2Th then it makes little sense to use a step size that is less than
0.03/2 degrees 2Th. If rebinning is done correctly (see rebin_with_dx_of in
the Technical Reference) then it is basically equivalent to redoing the
experiment with a wider slit.
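
For illustration, a minimal count-conserving rebinning sketch in Python (an
assumption about what correct rebinning amounts to - summing raw counts into
wider bins - not the actual rebin_with_dx_of implementation):

import numpy as np

def rebin(x, counts, dx):
    edges = np.arange(x.min(), x.max() + dx, dx)
    idx = np.clip(np.digitize(x, edges) - 1, 0, len(edges) - 2)
    summed = np.bincount(idx, weights=counts, minlength=len(edges) - 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, summed[:len(centres)]   # bin centres and summed counts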

 

In the case of your PSD, the resolution of the PSD would set the smallest
effective slit width. If the data have broad features relative to the slit
width then rebinning (or using a bigger slit width) should not change the
results. You could simulate all this using TOPAS to see the difference.
Correct rebinning should not affect parameter errors.

 

This is a question that is not simple to answer, and if there's concern then:

1.  simulating data with the small step size and performing a fit,
2.  then rebinning with various slit widths and fitting again,
3.  and then comparing parameter errors and parameter values for all
the refinements

should shed light on the matter.

 

I don't know where, but my feeling is that there should be papers on this.

 

Cheers

Alan

 

 

From: rietveld_l-requ...@ill.fr  On Behalf Of
iangie
Sent: Thursday, September 26, 2019 1:40 PM
To: rietveld_l@ill.fr
Subject: Software re-binned PD data

 

Dear Rietvelder,

 

I hope you are doing well.

It is generally acknowledged that Rietveld refinement should be performed on
raw data, without any data processing.
One of our diffractometers (a PSD instrument) scans data at its minimal step
size (users can see that the step size during the scan is much smaller than
what was set), and upon finishing, the measurement software re-bins the counts
to the step size that users set (so the data also look smoother after
re-binning).
May I ask: is this re-binned data from the measurement software considered
"raw data" or "treated data"? And can we apply Rietveld refinement to this
data?

 

Any comments are welcome. :)

--

Dr. Xiaodong (Tony) Wang

Research Infrastructure Specialist (XRD)

Central Analytical Research Facility (CARF)   |   Institute for Future
Environments

Queensland University of Technology




TOPAS Working on MACs

2019-03-04 Thread alancoelho
Hi all

 

Some Mac users are experiencing problems using TOPAS/TOPAS-Academic on a
Mac emulating Windows. If you are successfully using TOPAS on a Mac then can
you please send me a private e-mail with the details of your emulator; ie. is
it Parallels Desktop, VMware Fusion, VirtualBox or any other.

 

Thanks in advance.

Cheers

Alan

 




RE: Rietveld

2018-08-21 Thread AlanCoelho
It seems that even the origin of least squares is debatable:

 

https://blog.bookstellyouwhy.com/carl-friedrich-gauss-and-the-method-of-least-squares
 

 

"That Gauss was the first to define the method of least squares was contested 
in his day. Adrien-Marie Legendre first published a version of the method in 
1805"

 

I thought this discussion would divide the community but from Armel's poll (good
idea) it hasn’t. 

 

The Rietveld method is an implementation of the method of least squares with 
the function being minimized changed to suit powder diffraction. Looking at 
the least squares formulae as defined by Gauss, Eq. (1) at:

 

https://books.google.com.au/books?id=gtoTkL7heS0C&pg=RA1-PA193&lpg=RA1-PA193&dq=Charles+fredrick+gauss+and+non-linear+least+squares&source=bl&ots=ZAS84VXPuo&sig=CZR3HPkPEEY4sxMfCHerAako3qQ&hl=en&sa=X&ved=2ahUKEwj-i6PGt_rcAhWBM94KHbt_AZMQ6AEwAnoECAgQAQ#v=onepage&q=Charles%20fredrick%20gauss%20and%20non-linear%20least%20squares&f=false

 

or at:

 

https://en.wikipedia.org/wiki/Least_squares

 

we see that it is familiar except that f describes a diffraction pattern. 
Fitting to the powder data itself (rather than first performing data reduction 
in the form of extracting intensities) is what is implied by Gauss and it seems 
odd that this was not considered by many. 

 

Also, computer code for performing least squares on powder data is 70%
identical (in my estimate) to performing least squares on single crystal data.
In fact, if only Gaussian peak shapes are considered then it's 90% identical. I
should know, as the computer code that performs derivatives on structural
parameters for single crystal data is exactly the same code used to perform
derivatives on powder data.
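
To make the point concrete, a minimal Gauss-Newton sketch (hypothetical, not
TOPAS internals): the normal equations are the same whatever f is; here f is
a single Gaussian "peak" with numerical derivatives:

import numpy as np

def f(x, p):   # p = (height, centre, width); f could equally be a full pattern
    return p[0] * np.exp(-((x - p[1]) / p[2]) ** 2)

def gauss_newton(x, yobs, p, n_iter=20, h=1e-6):
    p = np.asarray(p, dtype=float)
    for _ in range(n_iter):
        r = yobs - f(x, p)                              # residuals
        J = np.column_stack([(f(x, p + h * e) - f(x, p)) / h
                             for e in np.eye(len(p))])  # numerical derivatives
        p = p + np.linalg.solve(J.T @ J, J.T @ r)       # the normal equations
    return p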

 

My feelings are similar to Scott Speakman in that credit should be given to 
Rietveld for developing and distributing his software; if more was done by 
Rietveld in regards to developing f then all the better.

 

All the best

Alan Coelho

 

 

From: rietveld_l-requ...@ill.fr  On Behalf Of Scott 
Speakman
Sent: Wednesday, 22 August 2018 4:35 AM
To: Rietveld_l@ill.fr
Subject: RE: Rietveld

 

It is interesting to read this conversation and to hear the various points of 
view. 

 

I have one point for consideration to add, and would love to hear the opinion 
of those who were more closely involved in those early days:  I was always 
under the impression that the nomenclature "Rietveld technique" evolved mostly 
because Hugo Rietveld freely distributed the programming code for others to 
use, and allowed the code to be used and incorporated into other programs 
without ever requesting licensing fees or the like.  In that case, the name 
"Rietveld technique" isn't used to credit the inventor(s) of the methodology, 
but rather to acknowledge the author of the original programming code.  

 

 


Kind Regards,

Scott A Speakman, Ph.D.
Principal Scientist- XRD


Tel: +1 800 279 7297
Mob: +1 508 361 8121



Malvern Panalytical Inc.
117 Flanders Road
Westborough MA 01581
United States

  scott.speak...@panalytical.com
  www.malvernpanalytical.com


 



 

From: rietveld_l-requ...@ill.fr <rietveld_l-requ...@ill.fr> On Behalf Of Le
Bail Armel
Sent: Tuesday, August 21, 2018 11:09 AM
To: Rietveld_l@ill.fr
Subject: Re: Rietveld

 

The >1500 subscribers can vote... :

 

https://doodle.com/poll/gh3v3nfhue599w23

 

Best,

 

Armel

 

 

 

> Message of 21/08/18 19:10
> From: "Alan Hewat"
> To: "rietveld_l@ill.fr"
> Cc:
> Subject: Re: Rietveld
>
> As a matter of course we didn't took part in the discussion... (Schenk)
> ...people pretending now to speak in place of Loopstra should stop to do so
> (Le Bail)

What a contrast of style and substance. Late believers are true believers, and 
Passion evicts Doubt.

Seeking sanity, I refer back to Miguel and th

RE: [Fwd: Question on file formats]

2008-12-11 Thread AlanCoelho

Lee Wrote
>I have some ToF data from GEM ISIS as both a .gss and a .asc file. I have
>managed to import and refine this using GSAS but am having some
>difficulties importing into TOPAS. Does anyone have some advice please on
>maybe file exchange or getting this file into the correct format to be
>read by TOPAS. Or do I need to contact GEM to issue me with a different
>file type?


I think that the ASCII data from GEM comprises x-axis, y-axis and error
values. Typically TOPAS uses the file extension to determine file format;
ie. .XYE type files. You can either change the extension or use the
'xye_format' switch to indicate XYE format, for example:

xdd FILE xye_format
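
As an aside, a sketch of such a conversion in Python (assuming the GEM .asc
export really is three columns of x, y and error - check before use; the
file name is hypothetical):

from pathlib import Path

def asc_to_xye(asc_path):
    src = Path(asc_path)
    rows = [ln for ln in src.read_text().splitlines()
            if len(ln.split()) == 3]             # keep only x y esd rows
    src.with_suffix(".xye").write_text("\n".join(rows) + "\n")

asc_to_xye("gem_bank3.asc")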

cheers
alan

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Thursday, 11 December 2008 1:53 AM
To: Rietveld_l@ill.fr
Subject: [Fwd: Question on file formats]

Dear Lee,

If you have data as (.dat) for GSAS, TOPAS can read this file with the
option GSAS CONST data files (“.”). However, you can convert .asc to .xrd
and then read it with POWDERX and save it as GSAS (.dat).

Best Regards,

Mario Macías
Universidad Industrial de Santander
Colombia
--

I have a question for the list...

I have some ToF data from GEM ISIS as both a .gss and a .asc file. I have
managed to import and refine this using GSAS but am having some
difficulties importing into TOPAS. Does anyone have some advice please on
maybe file exchange or getting this file into the correct format to be
read by TOPAS. Or do I need to contact GEM to issue me with a different
file type?

Thanks,

Lee

~~~
Dr. Lee A. Gerrard
Materials Scientist
 
Materials Science Research Division
SB40.1
AWE Plc
Aldermaston, Reading,
Berkshire
RG7 4PR
 
Tel:  +44 (0)118 982 6516
Fax: +44 (0)118 982 4739
Email: [EMAIL PROTECTED]
~~~ 

RE: macro's in TOPAS

2008-12-04 Thread AlanCoelho
Hi Ross

There's no way of outputting a selection of hkls; it's all or none.

You can however omit hkls at the start of the refinement; ie.

omit_hkls = And(H, K, Mod(L, 2));
omit_hkls = H > 10;
etc...

You can use this in an indirect manner to first refine using all hkls and
then in a subsequent refinement omit the unwanted hkls; ie:

tc INP_FILE "macro M_ {}"
copy INP_FILE.OUT INP_FILE.INP
tc INP_FILE "macro M_ { omit_hkls = H; iters 0 }"
etc...

where the macro M_ is placed at the str level; ie.

xdd...
str...
M_

Alternatively an actual program can be written to manipulate the output hkl
file; this should be trivial to do. I will for future reference however
consider outputting a selection of hkls.
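
Such a program might be as simple as the following Python sketch (assuming a
dump written by a macro like Ross's below, one "phase_name h k l intensity"
record per line; the file name is hypothetical):

from collections import defaultdict

rows = defaultdict(list)
with open("hkl_dump.txt") as fh:
    for line in fh:
        parts = line.split()
        if len(parts) < 5:
            continue
        name = " ".join(parts[:-4])       # phase names may contain spaces
        rows[name].append(parts[-1])      # intensity is the last column

for name, intensities in rows.items():    # one row per phase
    print(name, *intensities)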

Cheers
Alan




-Original Message-
From: Ross Williams [mailto:[EMAIL PROTECTED] 
Sent: Thursday, 4 December 2008 7:41 PM
To: rietveld_l@ill.fr
Subject: macro's in TOPAS

Dear All,

I am trying to write a macro in TOPAS (version 4) to output the intensities
of specific reflections for each phase, for the purpose of comparing multiple
structures of the same phase from the literature.

To date I have been able to achieve this by using the following macro, which
outputs all the hkls and the intensity of each; then I use a spreadsheet
package to sort my data appropriately, but this is cumbersome and I have
hundreds of structures I would like to examine.


macro out_specific_hkl_details(file)
{
 phase_out file append load out_record out_fmt out_eqn
   {
" %s" = Get(phase_name);
" %3.0f " = H;
" %3.0f " = K;
" %3.0f " = L;
" %11.5f \n " = I_after_scale_pks;
   }
}


Ideally I would like to output:

Phase1Name I(hkl_1) I(hkl_2) I(hkl_3) I(hkl_4) I(hkl_5)
Phase2Name I(hkl_1) I(hkl_2) I(hkl_3) I(hkl_4) I(hkl_5)
Phase3Name I(hkl_1) I(hkl_2) I(hkl_3) I(hkl_4) I(hkl_5)
Phase4Name I(hkl_1) I(hkl_2) I(hkl_3) I(hkl_4) I(hkl_5)

The problem is I can't work out how to output parameters for specific hkl's,
can anyone help me?


Thanks,

Ross


+
Ross Williams
PhD Student
Centre for Materials Research
Department of Imaging and Applied Physics
Curtin University of Technology
GPO Box U1987 Perth WA 6845
Western Australia
Phone: +61 (0)8 9266 4219
Fax: +61 (0)8 9266 2377
Email:   [EMAIL PROTECTED]

RE: Anisotropic peak broadening with TOPAS

2008-10-31 Thread AlanCoelho
Hi Frank

I'm not 100% sure what you have been doing but I think the copying and
pasting to get LVol values for particular sets of hkls can perhaps be done in
a simpler manner, as shown below. If it's not clear then contact me off
the list.

   prm lor_h00 300 min .3
   prm lor_0k0 300 min .3
   prm lor_00l 300 min .3
   prm lor_hkl 300 min .3
   prm gauss_h00 300 min .3
   prm gauss_0k0 300 min .3
   prm gauss_00l 300 min .3
   prm gauss_hkl 300 min .3

   prm = 1 / IB_from_CS(gauss_h00, lor_h00); : 0 ' This is LVol
   prm = 0.89 / Voigt_FWHM_from_CS(gauss_h00, lor_h00); : 0 ' This is LVol_FWHM

   lor_fwhm =
      (0.1 Rad Lam / Cos(Th)) /
      IF And(K == 0, L == 0) THEN
         lor_h00
      ELSE IF And(H == 0, L == 0) THEN
         lor_0k0
      ELSE IF And(H == 0, K == 0) THEN
         lor_00l
      ELSE
         lor_hkl
      ENDIF
      ENDIF
      ENDIF
      ;

   gauss_fwhm =
      (0.1 Rad Lam / Cos(Th)) /
      IF And(K == 0, L == 0) THEN
         gauss_h00
      ELSE IF And(H == 0, L == 0) THEN
         gauss_0k0
      ELSE IF And(H == 0, K == 0) THEN
         gauss_00l
      ELSE
         gauss_hkl
      ENDIF
      ENDIF
      ENDIF
      ;

Cheers
Alan


-Original Message-
From: Frank Girgsdies [mailto:[EMAIL PROTECTED] 
Sent: Friday, 31 October 2008 11:29 PM
To: Frank Girgsdies; Rietveld_l@ill.fr
Subject: Re: Anisotropic peak broadening with TOPAS

Dear Topas users,

thanks to your helpful input, I've now come up
with a (probably clumsy) solution to achieve my
goal (see my original post far down below).

For those who are interested, I'll explain it
in relatively high detail, but before I do so,
I want to make some statements to prevent
unnecessary discussions.

The reasoning behind my doing is the following:
Investigating a large series of similar but
somehow variable samples, my goal is to derive
numerical parameters for each sample from its
powder XRD. Using these parameters, I can compare
and group samples, e.g. by making statements
like "sample A and B are similar (or dissimilar)
with respect to this parameter". Thus, the primary
task is to "parametrize" the XRD results. Ideally,
such parameters would have some physical meaning,
like lattice parameters, crystallite size etc.
However, this does not necessarily mean that I
would interpret or trust parameters like e.g.
LVol-IB on an absolute scale!!! After all, it
is mainly relative trends I'm interested in.

LVol-IB is one of the parameters I get and
tabulate if the peak broadening can be successfully
described as isotropic size broadening.
   [For details on LVol-IB, see Topas (v3) Users
   Manual, sections 3.4.1 and 3.4.2)]
If, however, the peak broadening is clearly
anisotropic, applying the isotropic model gives
inferior fit results. LVol-IB is still calculated,
but more or less meaningless.
Thus, I wanted an anisotropic fit model that BOTH
(a) yields a satisfactory fit AND (b) still
delivers parameters with a similar meaning as
the isotropic LVol-IB.

Applying a spherical harmonics function satisfied
condition (a), but not (b) (maybe just due to my
lack of mathematical insight).

Applying Peter Stephens' code (but modified for
size broadening) met condition (a) and brought
me halfway to reach condition (b). As I did not
find a way of "teaching" (coding) Topas to do
all calculations I wanted in the launch mode,
I developed a workaround to reach (b).

Now in detail:
The modified Stephens code I use looks like
this:
 prm s400  29.52196`_2.88202 min 0
 prm s040  40.52357`_4.10160 min 0
 prm s004  6631.09739`_227.63909 min 0
 prm s220  54.23582`_13.82762
 prm s202  1454.83518`_489.04664
 prm s022  5423.10499`_765.48349
 prm mhkl = H^4 s400 + K^4 s040 + L^4 s004 + H^2 K^2 s220 +
H^2 
L^2 s202 + K^2 L^2 s022;
 lor_fwhm = (D_spacing^2 * Sqrt(Max(0,mhkl)) / 1) /
Cos(Th);
Compared to Peter's original code, I have changed
the strain dependence "* Tan(Th)" into the size
dependence "/ Cos(Th)" and re-arranged the remaining
terms in that line of code.
   [Peter has mentioned that from the fundamental
   theoretical point of view, spherical harmonics
   might be better justified than his formula
   for the case of SIZE broadening. Anyway,
   it works for me from the practical point of
   view, thus I'll use it.]
This rearrangement emphasizes the analogy between
the isotropic case "c / Cos(Th)" (where c is valid
for ALL peaks) and the anisotropic one, where
c is replaced by the hkl dependent term
(D_spacing^2 * Sqrt(Max(0,mhkl)) / 1).
Thus, I freely interpret this term as some
sort of c(hkl), which I will use for some
specific values of hkl to derive hkl dependent
analogues of LVol-IB.
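
In Python, this c(hkl) reading might look like the sketch below (d is the
d-spacing and theta the Bragg angle in radians, with the same units
convention as the code above):

import math

def c_hkl(h, k, l, d, s):
    # s holds the refined s400, s040, s004, s220, s202, s022 values
    mhkl = (h**4 * s["s400"] + k**4 * s["s040"] + l**4 * s["s004"]
            + h**2 * k**2 * s["s220"] + h**2 * l**2 * s["s202"]
            + k**2 * l**2 * s["s022"])
    return d**2 * math.sqrt(max(0.0, mhkl))

def lor_fwhm(h, k, l, d, s, theta):
    return c_hkl(h, k, l, d, s) / math.cos(theta)  # size-like 1/Cos(Th) law
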
The first step of this calculation I managed
to code for Topas based on Peter's equations,
but for specific hkl values:
 prm ch00 = (Lpa^2 * Sqrt(s400) / 1); :  0.24382`_0.0119

RE: PDF refinement pros and cons

2008-06-13 Thread AlanCoelho
Thanks all for the PDF explanations

I think I'm beginning to understand. To summarise what we know:

- PDF for powder data is a Patterson function (as Alan Hewat stated) in one
dimension that plots the histogram of atom separations

- It will show quite nicely the short-range order and possible longer-range
disorder (eg. rotated bucky balls)


There seems to be a few issues:

1) PDF refinement on long range ordered crystals is the same as Rietveld
refinement and little benefit if any is obtained by fitting to G(r)
directly. It's useful however to view a PDF to ascertain whether there's
disorder even in the event of a good Rietveld fit. 

2) A nano-sized crystal, as Vincent mentioned, can be modelled by arranging
the atoms such that its G(r) matches the observed PDF. This can be done
with disregard to lattice parameters and periodicity in general.

3) a mix of (1) and (2) as in regularly arranged bucky balls but with the
balls randomly rotated.


The ability to do number (2) is what can't be done with normal Rietveld
refinement as a disordered object has no periodicity and no Bragg peaks.
Arranging atoms in space to form an object with disregard to lattice
parameters and space groups can yield a G(r) to match an observed PDF.
However even very small crystals would have 10s of thousands of atoms. Its
seems difficult to try and determine atomic positions of such an object from
a PDF.

It may be worthwhile to look at what the glass community has done (the Debye
Formula - thanks Jon - still reading).
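
For reference, the Debye formula for a cluster of N identical atoms is
I(Q) = N + 2 sum_{i<j} sin(Q r_ij)/(Q r_ij) (taking f = 1); a minimal
Python sketch:

import numpy as np

def debye_intensity(xyz, q):
    # xyz: (N, 3) atom coordinates; q: 1-D array of Q values (matching units)
    d = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)
    rij = d[np.triu_indices(len(xyz), k=1)]      # unique pair distances
    qr = np.outer(q, rij)
    return len(xyz) + 2.0 * np.sinc(qr / np.pi).sum(axis=1)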

The main point that is still confusing me is whether current PDF analysis
considers an object (or should) or is it the case that most materials are
periodic (with an enlarged unit cell) with parts of the cell being
disordered.

Cheers
Alan




PDF refinement pros and cons

2008-06-12 Thread AlanCoelho
HI all

 

Looking at the Pair Distribution Function and refinement I come away with
the following:

 

Fitting in real space (directly to G(r)) should be equivalent to fitting in
reciprocal space except for a difference in the cost function. Is this
difference beneficial in any way? In other words, does the radius of
convergence increase or decrease?

 

The computational effort required to generate G(r) is proportional to N^2
where N is the number of atoms within the unit cell.  The computational
effort for generating F^2 scales by N.Nhkl where Nhkl is the number of
observed reflections. Is there a speed benefit in generating G(r) - my guess
is that it's about the same. Note, generating G(r) by first calculating F
and then performing a Fourier transform is not considered. 
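
As an illustration of that N^2 step, a sketch of the raw pair-distance
histogram behind G(r) (f = 1, no periodic images, no normalisation):

import numpy as np

def pair_histogram(xyz, r_max, dr):
    d = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)
    rij = d[np.triu_indices(len(xyz), k=1)]          # i < j pairs only
    hist, edges = np.histogram(rij, bins=np.arange(0.0, r_max + dr, dr))
    return 0.5 * (edges[:-1] + edges[1:]), hist      # bin centres, counts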

 

In generating the observed PDF there's an attempt to remove instrumental and
background effects. In reciprocal space these unwanted effects are
implicitly considered. This seems a plus for the F^2 refinement.

 

From my simple understanding of the process, there seems to be good
qualitative information in a G(r) pattern, but can someone help in explaining
the benefit of actually refining directly to G(r)?

 

Cheers

Alan

 



RE: Help: General spherical harmonics

2008-04-21 Thread AlanCoelho

Hi Xiujun 

Topas implements a normalized symmetrized spherical harmonics function; see
Järvinen:

J. Appl. Cryst. (1993). 26, 525-531
http://scripts.iucr.org/cgi-bin/paper?S0021889893001219


The expansion is simply a series that is a function of hkl values. 

The series is normalized such that the maximum value of each component is 1.
The normalized components are:

Y00  = 1
Y20  = (3.0 Cos(t)^2 - 1.0)* 0.5
Y21p = (Cos(p)*Cos(t)*Sin(t))* 2
Y21m = (Sin(p)*Cos(t)*Sin(t))* 2
Y22p = (Cos(2*p)*Sin(t)^2)
Y22m = (Sin(2*p)*Sin(t)^2)
Y40  = (3 - 30*Cos(t)^2 + 35*Cos(t)^4) *.125000
Y41p = (Cos(p)*Cos(t)*(7*Cos(t)^2-3)*Sin(t)) *.9469461818
Y41m = (Sin(p)*Cos(t)*(7*Cos(t)^2-3)*Sin(t)) *.9469461818
Y42p = (Cos(2*p)*(-1 + 7*Cos(t)^2)*Sin(t)^2) *.78
Y42m = (Sin(2*p)*(-1 + 7*Cos(t)^2)*Sin(t)^2) *.78
Y43p = (Cos(3*p)*Cos(t)*Sin(t)^3) *3.0792014358
Y43m = (Sin(3*p)*Cos(t)*Sin(t)^3) *3.0792014358
Y44p = (Cos(4*p)*Sin(t)^4)
Y44m = (Sin(4*p)*Sin(t)^4)
Y60  = (-5 + 105*Cos(t)^2 - 315*Cos(t)^4 + 231*Cos(t)^6) *.0625000
Y61p = (Cos(p)*(-5 + 30*Cos(t)^2 - 33*Cos(t)^4)*Sin(t)*Cos(t)) *.6913999628
Y61m = (Sin(p)*(-5 + 30*Cos(t)^2 - 33*Cos(t)^4)*Sin(t)*Cos(t)) *.6913999628
Y62p = (Cos(2*p)*(1 - 18*Cos(t)^2 + 33*Cos(t)^4)*Sin(t)^2) *.6454926483
Y62m = (Sin(2*p)*(1 - 18*Cos(t)^2 + 33*Cos(t)^4)*Sin(t)^2) *.6454926483
Y63p = (Cos(3*p)*(3- 11*Cos(t)^2)*Cos(t)*Sin(t)^3) *1.4168477165
Y63m = (Sin(3*p)*(3- 11*Cos(t)^2)*Cos(t)*Sin(t)^3) *1.4168477165
Y64p = (Cos(4*p)*(-1 + 11*Cos(t)^2)*Sin(t)^4) *.816750
Y64m = (Sin(4*p)*(-1 + 11*Cos(t)^2)*Sin(t)^4) *.816750
Y65p = (Cos(5*p)*Cos(t)*Sin(t)^5) *3.8639254683
Y65m = (Sin(5*p)*Cos(t)*Sin(t)^5) *3.8639254683
Y66p = (Cos(6*p)*Sin(t)^6)
Y66m = (Sin(6*p)*Sin(t)^6)
Y80  = (35 - 1260*Cos(t)^2 + 6930*Cos(t)^4 - 12012*Cos(t)^6 +
6435*Cos(t)^8)* .0078125000
Y81p = (Cos(p)*(35*Cos(t) - 385*Cos(t)^3 + 1001*Cos(t)^5 -
715*Cos(t)^7)*Sin(t))* .1134799545
Y81m = (Sin(p)*(35*Cos(t) - 385*Cos(t)^3 + 1001*Cos(t)^5 -
715*Cos(t)^7)*Sin(t))* .1134799545
Y82p = (Cos(2*p)*(-1 + 33*Cos(t)^2 - 143*Cos(t)^4 + 143*Cos(t)^6)*Sin(t)^2)*
.5637178511
Y82m = (Sin(2*p)*(-1 + 33*Cos(t)^2 - 143*Cos(t)^4 + 143*Cos(t)^6)*Sin(t)^2)*
.5637178512
Y83p = (Cos(3*p)*(-3*Cos(t) + 26*Cos(t)^3 - 39*Cos(t)^5)*Sin(t)^3)*
1.6913068375
Y83m = (Sin(3*p)*(-3*Cos(t) + 26*Cos(t)^3 - 39*Cos(t)^5)*Sin(t)^3)*
1.6913068375
Y84p = (Cos(4*p)*(1 - 26*Cos(t)^2 + 65*Cos(t)^4)*Sin(t)^4)* .7011002983
Y84m = (Sin(4*p)*(1 - 26*Cos(t)^2 + 65*Cos(t)^4)*Sin(t)^4)* .7011002983
Y85p = (Cos(5*p)*(Cos(t) - 5*Cos(t)^3)*Sin(t)^5)* 5.2833000817
Y85m = (Sin(5*p)*(Cos(t) - 5*Cos(t)^3)*Sin(t)^5)* 5.2833000775
Y86p = (Cos(6*p)*(-1 + 15*Cos(t)^2)*Sin(t)^6)* .8329862557
Y86m = (Sin(6*p)*(-1 + 15*Cos(t)^2)*Sin(t)^6)* .8329862557
Y87p = (Cos(7*p)*Cos(t)*Sin(t)^7)* 4.5135349314
Y87m = (Sin(7*p)*Cos(t)*Sin(t)^7)* 4.5135349313
Y88p = (Cos(8*p)*Sin(t)^8)
Y88m = (Sin(8*p)*Sin(t)^8)

where 
t = theta 
p = phi


theta and phi are the spherical coordinates of the normal to the hkl plane. 

These components were obtained from Mathematica and normalized using Topas.

The user determines how the series is used. In the case of correcting for
texture as per Järvinen, the
intensities of the reflections are multiplied by the series value. This is
accomplished by first defining a series:

str...
spherical_harmonics_hkl sh sh_order 8

and then scaling the peak intensities, or, 

scale_pks = sh;

after refinement the INP file is updated with the coefficients.

The macro PO_Spherical_Harmonics, as you have defined, can also be used.

Typically the C00 coefficient is not refined as its series component Y00 is
simply 1 and is 100% correlated with the scale parameter.

You could output the series values as a function of hkl as follows:

scale_pks = sh;
phase_out sh.txt load out_record out_fmt out_eqn {
 "%4.0f" = H;
 "%4.0f" = K;
 "%4.0f" = L;
 " %9g\n" = sh;
}
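
Outside TOPAS, evaluating the series for one hkl direction could look like
this sketch (using two of the normalized components listed above; t and p are
theta and phi of the plane normal, with coefficients as refined):

import math

def sh_correction(t, p, c20=0.0, c41=0.0):
    y20 = 0.5 * (3.0 * math.cos(t)**2 - 1.0)
    y41p = (math.cos(p) * math.cos(t) * (7.0 * math.cos(t)**2 - 3.0)
            * math.sin(t) * 0.9469461818)
    return 1.0 + c20 * y20 + c41 * y41p   # Y00 = 1 is absorbed by the scale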

Cheers
Alan

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Saturday, 19 April 2008 9:40 AM
To: rietveld_l@ill.fr
Subject: Help: General spherical harmonics

Dear all,

Now I am using the Topas Academic software to do the refinement of my sample
which has strong preferred orientation in some directions.
In the program, I use the general spherical harmonics function to correct for
the effect, as shown below:


'Preferred Orientation using Spherical Harmonics
PO_Spherical_Harmonics(sh, 6 load sh_Cij_prm {
k00   !sh_c00  1.
k41sh_c41   0.36706`
k61sh_c61  -0.30246`
} )

And I see in the literature that the texture index J is used to evaluate the
extent of PO by the equation shown in the attachment (I don't know how to put
the equation here).

But I am not sure what the l means and it's not easy to find the detailed
calculation in the literature. So I am

RE: Use fix or programmable slits for Rietveld analysis

2007-03-26 Thread AlanCoelho
Hi Maria

Automatic Divergence Slits (ADS) illuminate different parts of the post
monochromator as a function of 2Th. I am guessing but I think that the
crystals are good enough not to change the intensity too much; this is my
experience with Y2O3 in any case.

At around 70 to 80 degrees 2Th however the beam (typically around 4 degrees
in the equatorial plane) spills out of the post monochromator. This
situation should be avoided; instead above that angle the slits should be
fixed. Thus you have the situation where part of your pattern is analysed
using ADS corrections and part using FDS corrections.

ADS corrections comprise:

- A Sin(Th) scaling of intensities
- A change in peak shape

Note that anti-scatter slits could also influence intensities at large
divergences.
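
A sketch of the Sin(Th) part in Python (assuming the common convention that
dividing ADS counts by sin(theta) gives fixed-slit-equivalent data; check
the sign of the correction against your software):

import numpy as np

def ads_to_fds(two_theta_deg, counts):
    return counts / np.sin(np.radians(two_theta_deg) / 2.0)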

Cheers
Alan




-Original Message-
From: Fabra-Puchol, Maria [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, 27 March 2007 12:03 AM
To: rietveld_l@ill.fr
Subject: Use fix or programmable slits for Rietveld analysis

 
Hi all,

I have a question about using fixed (FDS) and programmable (ADS) slits for
Rietveld analysis.
Actually, I work with an X'PERT instrument with fast detection, with
programmable divergence slits in the incident beam and programmable
anti-scatter slits in the diffracted beam.
I also have the possibility to use them in a fixed mode.
I would like to know the opinion of the community about the best
configuration of these slits (fixed or programmable) if a Rietveld analysis
is required.
In fact, using programmable slits, the software corrects the data and
converts from ADS to FDS. What is the advantage of using ADS in this case?

Thank you all


Maria Fabra Puchol
Microanalysis Engineer
Saint-Gobain CREE
---
550, Avenue Alphonse Jauffret
84306 Cavaillon Cedex-France
[EMAIL PROTECTED] 
telf: +33 (0)4 32 50 09 36
fax: +33 (0)4 32 50 08 51





RE: [Fwd: [ccp4bb] Nature policy update regarding source code]

2007-03-24 Thread AlanCoelho
One last reassuring word to the users of TOPAS - if I may

No open source algorithm under any license comes close to equalling those
implemented in TOPAS and its academic counterpart. This status quo shall
remain.

In the event that commercial entities are locked out of journals then in the
case of TOPAS the manuals shall take up the slack.

Sincerely
Alan Coelho




RE: [Fwd: [ccp4bb] Nature policy update regarding source code]

2007-03-24 Thread AlanCoelho
Vincent

>For another software with many important contributors it would indeed be 
>impossible. But then the software would not have existed in the first place

>with a non-free license, so.

This discussion is getting nowhere; I politely disagree and see no solution;
never is it right to develop software using public money and to then
deny access to the people that support the development of that software -
the tax payer. And I can't see why the software would not have existed if it
was indeed developed at an institute with public funding. It appears that
there has been a tremendous amount of brainwashing exercised by GNU GPL
converts; it has preyed on the dark side of human nature - the idea of
getting something for nothing - "free".

>In the beginning of this thread you complained that scientists wanted 
>software for free ! Now it's my turn - but to use my software I'm not
asking 
>for money, but rather for sharing your contribution in return.

I'm afraid again that I cannot give you the source code of the software I
write as it does not belong to me. You see I abide by international laws and
treaties. You are however "free" to buy the articles describing the
algorithms in the journals that I have published in. They are quite detailed
and someone of your calibre should quite easily cope.

All the best
Alan

-Original Message-
From: Vincent Favre-Nicolin [mailto:[EMAIL PROTECTED] 
Sent: Sunday, 25 March 2007 9:02 AM
To: rietveld_l@ill.fr
Subject: Re: [Fwd: [ccp4bb] Nature policy update regarding source code]

Alan,

> As it stands however the reality of a GNU GPL license is that if a
> manufacturer wanted to modify and include a program licensed under it in
> order to sell a larger manufacturing process for commercial purposes then
> they would be denied access unless all 10 people who wrote the software
> gave explicit permission, even if the software was written using public
> money.

   In the case of Fox, most of the code (>>95%) rights belong to me & the
uni 
of Geneva, so if I had to 'sell' the code with the Uni of Geneva, I could 
rewrite the other parts easily.
   For another software with many important contributors it would indeed be 
impossible. But then the software would not have existed in the first place 
with a non-free license, so.

> GNU GPL is divisive, it divides, it practices exclusivity and it's a sham
> in my view. Denial of access is what this is all about.

You are denied access only if you want to modify & distribute as a closed
source program - for all other purposes you are absolutely free. It is a
fairly strong policy, but there are huge benefits - just look at all
the "free" OS available - BSDs and Linuxes. The most important difference
between the two are the licenses they put forward (even if they share a lot
of software, both under bsd and gpl licenses). Linux is much more successful
because a *lot* of developers wouldn't release their source code if they
thought that their contribution could be used in a closed source project
without their approval. This is what makes the GPL useful - it gathers many
developers around it, as they feel the lifetime of their code will be longer
with that license.
   As for commercial developers who are excluded from using GPL'ed code, if
they cannot buy the code using another license then they still have access to
the source code and can rewrite the parts they want - under corporate
finances they should have the means for that.

   In the beginning of this thread you complained that scientists wanted
software for free ! Now it's my turn - but to use my software I'm not asking
for money, but rather for sharing your contribution in return.

Vincent
-- 
Vincent Favre-Nicolin
Université Joseph Fourier
http://v.favrenicolin.free.fr
ObjCryst & Fox : http://objcryst.sourceforge.net







RE: Nature policy update regarding source code]

2007-03-24 Thread AlanCoelho
Alan

A very good take on the situation and I totally agree that Nature Materials
should provide free access to all of its articles; otherwise it preaches one
thing and practices another.

One of the articles at nature.com concerns Public Library of Science
http://www.nature.com/nature/debates/e-access/Articles/butler3.html

It would be interesting to see what comes of this experiment/boycott; if
they can publish cheaper than commercial publishers then they would have
enlightened us; time will tell but I have my doubts.

Alan



-Original Message-
From: Alan Hewat [mailto:[EMAIL PROTECTED] 
Sent: Sunday, 25 March 2007 3:33 AM
To: rietveld_l@ill.fr
Subject: Re: Nature policy update regarding source code]

> Hi Everyone; Just in case you don't follow ccp4bb or nature methods:
> http://www.nature.com/nmeth/journal/v4/n3/full/nmeth0307-189.html

This does seem to mean that Nature will not accept results obtained with
commercial software, or even free software like GSAS and FullProf, for
which the source code is not available. In particular, Nature claims that:

"Without the code, the software-and thus the paper-would become a black
box of little use to the scientific community."

Now no scientist would use a "black box" to obtain their results would
they? Every author (and reader) of a Nature paper is clearly capable of
understanding every detail of Rietveld and other computer code, starting
with MS Word, which they probably used to write their paper. The same goes
for diffractometers used to collect the data. Every Nature reader
understands completely how they work, and could build one herself if she
had the time. Nothing is a "black box" to a Nature reader. She is the
Universal Scientist, otherwise her results would be of "little use to the
scientific community". How do we know ? Naturally, because Nature tells
us.

But the big news is that from next week, Nature is allowing free access to
all articles on its WWW site ! Furthermore, free copies of the journal
will be sent to anyone who requests them. After all, if the scientific
community does not have free access to Nature content, then Nature itself
risks becoming of "little use to the scientific community".

For a more rational discussion of the issue of "free lunches" in
publishing,  computing and databases, read other opinions of the
scientific community that have generously been made available by a
benevolent newspaper http://www.nature.com/nature/debates/e-access/
_
Dr Alan Hewat, ILL Grenoble, FRANCE <[EMAIL PROTECTED]>fax+33.476.20.76.48
+33.476.20.72.13 (.26 Mme Guillermet) http://www.ill.fr/dif/people/hewat/
_






RE: [Fwd: [ccp4bb] Nature policy update regarding source code]

2007-03-24 Thread AlanCoelho
Brian

 

>I personally feel that open source software is usually in the best
interests of scientific 

>methods development.

 

This is a matter of opinion but even if you are right then no one has the
right to deny non-publicly funded bodies the right to practice science.
Science or knowledge is not owned by institutes or Governments or anyone
that sets themselves up as judge and jury.

 

 

>However, I don't see the intent of the editorial to force that model on
anyone.

 

Your reading of Nature Methods decision seems to indicate that there's
nothing to worry about; there seems to be confusion. I don't know why
because as I see it people like myself are locked out of  Nature Materials.
I would not like to attempt to publish an algorithmic article in Nature and
not provide the source code. 

 

 

Vincent wrote:

 

>I believe the ideal licensing uses a dual approach (1) provide the source
code as GPL 

>to encourage others to provide their modifications and (2) allow another
closed-source 

>licensing to be available, at a price. That way everybody is happy.

 

Your wishes sounds reasonable. 

 

As it stands however the reality of a GNU GPL license is that if a
manufacturer wanted to modify and include a program licensed under it in
order to sell a larger manufacturing process for commercial purposes then
they would be denied access unless all 10 people who wrote the software gave
explicit permission, even if the software was written using public money.

 

Many seem to be unaware that they live in a larger community and not a fish
bowl. Science moves backwards every time it closes a door in its ivory
tower. 

 

GNU GPL is divisive, it divides, it practices exclusivity and it's a sham
in my view. Denial of access is what this is all about.

 

Cheers

Alan

 

 

From: Brian H. Toby [mailto:[EMAIL PROTECTED] 
Sent: Sunday, 25 March 2007 6:41 AM
To: rietveld_l@ill.fr
Subject: [Fwd: [ccp4bb] Nature policy update regarding source code]

 

Begin forwarded message:





From what I have read on Nature Methods' decision, if the journals of J.
Applied Cryst and Acta Cryst were to go down the same path then 2000 plus
users of TOPAS and TOPAS-Academic would be without a means of reading peer
reviewed articles on the algorithms used by those programs. This would be a
tragedy...

 

I think that people are misreading the intent of the editorial. From the
abstract:

 

Software that is custom-developed as part of novel methods is as important
for the method's implementation as reagents and protocols. Such software, or
the underlying algorithms, must be made available to readers upon
publication.

 

Note that TOPAS is available to Nature Method readers... one only has to buy
it, so no problem here. The problem is really when a scientific result is
obtained with software that is both unique and not distributed. How could
cold-fusion, polywater etc be debunked if one has to know the secret
handshake to know enough details to duplicate the work?

 

I personally feel that open source software is usually in the best interests
of scientific methods development. I would be uncomfortable if the only
method available to me to check that a structural model matches data were a
single undisclosed code (such as GSAS or TOPAS). However one has a wealth of
programs that can be used to verify that a model matches the observed data.
In worst case, I could pull out a calculator and grind out some structure
factors. There may be some improvements in the algorithms, but the
fundamental equations are all well understood  here.

 

Access to code improves science, IMHO. When people can read through code,
errors are spotted. Access to code allows a motivated scientist to develop a
new method by building on an existing one rather than starting from scratch,
where the latter can be a huge hurdle. Perhaps the argument is less acute,
as few scientists are prepared to code or even modify programs. However, I
don't see the intent of the editorial to force that model on anyone.

 

Brian

 



RE: [Fwd: [ccp4bb] Nature policy update regarding source code]

2007-03-24 Thread AlanCoelho
Vincent wrote:

"It is a step forward for F/OSS as it acknowledges that open-source code
allows to spread a new method better than a closed source. As opposed to,
filing a patent - since patents were originally developed to ensure that new
methods be available to all."

You are right in that open source is good at spreading algorithms but no one
should be locked out by decree. Thus the licensing of software is critical;
the GNU GPL license including Copyleft is not to be confused with something
like Python; from the Python web site:

"The Python implementation is under an open source license that makes it
freely usable and distributable, even for commercial use"

In regards to GNU GPL; never in the history of literature have the words
"freedom" and "choice" been so misrepresented; they stand behind their
lawyers. How much of the software under the GNU GPL license has been developed
using computers provided by institutes - was it really a hobby?

Fox is under GNU GPL - not very helpful to society in a general sense
wouldn't you say.

From what I have read on Nature Methods' decision, if the journals of J.
Applied Cryst and Acta Cryst were to go down the same path then 2000 plus
users of TOPAS and TOPAS-Academic would be without a means of reading peer
reviewed articles on the algorithms used by those programs. This would be a
tragedy, not to mention for users of other commercial programs in the field.
Let's hope that it doesn't come to that.


>Often this development is not funded as an isolated project - but part of a
larger project 
> (hence the developments at large instruments).

Good to see but to say that these projects are part of a wider orchestrated
effort is to be optimistic.


All the best
Alan




RE: [Fwd: [ccp4bb] Nature policy update regarding source code]

2007-03-23 Thread AlanCoelho
Jon

I did not state, I think, that you are responsible for Nature's decisions.
Far from it, as a messenger you have enlightened me on what is happening in
this area - thank you. 

As for patents; I despise patents on software as it really does inhibit
science. I made a policy a long time ago not to patent. What a developer
seeks is to be paid for the work of coding. New work can easily be described
in the well proven language of mathematics. Anyone that is computationally
minded can and should be able to implement any ideas that I may have come it
with. I have also used many algorithms written by third parties and the math
descriptions accompanied by pseudo code is what I look for - never sour
code.

Cheers
Alan


-Original Message-
From: Jon Wright [mailto:[EMAIL PROTECTED] 
Sent: Saturday, 24 March 2007 12:21 PM
To: rietveld_l@ill.fr
Subject: Re: [Fwd: [ccp4bb] Nature policy update regarding source code]

AlanCoelho wrote:
> Not sure what to make of all this Jon

Don't shoot the messenger, I was surprised enough by it to forward it to 
the list. I guess they imply if you want to keep all implementation 
details secret you should be patenting instead of publishing? (Patents 
seems to be free online, NM is $30 per 2 page article?). As Vincent 
says, the new part of an algorithmic development is often much less than 
the whole package, and useless to the average user without a gui.

Wonder what this means for any REAL programmers out there who are still 
programming directly in hex. Their source is most people's compiled, but 
considerably more interesting to read ;-)

Best,

Jon




RE: [Fwd: [ccp4bb] Nature policy update regarding source code]

2007-03-23 Thread AlanCoelho
Vincent

In the original message of Michael Love (forwarded by Jon Wright) it clearly
states:

> Although there are still some small problems, I think that this is a 
> big step forward, and certainly an interesting read, if you are 
> interested in FOSS and science.

What does "still some problems" mean. Don't kid yourself; if you have read
the SourceForge preamble then you would certainly know the agenda. Don't you
understand "free" is what has caused the problems now faced in this field;
everyone demands it be free but no one want to pay the real cost of software
development. 

Too often those that wallow in politics end up making decisions that the
rest have to follow; this has happened to Nature to some extent. I certainly
don't want this garbage filling up my mail box. Too often real scientists
are too busy doing what they should be doing; it may pay them to keep their
eyes open a little.

=

>If they had mandated all code (full software) associated to articles to be 
>open-source, they would clearly have been unreasonable (as much as I like 
>open-source), but this is _not_ the case here. As I see things, a _small_ 
>piece of code should be associated to each algorithm, not a full software 
>with > 10 000 lines of code.

They have not yet mandated "full software" but give them a millimetre and
see what happens. And you cannot simply deposit a small piece of code most
of the time. Most code relies on libraries and rarely is it trivial to rewrite
code to be stand alone. Again this is something that only developers would
know, and not, as it seems, the decision makers.



>If someone claims that his algorithm allows computing FFT with O(n) 
>complexity, it is fair to ask him for a practical demonstration available
to 
>all readers.

If there are those who can't follow pseudo code or mathematical descriptions
then what on earth are they doing in science? If that's not enough then an
executable should suffice. Otherwise you are forcing developers not paid by
tax payers to make their work free. Why is it that people of tax-payer-funded
institutes must get paid and non-institute people not? If the believers of
this were to forgo their salary and work for free then I would believe the
argument.

=

>Most scientists want software which (i) is efficient and (ii) they can play

>around with (=modify). Free is a side issue for most scientists I know.

Yes, most scientists should also know that there's no such thing as a free
lunch. If their Governments don't see fit to pay for software development
then vote them out of office. You cannot expect Governments not to pay for
software and at the same time demand good software often written by those
not paid by Governments.

"Free is a side issue" - To believe that is to assert that pigs can fly.
Again I never want to see source code; I want to see the mathematical
description.

=

Best regards
Alan Coelho





RE: [Fwd: [ccp4bb] Nature policy update regarding source code]

2007-03-23 Thread AlanCoelho
Not sure what to make of all this Jon

I am to believe that scientists prefer to mull over source code rather than
pseudo code and mathematical descriptions. Anyone that knows just a little
about software development would know that source code is the last thing
that one wants to see. How many have deciphered the source code of an FFT
routine; how many would want to? The language of maths has evolved over the
centuries and computer code, whether it be Fortran or c++, is simply not
adequate for describing complex algorithms.

There's a hidden agenda here and those pushing it should have the fortitude
to come clean about their motives; that is that scientists would like
software for free. This word 'free' is an ugly word and it is in fact the
reason why software in this field is so bad. Governments/institutes do not
pay scientists to develop software except for short term projects that
typically amount to a gross waste of tax payers' money.

Very complex programs such as Axiom, Reduce, Maple and Mathematica require
teams of dedicated mathematicians and code writers looking after them. Maple
and Mathematica are commercial; Reduce is open source and it charges 700
pounds, stating that this is necessary for its continued development. Axiom
is a commercial failure by IBM and has been placed on the open source scrap
heap; last I checked it's still just sitting there. Thus if you want to
destroy commercial entities such as Maple and Mathematica then you would be
subjecting science to second-rate software. How many people check that the
source code of these programs spews out the correct results?

Thus this idea that source code must be made available is a charade. It would
mean that developments in programs like GSAS/TOPAS could not be reported as
the source code is not available. Not sure about the present status of GSAS
source code but it was not available in the past.

Even if governments were to pay for the development of programs such as GSAS
then it is still most certainly not free. There are three options, 1)
Governments/institutes fund software development, 2) Scientists band
together to write open source code or 3) software is paid for on a
commercial basis. Option (1) probably won't happen for political reasons,
option (2) has worked in other fields but doubtful in this field for lack of
expertise and option (3) already exists.

What is interesting is the reason why software is not funded. In my opinion
it is not funded because everyone thinks that software should be free, not
bothering about where it comes from, and most certainly not knowing that
computer science is as difficult a field as any science; asking Mr or Mrs
Bloggs to write a program without being educated in the area won't work.

The bottom line of all this is that it must not be a prerequisite that
source code be made available in reporting algorithmic developments. Anyone
that states otherwise is being disingenuous - Journals like Nature or
otherwise will most certainly be given a miss by me in regards to publishing
algorithmic developments.

Best regards
Alan Coelho

-Original Message-
From: Jon Wright [mailto:[EMAIL PROTECTED] 
Sent: Saturday, 24 March 2007 6:09 AM
To: rietveld_l@ill.fr
Subject: [Fwd: [ccp4bb] Nature policy update regarding source code]

Hi Everyone; Just in case you don't follow ccp4bb or nature methods:

http://www.nature.com/nmeth/journal/v4/n3/full/nmeth0307-189.html

> I thought that some of you might be interested that the journal Nature
> has clarified the publication requirements regarding source code
> accessibility.  It is likely that some of you deserve congrats
> for this.  Cheers!
> 
> http://www.nature.com/nmeth/journal/v4/n3/full/nmeth0307-189.html
> 
> Although there are still some small problems, I think that this is a
> big step forward, and certainly an interesting read, if you are
> interested in FOSS and science.
> 
> Regards,
> Michael L. Love Ph.D
> Department of Biophysics and Biophysical Chemistry
> School of Medicine
> Johns Hopkins University
> 725 N. Wolfe Street
> Room 608B WBSB
> Baltimore MD 21205-2185
> 
> Interoffice Mail: 608B WBSB, SoM
> 
> office: 410-614-2267
> lab:410-614-3179
> fax:410-502-6910
> cell:   443-824-3451
> http://www.gnu-darwin.org/
> 
> 
> 





RE: Problems using TOPAS R (Rietveld refinement)

2007-03-21 Thread AlanCoelho
Lubo

SVD as you mentioned does avoid numerical problems, as do other methods
such as the conjugate gradient method. SVD minimizes the residual |A x - b|
when solving the matrix equation A x = b.

I would like to point out however that errors obtained from the covariance
matrix are an approximation. The idea of fixing parameters, as in SVD when a
singular value is encountered, is also a little arbitrary as it requires the
user to set a lower limit.
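As an illustration of that arbitrariness, here is a minimal sketch of an
SVD solve with such a user-set cutoff (Python with numpy; purely
illustrative and nothing to do with TOPAS internals):

    import numpy as np

    def svd_solve(A, b, cutoff=1e-10):
        # Least-squares solution of A x = b via SVD.
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        # Singular values below cutoff * s_max are zeroed, which in
        # effect fixes the corresponding parameter combinations; the
        # cutoff is the arbitrary, user-set lower limit referred to above.
        s_inv = np.where(s > cutoff * s.max(), 1.0 / s, 0.0)
        return Vt.T @ (s_inv * (U.T @ b))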

The A matrix is formed at a point in parameter space; when there are strong
correlations (as SVD would report) then that point in space changes from one
refinement to another after modifying the parameters slightly.

If derivatives are calculated numerically, as is the case for convolution
parameters, then the A matrix becomes a function of how the derivatives are
calculated; a forward difference approximation, for example, gives different
derivatives than a central (forward and backward) difference if the step
size in the derivative is appreciable. For most convolutions, and numerical
derivatives in general, the step needs to be appreciable for good
convergence.
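A toy illustration of that step-size dependence (Python with numpy; sin is
just a stand-in for a real profile function):

    import numpy as np

    def forward_diff(f, x, h):
        return (f(x + h) - f(x)) / h

    def central_diff(f, x, h):
        return (f(x + h) - f(x - h)) / (2 * h)

    # The exact derivative of sin at 1.0 is cos(1.0) = 0.54030...
    for h in (0.1, 0.001):  # an "appreciable" and a small step
        print(h, forward_diff(np.sin, 1.0, h), central_diff(np.sin, 1.0, h))

At the appreciable step the forward difference drifts noticeably while the
central difference holds up, which is the effect described above.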

Rietveld people may want to look at the re-sampling technique known as the
bootstrap method of error determination. It gives similar errors to the
covariance matrix when the correlations are weak; the maths journals are
full of details. It requires some more computing time, but it actually gives
the distribution. And yes, TOPAS has the bootstrap method; other code
writers may wish to investigate it.
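For the curious, a bare-bones sketch of the idea, with a straight-line fit
standing in for a full refinement (Python with numpy; illustrative only,
not the TOPAS implementation):

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 10.0, 50)
    y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)

    A = np.vstack([x, np.ones_like(x)]).T
    p0 = np.linalg.lstsq(A, y, rcond=None)[0]  # best-fit slope, intercept
    resid = y - A @ p0

    # Re-fit many synthetic data sets built from the best-fit curve plus
    # resampled residuals; the spread of the refitted parameters is the
    # bootstrap estimate of their distribution.
    boot = np.array([np.linalg.lstsq(
        A, A @ p0 + rng.choice(resid, resid.size, replace=True),
        rcond=None)[0] for _ in range(1000)])
    print("bootstrap esds:", boot.std(axis=0))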

Cheers
Alan



 

-Original Message-
From: Lubomir Smrcok [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 21 March 2007 5:50 PM
To: rietveld_l@ill.fr
Subject: Re: Problems using TOPAS R (Rietveld refinement)

Gentlemen,
I've been listening for a week or so and I am really wondering what you
want to get ... Actually you are setting up a "refinement" whose results
will be, at the least, inaccurate. I am always surprised by attempts to
refine the crystal structure of a disordered sheet silicate from powders,
especially when it is known that it hardly works with single crystal data.
Yes, there are several models of disorder, but who has ever proved they are
really good?
I do not mean here a graphical comparison of powder patterns with a
calculated trace, but a comparison of structure factors or integrated
intensities. (Which ones are to be selected is well described in the works
of my colleague, S. Durovic, and his co-workers.)
As far as powders are concerned, all sheet silicates "suffer" from
preferred orientation along 001. Unless you have a pattern taken in a
capillary or in transmission mode, this effect will be dominating and you
can forget such noble problems as anisotropic broadening.

Last but not least: quantitative phase analysis by "Rietveld" is (when only
scale factors are "on") nothing else but multiple linear regression. There
is a huge volume of literature on the topic, especially on which variables
must, which should and which could be part of your model.
I really wonder why the authors of programs do not add one option called
"QUAN", which could, upon convergence of the highly sophisticated non-linear
L-S, fix all parameters but the scale factors and run standard tests or
factor analysis. One more diagonalization is not very time consuming, is it?
To avoid numerical problems, I'd use SVD.
This idea is free, and if it helps people reporting 0.1% MgO (SiO2) in a
mixture of 10 phases to think a little about the numbers they are getting, I
would only be happy :-)
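A minimal sketch of that "QUAN" idea (Python with numpy; it assumes the
per-phase patterns have already been calculated at the converged parameter
values, and it is an illustration, not a feature of any existing program):

    import numpy as np

    def quan(P, yobs):
        # P: one column per phase, the calculated pattern of that phase
        # with all non-scale parameters frozen at their converged values.
        # With only the scale factors s free, yobs ~ P @ s is plain
        # multiple linear regression; lstsq solves it by SVD, and rcond
        # discards near-singular directions to avoid numerical problems.
        s, *_ = np.linalg.lstsq(P, yobs, rcond=1e-10)
        return s  # scale factors, from which weight fractions follow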
Lubo

P.S. Hereby I declare I have never used Topas and I am thus not familiar
with all its advantages or disadvantages compared to other codes.


On Wed, 21 Mar 2007, Reinhard Kleeberg wrote:

> Dear Leandro Bravo,
> some comments below:
>
> Leandro Bravo schrieb:
>
> >
> > In the refinement of chlorite minerals with well defined disordering
> > (layers shifting by exactly b/3 along the three pseudohexagonal Y
> > axes), you separate the peaks into k = 3n (relatively sharp, less
> > intense peaks) and k ≠ 3n (broadened or disappeared
> > reflections). How did you determine this value k = 3n with n =
> > 0,1,2,3..., right?
> >
> The occurrence of stacking faults along the pseudohexagonal Y axes causes
> broadening of all reflections hkl with k unequal 3n (for example 110,
> 020, 111..) whereas the reflections with k equal 3n remain unaffected
> (001, 131, 060, 331...). This is clear from geometric conditions, and
> can be seen in single crystal XRD (oscillation photographs, Weissenberg
> photographs) as well as in selected area electron diffraction patterns.
> The fact has been known for a long time, and published and discussed in
> standard textbooks, for example *Brindley, G.W., Brown, G.: Crystal
> Structures of Clay Minerals and their X-ray Identification. Mineralogical
> Society, London, 1980.*
>
> > First, the chlorite refinement.
> >
> > In the first refinement of chlorite you used no disordering models and
> > used "cell parameters" and "occupation of octahedra". So you
>

RE: Problems using TOPAS R (Rietveld refinement)

2007-03-21 Thread AlanCoelho
Clay people

I think the single crystal analysis of clays is interesting. I have not
read the literature, but in determining the intensities is the overlap of
the spots considered? I would have expected the spots to be very much
smeared (5 to 10 degrees 2Th in my experience). If so, the fitting in two
dimensions would be better.

Thus the question to ask is how accurate QPA can be for clays if the
intensities can be accurately obtained; is this an open question or is the
book closed on this? If, as Reinhard Kleeberg mentioned, some directions
are unaffected, then it would seem plausible that something can be gained,
especially if one of "those models" works.

Also, TOPAS simply offers a means of describing the peak shapes using an
hkl-dependent spherical harmonics. From my experience it seems to work. As
Lubomir Smrcok remarked, getting the intensities is critical.

Another important point, again as Lubomir Smrcok mentioned, is preferred
orientation. If there's very strong preferred orientation then the peak
shapes will be affected due to axial divergence as well; it is best to
remove the preferred orientation.

Cheers
Alan



-Original Message-
From: Reinhard Kleeberg [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 21 March 2007 7:48 PM
To: rietveld_l@ill.fr
Subject: Re: Problems using TOPAS R (Rietveld refinement)

Dear colleagues,
sorry, my mail should have gone directly to Leandro, but I used this damned
reply button...
My answer was related to Leandro's questions regarding these line broadening
models. I realised that Leandro is going on to apply a Rietveld program for
phase quantification, including kaolinite and later other clay minerals. I
only tried to express my personal experience, that any inadequate profile
description of a clay mineral will surely cause wrong QPA results, nothing
else. This is a practical issue, and it is only partially related to
structure refinement. Lubomir Smrcok is definitely right that other things
like PO are frequently biasing a QPA result, but for most of these problems
working solutions do exist.
But I disagree that anisotropic line broadening is a "noble problem". In
clay mineral mixtures, it is essential to fit the profiles of the single
phases as best as one can to get any reasonable "QPA" result within a +-5
wt% interval. On the other hand, for the QPA purpose it is not so important
to find a sophisticated description of the microstructure of a phase. But
the "model" should be flexible enough to cover the variability of the
profiles in a given system, and, on the other hand, stable enough (not
over-parametrised) to work in mixtures.
The balancing out of these two issues could be the matter of an endless
debate. And here I agree again: a better, more stable minimisation algorithm
can help to keep a maximum of flexibility in the models.
Best regards
Reinhard Kleeberg


RE: Problems using TOPAS R (Rietveld refinement)

2007-03-15 Thread AlanCoelho
Leandro

Not sure what the purpose of your refinement is, but if it's quantification
then your results would probably be in error to a large extent.

The references given by Alan Hewat and Lubo Smrcok are probably a good
starting point.

Data quality and model errors typically mean that atomic positions should
not be refined for clays, especially for kaolinite. Also, use a common beq
value for all sites or take them from the literature. A global beq could
then be superimposed using something like prm b 0 scale_pks = Exp(-b /
D_spacing^2);.

For quantification try spiking the sample with a standard to determine the
amorphous content.

It is possible to get the peak shapes without changing peak intensities; if
you need assistance then contact me off the list.

Cheers
Alan


-Original Message-
From: Leandro Bravo [mailto:[EMAIL PROTECTED] 
Sent: Friday, 16 March 2007 9:15 AM
To: rietveld_l@ill.fr
Subject: Re: Problems using TOPAS R (Rietveld refinement)

Ok, I'm starting to have success in the kaolinite refinement; the
quantification is giving me reasonable values. I'm refining the thermal
factors, all the atom positions in the kaolinite, the lattice parameters
and the crystallite size. Lattice parameters and crystallite size are giving
me very good numbers, with very low errors (about 0.09). For the thermal
factors, I realized that all of them tend to 20, so after all refinements I
put them to 20, and refine all over again. I don't care that much for the
atom positions; I'm only using them because refining only lattice, thermal
and crystallite size wasn't enough to make a good calculated pattern to
compare with the measured one.
In the calcite and dolomite I refine: lattice parameters, crystallite size
and thermal factors. And I use on both a preferred orientation correction
(spherical harmonics, 4th order). The Rwp is about 16.

I'd like to hear some opinions about this strategy of refinement, and
whether you think I can spare some refining cycles or even fix some values
to reduce errors in the refinement.








RE: Powder Diffraction In Q-Space

2007-02-21 Thread AlanCoelho
 

I was not aware of journals having rules stating that data must be plotted
on a square root scale, and this is precisely the point. I certainly do not
want an edict stating that authors must do so. It should be up to the author
to choose whatever x-axis and y-axis scales best show up the features.

If an author is writing a paper on an instrument correction, say for example
a cylindrical or PSD correction, then it is sensible to use 2Th.

Thus please do not take these decisions away from the author.

And I was referring to the Fourier transform of the whole pattern as is done
in PDF work, not on a peak-by-peak basis for correcting aberrations. I
brought up the point because when people talk about Q space, especially for
comparison purposes, there's typically a conversion to Q space. A quick
visual display operating on a point-by-point basis is adequate in most
cases, but for analysis the conversion should be done properly; a point
missed by many.

cheers
alan

-Original Message-
From: Luca Lutterotti [mailto:[EMAIL PROTECTED] 
Sent: Thursday, 22 February 2007 6:59 AM
To: rietveld_l@ill.fr
Subject: Re: Powder Diffraction In Q-Space

Alan,

if it is trivial to plot on the square root scale, I presume it is already
available in Topas as well as in GSAS and Fullprof, just to mention a few.
And if it is so trivial and already available as a button option, why is no
one using it? Why, every time I look at a paper with a Rietveld refinement,
can I only appreciate the big peaks, and why are the residuals so little
meaningful at the end? Everyone can choose what he prefers, obviously, but
why are all (without exceptions) using not exactly the best way to just do
a plot? Should it not be trivial?

Conversion from 2theta - d - 1/d is trivial, in MY OPINION. This is really a
problem for the programmer, not for the users (in a Rietveld list). Constant
step or variable step? Why should it influence the conversion, except for
the speed of the conversion? I am using the fast Fourier transform in my
program for computing peak profiles directly from distributions of
crystallites and microstrain; I don't assume any constant step, but it
seems I can do it.

And this step or FT has really nothing to do with the original post of
Klaus-Dieter, who was just asking people if we can try to get used to a
common way of plotting data different from the conventional one. There are
obviously favorable points and cons. Why can we not discuss it? Seems like
for Rietveld users it is not so trivial.

Oh, and no one is setting up web sites just to show opinions... maybe these
were already there...

Cheers,
Luca

On Feb 21, 2007, at 8:11 PM, AlanCoelho wrote:

>
> Whether a program has a button to display data as a function of 1/d  
> or a
> button to take the square root of intensities is trivial to the
> point of not
> being talked about much less setting up web sites to get opinions.
>
> The only point worth talking about is how a conversion from 2Th to  
> 1/d is
> done in regards to taking the Fourier transform of a powder pattern.
>
> alan
>
>
> -Original Message-
> From: Joerg Bergmann [mailto:[EMAIL PROTECTED]
> Sent: Thursday, 22 February 2007 4:30 AM
> To: rietveld_l@ill.fr
> Subject: Re: Powder Diffraction In Q-Space
>
> It's a principle of software design not to presume any kind of
> equidistant data. Unfortunately, file formats for non-equidistant
> data are rare. So I could not implement any in BGMN, until now.
> But, in principle, there is no restriction.
>
> Regards
>
> Joerg Bergmann
>
>
>





RE: Powder Diffraction In Q-Space

2007-02-21 Thread AlanCoelho

Whether a program has a button to display data as a function of 1/d or a
button to take the square root of intensities is trivial to the point of not
being talked about, much less setting up web sites to get opinions.

The only point worth talking about is how a conversion from 2Th to 1/d is
done in regards to taking the Fourier transform of a powder pattern.

alan
 

-Original Message-
From: Joerg Bergmann [mailto:[EMAIL PROTECTED] 
Sent: Thursday, 22 February 2007 4:30 AM
To: rietveld_l@ill.fr
Subject: Re: Powder Diffraction In Q-Space

It's a principle of software design not to presume any kind of
equidistant data. Unfortunately, file formats for non-equidistant
data are rare. So I could not implement any in BGMN, until now.
But, in principle, there is no restriction.

Regards

Joerg Bergmann





RE: Powder Diffraction In Q-Space

2007-02-21 Thread AlanCoelho


Simon Billinge wrote:
>We can resolve the problem by setting 2pi=1!

What school did you go to; thanks for the info.

This issue of plotting data is one of those non-issues drummed up by groups
familiar with one format but not with another; I would suggest that everyone
should be familiar with both types of display. If we had the case where only
1/d were plotted exclusively then a whole lot of useful information would be
lost. For example, if I look at a fit on a 2Th plot then I can immediately
ascertain the following by looking at the misfits:

- Are the low angle peaks fitting properly; if not then something is wrong
with either the axial or equatorial divergence corrections.
- If the peaks are not fitting at 90 degrees then I know that sample
penetration is not being accounted for properly.
- If variable divergence slits are being used then I know the angle where
spill-over on a post monochromator occurs.

You cannot make these deductions from 1/d plots.

Bob's "Law of Unintended Consequences" cannot be ignored. Are we to teach
crystalographers to ignore data collection - big mistake.

Simon also mentioned Fourier transform; FFTs' work in equal data steps so be
aware of data conversion. My guess is that most conversion programs do the
wrong thing.

alan
 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
Of Simon Billinge
Sent: Thursday, 22 February 2007 2:52 AM
To: rietveld_l@ill.fr
Subject: Re: Powder Diffraction In Q-Space

roughly speaking, historically, Q=2pi/d is used by physicists and
S=1/d used by crystallographers and these communities define their
reciprocal lattices and Fourier transforms accordingly.  With the 2pi
in there, Q is the momentum transfer.  Without it in there the Laue
Equations are much cleaner.

Physicists are good at working in reduced units, like setting c=1.  We
can resolve the problem by setting 2pi=1!

S

On 2/21/07, Ray Osborn <[EMAIL PROTECTED]> wrote:
> On 2007/02/21 9:06, "Jonathan Wright" <[EMAIL PROTECTED]> wrote:
>
> > 2pi/d just needs a better name than Q?
>
> I guess, to some extent, this debate depends on whether you are only
> interested in talking to other powder diffraction specialists.  As a
> non-specialist, I would suggest that Q is a more widely used variable -
> certainly in the inelastic scattering community, but also I believe in the
> liquids and amorphous community, who might be interested in studying
> crystallization processes, for example.
>
> If I want to check the elastic scattering contained within my inelastic
> spectrum, it is certainly an inconvenience having to convert from
two-theta,
> even assuming I know the wavelength.  I certainly hope you don't settle on
> 10^4/d^2.
>
> Regards,
> Ray
> --
> Dr Ray Osborn    Tel: +1 (630) 252-9011
> Materials Science Division   Fax: +1 (630) 252-
> Argonne National Laboratory  E-mail: [EMAIL PROTECTED]
> Argonne, IL 60439-4845
>
>
>
>


-- 
Prof. Simon Billinge
Department of Physics and Astronomy
4268 Biomed. Phys. Sciences Building
Michigan State University
East Lansing, MI 48824
tel: +1-517-355-9200 x2202
fax: +1-517-353-4500
email: [EMAIL PROTECTED]
home: http://nirt.pa.msu.edu/




RE: Powder Diffraction In Q-Space

2007-02-21 Thread AlanCoelho


Representing data in Q space is fine for viewing etc... but the conversion
from 2Th to Q needs a little thought. If unequal Q steps are not a problem
then a straight conversion is ok. However, if someone is going to use that Q
data for further analysis then it is important to use the original data, or
at least know where it came from, for reasons I gave in a previous reply to
the mailing list, which I have included again below.

In other words, if a diffractometer or some other instrument collects data
in Q space at equal steps and this data is compared to data collected in
2Th, then the match won't be 100%, depending on how the conversion is done.

Most importantly, data from one or the other should not be thrown away.

Thus to discuss the question of archiving data in Q space means
understanding the data conversion principles.

cheers
alan


Von Dreele wrote:
>However, the profile shape functions are not simple 
> functions of Q but are simple (Gaussian & 
> Lorentzian) functions of 2-theta. Case closed.



== Previous reply on Q space conversion ===

Converting from one x-axis to another: you may want to consider the
following, Melinda, when you do your conversion.

For a particular data point in Q space the area of the space sampled is:

Area(Q) = I(Q) del_Q

For 2Th the area is:

Area(2 Th) = I(2 Th) del_2Th

where del_Q and del_2Th correspond to the size of the receiving slit in the
equatorial plane for Q and 2Th space respectively. We assume constant
counting time at each data point.

First transform point for point with equal areas at any particular data
point, or

Area(Q) = Area(2 Th)

or,

I(2 Th) = I(Q) del_Q / del_2Th

If both data sets were collected with fixed receiving slits then
del_Q/del_2Th is a constant and can be ignored by setting it to 1.

I(2 Th) at this stage would be at unequal x-axis steps as you pointed out. 

To convert to equal x-axis steps you need to sample I(2 Th) at equal 2Th
steps; let's call the chosen step size 2Th_step_size.

If you want the Sum of I(2 Th) to equal the Sum of I(Q) then you need to
sample I(2 Th) with a receiving slit width that has a size equal to
2Th_step_size.

Only when you do this do you use all of the observed data in the original
I(Q) and use it only once. 

When it comes time to fit the data you would then need to account for the
chosen receiving slit; this can be done by a convolution.

I don't know if GSAS does this, but if it can't then I suggest you sample
I(2 Th) with an infinitely small receiving slit and ignore any scaling
constants. In this approach some of the original data may not be sampled.
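A sketch of the above recipe (Python with numpy; the fixed-slit case, so
del_Q/del_2Th is set to 1, and the bin-edge construction is one plausible
choice among several):

    import numpy as np

    def q_to_2th(q, lam):
        # 2Th in degrees for Q = 4 pi sin(Th) / lambda
        return 2.0 * np.degrees(np.arcsin(q * lam / (4.0 * np.pi)))

    def rebin_to_equal_2th(q, counts, lam, step):
        # Old bin edges in 2Th, taken at the midpoints of the converted
        # axis; each original point owns the counts of its bin.
        t = q_to_2th(q, lam)
        mid = (t[:-1] + t[1:]) / 2.0
        edges = np.concatenate(([2*t[0] - mid[0]], mid,
                                [2*t[-1] - mid[-1]]))
        new_edges = np.arange(edges[0], edges[-1] + step, step)
        new_counts = np.zeros(new_edges.size - 1)
        # Spread each old bin's counts over the new bins it overlaps, in
        # proportion to the overlap, so the summed counts are conserved.
        for i, c in enumerate(counts):
            lo, hi = edges[i], edges[i + 1]
            j0 = max(np.searchsorted(new_edges, lo, 'right') - 1, 0)
            j1 = min(np.searchsorted(new_edges, hi, 'left'),
                     new_counts.size)
            for j in range(j0, j1):
                ov = min(hi, new_edges[j + 1]) - max(lo, new_edges[j])
                new_counts[j] += c * ov / (hi - lo)
        return new_edges, new_counts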

cheers
alan  
 





RE: CIF for Pyrrhotite 4C

2007-02-02 Thread AlanCoelho
Hi Doug 

You can add new settings in the file SGCOM5.CPP where entries are in the
form of computer-adapted symbols after Uri Shmueli, Acta Cryst. (1984). A40,
559-567.

I am probably wrong but my guess for F12/d1 is

FMCI1A000P2D003

where I used the a+b face-diagonal rotation matrix of 2D; consistency of the
rotation operators is checked.

The ICSD provides a means of obtaining the equivalent positions, thus you
can check if the string-generated equivalent positions match the ICSD
equivalent positions; this check is useful especially as a lot of symmetry
is redundant.

Thus if your CIF file has equivalent positions then the job is easy.

If not, then it is not so easy. For example, in the ICSD you will find two
distinctly different sets of equivalent positions for A12/a1, with one of
them being the standard setting.


Contact me off the list if the CIF has equivalent positions.
cheers
alan



-Original Message-
From: Allen, Douglas R. [mailto:[EMAIL PROTECTED] 
Sent: Saturday, 3 February 2007 8:02 AM
To: rietveld_l@ill.fr
Subject: CIF for Pyrrhotite 4C

Can anyone help me with a CIF file for Pyrrhotite 4C.  The CIF output
from ICSD #42491 gives SG 15 setting F12/d1 which is not recognized by
TOPAS.  I found an entry in the ICDD database from the Linus Pauling
file which shows  the setting A12/A1 but when the CIF file is exported
the same unrecognizable F12/d1 appears.  How can I convert this CIF file
to something that does not crash TOPAS?

Doug Allen
Phelps Dodge Process Technology Center









RE: About zero counts etc.

2006-10-13 Thread AlanCoelho
Hi Bill

Excellent, thank you, I am surprised that I understand what you are saying
for a change.

In other words, instead of squaring some function F^2 to minimize on, you
instead simply minimize on a more general function G. It's the squaring that
leads to the normal equations and to what we call least squares.

It's actually quite trivial to minimize on a general function G. To achieve
convergence behaviour similar to least squares the modified BFGS method
should cope. The only trick would be to calculate derivatives in a manner
fast enough to be useful.

Will put it on my todo list.
All the best
Alan


From: David, WIF (Bill) [mailto:[EMAIL PROTECTED]]
Sent: Saturday, 14 October 2006 8:23 AM
To: rietveld_l@ill.fr
Cc: Sivia, DS (Devinder)
Subject: RE: About zero counts etc.

Hi Alan,

The short answer is that the question of zero counts can indeed be answered
in a simple and practical manner. There's a lot of apparent mystique about
probability theory, especially when one invokes the term Bayesian
probability theory. However, Bayesian probability theory is really just
common sense about what's likely and what's unlikely converted into
mathematics. It may seem tautologous, but the most probable answer is the
answer with the highest probability value - i.e. the maximum of the
probability distribution function (pdf). This is quite obvious - the subtle
bit is "what is the probability distribution function". Normally, with
enough counts, we move over to a Gaussian pdf along the lines of

Gaussian pdf = const * exp(-0.5*((yobs-ycalc)/esd)^2)

This is a maximum when the negative log pdf is a minimum. The negative log
pdf is simply

negative log pdf = const2 + 0.5*((yobs-ycalc)/esd)^2

In other words, least squares. The corollary is that least squares is
associated with (and only associated with) a Gaussian pdf, which happily is
the case almost all of the time in Rietveld analysis. However, when there
are zeroes and ones around in the observed counts, or when the model is
incomplete or uncertain, then we have to move away from a Gaussian pdf and,
by implication, least squares. With zeroes and ones around, we have to move
from a Gaussian pdf to a Poisson pdf and work with the negative log pdf of
that. With incomplete models, we have to move over to the maximum
likelihood methods that the macromolecular people have been using for
years.

So the bottom line is that fundamental statistics is as basic as fundamental
parameters. You can fudge your way with a Pearson VII or a pseudo-Voigt
function to fit an X-ray emission profile, but the physics says that there
are better functions. Similarly, you can fudge your way by tweaking the
Gaussian pdf when there are zero and one counts around, but you'd be better
off moving over to the correct probability functions. Antoniadis et al.
have been through all of this back in the early 90s. I've got code (and
it's only a few lines) that is precisely Poisson and moves seamlessly over
to the chi-squared metric. I'll send it to you offline - it would be great
to have the ability to program our own algebraic minimisation function in a
TOPAS input file. I'd love to be able to do that for robust statistics and
also maximum likelihood!

All the best,
Bill
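What Bill describes sounds like the Poisson deviance; a minimal sketch of
that statistic follows (Python with numpy; an illustration rather than his
actual code):

    import numpy as np

    def poisson_deviance(yobs, ycalc):
        # -2 log(likelihood ratio) for Poisson-distributed counts. The
        # yobs*log(yobs/ycalc) term is taken as zero where yobs == 0, so
        # zero counts need no special fudging; for large counts the
        # statistic goes over smoothly to the usual chi-squared sum.
        yobs = np.asarray(yobs, dtype=float)
        ycalc = np.asarray(ycalc, dtype=float)
        t = np.zeros_like(yobs)
        m = yobs > 0
        t[m] = yobs[m] * np.log(yobs[m] / ycalc[m])
        return 2.0 * np.sum(ycalc - yobs + t)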


RE: About zero counts etc.

2006-10-13 Thread AlanCoelho
Hi Bill and others

Why can't this question of zero counts be answered in a simple and practical
manner? I hope to read Devinder Sivia's excellent book one day, but for the
time being it would be useful if the statistical heavyweights were to advise
on what weighting is appropriate without everyone having to understand the
details.

The original question was how to get XFIT to load data files with zero
counts; obviously setting the values to 1 is incorrect. Joerg Bergmann seems
to indicate that the weighting should be:

weighting = 1 / (Yobs+1);

as the esd is sqrt(n+1). Again, without reading the book, should the
weighting be:

weighting = 1 / If(Yobs, Yobs, Ycalc);

Hopefully these spreadsheet-type formulas are understandable. This last
equation is not liked by computers due to a possible zero divide when Ycalc
is zero.
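For illustration, a guarded form of that last weighting (Python with numpy;
the eps floor is one way around the zero divide, not the only one):

    import numpy as np

    def weights(yobs, ycalc, eps=1e-9):
        # 1/Yobs where Yobs > 0, otherwise 1/Ycalc, with eps guarding
        # the zero divide mentioned above when Ycalc is also zero.
        denom = np.where(yobs > 0, yobs, np.maximum(ycalc, eps))
        return 1.0 / denom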

Any ideas Bill and others
Alan





From: David, WIF (Bill) [mailto:[EMAIL PROTECTED] 
Sent: Thursday, 12 October 2006 4:48 PM
To: rietveld_l@ill.fr
Subject: RE: About zero counts etc.





Dear all,

Jon's right - when the counts are very low - i.e. zeroes and ones around -
then the correct Bayesian approach is to use Poisson statistics. This, as
Jon said, has been tackled by Antoniadis et al. (Acta Cryst. (1990). A46,
692-711  Maximum-likelihood methods in powder diffraction refinements, A.
Antoniadis, J. Berruyer and A. Filhol) in the context of the Rietveld method
some years ago. This paper is very informative for those who are intrigued
about the fact that you can do anything when diffraction patterns have lots
of zeroes and ones around. Curiously, the weighting ends up having as much
to do with the model value (which can, of course, be non-integer) as the
data. Devinder Sivia's excellent OUP monograph, "Data Analysis: a Bayesian
Tutorial" (http://www.oup.co.uk/isbn/0-19-856832-0) discusses all of this in
a very readable way.

Bill

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: 12 October 2006 00:37
To: rietveld_l@ill.fr
Subject: Re: About zero counts etc.

Hello Joerg,

> -Having measured n counts, the estimated value is n+1

You might have a hard time convincing me on that one.

> -Having measured n counts, the esd is also sqrt(n+1)!

If n is zero then spending more time on the data collection might be better
than more time on the analysis.

> Things change with variable counting times.

id31sum uses counts=counts and esd=sqrt(counts+alp) where alp=0.5 is the
default and can be overridden on the command line. Perhaps there aren't many
people who use that option. Should we change the default? The 0.5 came from
the literature but it was some time ago and I can't remember where. In any
case it then gets convoluted with the monitor error. Sqrt(n+1) gives a very
low chi^2 if the actual background is 0.1 (eg: 1 count every 10 datapoints).
Might be better to just use the Poisson itself, as in abfit [1].

> the above correction for the estimated
> values gave significant better R values.

Are you using background subtracted R-values? If only R-values were
significant.

Jon


[1] Acta Cryst. (1990). A46, 692-711
Maximum-likelihood methods in powder diffraction refinements
A. Antoniadis, J. Berruyer and A. Filhol






TOPAS-Academic version 4 now available

2006-10-11 Thread AlanCoelho



Dear all

Version 4 of the Academic version of Bruker-AXS TOPAS, TOPAS-Academic, is
now available to degree-granting institutions comprising universities,
university run institutes, laboratories and schools. Bruker-AXS TOPAS shall
be released before the end of the year. This version is a big step forward;
it is faster, more stable and equipped to take on the largest of
crystallographic problems comprising tens of thousands of parameters. New
techniques and ways of doing things should significantly assist in the
analysis of crystallographic data; details are here:

http://members.optusnet.com.au/~alancoelho

All the best
Alan Coelho


RE: XFit issue

2006-10-10 Thread AlanCoelho
Hi Sajeev

In XFIT the values of the observed data points in the xdd file need to all
be greater than or equal to 1. Simply copy the xdd file into a spreadsheet
and set values less than 1 to 1.
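If a spreadsheet is not handy, a few lines of Python do the same job
(assuming a plain two-column xdd file with no header; the file names are
made up):

    import numpy as np

    data = np.loadtxt("pattern.xdd")           # columns: 2Th, counts
    data[:, 1] = np.maximum(data[:, 1], 1.0)   # raise counts below 1 to 1
    np.savetxt("pattern_fixed.xdd", data, fmt="%.6f")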
alan

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 11 October 2006 7:22 AM
To: rietveld_l@ill.fr
Subject: XFit issue


Hi,

I am trying to do a profile fit using XFit. When I try to fit using
Fit>Marquardt, I see an error statement which says,
"values for all the XRD data points must be greater than or equal to 1"

Can someone let me know what the cause of this error is?

Thanks a lot.

regards

Sajeev Moorthiyedath






Re: CIF format for powder work

2006-07-12 Thread AlanCoelho
Pam, as the author of the INP format it was not my intention to make your
life difficult.

Maybe I am wrong, but is it the case that we are now supposed to write into
CIF more than just primary/raw data? This to me is not what was originally
meant by CIF; non-primary data such as logical statements and looping
constructs need a script.

Before stoking the fire I will say that I am not an expert in the field of
archiving, but maybe that’s why I may be able to contribute to the
discussion. A long time ago I briefly read the CIF documents and thought
that it was more about helping journals incorporate data into papers than it
was about storing non-raw information. It does of course help data exchange
as it is a standard (supposedly).

CIF at the time of inception was no doubt a step forward, but in today’s
computing world the idea of data being separate from a document is fast
coming to an end. Not joining a fad can be wise, but how can one level of
looping in CIF be sufficient (someone correct me if things have changed)?

Even though the INP format is similar to XML without the tags, it is not my
preferred option either. XML however is far superior to CIF, especially when
combined with something like JScript. It can describe any document at any
complexity, but it can be cumbersome when things get complex. In extreme
cases nothing beats a script, or dare I say a programming language. I don’t
know how many readers know of Mathematica notebooks, but they are the
ultimate in combining non-raw data with the document.

This leads me to my preferred option! Why not use something like C or C++;
and don’t say that it’s not possible, as it’s relatively easy to convert the
INP format to C++ with the use of operator overloading, and it would look
just as simple.

Why can’t a CIF file be a C++ file with associated standard headers
predefined (analogous to a dictionary)? Is it because C++ is too complex -
maybe, but gosh, aren’t we scientists. There are tools coming online that
allow the viewing of C++ class hierarchies together with their data whilst a
program is running. One such OLD tool I stumbled upon during a stint at
ISIS is the open-source ROOT, which allows the browsing of C++ classes and
their data in a GUI-friendly manner whilst a program is running; this is
analogous to an XML browser. There are other commercial ones, not to mention
any good debugger with the nicest of GUIs. Thus your 'checker' would in fact
be a C++ compiler (interpreter or otherwise), and I can’t think of a better
standard.

Pie in the sky! Well, please explain why. I am willing to bet that such an
approach would be superior to XML, and that learning the C++ syntax is
simpler than learning XML.

All the best
Alan Coelho 



-Original Message-
From: Whitfield, Pamela [mailto:[EMAIL PROTECTED] 
Sent: Thursday, 13 July 2006 12:06 AM
To: rietveld_l@ill.fr

Hi Bob

This was a solution from scratch so I'm afraid I was using Topas 4.  It will
output the structural CIF file and stuff like reflections, calculated
patterns and whatnot into text files, but none of the other various bits and
pieces.  I had to do those by hand.

The CIF checking seems to have particular trouble handling all the different
residuals, i.e. overall, individual patterns, etc and it really doesn't like
splitting the RB into the different phases.  As far as I can tell they are
all there, in the correct places, but CheckCIF doesn't find any of them.
Trying to deal with convolution peak fitting is a little difficult, as
there's nothing like PV parameters to input.  I had to be very vague on that
indeed.  Topas also doesn't use the standard absorption corrections.
CheckCIF didn't manage to extract the structure data.  The Platon check did
(a little weird as I thought they were supposed to be equivalent), but
believe it or not it got the Z wrong for the silicon standard so messed up
the checking!

I think I've taken it as far as I can go so it will have to do.  I'll just
have to explain that CheckCIF made some mistakes.

The next file I have to make is from a Topas analysis as well.  I'm not sure
any other program could handle the constraints I constructed for that one
(how's that for being controversial?!).
I'd be curious to see if CIF could handle the geometrical constraints that
have been published recently in J.Appl.Cryst. on apatites (I have to admit
to some involvement :-).  For that one no CIF was created but the Topas
input file was deposited.  Maybe I should have done the same here!

The template idea is a good one.  Unfortunately I seem to be doing all sorts
of analyses which aren't directly related to each other from different
instruments, so a template for this one won't help with the next one, which
also involves user-defined instrument convolutions - what joy!

Pam

-Original Message-
From: Von Dreele, Robert B. [mailto:[EMAIL PROTECTED] 
Sent: July 12, 2006 8:49 AM
To: rietveld_l@ill.fr


Pam,
Actually what was the issue with the cif files with multiple phases/data
sets? gsas2cif

Re: how to find out POLARISATION Factor

2006-06-04 Thread AlanCoelho
Miguel

I have seen low angle peaks that are void of, or have very little, axial
divergence asymmetry. As an example, the low angle peak of brucite in a
multiphase pattern often exhibits far less asymmetry than surrounding peaks
from different phases. This I am pretty sure is due to the preferred
orientation of the brucite phase; do you have another explanation? Also,
maybe the choice of the word "spotty" is not accurate either, but again, do
you have a better adjective; how about incomplete cones?

>Refinement gave H/L=S/L=0.027 for the equatorial and axial 
>divergences stated on the fig.

By the way, the FCJ corrects for axial divergence, not equatorial
divergence. It will, as you pointed out, fit low angle peaks if its two
parameters are refined. The same two parameters fixed at their refined low
angle values will not fit higher angle peaks. This conclusion was reached
from comparison with experimental data; you may want to look at Fig. 5 of
Cheary & Coelho (1998b) J. Appl. Cryst., 31, 862-868.

alan


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Monday, 5 June 2006 8:37 AM
To: rietveld_l@ill.fr

 
(Larry) 
> Would it be possible for you to generate the data points for the same
> set of parameters with peaks at 5 and 10 degrees? That way the
> asymmetry will be really pronounced and the convolution method will be
> even more stressed. I don't expect it to make any difference, but I'd
> like to see it on the screen.
As a complement to calculated vs calculated peak shapes it might be 
interesting to compare calculated vs observed peak shapes for a 
material with low angle reflections (~8 deg 2theta, see attached figure). I 
used this MFI zeolite material to calibrate H/L and S/L for our BB 
diffractometer, although it is probably not the best choice due to peak 
overlap at higher angles, it simply was the best crystallized zeolite 
sample I had at hand. Refinement gave H/L=S/L=0.027 for the 
equatorial and axial divergences stated on the fig. Note that the axial div 
was 0.04 rad (as specified by Philips), maybe just the full opening as 
opposed to half opening (0.02 rad) specified by Larry earlier in this 
discussion.
Anyway, the fig shows clearly that the FCJ correction gives the proper 
peak shape at low angles, and in the case of MFI it was by far the most 
important contribution to get the quite low error indices given in the 
figure.

(Alan) 
> - Because of the speed dynamic analysis of things like preferred
> orientation effects on axial divergence is possible. These effects are
> manifested as spotty cones leading to a peak dependent axial
> divergence.
(Pamela)
As far as axial divergence is concerned, I believe that's what Soller slits
are usually used for! Unless you're unlucky, poor particle statistics are
far more likely to be a serious headache (one of my particular favourite
soapbox subjects :-).
In a sample prepared for BB geometry, preferred orientation is not 
normally supposed to give spotty cones, I think. I agree with Pamela that 
spottiness is more typically due to problems with graininess of the 
sample. This effect is normally not even recognized in BB diffraction 
patterns, but it contributes, of course, to asymmetries of the peak shape 
and intensity enhancement, fortunately in a more or less random way in 
both cases. So if things look funny, the first thing to check is graininess 
of the sample: this issue cannot be overemphasized!

best

miguel





RE: how to find out POLARISATION Factor

2006-05-31 Thread AlanCoelho
The saga continues.

It has been brought to my attention that the fit.xy data I provided in a
previous e-mail has misfits in the tails, as seen in the following (blue is
ray tracing, red is convolution):

[figure: tail region of the fit; blue = ray tracing, red = convolution]

These misfits are due to how the ray tracing program calculates the final
pattern; it is not due to the aberration.

I certainly do not wish to get into any discussion concerning implementation
of methods, and I think viewing the plot in full scale explains itself as
follows.

[figure: the same comparison at full scale]

Pamela Whitfield wrote:

> This seems to have moved away from polarisation onto something far more
> touchy. :-)

As always.

Cheers
Alan Coelho


RE: how to find out POLARISATION Factor

2006-05-31 Thread AlanCoelho




Great to hear from you Alexander

The mailing list is not really a lions' den but you would be forgiven for
thinking so.

Ok Zuev, you have mistaken my meaning in your point 4.1 below. It is not
supposed to be "self-explanatory". The unreasonable conditions I gave for
when the convolution approach is not valid were to show how unreasonable I
believe the following is:

"Reefman has shown, by analyzing the effect of transparency, that the
convolution model is not valid in the general case."

In other words, of course it is "not valid in the general case". Either one
says that something is adequate for a purpose or inadequate for a purpose,
and not just simply inadequate.

Let's not get into word games as this I am sure would bore everyone.

Now you say below that you were not critical of the convolution approach but
only guided by the opinions of the authors of the convolution approach. It
is Ok to be critical of something if you have reason to be.

Let's expand on the text:

"It must be emphasized that the instrumental function Ju(E) is an
approximation. Reefman has shown, by analyzing the effect of transparency,
that the convolution model is not valid in the general case."

This I guess is just another author's opinion; enough said.

I would like to comment now on your new approach, which I am sure is far
more interesting for you.

In your approach I understand the fact that each ray is treated analytically
in regards to its origin on the source and its diffraction on the sample,
which then leads to a cone. Grids however, as stated, are necessary for the
source and the sample, and of course accuracy is dependent on the number of
grid points. For sample transparency the grid would need to be three
dimensional; how on earth do you propose to speed that up? Good luck with
it. Also don't be surprised if you stick with that approach for axial
divergence, as in the Full Axial Model, and use convolution for the lesser
aberrations. I look forward to seeing a fine-tuned version sometime in the
future. By the way, I stopped using assembler at least 5 years ago as
compilers today do the job.

All the best
Alan Coelho
 
 
 


From: Zuev [mailto:[EMAIL PROTECTED]]
Sent: Thursday, 1 June 2006 1:24 AM
To: rietveld_l@ill.fr
Subject: RE: how to find out POLARISATION Factor


Dear Dr. A. Coelho,
dear Rietvelders,

there are four points to be considered:

1. Precision of the fit (using the convolution approach).
2. Theoretical basis of the convolution approach.
3. Calculation time with the method in my paper.
4. "... some inaccuracies of the above mentioned paper"

1. There is no doubt that the convolution approach provides good agreement
in profile fitting with ray tracing.

2. The rigorous theoretical validation of the convolution method is not so
obvious. I mean the following aspects:

a) Strictly speaking, even the flat specimen aberration coupled with the
receiving slit width cannot be mathematically described as a convolution.
This is not critique, but just a statement of fact. Well, the convolution
can be used, but this is only an approximation (even if it is good for
certain conditions).

b) The same is valid for the axial aberration. Also the treatment of the
axial aberration is an approximation ("More recently Cheary and Coelho [3,4]
have developed a semi-analytical approach to the calculation and their
results have been incorporated into a profile refinement procedure." From
R. W. Cheary, A. A. Coelho, J. P. Cline. Fundamental Parameters Line Profile
Fitting in Laboratory Diffractometers, J. Res. Natl. Inst. Stand. Technol.
109, 1-25 (2004))

3. Calculation time. Only the approach was represented in my article. It
makes it possible to develop a number of (exact) modifications and
approximations (without loss of accuracy). I have not yet made
optimizations. I mean not code optimization "at an assembler code level"
(from R. W. Cheary, A. A. Coelho, J. P. Cline, Fundamental Parameters Line
Profile Fitting in Laboratory Diffractometers, J. Res. Natl. Inst. Stand.
Technol. 109, 1-25 (2004)), but only the mathematical algorithms. Partly I
have made it.

4. About "... some inaccuracies of the above mentioned paper"

1) The reference was made on page 310-311, and to quote the text of Zuev:

"Reefman has shown, by analyzing the effect of transparency, that the
convolution model is not valid in the general case."

This of course is correct, as the convolution approach is not valid for
axial divergence greater than probably 10 degrees in both the primary and
secondary beams and linear absorption coefficients less than 10 (1/cm).

Self-explanatory.

2) Back to the paper by Zuev; page 304-305: "To compensate for lack of
knowledge about the influence of coupling specific instrumental functions
in FPA, it is also necessary to tune the fundamental parameters to allow a
best fit for the

Re: how to find out POLARISATION Factor

2006-05-31 Thread AlanCoelho
Hi Larry

Whilst we are here, I have used the FCJ correction to fit the ray tracing
0.5 degree equatorial divergence pattern. To do this I simply informed the
Full Axial Model that divergence in the primary beam is zero. Without
refining on any axial divergence parameters the fit is very poor, as you
would expect.

To convert the Full Axial Model to the FCJ correction you need to set
divergence in the axial plane to zero, and constrain the source length and
sample length in the axial plane to the same value; this I think corresponds
to something like the FCJ S parameter. Refining on the receiving slit length
in the axial plane corresponds to the other parameter, which I think is
called L. In doing so and refining on the two parameters, the fit was good,
with a few misfits around the peak maxima but nothing too serious (let me
know if you want the data). The S and L values refined to:

  S = 11.8680272`
  L = 10.428035`

What you find with the FCJ correction, however, is that at high angles the
refined S and L values are different, and at 90 degrees 2Th the FCJ
correction breaks down as it predicts zero broadening.

But then again, at high angles the emission profile dominates and axial
divergence is only a problem for accurate work. Having said that, it is
surprising how much emphasis is placed on small changes in size/strain
Gaussian and Lorentzian broadening without first considering axial
divergence in the range 60 to 120 degrees 2Th. Divergences of 2.5 degrees
and less, however, reduce the problem considerably for the FCJ approach.

cheers
alan

-Original Message-
From: Larry Finger [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 31 May 2006 7:25 AM
To: rietveld_l@ill.fr

AlanCoelho wrote:
> I would like to conclude that to be critical of a method such as the
> convolution approach to describing instrument line profiles it would serve
> authors best to first investigate the approach rather than to be critical
of
> it without substance.
>   
Amen!

I'm really impressed with how well the convolution method has done.
Without any way to test it, I've always had to accept the ray-tracers'
argument that they did better. All I knew is that convolution did well
enough. You have shown that it makes no real difference.

Would it be possible for you to generate the data points for the same 
set of parameters with peaks at 5 and 10 degrees? That way the asymmetry 
will be really pronounced and the convolution method will be even more 
stressed. I don't expect it to make any difference, but I'd like to see 
it on the screen.

Thanks,

Larry