Re: GSAS-II - indexing TOF data

2020-04-13 Thread Jon Wright
Hi Alex
I think Conograph does well for TOF.
Best
Jon


On April 13, 2020 8:24:55 PM GMT+02:00, Alexandros Lappas 
 wrote:
>Dear colleagues, 
>
> 
>
>We are working on some neutron time-of-flight (TOF) data and are attempting to
>index them within GSAS-II.
>
> 
>
>As the low-angle detector banks contain information that is not
>resolved by
>the higher angle banks, data indexing is not thorough.
>
> 
>
>Could you please advise whether it is possible to combine "Peak List"
>information from individual detector banks with the purpose to create
>an
>"Index Peak List" that contains a more exhaustive set of reflections
>for
>indexing purposes within GSAS-II?
>
> 
>
>If this is not possible within GSAS-II, could you please suggest an
>alternative 'auto-indexing' suite that may be able to handle TOF
>neutron
>data?
>
> 
>
>Thank you.
>
>Alex Lappas
>
> 
>
>--
>
>Dr. Alexandros Lappas,
>
>Research Director,
>
>Institute of Electronic Structure and Laser (IESL),
>
>Foundation for Research and Technology - Hellas (FORTH),
>
>P.O. Box 1385, Vassilika Vouton,
>
>711 10 Heraklion, Crete,
>
>GREECE
>
> 
>
>Tel   +30 2810 391344
>
>Fax   +30 2810 391305
>
>  Quantum
>Materials & Magnetism Lab
>
>--
>
> 

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.



Re: Asymmetry peak shape difference between (pellet/powder)

2017-12-08 Thread Jon Wright
Hello,

In the photo, the pellet is black and the powder is white. Why is that?

Best

Jon

On December 8, 2017 3:35:20 PM GMT+01:00, "François Goutenoire" 
 wrote:
>Dear Rietveld users,
>
>We observe an unusual asymmetric peak shape on a pellet and not when the 
>sample has been crushed into a powder.
>
>This asymmetry is observed even at high 2theta angles. I would like to 
>know if someone has an explanation?
>
>I have put the data in the link here after
>
>http://perso.univ-lemans.fr/~fgouten/Asymmetry
>
>Best Wishes, François Goutenoire
>
>-- 
>*
>Pr. Francois GOUTENOIRE
>e-mail:francois.gouteno...@univ-lemans.fr
>Tel: 02.43.83.33.54
>FAX: 02.43.83.35.06
>Département des Oxydes et Fluorures
>Institut des Molécules et des Matériaux du Mans
>IMMM - UMR CNRS 6283
>Université du Maine - Avenue Olivier Messiaen
>F-72085 Le Mans Cedex 9
>FRANCE
>*
>Formation Rietveld CNRS
>https://cnrsformation.cnrs.fr/stage.php?stage=18098
>Formation SAXS et Réflectivités pour couches minces et matériaux
>nanostructurés.
>https://cnrsformation.cnrs.fr/stage.php?stage=18104

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.



Re: TOPAS Macro Language (peak shape broadening macros)

2015-06-24 Thread Jon Wright

Hi Alan,

In the limiting case of orienting all crystals in 3D you get a mosaic 
single crystal (e.g. [1], or using an XFEL to measure 1 at a time). Then 
the refined crystal structure has somewhat better accuracy than with 
a 1D Rietveld fit. Perhaps not the most popular idea for this mailing 
list :-)


The intermediate case of multiple patterns with properly modelled 
texture should also be better even for refinement. Simplistically - you 
can add information about overlaps like 511/333 cubic overlap if you 
have a series of patterns.
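
(The 511/333 case in numbers: both reflections of a cubic phase have h^2+k^2+l^2 = 27, so their d-spacings coincide exactly in a 1D pattern, while texture weights the two differently, which is the extra information referred to above. A minimal sketch with an assumed lattice parameter:)

from math import sqrt

a = 4.0  # assumed cubic lattice parameter, Angstrom
for h, k, l in [(5, 1, 1), (3, 3, 3)]:
    s = h * h + k * k + l * l          # 27 for both reflections
    print((h, k, l), "h2+k2+l2 =", s, " d =", round(a / sqrt(s), 4), "Angstrom")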


With only a single pattern you introduce extra texture parameters 
without adding the extra information to pay for this (which is why 
people tell us to get rid of it). With multiple patterns the story is 
different, but the price is figuring out the texture properly.


Cheers,

Jon
===

[1] Kimura et al, Langmuir, 2006, 22 (8), pp 3464–3466
DOI: 10.1021/la053479n
http://pubs.acs.org/doi/abs/10.1021/la053479n




On 24/06/2015 19:46, Alan Hewat wrote:

Both Peter and Luca are correct :-) Preferred orientation can indeed be
used to help SOLVE (unknown) structures, but when you want to REFINE
structures you should try to eliminate systematic errors such as
preferred orientation, if possible by better sample preparation. That
may not be possible with highly oriented crystallites such as nanotubes,
and as Naveed found, modelling peak broadening will not help. Search the
program manual for "texture" or "preferred orientation" modelling.
Pax. Alan.

On 24 June 2015 at 17:44, Khalifah, Peter <kp...@bnl.gov> wrote:

Yes, that is of course a more precise and correct answer.  I did not
mean to imply that diffraction physics work differently, just that
the data processing resulted in some badly wrong assumptions about
the data being used for Rietveld refinement. 


I tried to give a quick answer to a basic question from a basic
point of view.  If it was an expert question, I would have left it
to the experts like you :-)


-Peter




*From:* Luca Lutterotti <luca.luttero...@ing.unitn.it>
*Sent:* Wednesday, June 24, 2015 3:52 AM
*To:* Khalifah, Peter
*Cc:* rietveld_l@ill.fr 
*Subject:* Re: TOPAS Macro Language (peak shape broadening macros)


On 23 Jun 2015, at 22:22, Khalifah, Peter <kp...@bnl.gov> wrote:


Preferred orientation results in the central assumption that the
measured intensity is proportional to the reflection structure factor
being violated, 


This is actually quite new to me as a definition of preferred
orientation, or better, texture. Until now I thought that in the case
of preferred orientation the measured intensity is proportional to
the volume fraction of grains oriented in a certain direction...


As many people will tell you, it is much more effective to
eliminate preferred orientation (through improved sample
preparation) than to try to model it. 


We actually use preferred orientation (texture) to help structure
solution, and I find it easier to model it than to eliminate
it. But maybe that is just me.


Best regards,


Luca Lutterotti


---
Luca Lutterotti

Dipartimento di Ingegneria Industriale, Universita' di Trento,

via Sommarive 9, 38123 Trento, Italy

Temporary address:

Laboratoire CRISMAT-ENSICAEN, Université de Caen
Basse-Normandie, Campus 2

6, Bd. M. Juin 14050 Caen, France

e-mail address : luca.luttero...@unitn.it

Home page : http://www5.unitn.it/People/en/Web/Persona/PER0004735
New Maud page : http://maud.radiographema.com

Phone number: +39-0461-28-2414
Fax: +39-0461-28-1977







--
__
*   Dr Alan Hewat, NeutronOptics, Grenoble, FRANCE *
 +33.476.98.41.68
http://www.NeutronOptics.com/hewat
__



Re: regress in crystallographic good practices and knowledge

2015-05-08 Thread Jon Wright

Dear Kurt,

You can peaksearch the images with any number of packages and then look
at the extracted spot positions. The stuff we wrote lives at
http://sourceforge.net/projects/fable/ and it is normally used to fit 
grain-by-grain strain tensors. If you can isolate the spots you do 
indeed get a large improvement in resolution (centre of mass versus 
width is typically a factor of 10). If you can rotate the sample and 
collect a full data set you can process them all as single crystals. 
There is a blob searching algorithm in pyFAI which pulls the peak 
positions out for you in the course of doing calibrations 
(github.com/kif/pyFAI).


In keeping with the subject line: is it a regress or progress to go back 
to using single crystals instead of powders? There is a rumour going 
around (http://dx.doi.org/10.1107/S2052252515004017) that the days of 
"powder diffraction" may be numbered :-)


Best,

Jon

On 08/05/2015 23:29, Kurt Leinenweber wrote:

P.S.  When the splitting arises from a phase transition, sometimes we
see what we call "snowmen" or pairs of spots that are split between
the two rings.  I interpret these as twins arising from the phase
transition.  Has anyone seen these types of pairs of spots before?  I
think there is a lot of information there that we are passing up if
we don't learn to interpret it.


Hi all,



In my defense, 10 of the 12 messages relating to this topic have
had their footers attached.  I will try to figure out how to get
rid of it for next time.



Here is a topic I am interested in: we are collecting a lot of data
on an imaging plate (GSECARS and HPCAT at Advanced Photon Source).
We are interested in splitting of peaks in some of the samples.
The splitting is very difficult to see on the integrated pattern,
but very easy to see on the 2-D imaging plate frame itself.  The
rings are "spotty" and it's very easy to tell which spots are in
which ring when the peak is slightly split.



My question is whether there is software I can use to take
advantage of this and fit the "spots" so I can get a better
resolution of the splitting.  Something in between powder and
single crystal.



Thank you,



- Kurt





Re: Delta ferrite by xrd

2013-08-08 Thread Jon Wright

On 08/08/2013 18:04, led15 wrote:

Is it possible to confirm the presence of delta ferrite in steel by XRD?


Yes: http://dx.doi.org/10.1007/s11661-010-0371-7



Re: [sdpd] Re: Are restraints as good as observations ?

2013-07-31 Thread Jon Wright

On 31/07/2013 20:19, Brian Toby wrote:

To perform R-free with powder data, one must excise multiple complete
peaks from the pattern, say a few sections making up 10-20% of
pattern.


Hi Brian,

It is not so bad, you only need to throw in some peak intensities as 
variables (a partial Pawley). You can then continue your multi pattern 
fit with several high resolution patterns. 10% of a thousand "peaks" 
still leaves you with 900. Anyway, if it can be done for SAXS 
(doi:10.1038/nature12070) there is not much excuse with the amazing 
patterns coming from instruments like 11-BM. Even 10% of a 100 peaks 
leaves you with 90.


But how to do it? We just have to find some folks who are in the process 
of writing a cool new Rietveld system and try to persuade them to help :-)


All the best,

Jon



Re: Negative Uiso in GSAS

2010-03-03 Thread Jon Wright

I can't resist adding one more to Mike's excellent list:

5. When peaks overlap strongly it becomes difficult to determine the 
background level. Negative Uiso is a consequence of the background 
refining to a value which is too low, especially where the peaks are 
most dense in the pattern (shorter d-spacings or higher angles).
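
(A rough numerical illustration of that last point, with an assumed wavelength and made-up Uiso values: the calculated intensities carry a Debye-Waller factor exp(-2*B*sin^2(theta)/lambda^2) with B = 8*pi^2*Uiso, so the only way the model can supply the high-angle intensity that a too-low background has taken away is to weaken the damping, i.e. push Uiso below zero.)

from math import exp, pi, sin, radians

wavelength = 1.54   # Angstrom, assumed
for uiso in (0.01, 0.0, -0.01):           # Angstrom^2
    B = 8.0 * pi * pi * uiso
    row = []
    for two_theta in (20.0, 80.0, 140.0):
        s = sin(radians(two_theta / 2.0)) / wavelength
        row.append(round(exp(-2.0 * B * s * s), 3))
    print("Uiso =", uiso, " damping at 2theta = 20, 80, 140:", row)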


All the best,

Jon

Michael Glazer wrote:

Negative U's in Rietveld can arise from several causes, so that there is
not one single answer. Some of the reasons are
1. The structural model is simply incorrect.
2. High absorption means that the low-angle data are weaker than they
should be, or conversely that the high-angle data appear stronger than
they should be. Abnormally strong high-angle data give rise to a
decrease in U's, even making them appear negative
3. Correlation between the refinement parameters. For instance U's will
tend to be highly correlated with site occupation parameters, often
making it difficult to separate them.
4. In general, many of the errors that one encounters tend to end up in
the refined U's, and this is why their precise values have to be treated
with caution.
Rietveld refinement (as opposed to single-crystal refinement) is in fact
refinement of degraded data (it is one-dimensional instead of
three-dimensional) and so the errors will be more significant.

Mike Glazer


-Original Message-
From: carolina.zip...@fi.isc.cnr.it
[mailto:carolina.zip...@fi.isc.cnr.it] 
Sent: 03 March 2010 16:08

To: rietveld_l@ill.fr
Subject: Negative Uiso in GSAS

Dear all,

could someone explain to me the meaning of obtaining a negative Uiso in
GSAS?
I thought it was always positive...(p. 123 manual)

thanks

Carolina


_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-


  Dr. Carolina Ziparo

   Istituto dei Sistemi Complessi - sezione di Firenze,
   C.N.R. - Consiglio Nazionale delle Ricerche

   via Madonna del Piano, 10
   I-50019 Sesto Fiorentino Italy


   tel.:   +39 055 5226693
   fax:+39 055 5226683
   e-mail: carolina.zip...@fi.isc.cnr.it



  




Re: Mask for Fit2d

2009-08-29 Thread Jon Wright
Just use mask polygon and draw a little triangle inside the pixel you 
want to mask. It helps to zoom in first. (Usually it is easier to find 
the hot pixels with a threshold mask anyway).


Best,

Jon


Yang, Ling wrote:

Hi,

 

Does anyone know how to mask a single pixel in Fit2d? I found it necessary 
sometimes to remove only 1 pixel, but did not see this option. Please 
let me know if you have this experience.


Thanks.

 

 


Sincerely,
Ling Yang
Bldg. 8610, MS 6494
Oak Ridge National Laboratory
Oak Ridge, TN 37830
Tel: (865)576 4845
Fax: (865)574 1753

 





Re: Refinement of spiral magnetic structure from neutron powder diffraction

2009-07-24 Thread Jon Wright

Hi Andrew,


A word on GSAS vs Fullprof:
GSAS can only handle a magnetic structure that can be defined by a unit 
cell. So, your incommensurate structure would need to be close to a 
lock-in value to work, i.e. (0 0 0.32) is close to (0 0 
1/3). 


While it is a lot easier to use a super cell, the peak positions don't 
line up. In this gratuitous self citation:


"High-resolution powder neutron diffraction study of helimagnetic order 
in CrP1-xVxO4 solid solutions"

J. P. Wright, J. P. Attfield, W. I. F. David, and J. B. Forsyth
Phys. Rev. B 62, 992 (2000)

... there is an example of a (~1/3,0,0) that didn't lock in. The 
conclusion was that the student could not avoid learning to use a 
program that has propagation vectors ;-)


Cheers,

Jon


Re: Refinement of spiral magnetic structure from neutron powder diffraction

2009-07-23 Thread Jon Wright

Dear Alexander,

This is theoretically one of the few things that can be done with PRODD, 
which is a refinement program based on the CCSL subroutine library. Some 
multibank spiral structure refinements were done with the program when I 
was doing my thesis.



1) How do I decide on the space group? With incommensurate structures is it 
always just P -1


I don't think it is always P-1. With CCSL you either have symmetry 
elements as part of the magnetic symmetry, or not, called MSYM or NSYM. 
It was rather difficult to figure out.



5) Atom types: Where in the manual do I find the correct types I have to use? 
Is it always just M in front of the atom?


With CCSL you look up the magnetic form factors and put them in the 
input file with whatever names you like :-)


Sorry I can't help much on the fullprof questions. Since you seem to 
have a multi-k structure, for PRODD you would need at least two phases 
to do that. I remember spending some time trying with fullprof but 
eventually learned a valuable lesson about why source code is useful. If 
you like, I'd be happy to send you the PRODD sources and try to dig out 
some examples. Even better, you could tidy the sources up and merge them 
with the current CCSL, realistically I am never going to find the time 
to do it myself.


Spiral magnetic structures are a fantastic way to learn about 
crystallography and reciprocal space. Good luck!


Jon




Re: LP factor in the Rietveld refinement

2009-07-22 Thread Jon Wright
Sounds like the parameter is the monochromator angle you would need to 
use to convert an unpolarised beam into a beam with the polarisation 
state you have (eg, 90 degrees gives 100% polarised). Don't confuse this 
with the actual monochromator angle at the synchrotron, as the beam is 
usually polarised before it reaches the monochromator anyway. With some 
packages you can set the monochromator "roll" angle to put the 
polarisation in the right plane, depending which way up an area detector 
was mounted.
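
(For what it is worth, one common way this parameter enters the polarisation part of the LP correction, conventions differ between programs so take the exact formula as an assumption, is P = (1 + cos^2(2theta_m)*cos^2(2theta)) / (1 + cos^2(2theta_m)); setting the "monochromator angle" 2theta_m to 0 gives the unpolarised (1 + cos^2(2theta))/2 behaviour, and 90 degrees gives a fully polarised beam with no angular dependence:)

from math import cos, radians

def polarisation(two_theta, two_theta_mono):
    # one common convention; check your program's manual for its exact definition
    a = cos(radians(two_theta_mono)) ** 2
    return (1.0 + a * cos(radians(two_theta)) ** 2) / (1.0 + a)

for mono in (0.0, 90.0):
    print("2theta_m =", mono, [round(polarisation(t, mono), 3) for t in (0, 45, 90, 135)])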


Good luck,

Jon

xiu...@ualberta.ca wrote:

Hello, everyone,

I have some questions about the refinement in Topas.

When we put the instrument parameters, we always include the LP 
factor, and set it to a constant value. I thought LP factor is a 
function of theta and not a constant value, so my question is what 
exact the constant value means. Why for unpolarized radiation, it is 
equal to 0, and for synchrotron radiation it is equal to 90. Sorry to 
throw so many questions.


Thank a lot for any help.

Xiujun Li
Master Student
Advanced Materials and Processing Laboratory
Chemical and Materials Engineering
University of Alberta
Edmonton, Alberta, Canada T6G 2G6
Phone: 1-780-492-0701





Re: UVW - how to avoid negative widths?

2009-03-19 Thread Jon Wright

Alan Hewat wrote:

Jon Wright said:

Quick question - does anyone have a trick to stop the Caglioti formula
going negative?


This can happen if the resolution is relatively flat, so that there is no
well defined minimum. 


Seems to be the problem - also rather close to zero anyway.


 if you have access to the refinement code.


Here I am lucky! As suggested off list - simply return the function to 
the form it was in before I edited the code :-)


Thanks a lot for the help

Jon





Re: UVW - how to avoid negative widths?

2009-03-19 Thread Jon Wright
 According to the Caglioti relation, the dimensions of U, V, W are (angle)^2. 


Quick question - does anyone have a trick to stop the Caglioti formula 
going negative? Prodd currently has a habit of bugging out on a 
sqrt(negative) and I'm wondering how other folks worked around that, or 
if I've got something completely wrong...?
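
(For reference, the formula in question is H^2 = U*tan^2(theta) + V*tan(theta) + W; with U, V, W floating, the quadratic in tan(theta) can dip below zero somewhere in the scan and the sqrt then fails. A minimal guard, just one simple workaround and not what any particular program does, is to clamp at a small positive floor:)

from math import tan, sqrt, radians

def caglioti_fwhm(two_theta, U, V, W, floor=1e-8):
    t = tan(radians(two_theta / 2.0))
    h2 = U * t * t + V * t + W
    return sqrt(max(h2, floor))        # clamp instead of sqrt(negative)

# made-up U, V, W that go negative over part of the scan without the clamp
print([round(caglioti_fwhm(tt, 0.05, -0.10, 0.02), 4) for tt in (10, 40, 90, 140)])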


Thanks,

Jon



Re: How to determine the crystalline system

2009-03-09 Thread Jon Wright

Alan Hewat wrote:

... your name appears as ½¨²¨Áº 


The problem is your mail reader, thunderbird displays some chinese 
characters. Liang's mail correctly uses a MIME encoded word as the ascii 
string:


=?GB2312?B?vaiyqMG6?=

Which means character set "GB2312" (chinese), "B"ase64 encoding, and 
"vaiyqMG6" is the binary encoded string.


Best,

Jon









Re: cRs

2009-03-06 Thread Jon Wright

Olga Smirnova wrote:
Did anyone try to calculate simple mean deviance and compare with 
popular R factors?


OS


For "simple mean deviance" do you mean like in:

http://en.wikipedia.org/wiki/Deviance_(statistics)

Seems we normally call that "chi^2". If that's what you meant, then yes, 
people do routinely calculate and compare it to their R-factors.


Jon



Re: cRs

2009-03-04 Thread Jon Wright

Dear Olga,

As you've noticed, the Rietveld R-factors tell you how well you fit the 
pattern, but don't tell you how well you fit the peaks. There is a 
correlated intensities R-factor which follows from the methods outlined in:


J. Appl. Cryst. (2004). 37, 621-628[ doi:10.1107/S0021889804013184 ]
"On the equivalence of the Rietveld method and the correlated integrated 
intensities method in powder diffraction"

W. I. F. David

If I remember well; to get the corresponding Rfactor you just take the 
correlated intensities chi^2 for your model and divide by the same 
statistic with all I_calc set to zero and perhaps convert to a 
percentage. (Hopefully Bill is reading and will correct that if I've 
screwed it up).


Fortunately any statistics which rely on peak intensities are open to 
abuse by using the wrong cell or space group etc, hence the value of 
Rietveld Rfactors and a really high background ;-)


Best,

Jon




Olga Smirnova wrote:

Dear All,

How is life with conventional R factors when you always have to divide 
by zero background?
Let's have time. Considering a part of the profile without peaks one 
gets 100% cR.
I did not give the agreement factor; I would say those cR with all 
non-excluded points is incorrect, but cR for

points with Bragg contribution is almost the same!
Do you decrease the Rs by adding the background or do you increase cRs 
by subtracting the background?


OS

PS I did not ask my supervisor before sending such a mail.






Re: scattering factor for B+3

2009-02-27 Thread Jon Wright
Out of curiosity, I'm wondering, do many people use ionic scattering 
factors instead of atomic ones for Rietveld refinement?


Cheers,

Jon

Lubomir Smrcok wrote:
Just one question : have you asked your supervisor before sending such a 
mail ? Though I understand your position it seems that  you are not very 
familiar with the background of the method ...

lubo


On Thu, 26 Feb 2009, veron wrote:


Hello. I am a PhD student and need your help.

I am currently investigating the structure of an aluminosilicate with 
boron
substituted for aluminium by X-ray diffraction. Using Rietveld 
refinement, I
am trying to determine the respective positions of the boron and 
aluminium
sites within the structure, as well as their occupancies, and to 
follow the
structural evolutions when the boron content increases. Unfortunately, there 
seems not to be any tabulated scattering factor for B+3, neither in the
refinement software (I use Fullprof) nor in the International Tables for
X-ray Crystallography.

Could someone tell me what the correct procedure for my study would be? I
first tried to define all the atoms as neutral and I am quite happy 
with the
fit obtained. But this does not really sound right. Should I rather use 
Al3+,
Si4+, O2- and Be2+ (same electron number as B3+ but probably not the 
same
sin(theta)/lambda evolution)?

Thanks for your help.


Emmanuel Véron

CEMHTI-CNRS
Orléans-FRANCE

**

Emmanuel Veron

CNRS - CEMHTI

Tel. 02.38.25.55.32

Mail ve...@cnrs-orleans.fr

**









TotalCryst workshop, Grenoble, 1-3 April 2009

2009-02-25 Thread Jon Wright
You are warmly invited to attend a 3-day Workshop in Grenoble, which 
aims at explaining the TotalCryst methods and demonstrating the FABLE 
software package.


http://www.esrf.fr/events/conferences/TotalCryst/TotalCryst/

Please bring this to the attention of your colleagues. Registration is
open until 13th March.

We are looking forward to seeing you in Grenoble!



Re: Bragg-Brentano vs. parallel beam

2009-01-23 Thread Jon Wright

Markus Valkeapää wrote:
Any comments on these two different geometries here on Rietveld list? 


It depends on the sample? Reflection geometry helps when there is a lot 
of absorption.


Jon


Re: Beginer problems, difficulty charecterizing our diffractometer...

2008-09-17 Thread Jon Wright

Dear Blaise,

A chi^2 of 500 means that on average your model is off by 500 sigma, or 
500*500 = 250,000 counts. This usually means some technical detail has gone 
wrong in file formats or weights.
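
(For a concrete reading of the statistic, a sketch of one common definition rather than GSAS's exact bookkeeping: reduced chi^2 is essentially the mean of ((obs - calc)/sigma)^2 over the profile points, so values near one mean the fit is within counting statistics, while values in the hundreds mean the weights or the data/model are wrong by a large factor:)

def reduced_chi2(obs, calc, sigma, n_params=0):
    resid = [((o - c) / s) ** 2 for o, c, s in zip(obs, calc, sigma)]
    return sum(resid) / (len(resid) - n_params)

obs   = [100.0, 400.0, 900.0]     # made-up counts
calc  = [ 95.0, 410.0, 880.0]
sigma = [ 10.0,  20.0,  30.0]     # ~sqrt(counts) for Poisson data
print(round(reduced_chi2(obs, calc, sigma), 2))   # ~0.3, i.e. within counting statistics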


If you don't manage to figure out the problem then if you can post your 
refinement somewhere I am sure someone reading the list will be able to 
help (should only need original data file in a phillips format, 
converted file in gsas format, and your .exp and instrument parameter 
file).


Best,

Jon

Mibeck, Blaise wrote:
 

I am re-learning GSAS to bring Rietveld to my department (for the first 
time). We have an old Phillips Xpert.


 

I am trying to refine a quartz standard to acquire my profile and 
instrument parameters for this instrument and have yet to get my Chi^2 
below 500. The instrument is not in my direct control. I have caught and 
asked them to correct a few problems already (dwell time too low, 
aluminum sneaking into the beam, etc…) and this has helped. I would like 
not to annoy them any more than I have to.


 

At this point I think my high Chi^2s are mainly due to low angle 
asymmetric peaks that my fit is not able to copy – the asym is not 
terrible but may be messing me up. I am getting better results with peak 
profile functions 3 and 4, but unable to get Chi^2 below 500. My scans 
go up to 70 degrees 2theta.


 

I would like to tell the people caring for the instrument that my 
problem is on their end, but I am not confident in my own refining 
skills to say this. When I work with the tutorials or standard data from 
my graduate school experiment I am ok – I think I am proceeding in a 
reasonable way, although I can get stuck here and there.


 

I would like to find out if I am doing something incorrectly or if the 
problem is instrument related. I hate to bug one of you but wonder if I 
could get someone to look at my work?? Is there a better forum for 
asking for this kind of help?


 


Kindest regards to all of you,

Blaise

 

 

 

 

 

 


* * * * * * * * * * * * *

Blaise Mibeck

Research Scientist

Energy & Environmental Research Center

University of North Dakota

15 North 23rd Street, Stop 9018

Grand Forks, ND 58202-9018

 


Phone: (701) 777-5077

Email: [EMAIL PROTECTED] 

 

 

 



Re: calculating f' and f''

2008-09-09 Thread Jon Wright

Try this:

ftp://ftp.ccp4.ac.uk/ccp4/6.0.2/source/chooch-5.0.2-src.tar.gz

Best,

Jon

Jean-Marc Joubert wrote:

Hello,
I am looking for a program able to calculate f' and f'' from absorption 
data for a resonnant diffraction experiment. I know that CHOOCH program 
was able to do that. This program is no more available from the web. It 
is said to be part of the ccp4 suite. I have downloaded the suite but 
could not find the program in it.

Thank you for your help.
/Jean-Marc


--
Jean-Marc Joubert
Chimie Métallurgique des Terres Rares
Institut de Chimie et des Matériaux Paris-Est
CNRS UMR 7182
2-8 rue Henri Dunant 94320 Thiais
phone/fax: 33 1 49 78 12 11/12 03
email / web : [EMAIL PROTECTED] / 
http://www.glvt-cnrs.fr/lcmtr







Re: Portable XRD

2008-09-03 Thread Jon Wright

Hi,

If you have the usual environment variables set :

GSAS=c:\gsas
PATH=...;c:\gsas\exe
PGPLOT_FONT=c:\gsas\pgl\grfont.dat

then it should all run from a dos prompt (run -> cmd or command.com). Or 
at least, you can type "expedt", "powpref" and "genles" etc and the 
programs run and even deal with redirection ("<" and ">"). With the 
arrow keys and some pre-emptive touch typing you can go quite fast, much 
to the horror of anyone trying to follow what you're doing by watching :-)


Best,

Jon

Mibeck, Blaise wrote:
 

 

I am relearning GSAS right now. I was pretty good at it in graduate 
school (DOS-GSAS). I have PC-GSAS running and everything works -- but I 
am having trouble getting use to the program being in the windows 
environment.


 

I know this is a stupid request: can I still get the dos version? Or is 
it possible to run PC-GSAS in dos? In other words can I make the entire 
program entirely command line? My fingers are twitching to go faster and 
my mouse is getting in the way.  

 

I feel like my dad when he asked me to upgrade his work laptop from 
windows 95 to windows 3.x. :-)  

 


Kind regards to all!

Blaise

 

 

 

 

 


* * * * * * * * * * * * *

Blaise Mibeck

Research Scientist

Energy & Environmental Research Center

University of North Dakota

15 North 23rd Street, Stop 9018

Grand Forks, ND 58202-9018

 


Phone: (701) 777-5077

Email: [EMAIL PROTECTED] 

 

 

 



Re: [sdpd] Re: Background Subtractions / 2D image plate

2008-08-11 Thread Jon Wright

Dear Jetta,

Unfortunately I didn't work out how to sign in to see the zip file, but 
it sounds like the problem is the "conserve intensity" which should be 
answered as "NO" when transforming data for Rietveld. Otherwise the 
program will account for there being more pixels at higher angles. For a 
flat image plate (eg mar345) you should also use "geometrical 
correction" as "YES" which accounts for the difference between a flat 
detector and a curved two theta scan. It is useful to use the 
polarisation correction (so "YES") if the data are from a synchrotron, but 
this mostly affects texture type analysis.


For high throughput we sometimes use this:

http://fable.wiki.sourceforge.net/imaged11+-+fit2dcake

Best,

Jon

PS: Rietveld list is added in cc.


ypchapman wrote:



I have posted ascii files (labeled ".dat") and pics (sorry they are
bmp instead of jpg)of some representative data I obtain from my image
plate. They were integrated on Fit2D using no polarisation or
geometrical correction. "Conserve Integration" was used.

Description of files:

"No_x-rays" means its a scan of image plate when not irradiated with
x-rays...just a clean scan of plate
"Tape_only" means scan of image plate after irradiation of transparent
tape used to suspend samples... no sample, just tape
"Tape+NaCl" means scan of image plate after NaCl (our
calibrant) suspended on transparent tape is irradiated.

Any suggestions or comments are welcomed.

Thanks, Yetta

P.S. Let me know if you can't access the files (its in Bckgrnd
Subtr.zip), I'll send then to you directly.

--- In [EMAIL PROTECTED] <mailto:sdpd%40yahoogroups.com>, Jon Wright 
<[EMAIL PROTECTED]> wrote:


 Hello Yetta,

 If you can also add a jpg of the image too, it would show whether there
 is a high background in the actual data too. Otherwise try fit2d's
 "Image Processing, Display, Slice" feature to get a 1d projection for
 comparision. There are some options in fit2d which can cause
 confusion(eg, remember to subtract the detector dark current, also

check

 your values on "conserve int.", "polarisation", "geometry cor."). These
 are explained in the online help.

 Otherwise, Compton scattering is a form of background which increases at
 high angles, see eg, J.Appl.Cryst (1995) 28 14-19.

 Take a look at the fit2d macros menu for "high through put".

 Best,

 Jon

 [EMAIL PROTECTED] wrote:
 >
 >
 > Hello Yetta,
 >
 > Would you like to deposit your raw ascii data elsewhere and then share
 > us a link? I believe it helps for discussion, and solutions, hopefully.

 >
 >
 > Faithfully
 > Jun Lu
 > --
 > Lst. Prof. Lijie Qiao
 > Department of Materials Physics and Chemistry
 > University of Science and Technology Beijing
 > 100083 Beijing
 > P.R. China
 > http://www.instrument.com.cn/ilog/handsomeland/

 >
 > Lst. Prof. Loidl and Lunkenheimer
 > Experimental Physics V
 > Center for Electronic Correlations and Magnetism (EKM)
 > University of Augsburg
 > Universitaetsstr. 2
 > 86159 Augsburg
 > Germany
 > http://www.physik.uni-augsburg.de/exp5

 >
 > - Original Message -
 > From: ypchapman
 > To: [EMAIL PROTECTED]

 > Sent: Wednesday, August 06, 2008 11:17 PM
 > Subject: [sdpd] Background Subtractions / 2D image plate
 >
 >
 > Hello all,
 >
 > Does anyone know why my Mar 345 image plate leaves a large background
 > at high 2theta angles in my powder x-ray diffraction pattern. My
 > procedure consists of :
 > 1. collecting diffraction rings through transmission from powder using
 > the image plate
 > 2. integration of rings using Fit2D program.
 >
 > Concern: the intensities are unreliable because of background that
 > appears to be from the image plate.
 >
 > Goal: To subtract this background from each pattern in a high
 > throughput manner.
 >
 > All suggestions and comments are welcomed.
 >
 > Yetta
 >
 >
 >
 >
 >




Re: interstitial hole size

2008-07-21 Thread Jon Wright
Won't you always be able to put a slightly larger atom if you let the 
atom move up or down out of the trigonal plane? Anyway, it sounds like 
you are asking for the largest circle which fits in the hole between 
three circles who are touching (this is a 2D problem?). Try a google for 
"Descartes Circle Theorem" for 2D or "Soddy Circles" for nD. An answer 
can even be found at the fountain of all human knowledge:

http://en.wikipedia.org/wiki/Descartes%27_theorem
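
(A concrete sketch of the 2D case, assuming the three atoms are mutually touching circles in the trigonal plane: Descartes' theorem gives the curvature of the small circle nestled between three mutually tangent circles as k4 = k1 + k2 + k3 + 2*sqrt(k1*k2 + k2*k3 + k3*k1), with k = 1/r, so the maximum interstitial radius is 1/k4:)

from math import sqrt

def interstitial_radius(r1, r2, r3):
    # Descartes' circle theorem: small circle in the gap between three tangent circles
    k1, k2, k3 = 1.0 / r1, 1.0 / r2, 1.0 / r3
    k4 = k1 + k2 + k3 + 2.0 * sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    return 1.0 / k4

# three equal touching anions of radius 1.40: gives the classic ~0.155 * r trigonal hole
print(round(interstitial_radius(1.40, 1.40, 1.40), 3))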

Best,

Jon

Jean-Noel Chotard wrote:

Dear all,

I'm facing a geometrical problem which I hope will be obvious for some 
of you.
How can we calculate the maximum atomic radius of an interstitial 
trigonal site formed by 3 different atoms?
Does anybody know a mathematical way to do so or maybe even a software 
which could calculate that?


Thank you.


Re: WARNING: Posting large attachments on the Rietveld list

2008-03-17 Thread Jon Wright

We have also limited the size of Rietveld list messages to 0.5 Kbytes


0.5 KBytes is only 512 characters? I would h8 2 c text message style 
language as a consequence :-)


Best,

Jon



Re: Fullprof setup Linux

2008-03-02 Thread Jon Wright

William,

The "ELF" usually means it is the equivalent of a .exe file? Try:

chmod a+x setup_fullprof_suite ; ./setup_fullprof_suite

Something came up on the list a while ago about an upx issue. You 
previously needed something like:


$ sudo apt-get install upx-ucl
$ upx -d fp2k

But perhaps things have changed now

Good luck

Jon



William Bisson wrote:

Dear all,

Can someone please tell me how to install Fullprof in Linux.

After downloading and untarring the file I run the setup_fullprof_suite file
typing $sh setup_fullprof_suite and I end up with the following message
regardless of whether being superuser or not.

setup_fullprof_suite: 1: ELFLinux?44: not found
setup_fullprof_suite: 2: 1?X???T?P???RQ?: not found
setup_fullprof_suite: 4: Syntax error: "(" unexpected

I am running Kubuntu 8.04 64-bit version.

William Bisson

--
Webmaster and Administrator - CCP14 and BCA
http://www.ccp14.ac.uk
http://bca.chem.ucl.ac.uk/


Re: advice on new powder diffractometer

2008-02-18 Thread Jon Wright

Alan Hewat wrote:

Even if you have a low profile R-factor (due to a high background and a
low resolution pattern) the calculated standard deviations in parameters
can still be large, which gives me confidence in Rietveld refinement :-)


I hoped the values of esds come out similar to the ones put in via the 
restraints?


See you next week,

Jon



Re: advice on new powder diffractometer

2008-02-18 Thread Jon Wright

Alan Hewat wrote:

Many theoretical arguments about signal/noise forget that the
experimentalist has a finite time to collect his data :-)


The finite time to analyse and publish can also be rate limiting! In 
this respect a high background is a fine addition to any instrument; it 
gives a much lower profile R-factor. This tells us something about the 
value of the profile R factor, I think ;-)


Best,

Jon


Re: advice on new powder diffractometer

2008-02-15 Thread Jon Wright

Kurt Leinenweber wrote:

... with wRp of 25% and crystallographic R also near 25%  ..


Maybe the background is too low :-?

Jon




Re: GSAS total beginner questions

2008-02-13 Thread Jon Wright
Works for me. Just fill out a BANK line with FXY or FXYE as described in 
the manual :-)


Cheers,

Jon

[EMAIL PROTECTED] wrote:


Isn't GSAS able to handle XY data?? 




Cheers

Matthew


Matthew Rowles

CSIRO Minerals
Box 312
Clayton South, Victoria
AUSTRALIA 3169

Ph: +61 3 9545 8892
Fax: +61 3 9562 8919 (site)
Email: [EMAIL PROTECTED]







Re: Automated phase matching?

2008-02-04 Thread Jon Wright

Yetta Porter wrote:
Hello all, I am interested in finding software that can convert 2D 
diffraction rings into 1D diffraction patterns AND allow phase/pattern 
matching using the ICDD's PDF database. Currently, I am using Fit2D to 
convert my rings to patterns and then Match! to phase match my patterns. 
However, I'd like something alot more high throughput. Am I asking for 
too much. 


Fit2d can do "high throughput". You can either write the macro yourself, 
or try using one of our scripts:


http://fable.wiki.sourceforge.net/imaged11+-+fit2dcake

Provided the calibrations are done, it should be possible to have it 
automatically churning your data as soon as you collect it. You will 
still need something for the search match. Always there seems to be a 
windows gui which you have to manually click on ;-)


Good luck,

Jon

I've looked into using a program called PolySNAP. However I do 
not believe it is compatible to PDF-4+. Do any have you have experience 
with PolySNAP?
 
Thanks in advance.





Open Access Articles: 60 years of Acta Crystallographica and the IUCr

2007-12-21 Thread Jon Wright

Hello Everyone,

Some reviews have just come out in the first 2008 issue of Acta A: 60 
years of Acta Crystallographica and the IUCr:


http://journals.iucr.org/a/issues/2008/01/00/issconts.html

Start citing now for a nicely skewed impact factor in a few years time :-)

Happy Christmas,

Jon



Re: Hyper Lorentzian peak shape

2007-11-22 Thread Jon Wright
If you can fit via a sum of a broader lorentzian and a narrower one it 
may indicate the sample is a "mixture". Are you sure the sample is 
microscopically homogenous?
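
(To see why that shows up as eta > 1: a half-and-half mixture of a narrow and a broad Lorentzian has fatter tails than any single Lorentzian of the same overall width, and a pseudo-Voigt forced through it compensates by pushing eta past 1. A toy comparison with made-up widths:)

from math import pi

def lorentzian(x, fwhm):
    g = fwhm / 2.0
    return (g / pi) / (x * x + g * g)

def mixture(x):                      # narrow + broad, equal weights
    return 0.5 * lorentzian(x, 0.05) + 0.5 * lorentzian(x, 0.30)

for x in (0.2, 0.5, 1.0):            # tail heights, normalised to the maximum
    single = lorentzian(x, 0.06) / lorentzian(0.0, 0.06)   # roughly the same FWHM as the mixture
    mixed = mixture(x) / mixture(0.0)
    print("x =", x, " single:", round(single, 4), " mixture:", round(mixed, 4))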


Best,

Jon





Francois Goutenoire wrote:

Dear Rietveld users,

I am currently working on a Williamson-Hall graph in order to check 
the strain effect on some new phases.


I have some hyper-Lorentzian peaks (eta > 1) from a pseudo-Voigt 
decomposition, so I am not able to get BetaL and BetaG from the 
Winplotr decomposition.


Does anyone have an idea ?


Francois Goutenoire
Laboratoire des Oxydes et des Fluorures, UMR6010.







Re: X-ray scattering contrast Fe/Cu

2007-11-19 Thread Jon Wright
Just a quick plug for a program called "chooch" which does a very nice 
job of calculating f' and f" from an edge scan, see: 
http://www.gwyndafevans.co.uk/chooch.html


For Fe/Cu then neutrons may help.

Cheers,

Jon


[...]
Each edge to be used is recorded; this allows one to calculate, using the 
Kramers-Kronig relation, the effective value of f' and f'' at the 

[...]


Regards,
Jfrancois


Dear Franz,
The best thing to do is collect 2 patterns, one just below the Fe
absorption edge and one well above both absorption edges to get normal
scattering from both atoms.
If you manage to collect data a few eV below the actual Fe edge
(remember that the edge energy changes a bit with charge and
coordination environment so an energy scan to determine the edge
position is recommended to select the energy to use) that will increase
the contrast from 4 electrons (assuming CuII and FeIII) to ~8 or 9 
electrons (~30% difference in scattering power, enough to refine

occupancies).
Collecting data at the Cu edge will just reduce the contrast between Fe
and Cu possibly making Cu weaker scatterer than Fe in some conditions.
The tricky part is to determine exactly the f' and f'' corrections (and
therefore the real change in scattering power) due to the energy change
of the absorption edge and the steep change in f' and f'' close to the
edge that makes small errors in energy (1eV error of selected energy
radiation) have a big effect on the correction . The closer you get to
the edge the larger the contrast but the higher the uncertainty in the
f' and f'' correction values.

Hope this helps.
Leo

Dr. Leopoldo Suescun
Argonne National Laboratory
Materials Science Division Bldg 223
9700 S. Cass Ave. Argonne IL 60439
Phone: 1 (630) 252-9760
Fax: 1 (630) 252-

- Original Message -
From: Franz Werner <[EMAIL PROTECTED]>
Date: Friday, November 16, 2007 10:44 pm
Subject: X-ray scattering contrast Fe/Cu

 

Dear Rietvelders
Does anyone know whether the scattering contrast of Fe and Cu is 
large enough to refine their occupancies from a single X-ray powder 
pattern or would multiple wavelength patterns help?


Thanks for your advice.

Regards
Franz Werner
--










Re: Absolute structure from powder data?

2007-11-14 Thread Jon Wright

Did anyone look into the effects of using circularly polarised x-rays?

Just wondering...

Jon

Larry Finger wrote:

Franz Werner wrote:

Dear Rietvelders

Is it in principle impossible to determine the absolute structure from powder 
data due to reflection overlap or is there a way via multiple wavelength 
diffraction experiments?

Thanks for your advice.


To determine absolute structure, you need three things: (1) a 
non-centrosymmetric structure, (2) at
least one "anomalous" scatterer, and (3) the differences in intensities between 
Friedel pairs of
reflections, i.e. reflection hkl and reflection -h-k-l.

In any diffraction experiment with the wavelength chosen carefully, you could 
satisfy conditions 1
and 2. Number 3 always nails you in a powder experiment due to the systematic, 
exact, overlap of hkl
and -h-k-l.

Larry




The magic of the algorithm

2007-08-14 Thread Jon Wright
Is there not some significant prior art in iterative phase refinement by 
an "FFT then flip" algorithm?


See for example:

Acta Cryst. (1996). D52, 30-42[ doi:10.1107/S0907444995008754 ]
"Methods used in the structure determination of bovine mitochondrial F1 
ATPase" J. P. Abrahams and A. G. W. Leslie


Abstract: With a size of 372  kDa, the F1 ATPase particle is the largest 
asymmetric structure solved to date. lsomorphous differences arising 
from reacting the crystals with methyl-mercury nitrate at two 
concentrations allowed the structure determination. Careful data 
collection and data processing were essential in this process as well as 
a new form of electron-density modification, `solvent flipping'. The 
most important feature of this new procedure is that the electron 
density in the solvent region is inverted rather than set to a constant 
value, as in conventional solvent flattening. All non-standard 
techniques and variations on new techniques which were employed in the 
structure determination are described.



Acta Cryst. (2000). D56, 1137-1147[ doi:10.1107/S09074449932X ]
"A flexible and efficient procedure for the solution and phase 
refinement of protein structures"
J. Foadi, M. M. Woolfson, E. J. Dodson, K. S. Wilson, Y. Jia-xing and Z. 
Chao-de


Abstract: An ab initio method is described for solving protein 
structures for which atomic resolution (better than 1.2 Å) data are 
available. The problem is divided into two stages. Firstly, a 
substructure composed of a small percentage (~5%) of the scattering 
matter of the unit cell is positioned. This is used to generate a 
starting set of phases that are slightly better than random. Secondly, 
the full structure is developed from this phase set. The substructure 
can be a constellation of atoms that scatter anomalously, such as metal 
or S atoms. Alternatively, a structural fragment such as an idealized 
[alpha]-helix or a motif from some distantly related protein can be 
orientated and sometimes positioned by an extensive 
molecular-replacement search, checking the correlation coefficient 
between observed and calculated structure factors for the highest 
normalized structure-factor amplitudes |E|. The top solutions are 
further ranked on the correlation coefficient for all E values. The 
phases generated from such fragments are improved using Patterson 
superposition maps and Sayre-equation refinement carried out with fast 
Fourier transforms. Phase refinement is completed using a novel 
density-modification process referred to as dynamic density modification 
(DDM). The method is illustrated by the solution of a number of known 
proteins. It has proved fast and very effective, able in these tests to 
solve proteins of up to 5000 atoms. The resulting electron-density maps 
show the major part of the structures at atomic resolution and can 
readily be interpreted by automated procedures.


Re: gnuplot help regarding

2007-07-16 Thread Jon Wright
Easiest is to click on the top corner of the plot window and select 
options "copy to clipboard". Or directly write to gif via:


gnuplot> set terminal gif
Terminal type set to 'gif'
Options are 'small size 640,480 '
gnuplot> set output "test.gif"
gnuplot> plot sin(x)
gnuplot> set terminal win
Terminal type set to 'windows'
Options are 'color "Arial" 10'
gnuplot> shell
gnuplot> set output "junk"


HTH,

Jon


Murugesan S wrote:

Dear All,
I try to save the plot file as a GIF or JPG image in the wgnuplot program, 
but I can't,

please tell me how to get an output file as an image from wgnuplot,

(for microstrain plot)

Thanks in advance

regards
SM





Re: peaks splitting

2007-05-22 Thread Jon Wright

Hi everyone,

Three responses saying it might be a lower spacegroup? I wonder if the 
emperors have their clothes on today?


(006) reflections can't split in R-3c as they only have a multiplicity 
of 2 in the first place (0,0,6 and 0,0,-6). You were lucky enough to 
split (00l) which rules out any lattice distortion. Or lots of people 
are about to shout at me.
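
(The multiplicity argument in numbers: in the hexagonal setting 1/d^2 = 4*(h^2 + h*k + k^2)/(3*a^2) + l^2/c^2, so d(006) = c/6 depends on c alone and no metric distortion of a single phase can split it; two resolved peaks need two distinct c values. A quick check with the two c-axis values quoted below, a assumed since it drops out for 00l:)

from math import sqrt

def d_hex(h, k, l, a, c):
    inv_d2 = 4.0 * (h * h + h * k + k * k) / (3.0 * a * a) + (l * l) / (c * c)
    return 1.0 / sqrt(inv_d2)

a = 5.4   # assumed; irrelevant for (0 0 6)
for c in (10.5838, 10.566):
    print("c =", c, " d(006) =", round(d_hex(0, 0, 6, a, c), 4), "Angstrom")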


Either a paper about "phase segregation" or an experimental artifact 
like a temperature instability. Or both!


Cheers,

Jon

[EMAIL PROTECTED] wrote:

Dear all
I need your kind help
I am investigating a trigonal system. I collected neutron diffraction
patterns at T=300K and T=4K.
The data at T=300K can be nicely fitted with the spg R-3c.
At T=4K again I can described the data with the spg R-3c, but I noticed that now
the peaks with a larger c-axis component (see the (006) peak picture in
attachment) are split in two: it is as if at low temperature there are two
phases with different c-axis (10.5838 and 10.566 Angstrom) and the same a-axis.
I don't think that the sample is chemically phase separated because at room
temperature the (006) reflection is clearly a single peak. The splitting
appears only at low temperature.

Could anyone suggest a possible explanation of this splitting (lattice
distortion, modulation, etc)? Could a triclinic distortion be possible?

I don't know how to fit the data at T=4K. Should I change the space group
because now I have two peaks while the R-3c gives me only one peak? Then by
which criteria should I choose the new space group?

thank you very very very much for your advices


best regards

Stefano Agrestini

Physics Department
The University of Warwick, UK


This message was sent using IMP, the Internet Messaging Program.









Re: [Fwd: [ccp4bb] Nature policy update regarding source code]

2007-03-23 Thread Jon Wright

AlanCoelho wrote:

Not sure what to make of all this Jon


Don't shoot the messenger, I was surprised enough by it to forward it to 
the list. I guess they imply if you want to keep all implementation 
details secret you should be patenting instead of publishing? (Patents 
seems to be free online, NM is $30 per 2 page article?). As Vincent 
says, the new part of an algorithmic development is often much less than 
the whole package, and useless to the average user without a gui.


Wonder what this means for any REAL programmers out there who are still 
programming directly in hex. Their source is most people's compiled, but 
considerably more interesting to read ;-)


Best,

Jon


[Fwd: [ccp4bb] Nature policy update regarding source code]

2007-03-23 Thread Jon Wright

Hi Everyone; Just in case you don't follow ccp4bb or nature methods:

http://www.nature.com/nmeth/journal/v4/n3/full/nmeth0307-189.html


I thought that some of you might be interested that the journal Nature
has clarified the publication requirements regarding source code
accessibility.  It is likely that some of you deserve congrats
for this.  Cheers!

http://www.nature.com/nmeth/journal/v4/n3/full/nmeth0307-189.html

Although there are still some small problems, I think that this is a
big step forward, and certainly an interesting read, if you are
interested in FOSS and science.

Regards,
Michael L. Love Ph.D
Department of Biophysics and Biophysical Chemistry
School of Medicine
Johns Hopkins University
725 N. Wolfe Street
Room 608B WBSB
Baltimore MD 21205-2185

Interoffice Mail: 608B WBSB, SoM

office: 410-614-2267
lab:410-614-3179
fax:410-502-6910
cell:   443-824-3451
http://www.gnu-darwin.org/



-- Visit proclus realm! http://proclus.tripod.com/ -BEGIN GEEK CODE BLOCK- Version: 3.1 GMU/S d+@ s: a+ C UBULI$ P+ L+++() E--- W++ N- !o K- w--- !O M++@ V-- PS+++ PE Y+ PGP-- t+++(+) 5+++ X+ R tv-(--)@ b !DI D- G e h--- r+++ y --END GEEK CODE BLOCK-- 




Re: Powder Diffraction In Q-Space

2007-02-22 Thread Jon Wright

Jacco,

I/sigma_I is a good idea in principle. It just looks ugly if you get the 
sigmas right for certain instruments - there are meaningless steps all 
over the place.


All of these problems of "what to plot" would go away if we all 
submitted powderCIF files for our refinements. The "plot" could then be 
an interactive application embedded in the electronic document and 
readers could select the units they want. They could also zoom in on it. 
It is hard to get away without a cif for a single crystal structure now 
- just make it so for powders too.


Didn't a physicist already get caught faking results by leaving embedded 
plots in submitted documents?


Is it just time to formalise that plots should embed the table of 
numbers they represent? When the next generation of CIF arrives (DDL3) I 
understood it will be possible to describe the equation linking 2theta 
to Q for example. This could keep Klaus happy, as well as the thousands 
of people using CuKa in the lab.
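
(The 2theta-to-Q mapping at issue is just Q = 4*pi*sin(theta)/lambda; a one-line helper with an assumed Cu Ka1 wavelength is all an embedded table of numbers would need to be replotted on whichever axis the reader prefers:)

from math import pi, sin, radians

def q_from_two_theta(two_theta, wavelength=1.5406):     # wavelength assumed (Cu Ka1)
    return 4.0 * pi * sin(radians(two_theta / 2.0)) / wavelength

print([round(q_from_two_theta(tt), 3) for tt in (10, 30, 60, 120)])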


Jon

Jacco van de Streek wrote:


--- "Von Dreele, Robert B." <[EMAIL PROTECTED]> wrote:
 


Actually, I looked at Luca's little "show" & was
sufficiently interested
that GSAS will now plot sqrt(I) style plots. There
is one "problem" -
the value of "I" can be negative particularly after
a background
subtraction. These must be suppressed to zero for
this plot to work -
thus there is a small risk of something getting
hidden.
   



I think that that is because although it is called a
SQRT plot, that is just because that is how it is
usually implemented. What is meant is a more general
principle, namely that of displaying the data points
divided by their ESDs (where, of course, the ESDs must
be calculated *from the raw number of counts*). I will
try to explain why.

 






Re: GSAS: interpretation of strain broadening

2006-10-25 Thread Jon Wright


Let's say one gets a value of 0.1% strain. 


=> width of the distribution of cell parameters in the sample

But on the other hand there are also the unit cell sigmas. 


=> error bar on the average value of the cell parameters


How can one visualise this?
 

There is a distribution of cell parameter values, hopefully this 
distribution looks like a peak: 
"0.1% strain" = width of peak

"unit cell sigma" = error on the peak position

HTH,

Jon



Re: crystal structure of SUCROSE

2006-07-04 Thread Jon Wright

Sudhish,

See: J. Appl. Cryst. (1991). 24, 352-354   
"Sucrose, a convenient test crystal for absolute structures"

R. C. Hynes and Y. Le Page
P2(1), a = 10.8631(9), b = 8.7044(6), c = 7.7624(7) Å, beta = 
102.938(7)° at 300(1) K


Despite being great for availability, sugar crystals are frequently 
cracked so check them under a microscope!


Jon

sudhish kumar wrote:


Dear All,

I would greatly appreciate receiving the information about the crystal 
structure with space group and lattice parameter of Sucrose(sugar).

Thanks in advance

SUDHISH










Re: Uiso constraints

2006-07-03 Thread Jon Wright



I would like to know whether GSAS is able to constrain the Uiso value 
so that it will always be bigger than zero.


As others have indicated - GSAS does not do that (topas gives you the 
option, doubtless amongst others). Shelx does do that (with single 
crystal data) and you can't turn it off! Sometimes it is easier to find 
out what is actually going wrong when parameters are not allowed to 
refine to nonsensical values, and sometimes you might think you prefer 
the values to be free. You can usually fix the uiso to reasonable values 
and then try to work out what has gone wrong - I didn't see anyone 
mentioning the lorentz-polarisation effects yet, which can also have a 
great influence if you get the wrong one.


Good luck,

Jon



Re: GSAS ESD/STD conversion

2006-06-29 Thread Jon Wright

Brian H. Toby wrote:


I am trying to convert a GSAS ESD file to a GSAS STD file

BTW, I would be surprised if FullProf does not have a way to input 
intensities with uncertainties rather than assuming intensities are 
counts.


Indeed it does! INSTRM=10 is x, y, sigma format for fullprof where it 
takes the first 6 lines as comments and computes sqrt(I) only if you 
don't supply the sigma.


Some years ago I did see the reverse process of an x, y, esd file being 
mangled by some conversion program on the way to becoming GSAS format, 
but I forget the culprit. Quite surprisingly I see GSAS2CIF also 
truncates, but at least only corrupts the data by a fraction of an esd. eg:

BANK 190191804 CONST 0.200 0.200  0.0 0.0 ESD
   26.5     5.7    18.9     4.4    24.4     5.1    17.2     4.4
   17.4     4.4

becomes:
loop_
 _pd_meas_intensity_total
 _pd_proc_ls_weight
 _pd_proc_intensity_bkg_calc
 _pd_calc_intensity_total
  27(6) 0.0-309450.1.
  19(4) 0.0-299678.4.
  24(5) 0.0-290192.4.
  17(4) 0.0-280984.3.

Is there any chance of seeing
_pd_meas_angle_2theta 

(or ToF, Energy) etc being added as a column and the sigma being in a 
separate column instead of in brackets? I didn't see a cif label for 
this column but the bracket notation is annoying to either parse or 
output (the mmcif people seem to be allowed to use *_esd). It seems that 
one could even give the "right thing" to copy+paste to an x,y,esd file, 
if the spare columns are at the end.


Seems that most conversion programs will screw up if they are doing 
something they weren't supposed to do. Wherever possible try to go back 
to the "raw" data file from the instrument and process it directly into 
the format you want.


Jon



Re: How to prepare intensity and hkl files needed by SHELXTL-97

2006-05-31 Thread Jon Wright

Bob,

Shelx's (3I4,2F8.2) for h,k,l,FoSq,sig is straightforward to do in 
fortran but needs a custom number format in excel? Some people are able 
to type:


awk "{printf(\"%4d%4d%4d%8.2f%8.2f\n\",$1,$2,$3,$8/100,$9/100)}" < 
test.rfl > test.hkl


...and then delete the first and last line from the hkl file - but such 
beings generally don't need help - hence the script ;-)


Best,

Jon

ps: Sadly my mail program appears to have corrupted the indentation, so 
here is another attempt at sharing that python script below. The awk 
version above should survive transmission but you have to fiddle about 
with the factor of 100 manually.


=
#!/usr/bin/env python

import sys

data = []
max_intensity = 0.
for line in open(sys.argv[1], "r").readlines()[1:]:
    try:
        # columns of a GSAS .rfl reflection list
        [H, K, L, M, sth_lam, TTH100, FWHM, FoSq, sig,
         Fobs, obs, phase] = [float(x) for x in line.split()]
    except:
        break
    data.append([H, K, L, FoSq, sig])
    if FoSq > max_intensity:
        max_intensity = FoSq

# scale so the largest FoSq still fits in the shelx F8.2 field
scale = 99999.99 / max_intensity
outputfile = open(sys.argv[2], "w")
for [H, K, L, FoSq, sig] in data:
    outputfile.write("%4d%4d%4d%8.2f%8.2f\n" % (H, K, L,
                     FoSq * scale, sig * scale))
outputfile.close()





Re: How to prepare intensity and hkl files needed by SHELXTL-97

2006-05-31 Thread Jon Wright
I've attached a python script below which will convert a gsas rfl file
into shelx hklf 4 format - it assumes there is no peak overlap at all,
so it is *not* hklf 5 format and *not* suitable for refinement!

After installing python from www.python.org and copying the script into
your working directory (eg c:\mydata) the usage from a dos prompt would be:

c:\mydata> rfltoshelx.py mydata.rfl mydata.hkl

It just picks out the h,k,l,FoSq,sig columns and scales the intensities
so they can be written within the fixed width limit for shelx.

good luck,

Jon

== rfl2shelx.py ==
#!/usr/bin/env python

import sys

data = []
max_intensity = 0.
for line in open(sys.argv[1], "r").readlines()[1:]:
    try:
        [H, K, L, M, sth_lam, TTH100, FWHM, FoSq, sig,
         Fobs, obs, phase] = [float(x) for x in line.split()]
    except:
        break
    data.append([H, K, L, FoSq, sig])
    if FoSq > max_intensity:
        max_intensity = FoSq

# scale so the largest FoSq still fits in the shelx F8.2 field
scale = 99999.99 / max_intensity
outputfile = open(sys.argv[2], "w")
for [H, K, L, FoSq, sig] in data:
    outputfile.write("%4d%4d%4d%8.2f%8.2f\n" % (H, K, L, FoSq * scale, sig * scale))
outputfile.close()





Dr. Yi-Ping Tong wrote:

>   But the 'R' option of REFLIST of GSAS just generates file format designed 
> for use by SIRPOW, not by SHELXTL-97. I want know how GSAS can prepare a 
> intensity and hkl value file designed for use by SHELXTL-97. Thanks in 
> advance.
>   
>Best wishes
>
>Dr. Y.-P. Tong
>
>
>
>
>
>
>
>   
>
>Use "R" option in REFLIST
>
>R.B. Von Dreele
>IPNS Division
>Argonne National Laboratory
>Argonne, IL 60439-4814
>
>
>
>-Original Message-
>From: Dr. Yi-Ping Tong [mailto:[EMAIL PROTECTED] 
>Sent: Tuesday, May 30, 2006 9:55 AM
>To: Rietveld BBS
>software?
>
>
>
>Dear all
>
>Does anybody could tell me how GSAS can extract intensity and hkl
>valuesneeded by SHELXTL-97 software for structure determination ?  
>
>Any suggestions or comments are welcome. Thanks in advance.
>
>
>
>Best wishes
>
>Dr. Y. P. Tong
>
>China
>
>
>
>
>
>
>
>
>
>  
>




Re: fundamental parameters approach & X'celerator

2005-05-03 Thread Jon Wright

Its best to use the terms axial plane and horizontal plane to avoid
confusion.
 

Alan, I'm confused! What do you mean by those terms? I thought "axial 
divergence" was "beam hits different points along the two theta axis" - 
is that a misconception?

Last point, what do you mean by vertical divergence? The Topas 
manual defines the 'axial divergence' which can be limited by soller 
slits

Arie, I meant divergence in the direction you measure, which I was 
assuming as vertical. eg: finite size of source, detector, flatness of 
sample, achievement (or not) of focusing geometry. The effect of soller 
slits depends which way up you bolt them onto the instrument.

Jon


Re: fundamental parameters approach & X'celerator

2005-05-03 Thread Jon Wright

I did some tests on the profile of the first diffraction peak of LaB6, 
where I added a secondary soller angle, since I used secondary 
soller slits, and I added also the parameter for the horizontal 
divergence in the equitorial plane. Finally I added a crystallite-size 
parameter CS_L. I did the calculation for two measurements: one 
with a  horizontal divergence of 1/32 degrees and the other with 1/4 
degrees. Fundamental parameters were not refined,as it should be. 
If the FP description was correct, then CS_L should refine to 
approximately the same value in the two cases. Well, it doesn't 
(310 and 190 angstrom, respectively). The fits are reasonable, but 
not perfect. 
 

Silly question and I think I am missing the point - but how did you fix 
the vertical divergence contribution? I thought LaB6 from NIST had a 
larger crystallite size than either of your values, suggesting something 
has gone wrong in both cases. If the program thinks you have a lot of 
sample broadening, when in reality LaB6 was chosen for having very 
little, it is probably missing a contribution of the instrument. You 
appear to have forced that instrumental thing be Lorentzian by refining 
only CS_L. Maybe just refine CS_L and CS_G together? This is unlikely to 
solve your problem but remember powder diffraction is only really good 
for measuring crystallite size when there is an appreciable size 
broadening compared to a line shape standard like LaB6!

And no one has mentioned "tube tails" yet ;-)
Good luck,
Jon


Re: Mean vs. Median to reduce bias in grainy intensities (was Re:

2005-04-04 Thread Jon Wright
Alex,
While I can understand the general rationale for the idea (minimize the
weight of the very strong reflections to the final integrated intensity
for the reflection), could you expand further on your level of success
and any analysis you have carried out on this plan? 

The massive random spikes can be made to disappear? Works on things like 
reflections from diamond anvil cells, mixtures of two phases where one 
has a very large grain size compared to the other, ice peaks etc. Most 
people will try to "mask" this stuff out manually in fit2d in the 
absence of some other method. Sometimes it means having data you can treat
instead of deleting it and repeating the experiment with a faster spinner.

In particular, if
you or anyone else has anything out in print that discusses this I would
be very interested.
 

I'll ask around, can't think of any papers right now. One program to do 
it is "ascake", by Gavin Vaughan, if you are lucky enough to have data 
from ID11 at ESRF and access to our computers. Otherwise "cake" your 
data in fit2d into (say) 360 one degree azimuth bins, giving you 360 
powder diffraction patterns. Then go to "image processing", select 
"filter", then "median", then the x bins should be 1 and the ybins 
should be 359. Rather slow, but effective at picking out the smooth 
rings and killing the spots. If there are only a few problematic 
outliers you can get away with fewer azimuthal bins. Via the "keyboard 
interface" in fit2d you can also compute a "histogram" of intensity 
values for an "ROI" containing one two theta and the whole range of 
azimuth. That is currently a bit painful as you need a long macro to do 
it for all two thetas, but you could then try to write out and fit the 
intensity distribution at each two theta to some meaningful function, 
like a lognormal distribution. I've never tried, as one ought to put in 
a Lorentz correction for the rotation, which is a bit difficult. Nothing 
intractable, but it is easier to measure grain sizes when you have 
resolved the spots by scanning the grains through the beam, so not a lot 
of point in making life difficult. There is an extensive literature on 
what to do when you can resolve individual spots - eg: 
"Three-Dimensional X-ray Diffraction Microscopy" by  Henning Poulsen, 
Springer 2004.

Median filters are pretty much accepted, so it is not a great leap to 
realise that a 2D image has a high "redundancy" in terms of having 
measured the same 1D pattern multiple times. Even with only one image 
that information can confirm or rule out granularity and texture as long 
as you didn't rotate the sample around the incident beam during collection.
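
If it helps, the median trick itself is only a couple of lines once the image
has been caked; a minimal numpy sketch (shapes and numbers made up for
illustration):

import numpy as np

# caked: intensity vs (azimuth, two-theta), e.g. the 360 x N output of a "cake"
caked = np.random.poisson(100.0, size=(360, 2000)).astype(float)
caked[10, 500] += 5e4     # fake single-grain spikes
caked[200, 1500] += 2e4

mean_pattern = caked.mean(axis=0)          # conventional integration, spikes survive
median_pattern = np.median(caked, axis=0)  # a few hot bins per ring are rejected

The median only stays honest while fewer than half of the azimuthal bins at a
given two theta are contaminated, and of course it will also flatten any real
texture, so it is a rescue tool rather than a default.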

HTH,
Jon



Re: Level of Preferred Orientation

2005-04-01 Thread Jon Wright
Dear Bhuv
> pattern and I have corrected possible graininess with spherical
[...]
> The data I have collected is on a image plate (only one frame).
Not sure I understand? If you have a 2D image showing powder rings then 
you should have some very good ideas about the level of granularity or 
texture in the sample. Just look for the variation in intensity versus 
azimuth? Did you mean a one dimensional image plate?

One way to reduce "graininess" if you have a mixture of grains and fine 
powder: take the median when integrating around the rings instead of the 
mean. If you only have grains and no continuous rings then better to do 
the single crystal experiment...

> In such a case, is it possible to give (any meaningful) Standard
> Uncertainties for the atom positions, (also) given the fact that the
> atoms are not refined independently?
Some comments:
1) "Standard Uncertainty" is a defined statistical quantity. Always 
theoretically possible to derive it from a least squares refinement. It 
should always represent what it is defined to represent. Rarely what you 
want to know ;-)

2) If the cell parameters a,b,c of a cubic crystal are constrained to be 
equal I assume the value and esd is the same for all three, and that 
they are 100% correlated. That they have not been refined independently 
would be a relief (equivalent feats are sometimes attempted via Rietveld).

3) For your constrained positions the esd from the refinement may 
reflect the esd on the position and orientation rather the individual 
atom position. It just means there are very high correlations being 
hidden by the constraints (Z-matrix or otherwise).

4) Replace "Z-matrix constraint" with "symmetry operator constraint" and 
then decide if you could bring yourself to list atomic positions and 
esds in a lower space group than the one you used for refinement. (eg: 
for comparing structures above and below a phase transition)

So "yes" it seems possible to give "meaningful" standard uncertanties 
for the atomic positions, provided you mention the constraints and 
restraints used. Although they are not interesting, they are 
considerably more meaningful than Rwp in terms of evaluating the structure!

Good luck,
Jon


Re: Questions about the asymmetry correction

2005-03-31 Thread Jon Wright
Xianqin,
How did you calibrate the sample detector distance? For reliable 
absolute cell parameters it is very difficult to get good data with an 
area detector. One approach is to mix a standard material into the sample.

As a gross approximation for small angles (in radians):
sin(theta) ~ theta   ;   tan(2theta) ~ 2theta ~ x/d
tan(2theta) angles from the image plate are pixel-coordinate (x) / 
distance (d) and from Braggs law you get roughly:
d-spacing ~ wavelength * d / x

So the percentage error in d-spacing (or unit cell parameter) is of the 
same order of magnitude as the percentage error in sample detector 
distance. How different are your cell parameters from the different 
experiments? How much of an error in sample to detector distance would 
that correspond to? Was the detector tilted at all?
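
As a back-of-envelope check of that argument (numbers made up, small angles so
the approximations above hold):

import math

wavelength = 0.3   # angstrom
distance = 200.0   # mm, nominal sample to detector distance
x = 20.0           # mm, radial position of a ring on the detector

def d_spacing(x_mm, dist_mm, wl):
    two_theta = math.atan2(x_mm, dist_mm)
    return wl / (2.0 * math.sin(two_theta / 2.0))

d_nominal = d_spacing(x, distance, wavelength)
d_wrong = d_spacing(x, distance * 1.01, wavelength)   # distance off by 1%
print(100.0 * (d_wrong - d_nominal) / d_nominal)      # ~1% error in d, so ~1% in the cell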

The peak widths are frequently dominated by the sample thickness, so if 
you can fit your peak position to within 10% of the width you have to 
align your sample to better than 10% of the thickness. eg: within 30 
microns for a 0.3 mm capillary. Or in laymans terms, less than the 
thickness of a human hair ;-)

Normally we can get away without using an asymmetry correction for area 
detector data, as the peaks come symmetric here if the beam centre is OK 
in the integration. Also ratios of cell parameters (a/b, b/c etc) and 
angles seem to be reliable without an in-situ standard, even if the 
absolute values are off.

Good luck,
Jon
XQ WANG wrote:
Dear all:
Recently, I did some refinements with GSAS program to get lattice 
parameters for some nanomaterials. The data were collected at X17B1 of 
NSLS with two different distances between image detector and sample. I 
noticed that the lattice parameters were different with different 
distances for the same sample. Also, I tried to refine the data with 
and without asymmetry correction; this also resulted in the different 
lattice parameters. (I used the same instrumental parameter file for 
two different distances). I also tried to refine the data collected at 
other beamlines, the cell parameter was also different. I am a bit 
confused with all these changes. Any comments from you all will be 
highly appreciated! Thanks,

Nice day,
Xianqin
Xianqin Wang
Brookhaven National Lab
Upton, NY, 11973
USA
_
Express yourself instantly with MSN Messenger! Download today - it's 
FREE! http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/





Re: Change format

2005-03-31 Thread Jon Wright
Brian Toby wrote:
If the numbers are real values then they are not counts. They may be 
counts per second or something else. One needs to have uncertainty 
estimates for these intensities before one can do a meaningful 
Rietveld refinement (with any code). Following the above procedure, by 
multiplying them by an arbitrary constant and then inputting them to 
GSAS as if they were counts will cause GSAS to compute the 
uncertainties as the square-root of the intensities, something that is 
quite unlikely to be true. This will cause the refinement to be 
weighted improperly and the "r-factors" to be wrong.
Assuming you model a single pattern, then multiplying the data by some 
constant should not alter the esd's on refined parameters for programs 
which derive them conventionally (using a chi^2 factor), since 
sqrt(cI)=sqrt(c)sqrt(I). This can be a very good approximation if the 
statistics are about the same over the whole pattern. If the statistics 
are different in different regions of the pattern, then problems could 
arise. However, in the majority of published refinements the 
distribution of misfits is not Gaussian and "all bets are off" anyway, 
so having the right error model for your data is not a great help, as 
you have no error model for your model. One can sympathise with anyone 
who wants to get on with the science ;-)

If the major difficulty is screwing up r-factor statistics, why would 
that be a problem? Isn't it better to abandon these meaningless numbers 
sooner rather than later? Lower powder R-factors are normally an 
indicator of lower data quality, either higher background or more peak 
overlap. As far as the major players (GSAS and fullprof) are concerned, 
they aren't even the same thing. Just stop reporting them please!

I agree that one should try to get the errors on the data right, but it 
won't stop the refinement being weighted "improperly" and it won't make 
the r-factors right ;-)

Jon


Re: K alpha-2 stripping

2005-03-29 Thread Jon Wright
Gerard,
If you "strip" the K-alpha-2 using a monochromator, then you might get 
better data. If you don't have that option, then fitting the data as 
they come is probably a better approach, as you don't run the risk of 
introducing artifacts.

Jon
Gerard, Garcia S wrote:
Dear everybody,
 
How important is the K alpha-2 peaks stripping in order to get a good 
refinement?
 
Thanks
 
P.S: Sorry for such simple questions but i couldn't work it out 
without your inestimable help. By the way, thanks Leonid for your 
clever answers




Rwp versus R(F^2)

2005-01-21 Thread Jon Wright
Hello everyone,
Has anyone else noticed that sometimes when comparing two models Rwp is 
better for one and R(F^2) is better for the other? The case I am looking 
at is spacegroups P3 versus P-3, with the non-centrosymmetric structure 
having more degrees of freedom and therefore very slightly better Rwp 
(7.6% versus 7.7%). I don't understand why it has a much worse R(F^2), 
8.5% versus 7.3%. These numbers are coming from GSAS, which gives the 
same number of F_obs reflections for each of the two fits. I haven't 
looked into the details much, but shouldn't the two statistics be better 
correlated than that?

Thanks for any hints!
Jon


Re: Google Scholar

2004-11-26 Thread Jon Wright
Alan wrote:
BTW, Google.com ranks ILL WWW pages 3rd worldwide, in both Condensed 
Matter and Crystallography, ahead of IUCr. To see what this means in 
practice, try a Google search on inorganic structure :-)

http://rankwhere.com/google-page-rank.php?url=www.ill.fr
www.ill.fr has Google PageRank 7 out of 10
http://rankwhere.com/google-page-rank.php?url=www.esrf.fr
www.esrf.fr has Google PageRank 8 out of 10
;-)
Jon


Re:

2004-11-24 Thread Jon Wright
Alberto,
Check you have the right polarisation correction. Synchrotron ~ 1,  
laboratory typically 0.5 with no mono...

HTH,
Jon
Alberto Martinelli wrote:
Dear all,
I've got a problem with the Rietveld refinements of synchrotron data. 
I'm using Fullprof and the problem is that the overall B, and then also
some of the isotropic displacement parameters, fall down to negative
values. I'm not very experienced in the treatment of synchrotron data and
may be I put some errouneous input in the PCR file, but I cannot understand
what.

Thanks in advance for your helpful suggestions, best regards
Alberto Martinelli
 




Re: TOPAS-Academic

2004-09-24 Thread Jon Wright
Alan,
The "planned plugin architecture" sounds interesting. How concrete are 
the plans? I'm interested in auto-generation of restraints for proteins 
and fourier mapping at the moment...

Regards,
Jon
alan coelho wrote:
Dear all,
An academic version of BRUKER-AXS TOPAS is now available to
degree-granting institutions comprising universities, university run
institutes, laboratories and schools at:
	http://pws.prserv.net/Alan.Coelho 

The main aim of TOPAS-Academic (TA) is to provide a low cost but
powerful and easy to use program for the academic community. One that
will grow through a planned Plugin Architecture.
BRUKER-AXS TOPAS will continue to be developed in parallel employing the
latest in Windows GUI Dialog technology.
regards 
alan

 




Re: Rietveld refinement and PDF refinement ?

2004-08-23 Thread Jon Wright
Bob,
This exactly what is needed when the sample is a mixture of amorphous 
and crystalline components. But what happens when the material is a 
single crystalline phase with some coherent defects? Don't the defect 
<-> average structure correlations start to dominate, and separating 
components is no longer possible...? Thinking of all the different kinds 
of rods and streaks you can see from a single crystal and then 
projecting them into one dimension gives a lot of possibilities! The 
difficulty is finding a way to describe all of the possible kinds of 
defect so that either the PDF or equally the diffraction pattern can be 
computed. Probably one would have to follow the route in PDFFit of 
giving the user a Turing complete command language to use. I'm just glad 
that the proteins don't seem have coherent defects! Can you fit that 
broad bump in the second half of the pattern which always seems to turn 
up in protein data via the Debye equations? I have to admit I haven't 
tried yet...
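
For anyone who wants to play, the Debye equation itself is only a few lines for
a small cluster; a brute-force sketch (O(N^2) pair sum, q-independent scattering
factors, so a toy rather than anything you would use on a protein):

import numpy as np

def debye_intensity(q, xyz, f):
    # I(q) = sum_ij f_i f_j sin(q r_ij)/(q r_ij), with q = 4 pi sin(theta)/lambda
    diff = xyz[:, None, :] - xyz[None, :, :]
    r = np.sqrt((diff ** 2).sum(axis=-1))          # all pair distances
    intensity = np.zeros_like(q)
    for i, qi in enumerate(q):
        qr = qi * r
        sinc = np.where(qr > 0, np.sin(qr) / np.where(qr > 0, qr, 1.0), 1.0)
        intensity[i] = (f[:, None] * f[None, :] * sinc).sum()
    return intensity

# toy cluster: 8 atoms on the corners of a 2 angstrom cube
xyz = np.array([[i, j, k] for i in (0., 2.) for j in (0., 2.) for k in (0., 2.)])
q = np.linspace(0.5, 10.0, 200)
print(debye_intensity(q, xyz, np.ones(len(xyz)))[:3])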

See you in Prague,
Jon
Von Dreele, Robert B. wrote:
Jon & others,
Well, there is an attempt at this in GSAS - the "diffuse scattering" functions for fitting these contributions separate from  the "background" functions. These things have three forms related to the Debye equations formulated for glasses. The possibly neat thing about them is that they separate the diffuse scattering component from the Bragg component unlike PDF analysis. As a test of them I can fit neutron TOF "diffraction" data from fused silica quite nicely. I'm sure others have tried them - we all might want to hear about their experience.
Bob Von Dreele
 




Re: Rietveld refinement and PDF refinement ?

2004-08-22 Thread Jon Wright

Well, that is an old chestnut that Cooper and Rollet used to oppose to
Rietveld refinement. I think Rollet eventually agreed that Rietveld was
the better method. Has Bill really gone back on that ?
 

The difference between the two approaches are just an interchange of the 
order of summations within a Rietveld program. Differences in esds 
should only arise through differences in accumulated rounding errors, 
assuming you don't apply any fudge factors. Since most people do apply 
fudge factors, the argument is really about which fudge factor you 
should apply. I will only comment that the conventional Rietveld 
approach (multiply the covariance matrix by chi^2) is often poor.

As for the PDF "versus" Rietveld - you should get smaller esds on 
thermal factors if you were to write a program which treats the 
background as a part of the crystal structure and has no arbitrary 
degrees of freedom in modelling the background. This is just due to 
adding in more data points that are normally treated as "background" but 
which should help to determine the thermal parameters via the diffuse 
scattering.

So, provided you were to remove the arbitrary background from the 
Rietveld program and compute the diffuse scattering the methods ought to 
be equivalent. Something like DIFFAX does this already for a subset of 
structures, but I think without refinement. The real difficulty arises 
with how to visualise the disordered component, decide what it is, and 
improve the fit - hence the use of the PDF. Although no one appears to 
have written such a program there does not seem to be any fundamental 
reason why it is not possible (compute the PDF to whatever Q limit you 
like, then transform the PDF and derivatives into reciprocal space). 
Biologists already manage to do this in order to use an FFT for 
refinement of large crystal structures!

In practice a large percentage of the beamtime for these experiments at 
the synchrotron is used to measure data at very high Q which visually 
has relatively little information content - just so that a Fourier 
transform can be used to get the PDF. This is silly! The model can 
always be Fourier transformed up to an arbitrary Q limit and then 
compared whatever range of data you have. For things like InGaAs the 
diffuse scatter bumps should occur mainly on the length scale of the 
actual two bond distances. Wiggles on shorter length scales  are going 
to be more and more dominated by the thermal motion of the atoms, and so 
don't really add as much to the picture (other than to allow an 
experimentalist to get some sleep!).

In effect it is like the difference between measuring single crystal 
data to the high Q limit and then computing an origin removed Patterson 
function and doing a refinement against that Patterson as raw data. No 
one does the latter as you can trivially avoid the truncation effects by 
doing the refinement in reciprocal space. The question then is whether 
it is worth using up most of your beamtime to measure the way something 
tends toward a constant value very very precisely? Could the PDF still 
be reconstructed via maximum entropy techniques from a restricted range 
of data for help in designing the model? Currently the PDF approach 
beats crystallographic refinement by modelling the diffuse scattering. 
As soon as there is a Rietveld program which can model this too then one 
might expect the these experiments become more straightforward away from 
the ToF source.

I'd be grateful if someone can correct me and show that most of the 
information is at the very high Q values. Visually these data contain 
very little compared to the oscillations at lower Q and seem to become 
progressively less interesting the further you go, as there is a larger 
and larger "random" component due to thermal motion. Measuring this just 
so you can do one transform of the data instead of transforms of the 
computed PDF and derivatives seems like a dubious use of resources? 
Since the ToF instruments get this data whether they like it or not, the 
one transform approach is entirely sensible there. For x-rays and CW 
neutrons, it seems there is a Rietveld program out there waiting to be 
modified.

August is still with us, Happy Silly Season!
Jon


Re: gnuplot, powder pattern

2004-08-19 Thread Jon Wright
Friedrich,
Assuming you can use gsas2cif (or reflist or something) plus a text 
editor to get an ascii file containing a column of peak positions then 
in gnuplot I use:

top=0
bottom=-100
set bar 0.0
plot "hklfile.dat" u (n):(bottom):(bottom):(top):(top) with financebars
where n is the column in your ascii file of reflection positions. Top 
and bottom align the reflection markers wherever you want. If you have 
hkls as well as positions you can even make the top and bottom positions 
functions of hkl to mark out superstructure peaks or potential 
systematic absences (eg: f(h,k,l)= (h+k)%2==0 ? top+10 : top ) . In 
practice I use powplot with gsas but this works well for prodd and other 
programs which give hkl positions on a list file and obs and calc data 
somewhere.

Jon
Friedrich W. Karau wrote:
Dear colleagues,
does someone know, how to make reflection tickmarks with gnuplot using 
the PATE output or some GSAS output in an easy way? The tickmarks 
should be in GSAS POWPLOT style. I would avoid to type every 
reflection tickmark in the configuration file.

Thank you for your help
Friedrich



Re: fundamental parameters approach

2004-06-04 Thread Jon Wright

Is the fundamental parameter approach better than
mathematical approach used in most of the Rietveld
refinement programs? 

Perhaps someone is about to explain the difference is between 
"fundamental parameters" and anything else? I used to think it might 
mean convoluting something which was actually measured into the 
peakshape description, but this doesn't always seem to be the case? I'm 
guessing it has to be more than choosing suitable equations for peak 
width parameters and peak positions as a function of scattering 
variable, otherwise all programs are using fundamental parameters 
already, just some are better approximations for certain diffractometers 
than others.

In any case, if the calculated peakshape matches the observed peakshape 
then it makes no difference for refinement of a crystal structure. For 
deriving microstructural parameters, like "size" and "strain", then a 
better description of the instrument can help, and can be a good 
indicator of diffractometer misalignment. In that sense, zero shift is a 
fundamental parameter, but does not seem to be unique to "fundamental 
parameters" programs. Perhaps the difference is that programs which 
don't do "fundamental parameters" make you compute "size" and "strain" 
from the peakshape parameters yourself.

Jon


Re: GSAS informations

2004-04-26 Thread Jon Wright
>... to answer to your (too) long questions. May be later, OK?
Going back to this quartics versus ellipsoids peak broadening stuff, 
maybe I can summarise:

Why should the distribution of lattice parameters (=strain) in a sample 
match the crystallographic symmetry? If the sample has random, isolated 
defects then I see it, but if the strains are induced (eg: by grinding) 
then I'd expect the symmetry to be broken. Suggests to me the symmetry 
constraints should be optional, and that the peak shape function needs 
to know about the crystallographic space group and subgroups. Either the 
program or the user would need to recognise equivalent solutions when 
the symmetry is broken (like the "star of k" for magnetic structures).

Why would anyone have anything against using an ellipsoid? That same 
function can be described by the quartic approach, it just has less 
degrees of freedom.

In short, I don't understand why there is such a strong recommendation 
to use the quartics instead of ellipsoids or why the symmetry is not 
optional. I'm still pursuing this because I have looked at something 
with a very small anisotropic broadening which seems to fit better with 
an ellipsoid which breaks the symmetry compared to a quartic which doesn't!

Thanks for any advice,
Jon


Re: GSAS informations

2004-04-14 Thread Jon Wright

Not violating symmetry restrictions you may either
have the sphere with the terms 11=22=33 and 12=13=23=0
or something else allowing the 12=13=23 terms to be equal
but different from 0. These two possibilities are all you can do
in cubic symmetry with h,k,l permutable. If I am not wrong.
The (111) and (-111) come out with different widths if the {12=13=23} != 
0, but the quartic's have declared this blasphemous as they are symmetry 
equivalents... perhaps you should go into hiding before they burn you at 
the stake[*].  There would be an equivalent "solution" with (-12=13=23, 
etc), so the "something else" is an ellipsoid along the 111 direction, 
and you can find the same "solution" if you put the ellipsoid along any 
of the 111 directions. Same solutions, but some have the crystal upside 
down or on its side. Sometimes happens if you drop your crystal.

So if  (111) and (-111) do not need to be equal in width, someone is 
hopefully about explain to me why (100) and (010) do need to be equal in 
width in cubic symmetry. I don't understand why they do, and for a 
single crystal I have a vague memory of seeing them measured as being 
different (it is a very vague memory and I might have been mistaken). In 
a powder you don't know which direction is which anymore, but squash 
some cubic grains and then shake them up, and each grain will probably 
remember which way was up when you squashed it. If you use a subgroup of 
your crystal spacegroup for the peak widths then you'll find the same 
solution moved by the symmetry operators you threw out to make the 
subgroup. Nothing surprising there. I wouldn't generally expect the 
crystal defects to have the full crystal symmetry, but some people seem 
to be insisting they should have. I am curious as to how that comes 
about, especially if the defects interact with each other and eventually 
gather themselves up into a full blown symmetry breaking small 
distortion. If I were to implement this stuff in PRODD, should I force 
the users to apply the symmetry or not? If not it means taking care 
internally to sum over the equivalents when computing the peakshape. If 
that sum is not carried out, then I can see why problems could arise, 
but otherwise it seems unreasonable to insist that everyone use the full 
crystal symmetry?

So can someone just tell me why the symmetry is not optional?

Jon

[*] Please excuse my sense of humour.


Re: GSAS informations

2004-04-13 Thread Jon Wright
Nicolae Popa wrote:

the condition to not violate some elementary principles, in particular,
here, the invariance to symmetry. 

Dear Prof Popa,

I had been meaning to implement the quartic form for peak width in a 
refinement program for some time, but did not figure out how to generate 
the constraints from a general list of symmetry operators. Is there a 
simple trick for doing this? I was thinking of just choosing a 
particular peak like hkl=(1,2,3) where all the fifteen quartic terms 
look to be different, and then just comparing the terms generated by 
symmetry equivalent peaks. Then if the code for getting equivalent 
reflections is OK, the constraints can be determined, messily, that way. 
Is this on the right track?

I was also wondering if the user should be allowed to choose a different 
symmetry for the peak widths compared to the crystal structure. I 
thought it was possible to measure quite different peak shapes for 
equivalent hkl's in heavily deformed single crystals...? Something like 
a powdered iron sample which has strains introduced by being squashed 
between two magnets might have a lower symmetry for the peak broadening 
compared to the cubic crystal symmetry. Does the powder averaging 
somehow wash this kind of effect away? What about small distortions 
which only show up in the peak widths and where the space group change 
does not generate superstructure peaks (eg P4/m to P2/m)? At some point 
in a series of datasets going through a second order transition you'd 
want to go from tetragonal to monoclinic in a smooth way. What about 
samples containing a mixture of strained and unstrained crystallites? I 
guess that ought to modelled as two phases?

Thanks in advance for any advice. Sadly I don't know if I will ever find 
the time to get this peakshape stuff going until I have a sample that 
really needs it...

Jon

PS: With the debate raging over which is the "right" approach, I got 
very confused, is one an extension of the other...? Assuming you 
normalise the L11 etc by the reciprocal cell metric tensor elements (who 
wouldn't ?-), then making them all equal gives isotropic strain. 
Allowing those Lij to be different from each other gives an ellipsoid 
with six degrees of freedom, which would seem to be related to the S_hkl 
parameters as below. Going to the quartic form just allows the elements 
in that matrix to be inconsistent with the Lij parameters (eg the 
entries for L11*L11, L22*L22 and L11*L22 do not agree on the L11 and L22 
values). In implementing this stuff it would seem more sane to define 
Lij as being normalised by the cell parameters (RM11 etc in GSAS) and to 
subtract off the isotropic contribution (LY). Then define the S_hkl in 
terms of the Lij and "LY" - with the diagonal elements as Lij^2 and the 
off diagonal elements as being the difference between the values needed 
to fit the peaks, and the values predicted by the Lij's. This means that 
the anisotropic broadening model with just one direction (stec? in GSAS) 
can also be implemented by making two of the eigenvectors of an Lij 
based matrix be equal (you could refine a direction).

It might be more work for the programmer, but numerically things should 
be more stable if the functions refined are more orthogonal to each 
other. Fitting the anisotropy as the difference between the peak widths, 
rather than their absolute values should make life easier, at least for 
getting the fit started. Also I much prefer to have refined values where 
I can ask if something is within esd of being zero! (Is the anisotropic 
broadening "significant"? Am I in a false minimum?)

width = sqrt{ ( L11 h*h )   ( L11 h*h )^T }
              ( L22 k*k )   ( L22 k*k )
              ( L33 l*l )   ( L33 l*l )
              ( L12 h*k )   ( L12 h*k )
              ( L13 h*l )   ( L13 h*l )
              ( L23 k*l )   ( L23 k*l )
giving: (work out the h,k,l powers from the 11, 12 etc)

( L11*L11, L22*L11, L33*L11, L12*L11, L13*L11, L23*L11 )
( L11*L22, L22*L22, L33*L22, L12*L22, L13*L22, L23*L22 )
( L11*L33, L22*L33, L33*L33, L12*L33, L13*L33, L23*L33 )
( L11*L12, L22*L12, L33*L12, L12*L12, L13*L12, L23*L12 )
( L11*L13, L22*L13, L33*L13, L12*L13, L13*L13, L23*L13 )
( L11*L23, L22*L23, L33*L23, L12*L23, L13*L23, L23*L23 )
The fifteen S_hkl parameters are then:
 1 L11*L11    4 L11*L12    7 L22*L23
 2 L22*L22    5 L11*L13    8 L33*L13
 3 L33*L33    6 L22*L12    9 L33*L23
10 L11*L22 + L12*L12
11 L11*L33 + L13*L13
12 L22*L33 + L23*L23
13 L12*L23 + L22*L13
14 L13*L23 + L33*L12
15 L12*L13 + L11*L23
So one could make a peakshape with the 6 by 6 matrix above and implement 
all of the different models (single direction, ellipsoid and quartic 
form) in terms of the constraints on the matrix elements.
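
A throw-away sketch of that bookkeeping - expand the square of the quadratic
form and collect the fifteen coefficients in the numbering of the list above
(the (H,K,L) keys give the powers of h, k and l each term multiplies; any
factors of two that a particular program folds into its own definitions are
ignored here):

def quartic_from_ellipsoid(L11, L22, L33, L12, L13, L23):
    # the fifteen quartic products, numbered as in the list above (factors of 2 dropped)
    return {
        (4, 0, 0): L11 * L11,                  #  1
        (0, 4, 0): L22 * L22,                  #  2
        (0, 0, 4): L33 * L33,                  #  3
        (3, 1, 0): L11 * L12,                  #  4
        (3, 0, 1): L11 * L13,                  #  5
        (1, 3, 0): L22 * L12,                  #  6
        (0, 3, 1): L22 * L23,                  #  7
        (1, 0, 3): L33 * L13,                  #  8
        (0, 1, 3): L33 * L23,                  #  9
        (2, 2, 0): L11 * L22 + L12 * L12,      # 10
        (2, 0, 2): L11 * L33 + L13 * L13,      # 11
        (0, 2, 2): L22 * L33 + L23 * L23,      # 12
        (1, 2, 1): L12 * L23 + L22 * L13,      # 13
        (1, 1, 2): L13 * L23 + L33 * L12,      # 14
        (2, 1, 1): L12 * L13 + L11 * L23,      # 15
    }

print(quartic_from_ellipsoid(1.0, 2.0, 3.0, 0.1, 0.2, 0.3))   # an arbitrary ellipsoid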

Can someone tell me if all that is "right" or "wrong"? I was kind of 
hoping that if I ever do get around to it, I could have a single peak 
shape function which works for everything and never needs to be messed 
about with again...!

Would there be any rea

Choosing origins

2004-03-31 Thread Jon Wright


I am amazed by the flow of miss information that flows on this list whenever an
apparent problem with a space group comes up. 

I asked a related question on sci.techniques.xtallography a few weeks 
ago, but have yet to hear anything, misinformation or otherwise. If 
anyone here can give me some pointers, I'd be very grateful. I just want 
to find all the allowed equivalent origin choices for comparing 
structures, and I'm wondering if there is a way to choose a specific one 
(for example in terms of the phases of certain reflections?).

Thanks,

Jon

Forwarded from sci.techniques.xtallography, with my apologies if you 
have seen it before.

I was looking at models coming back from a molecular replacement
program being run using various datasets and then trying to decide if
the models are "good" or "bad", and therefore if the data were "good"
or "bad". In a specific example with space group P212121, frequently
the resulting model was found displaced by <1/2,0,0> from the ideal
position (and invariably moved still further away by one of the 21 screw axes).
[All programs are using x,y,z; 1/2-x,-y,1/2+z; -x,1/2+y,1/2-z;
1/2+x,1/2-y,-z for P212121.]
From looking at the space group diagrams in Int Tables, this seems to
be a perfectly good origin shift, as the symmetry operators are
arranged around [1/2,0,0] in the same way as [0,0,0]. So I wrote a
little script which applies all the origin shifts and symmetry
operators to a test model and tells me which origin shift and symmetry
operator gives the closest fit to a target model. All well and good for
P212121, but now I was thinking that one day I might want to do this
for another space group...
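
In case the brute force version is useful to anyone, a minimal sketch for
P212121 only - the eight permissible shifts are typed in by hand, distances are
compared in fractional coordinates, and enantiomer flips and atom re-ordering
are deliberately left out:

import itertools
import numpy as np

# the P212121 operators quoted above
symops = [
    lambda p: np.array([p[0], p[1], p[2]]),
    lambda p: np.array([0.5 - p[0], -p[1], 0.5 + p[2]]),
    lambda p: np.array([-p[0], 0.5 + p[1], 0.5 - p[2]]),
    lambda p: np.array([0.5 + p[0], 0.5 - p[1], -p[2]]),
]
# permissible origin shifts for P212121: every combination of 0 and 1/2
shifts = [np.array(s) for s in itertools.product((0.0, 0.5), repeat=3)]

def best_match(model, target):
    # model, target: (natoms, 3) fractional coords with atoms in the same order
    best = None
    for shift in shifts:
        for op in symops:
            moved = np.array([op(p) + shift for p in model])
            d = moved - target
            d -= np.round(d)                     # wrap into [-1/2, 1/2)
            score = np.sqrt((d ** 2).sum(axis=1)).mean()
            if best is None or score < best[0]:
                best = (score, shift, op)
    return best

Generating that list of shifts for an arbitrary space group is exactly the open
question; my understanding is that the Euclidean normalizer is what describes
the permissible origins, but I would be happy to be corrected.
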
The first attempt to generalise was to apply the space group symmetry
to the point [0,0,0], which gives me three face centers, but misses
the body center and points <1/2,0,0>. Then it occurred to me to look at the
Patterson symmetry (apparently Pmmm here) and from that I could
probably have gotten a list of possible origin shifts, with a concern
about sometimes flipping enantiomers. Now I'm scared that one day I'll
meet a trigonal thing which has hexagonal Patterson symmetry and could
come back rotated by 60 degrees, but still be the same structure!
So the question is: "How can the full list of coordinate transformations
be generated which leave a structure invariant?"
For P212121 it seems that "add [0.5,0,0]" is allowed, but I didn't see
how I should figure that out from the info in Int tables, or
algorithmically.
There's a followup: "How should the transformation be chosen in order
to end up at a unique and reproducible representation of the
structure?"
Would something like platon just do all this? At least one pair of
structures in the PDB database seem to represent different choices
about this origin shifting, but they represent the same packing and
structure... realising that was not as straightforward as it would
have been had both structures been recorded in a standardised way.
Thanks in advance,

Jon




Re: Is it really preferred orientation problem?

2004-03-15 Thread Jon Wright
zhijian fan wrote:

*> Like Rp, Rwp, Chi2, RB are respectively from 3.96, 5.33, 2.68, 5.80
> to 3.35, 4.42, 1.85, 3.07.
This should pass a test of statistical significance. *
I made a hypothesis testing. H0: It needs not PO correction.
The model with PO correction:
G1=sum[wi(Yi-Yc,i)^2]=(n-p1)*chi2=715*1.85=1322.75.
The model without PO correction:
G2=sum[wi(Yi-Yc,i)^2]=(n-p2)*chi2=717*2.68=1921.56,
Then the F distribution is:
F=[(G2-G1)/(p1-p2)]/[G1/(n-p1)]=[(1921.56-1322.75)/2]/1.85
=161.84
The corresponding p value is 0.0. Hence the H0 are rejected.
*(2.)The course above is true?*
It's conventional at least, although I haven't checked your numbers.

In the appendix of the paper(1965) written by Hamilton, it says"
Interpolation procedures for values of b and (n-m) not
found in the tables are based on the fact that interpolation in F
maybe carried out on the reciprocals of the degree of freedom."
*(3.) Who knows how to use interpolation to get F values not
listed in the F table? Or is there a free program which may
calculate it? * Because in realities, the degree of freedom is
always too large.
There is a subroutine in the imsl subroutine library (not free, but I 
guess anyone with a license could compile a program which supplies the 
number). I think you already passed Hamilton's test anyway, a 1% change 
on Rwp is massive. The problem of degrees of freedom being "too large" 
is indeed one of the challenges of applying this to powder data. ... 
"Lies, damned lies and statistics".

*(5.) The model having anisotropic temperature factors and the one
just having isotropic temperature factors are nested? If making a
test with F distribution, usually how large the alfa value is
taken? 0.05 too?*
In practice there are not too many people doing this routinely, but if 
you mean that you need to add the anisotropic thermal factors with a 95% 
probability that they are required, then that sounds about right. In my 
experience it always came out higher than that but didn't really change 
my views about a model, which probably means I should be burned at the 
stake for heresy. You could also ask what Rwp you needed to get the 
model to be at this 95% confidence level, I think the change will be 
very small.

Here's another idea that I wanted to run past the list, and might prove 
useful for you as well. How about trying to get something like the RFree 
statistic that some single crystal people use? Take your model with no 
preferred orientation and add a few excluded regions which throw out 
some of your peaks (wide enough to hide whole peaks, not just half of a 
peak). Then refine your preferred orientation and see if it comes to 
about the same result as before, and if it correctly predicts the peaks 
which you threw out. Clearly you need to have more data than parameters 
for this to work, but if your refinement can predict things about data 
which it doesn't know about, then you can feel more confident that the 
model is good.

Hope this helps,

Jon







Re: Magnetic Form Factor of Ru+4

2004-03-09 Thread Jon Wright
Ozhan Unverdi wrote:

Could someone give me the magnetic scattering factors of Ru+4 please?
   

www.google.com searching for "Ru4+ magnetic form factor" gives 17 results.

One of them is Sidis et al, "Evidence for Incommensurate Spin 
Fluctuations in Sr2RuO4", Phys. Rev. Lett (1999) 83, 3320.

In that paper they say: "The intensity of the scattering can be 
reasonably well described by the squared magnetic form factors of the 
Ru+ ion [19] (note that the magnetic form factor is Ru4+ is not 
available)". Ref 19 is international tables vol C.

So it seems unlikely that anyone is about to give you the 8 numbers you 
seek, although that paper is five years old. You could try plotting out 
the Ru+ form factor and squaring it to get an approximation to the Ru4+ 
form factor. Or you could try to do it analytically if you know what the 
8 numbers mean, I'd guess at sum of A(I)*exp(-B(I)*SQ) for I=1 to 4 with 
SQ=(sin(theta)/lambda)**2, and no constant term based on what GSAS uses. 
It would probably be a good idea to try testing all of Ru, Ru+ and 
(Ru+)**2 form factors to see if there is any difference in the results.
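
For reference, the analytic form the tables use for <j0> is three Gaussians
plus a constant, <j0(s)> = A exp(-a s^2) + B exp(-b s^2) + C exp(-c s^2) + D
with s = sin(theta)/lambda, so plotting and squaring it is straightforward; a
throw-away sketch with placeholder coefficients (not real Ru values):

import math

def j0(s, A, a, B, b, C, c, D):
    s2 = s * s
    return A * math.exp(-a * s2) + B * math.exp(-b * s2) + C * math.exp(-c * s2) + D

coeffs = (0.5, 12.0, 0.3, 5.0, 0.15, 1.5, 0.05)   # placeholders only, sum to 1 at s=0
for s in (0.0, 0.2, 0.4, 0.6):
    print(s, j0(s, *coeffs), j0(s, *coeffs) ** 2)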

Going further into the google results you might also want to read 
Gukasov et al, Phys. Rev. Lett (2002) 89, 087202. They seem to discuss 
the form factor of Ru in Ca1.5Sr0.5RuO4 quite extensively.

Good luck!

Jon



Re: CIF to powder pattern

2004-03-08 Thread Jon Wright

With all the file converters floating around, what are the chances that we
are going to proliferate files with corrupted formats?  I have gotten some
files already that were supposed to be "GSAS files" but didn't have 80
columns, had blank lines, etc., and I assume they were made by file
converters that created the wrong format.  Has anybody else run into this?
 

Once saw a case where an x,y,esd file was converted into an x,y file in 
GSAS format, throwing away the esd information. This was a shame as it 
screwed up the refinement statistics. A collection of open source 
converters could indeed be a blessing, particularly when that makes it 
easier for data to be produced in a suitable format at the earliest 
possible time, avoiding potential mishaps when converting later.

Jon




Re: Is it really preferred orientation problem?

2004-03-05 Thread Jon Wright
> Like Rp, Rwp, Chi2, RB are respectively from 3.96, 5.33, 2.68, 5.80
> to   3.35, 4.42, 1.85, 3.07.

This should pass a test of statistical significance. That does not mean
it really is preferred orientation, but if it is not preferred
orientation, then you probably want to find out what it is.

> By the way, I haven't known how to calulate the site radius so far. I 
> have read the other two papers 
> following the suggestion of Dr Alan Hewat, but these two papers 
> didn't tell the calculation course in
> details. Is there some programs which can calculate it?
> 
> ----------------------------------------------------------------
> D atoms   Environment       Site radius   Occupancies
>           (angstrom)        (angstrom)    (%)
> ----------------------------------------------------------------
> D1  2b    1  Ni1  1.589     0.387         16.0%
>           3  Ni3  1.640
> D2  2b    1  Ni2  1.606     0.377         15.5%
>           3  Ni3  1.623
> D3  6c    1  La   2.595     0.548         23.0%
>           1  La   2.461
>           2  Ni3  1.654
> ...

D1 has one neighbour Ni which is 1.589 angstrom away and 3 neighbours
Ni3 which are 1.640 Angstrom away. The distance from the centre of the
D1 atom to the surface of the neighbouring atoms is just the distance
minus the radius of the atoms, ie: 1x0.347 and 3x0.398, as you noted. If
you assume the site is tetrahedral, what is the largest sphere which can
fit in the space which the three neighbours enclose? A quick and dirty
approximation is (0.347+3*0.398)/4=0.385. Presumably the extra 0.02
comes from working it out properly, so let's try to do that...

General equation of a sphere centred at [a,b,c] in cartesian co-ords is:
(x-a)^2+(y-b)^2+(z-c)^2=r^2.
Substitute in points for the x,y,z co-ordinates of the closest bit of
the Ni surfaces. In general you are going to need four points to define
a cavity in this way. This is easiest if you take a unit cube and put
the points on alternate corners of the cube.  Thus the points are at
[u,u,u],[-1,-1,1],[-1,1,-1],[1,-1,-1], where u=(0.347/0.398) and one
unit in this space is 0.398/sqrt(3). (This is a shortcut, I've assumed
the tetrahedral angles are equal and the closest Ni site is displaced
along the body diagonal of the cube, convince yourself about u by
drawing it out).

u^2-2ua+a^2 + u^2-2ub+b^2 + u^2-2uc+c^2 = r^2   ; [ u, u, u] ; eq 1
1  +2a +a^2 + 1  +2b +b^2 + 1  -2c +c^2 = r^2   ; [-1,-1, 1] ; eq 2
etc.

By symmetry a=b=c. You could solve it without noticing that, but life is
too short.
eq 1   = r^2 = eq 2  = r^2
3(u^2-2ua+a^2) = r^2 = 3 + 2a + 3a^2 = r^2
a = 3(u^2-1)/(6u+2) = -0.0995

So "a" is a small negative number - the centre of the sphere is
displaced away from the closest neighbour - as we would expect.

Finally r^2 = 3 + 2a + 3a^2 = 2.8307

So that the "site radius" is sqrt(2.8307)*0.398/sqrt(3)=0.3866

Which rounds to the 0.387 given in the paper you cite. I leave the
others to your own pencil and paper. The point is that the site radius
is a function of only the neighbouring atoms - it has nothing to do what
is placed on the site, or where you put it within the cavity. A computer
program which told you this number would not seem to help any more than
a paper which tells you this number, although I can see that it is
tedious to calculate in the general case.
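
For anyone who wants to grind through the other sites, the same steps in a few
lines of python (this reproduces the 0.387 above):

import math

def tetrahedral_site_radius(close, far):
    # distance-to-surface of the one close and three far neighbours,
    # using the unit cube construction described above
    u = close / far
    a = 3.0 * (u * u - 1.0) / (6.0 * u + 2.0)   # centre shifts away from the close atom
    r2 = 3.0 + 2.0 * a + 3.0 * a * a
    return math.sqrt(r2) * far / math.sqrt(3.0)

print(tetrahedral_site_radius(0.347, 0.398))    # ~0.387 for the D1 site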

Hope this helps,

Jon



Re: Bragg R and GSAS

2004-02-26 Thread Jon Wright
In comparing refinements with different powder datasets there is no 
"number of observations" in common usage and no 10:1 rule of thumb. 
Broadly, this means "crappy" data with a chemically unreasonable model 
can sometimes give much better figures of merit than a good structure 
with "good" data.

Sad, isn't it?

Jon

Von Dreele, Robert B. wrote:

Nandini,
I suggest you look at the paper "Rietveld refinement guidelines", J. Appl. Cryst. 
(1999) 32, 36-50.
Bob Von Dreele
-Original Message-
From: Nandini Devi Radhamonyamma [mailto:[EMAIL PROTECTED]
Sent: Thu 2/26/2004 8:38 AM
To: [EMAIL PROTECTED]
Thank you.
In parallel with single crystal analysis, is there an
accepted limit for R(F2) or any other parameter like
Rexp to follow? I'm using synchrotron data.
nandini
 





RE: Riet_L: Scale factor in Rietveld (with a question for Bob and Juan)

2000-09-15 Thread Jon Wright

Paolo,

On Fri, 15 Sep 2000, Radaelli, PG (Paolo)  wrote:
.
> Finally, here is a question for Bob and Juan.  To me, it would be much more
> natural to remove Vo from the scale factor, that is to redefine a new S' so
> that
>  
> Y=S'*L*A*E*|F|^2/Vo^2 and S'=K*Ltot*f
> 
> This way, the scale factor will only depend on the sample effective density
> and not its crystal structure.  This is very useful in phase transitions
> involving a change in the size of the unit cell, as you can imagine.  
> Is there any rationale in doing it the way it's currently done?

[Admittedly it wasn't a question for me, so sorry for butting in.]
Here are two unit cells and scale factors fitted to the same (x-ray) 
dataset using PRODD. I'm assuming this is what was meant? The cell doubles
and the scale stays the same (roughly). I'd be pretty confident it does
the same with TOF and CN data provided it's a recent version.

C  8.392690   8.392690  16.775499  89.83088  89.83088  89.80757
L SCAL  91322.

C  8.392508   8.392508   8.387020  89.83144  89.83144  89.80786
L SCAL  91479.

Suffice to say the use of cell volume in scale factors within CCSL was
rather vague until we tried to quantify a multiphase sample. I think a
factor was in for lab x-ray data which still doesn't work anyway, but not
for TOF or CW neutrons. The rationale for doing it this way was that I
weighed the multiphase sample myself and this was the only way to get
sensible numbers out. GSAS got the weight fractions right as well, so the
volume factor must be in there somewhere...

Does this mean GEM now produces data where histogram scale factors may be
constrained to be equal across all banks? An impressive achievement, but
it opens a can of worms with sample absorption and attenuation. It has
been suggested that absorption corrections are something which belong
firmly with the data, as they depend on the geometry, size and packing
density of the sample (packing density can be measured). As such the A*
values could be supplied to the program, and applied to the model. Bruce
was sorting this out for PRODD. Any comments on this idea? It has been
rumoured that absorption (for neutrons at least) is not a refinable
quantity, but something that ought to be known(!)

Best wishes,

Jon


 Dept. of Chemistry, Lensfield Road, Cambridge, CB2 1EW
   Phone-Office 01223 (3)36396; Lab 01223 (3)36305; Home 01223 462024





Re: Powplot problems with NT

2000-08-08 Thread Jon Wright

On Tue, 8 Aug 2000, Brian H. Toby wrote:

> Brian Mitchell wrote:
> > 
> > I have recently installed Windows NT on my laptop and find that the GSAS
> > graphics plot takes an excessive amount of time to appear. I did not have
> > this problem with Win 98.
> > 
> > Any help/advice appreciated.
> 
> My suggestion is to upgrade to Linux.
> 
> Brian

Ouch :-)

It sounds like it's something to do with your graphics drivers. Can you
check to see if you have equivalent ones before and after the change? Was
Win98 installed with a specialist driver and now you have a bog standard
one? With the PC I'm working on I get a crash if I look at a plot in a
window, as opposed to full screen! Try fiddling with your display
settings, running in dos mode and getting the latest graphics driver for
your machine. If you are low on graphics memory it may help to switch to
fewer colours or lower resolution. Also see if both 'A' and 'B' options
take the same amount of time.

The other suggestion is to avoid powplot altogether. Get hold of one of
the "alternative" gsas plotting utilities. eg DMPlot etc. Also there is a
tcldump utility within gsas which gives you an ascii file which can be fed
to a plotting program, probably via a script. Sounds like there is an
opening in the market here for something which can feed GSAS data to
something like winplotr? 

Hope this helps,

Jon


 Dept. of Chemistry, Lensfield Road, Cambridge, CB2 1EW, UK




Re: Lp correction in GSAS

2000-07-11 Thread Jon Wright

On Mon, 10 Jul 2000, Brian H. Toby wrote:

> There are three polarization functions listed in the manual, but only 2
> when I try to select the function in EXPEDT. I seem to remember that the
> 1st two functions are equivalent, other than scaling, but I am not going
> to work this mystery today.

My apologies for such a public display of ignorance, I have a related 
mystery to ask about. Having had a quick look at the manual, all three
corrections are for parallel geometry, I'm guessing "no monochromator",
then "incident beam mono." and then "diffracted beam mono.". OK. 

The question is what about flat plate transmission (STOE) type data versus
Bragg-Brentano versus Debye-Scherrer? Doesn't a factor of cos(theta) at
least turn up in the transmission data? This is where I start to feel
rather stupid, but how can you specify these cases? It doesn't appear to
come into the instrument parameter file or the Lorentz corrections
discussed here?

Any hints or advice would be appreciated. Probably Bob has the best idea
of what GSAS is doing, and how it's doing it. I've heard rumours of
manual "corrections" being applied to data before presenting it for
refinement, which sounds ugly. Fullprof has a flag for transmission
geometry, hence my question.

Thanks in advance,

Jon Wright

PS : If someone could refer me to simple derivations of the LP
corrections for all the various geometries, I'd be very grateful.
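
For what it is worth, my reading of the textbook expressions, up to constants
that disappear into the scale factor, is sketched below - this is what I think
the corrections are, not a statement of what GSAS actually applies:

import math

def lp_unpolarised(two_theta_deg):
    # Lorentz-polarisation, unpolarised source, Bragg-Brentano reflection
    t = math.radians(two_theta_deg)
    return (1.0 + math.cos(t) ** 2) / (math.sin(t / 2.0) ** 2 * math.cos(t / 2.0))

def lp_with_mono(two_theta_deg, two_theta_mono_deg):
    # ideally mosaic monochromator: the polarisation term picks up cos^2(2theta_m)
    t = math.radians(two_theta_deg)
    tm = math.radians(two_theta_mono_deg)
    pol = (1.0 + math.cos(tm) ** 2 * math.cos(t) ** 2) / (1.0 + math.cos(tm) ** 2)
    return pol / (math.sin(t / 2.0) ** 2 * math.cos(t / 2.0))

print(lp_unpolarised(30.0), lp_with_mono(30.0, 26.6))   # 26.6 ~ graphite 002 with Cu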


 Dept. of Chemistry, Lensfield Road, Cambridge, CB2 1EW, UK






Getting background subtracted r-factors from GSAS?

2000-06-21 Thread Jon Wright

Hi all,

Does anyone know if there is a quick way of getting at a background
subtracted r-factor within GSAS? (Without wishing to start a discussion
about why I might want one) Thus far the only idea I've had is to
calculate it by hand from hstdump, but that appears to be listing the
background as zero, which it most certainly isn't.

Any ideas appreciated, I'm using a linear interpolation background
function so if I assume that consists of evenly spaced points (in ToF),
with one at each end of the pattern, then I guess I can calculate it. Is
the assumption right?

Thanks in advance,

Jon


 Dept. of Chemistry, Lensfield Road, Cambridge, CB2 1EW, UK




Re: GSAS and CYCLIC refeinements ?

2000-06-02 Thread Jon Wright

Alex,

There's something below which might help you. You'd need to set up a
refinement with a .exp file and a .raw file with the same root name (eg
fit.raw and fit.exp). The batch file then takes two arguments, the data
file name (eg run1.raw) and the root of name of the .exp file (eg. fit for
fit.exp). It just copies the appropriate data onto fit.raw and does a
powpref, genles and pubtable. I used it to follow a cell parameter as a
function of temperature. Use the 'for' command to apply this to multiple
files as explained below. See alt.msdos.batch for further info, it should
be possible to customise it to do just about whatever you want. If you
have windows NT I think you need to remove the /Y from the copy commands.

Hope this helps,

Jon


 Dept. of Chemistry, Lensfield Road, Cambridge, CB2 1EW
   Phone-Office 01223 (3)36396; Lab 01223 (3)36305; Home 01223 462024

-ramp.bat

rem - Batch file for fitting multiple gsas files
rem - Type the following to use it.
rem - "for %1 in (scan*.raw) do call ramp %1 something"
rem - where scan*.raw is a wildcard list of filenames
rem - and  'something' is your .exp/.raw file
rem
rem - Copy the data of interest over the data in the dummy/inital fit
copy /Y %1 %2.raw
rem
rem - write to a list file the name of the data file (%1, which is the
rem - first argument given to ramp.bat)
echo %1 >> %2.lis
rem
rem - Fit the data and get some results (%2, the second argument to ramp
rem - it should be the name of a .exp file) 
powpref %2
genles %2
pubtable %2
rem
rem - use the find command to get the cell parameter from the .tbl file
find "a =" < %2.tbl >> %2.lis
rem
rem - save the results from this fit
copy /Y %2.tbl %1.tbl
exit





Fitting lorentzian tails in GSAS

2000-03-21 Thread Jon Wright

Hi all,

I've got some high resolution data from a synchrotron with quite a lot of
sample broadening. A quick go with a curve fitting program has shown that
fitting a pseudo-Voigt peakshape leads to a negative eta parameter. So
it is more than 100% "lorentzian" in character. I can get quite nice fits
to individual peaks with a sum of two Lorentzians, or with the strange
pseudo-Voigt. To refine the data I want to give a peakshape like this to a
Rietveld refinement program.
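
To make "a sum of two Lorentzians" concrete, this is the sort of single-peak
fit I mean - a scipy sketch on made-up data, not the real peaks:

import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, x0, fwhm, area):
    g = fwhm / 2.0
    return area * g / (np.pi * ((x - x0) ** 2 + g ** 2))

def two_lorentzians(x, x0, fwhm1, area1, fwhm2, area2, bkg):
    # same centre, one narrow and one broad component, flat background
    return lorentzian(x, x0, fwhm1, area1) + lorentzian(x, x0, fwhm2, area2) + bkg

x = np.linspace(9.5, 10.5, 500)
y = two_lorentzians(x, 10.0, 0.01, 50.0, 0.08, 30.0, 5.0)
y = y + np.random.normal(0.0, np.sqrt(y))     # fake counting noise
p0 = (10.0, 0.02, 40.0, 0.1, 20.0, 4.0)
popt, pcov = curve_fit(two_lorentzians, x, y, p0=p0)
print(popt)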

Can this be done with GSAS without using more than one crystallographic
phase? As far as I can tell the answer is no - can anyone tell me
otherwise or lend any handy hints?  I'd like to be doing a combined fit of
this data with some high resolution TOF neutron data at some point in the
future, which seems to rule out many other programs. 

Pointers to source code for a suitable peakshape (with a low angle 
asymmetry correction) would also be appreciated.

Thanks in advance,

Jon Wright


 Dept. of Chemistry, Lensfield Road, Cambridge, CB2 1EW
   Phone-Office 01223 (3)36396; Lab 01223 (3)36305; Home 01223 462024




Re: Asymmetry

2000-03-07 Thread Jon Wright

Asymmetry is discussed in:
J.Appl.Cryst (1984) 17, p.47 - van Laar & Yelon  (+ ref's therein)

The above method is implemented in:
J.Appl.Cryst (1994) 27, p.892 - Finger, Cox & Jephcoat. 
and source code for this peakshape is available. (They also talk about
secondary monochromators).

Hope this helps,

Jon

On Tue, 7 Mar 2000, Bruce A. Weir wrote: 

> Does anyone know of a good reference which discusses peak asymmetry in
> low angle X-ray powder diffraction data? 
> 
> --
> 
> 
> Bruce A. Weir, Optoelectronics, Cavendish Laboratory, University Of
> Cambridge, Madingley Rd, Cambridge, CB3 0HE U.K. 
> 
> Tel: +44 01223 337285
> 
> 
> 
> 
> 


 Dept. of Chemistry, Lensfield Road, Cambridge, CB2 1EW
   Phone-Office 01223 (3)36396; Lab 01223 (3)36305; Home 01223 462024




Re: Absorption GSAS and TOF Instruments

2000-02-04 Thread Jon Wright

On Thu, 3 Feb 2000, Mitchell, Brian J. wrote:

> mu*R < 1 when dealing with CW neutron diffraction experiments.  I am not,
> however, familiar with the determination of sample absorption using a TOF
> instrument and at what "hard values" I should consider that the absorption
> is causing a real problem with the corresponding Rietveld refinement. 
> 
> Any help or input in this matter would be greatly appreciated.

I'm not sure how much this helps but my understanding is that the two-theta
dependence of the absorption is unimportant, unless you are using multiple
banks and constraining the scale factors across them (most people don't).
The wavelength dependence is important, even if mu*R seems quite small,
and for most elements the absorption cross section varies as 1/v, the
neutron velocity. Values are usually tabulated at v=2200m/s. This *is*
important and it tends to correlate with the Debye-Waller factor. Having
said that, it usually only means the absolute values are off, but relative
values for different atoms in the structure are still meaningful.
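
(As a rough illustration of the 1/v rule, with numbers that are only
indicative: since v = h/(m_n * lambda), the tabulated 2200 m/s cross section
just scales linearly with wavelength.)

# Sketch: scale a tabulated (2200 m/s) absorption cross section to an
# arbitrary neutron wavelength using the 1/v rule.
H = 6.62607e-34           # Planck constant, J s
M_N = 1.67493e-27         # neutron mass, kg
LAMBDA_2200 = H / (M_N * 2200.0) * 1e10   # ~1.80 Angstrom for a 2200 m/s neutron

def sigma_abs(sigma_2200_barn, wavelength_angstrom):
    # 1/v rule: cross section grows linearly with wavelength
    return sigma_2200_barn * wavelength_angstrom / LAMBDA_2200

if __name__ == "__main__":
    for lam in (0.5, 1.0, 1.8, 4.0):
        # 5.08 barn is roughly the 2200 m/s value for vanadium
        print(lam, "A :", round(sigma_abs(5.08, lam), 2), "barn")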

You can run into problems which depend on how well the data were normalised
for the incident intensity. See C. G. Windsor's book, 'Pulsed Neutron
Scattering', which goes through one methodology. Assuming it has all been
done correctly, you can either correct the raw data, or put mu*R into a
refinement program. Some say you should never refine the absorption, since
it is not an unknown.

As for when absorption causes a problem: I've seen an example of an
annular sample containing Gd (a HUGE absorber) giving a dataset which was
refinable for the crystal structure, with no absorption correction. The
thermal factors were duff, but in theory (!) they should come out right
when absorption is taken into account.  So absorption is only a problem
when the peaks disappear.

Hope this helps,

Jon Wright

PS: Can anyone tell me if sigma_abs goes as 1/v for Gd? 


 Dept. of Chemistry, Lensfield Road, Cambridge, CB2 1EW
   Phone-Office 01223 (3)36396; Lab 01223 (3)36305; Home 01223 462024



Re: Understatement of the year..so far

2000-01-10 Thread Jon Wright

On Mon, 10 Jan 2000, Jaime Alamo wrote:

> >PS: Has anyone found the error in that Fe3O4 entry yet?
> 
> I think,  It lacks the sqrt symbol,  so ...
> 
>  [a->(a+b)/sqrt(2), b->(a-b)/sqrt(2), c->2c]
> 
> Also,  it yields a negative setting that is less important.

?? I'm thinking of the vectors a, b and c, which I thought was implied, since
otherwise a - b would be zero for the cubic spinel. So I stick by the 2; sorry
about the negative setting. The Fe(1) z should be 7/16, but is given as 5/16,
a typo in the paper. Is anyone at ICSD listening to this?
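
(To make the vector point concrete, a few lines of numpy; the cubic edge of
8.39 A is only a rough figure for magnetite.)

# Sketch: apply a' = (a+b)/2, b' = (a-b)/2, c' = 2c to the cubic spinel cell
# *vectors* and look at the lengths of the new cell edges.
import numpy as np

a_cub = 8.39
a = np.array([a_cub, 0.0, 0.0])
b = np.array([0.0, a_cub, 0.0])
c = np.array([0.0, 0.0, a_cub])

a_new = (a + b) / 2.0
b_new = (a - b) / 2.0
c_new = 2.0 * c

for name, v in (("a'", a_new), ("b'", b_new), ("c'", c_new)):
    print(name, "length =", round(float(np.linalg.norm(v)), 3), "A")
# a' and b' come out at a_cub/sqrt(2), about 5.93 A, and c' at 16.78 A,
# i.e. the low-temperature cell quoted below, within the small distortion.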

As Alan points out, it is pretty clear once you look at the picture; the
problem was that I didn't look at the picture, and the distance and angle
calculations are not really unreasonable, for the first shell at least. I
only mean to suggest that a quality database should be, and is, more than
just copy-typing from journals, and as such could be worth as much as a
journal subscription, for example. Maybe more?

Anyway, how and why do I get embroiled in these discussions when I have
work to do? Apologies for filling up all your inboxes.

Jon


 Dept. of Chemistry, Lensfield Road, Cambridge, CB2 1EW




Re: Understatement of the year..so far

2000-01-10 Thread Jon Wright

On Mon, 10 Jan 2000, Armel Le Bail wrote:

> The number of papers published in crystallography which
> have incorrect space group or cell choice are quite numerous.
> Databases should show exactly the same number (5-10% 
> or even more ??).

But isn't one of the values of a database that it should filter these out
- or at least point out any problems on the test cards? (e.g. entry 16677 in
ICSD has the coordinates corrected.) Databases offer a central repository for
corrections and additions which are much easier to find than corrections
in later editions of a journal. This seems to be of great value? 

Cheers,

Jon

PS: Has anyone found the error in that Fe3O4 entry yet? 


 Dept. of Chemistry, Lensfield Road, Cambridge, CB2 1EW



Re: Understatement of the year..so far

2000-01-10 Thread Jon Wright

On Fri, 7 Jan 2000, Armel Le Bail wrote:

> 15mn for checking a crystallography set is about my
> performance too. 

Just out of interest, how long does it take people to pick up the
error in entry number 17280, attached below? It's just a cell
transformation from the cubic spinel [a -> (a+b)/2, b -> (a-b)/2, c -> 2c].

> I need more if there is really an error in the original. 

Well the error is in the original publication as well, but I don't
think this counts. It is just a transformation from the Fd3m setting, so
you could take a=b, beta=90 and have the cubic structure.

Perhaps I am just a bit slow, but I think it would take more than 15
minutes to check this, unless you have access to specialised software?

Happy new year all,

Jon
---

 COL  ICSD Collection Code 35002 (DATE=R821231/U 0 REL= 17280/ 1)
 NAME Iron diiron(III) oxide   MINR Magnetite low Spinel group
 FORM Fe3 O4= FE3 O4
 TITL Structure of magnetite Fe3 O4 below the Verwey transition   
  temperature 
 AUT  Iizumi M, Koetzle T F, Shirane G, Chikazumi S, Matsui M, Todo S 
 REF  ACBCA 38 (1982) P. 2121-2133
 JRNL Acta Crystallographica B (24,1968-38,1982)  
 CELL A=5.934(1) B=5.925(1) C=16.752(4) alpha=90.0 beta=90.0 gamma=90.0  V=589.0
Z=8 
 SGR  P m c 21  ( 26)
 PARM Atom Nr Ox Wy   x  y  z    SOF
  Fe1 +2.2a  0.00.00.3125   
  Fe2 +2.2b  0.50.00.0625   
  Fe3 +2.2b  0.50.50.1875   
  Fe4 +2.2a  0.00.50.3125   
  Fe5 +3.4c  0.25   0.50.0  
  Fe6 +3.2a  0.00.25   0.125
  Fe7 +3.2a  0.00.75   0.125
  Fe8 +3.4c  0.25   0.00.25 
  Fe9 +3.2b  0.50.25   0.375
  Fe   10 +3.2b  0.50.75   0.375
  O 1 -2.2a  0.00.2596 0.0024   
  O 2 -2.2b  0.50.2596-0.0024   
  O 3 -2.2a  0.00.7404 0.0024   
  O 4 -2.2b  0.50.7404-0.0024   
  O 5 -2.4c  0.2404 0.00.1274   
  O 6 -2.4c  0.2404 0.50.1226   
  O 7 -2.2a  0.00.2404 0.2476   
  O 8 -2.2b  0.50.2404 0.2524   
  O 9 -2.2a  0.00.7596 0.2476   
  O10 -2.2b  0.50.7596 0.2524   
  O11 -2.4c  0.2596 0.00.3726   
  O12 -2.4c  0.2596 0.50.3774   
 REM  M Idealized coordinates derived from structure above 120 K, cp. 
   35003   
   V=589.0  
  M Real cell has a and b doubled, C1c1 with beta=90.20(3), Z=32
 TEST Calc. density unusual but tolerable  
 TEST No R value given


 Dept. of Chemistry, Lensfield Road, Cambridge, CB2 1EW



Cell constraints question

1999-12-02 Thread Jon Wright

Apologies if this is in an FAQ somewhere, but I couldn't find it.

When refining unit cell parameters with a magnetic structure in GSAS or
Fullprof (or anywhere else) and using a separate magnetic phase, how does
one go about constraining the cell parameters? The specific example of
interest is triclinic with a magnetic cell of a x 2b x 2c.

The reciprocal cell quadratic products don't appear to lend themselves
easily to keeping the cell angles equal across the two phases. Any ideas?

On a related question - is there a really good reason for putting the
quadratic products into the least-squares matrix rather than the real-space
unit cell parameters? I haven't yet picked up on why the derivatives cannot
be converted from being w.r.t. A* etc. to being w.r.t. a etc. Is it just a
time saving?
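
(For what it is worth, here is a sketch of what I mean, using one common
convention for the quadratic products, A = a*^2, B = b*^2, C = c*^2,
D = 2 b* c* cos(alpha*), E = 2 c* a* cos(beta*), F = 2 a* b* cos(gamma*);
the triclinic cell numbers are invented. It suggests the a x 2b x 2c
constraint is just a set of fixed ratios between the two phases' quadratic
products.)

# Sketch: quadratic products for a triclinic cell and for its a x 2b x 2c
# supercell, to see how the magnetic phase could be tied to the nuclear one.
import numpy as np

def direct_vectors(a, b, c, alpha, beta, gamma):
    # standard orthogonal-frame construction of the direct cell vectors
    al, be, ga = np.radians([alpha, beta, gamma])
    av = np.array([a, 0.0, 0.0])
    bv = np.array([b * np.cos(ga), b * np.sin(ga), 0.0])
    cx = c * np.cos(be)
    cy = c * (np.cos(al) - np.cos(be) * np.cos(ga)) / np.sin(ga)
    cv = np.array([cx, cy, np.sqrt(c * c - cx * cx - cy * cy)])
    return av, bv, cv

def quadratic_products(av, bv, cv):
    vol = np.dot(av, np.cross(bv, cv))
    astar = np.cross(bv, cv) / vol
    bstar = np.cross(cv, av) / vol
    cstar = np.cross(av, bv) / vol
    return np.array([astar @ astar, bstar @ bstar, cstar @ cstar,
                     2.0 * (bstar @ cstar), 2.0 * (cstar @ astar),
                     2.0 * (astar @ bstar)])

av, bv, cv = direct_vectors(5.1, 6.2, 7.3, 91.0, 95.0, 103.0)
parent = quadratic_products(av, bv, cv)
magnetic = quadratic_products(av, 2.0 * bv, 2.0 * cv)
print("magnetic / parent:", np.round(magnetic / parent, 6))
# -> A unchanged; B, C and D divided by 4; E and F divided by 2.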

Thanks

Jon Wright


 Dept. of Chemistry, Lensfield Road, Cambridge, CB2 1EW
   Phone-Office 01223 (3)36396; Lab 01223 (3)36305; Home 01223 523710




Non standard space group settings.

1999-11-12 Thread Jon Wright

Help! 

Can anyone point me in the right direction for using non-standard space
group settings in GSAS? I want to look at a rhombohedral distortion of a
spinel structure while retaining the cubic setting. I have been able to do
this with CCSL-based programs, where I can simply give a list of
generators. Is this possible with Fullprof, or would I run out of matrices
there too?

Thanks for any hints,

Jon Wright
--

From International Tables, Fd-3m has a subgroup F 1 -3 2/m, which is a
non-standard setting of R-3m. I get the following output from expedt:

Enter space group symbol (ex:  P n a 21, P 42/n c m, R -3 c, P 42/m,
R -3 m R for rhombohedral setting)  ( / if OK) >F 1 -3 2/m
** Error processing symbol F 1 -3 2/m
More than 24 matrices needed to define group


 Dept. of Chemistry, Lensfield Road, Cambridge, CB2 1EW
   Phone-Office 01223 (3)36396; Lab 01223 (3)36305; Home 01223 523710



RE: As sent to the neutron mailing list

1999-09-27 Thread Jon Wright

> ... not at all for magnetism, ...

Ahem, magnetism - resolution is very useful, in certain (rare) cases. I
guess that magnetic structures are not Armel's primary interest,
but in at least one case resolution was essential (FeAsO4 from an IRIS (TOF
neutron) dataset, J. Phys.: Condens. Matter 11 (1999) 1473; apologies for the
gratuitous self-reference!). The resolution was needed for indexing a
helimagnetic structure with peaks around d = 7 A.

Best wishes,

Jon

PS: I'm not sure how relevant this is but my impression is that the TOF
instruments get better at longer d-spacings. This can be seen on plots of
FWHM vs "effective" 2 theta (or sin(theta)/lambda). As such, I guess there
would be a better example than the MgO, which is only going to about
d = 2.1 A (?). I have an HRPD dataset with a peak (in backscattering, i.e. high
res.) at 3.3 A, but the sample broadening is awful. So there is probably an
even better dataset out there somewhere (collected with the choppers at
5 Hz, or re-phased).

On Sat, 25 Sep 1999, Armel Le Bail wrote:

> Any new value and place welcome. Remember where this (stupid?)
> comparison takes place, before to explode : SDPD (Structure
> Determination by Powder Diffractometry), 

... not at all for magnetism, ...

> tunability of wavelength and so on, only SDPD for which resolution
> counts, first for indexing, then for extracting structure factors used
> for solving, and finally for refining by the Rietveld method : this
> whole process being called "(ab initio) structure determination".
> 
> Best,
> 
> Armel
> 


 Dept. of Chemistry, Lensfield Road, Cambridge, CB2 1EW
   Phone-Office 01223 (3)36396; Lab 01223 (3)36305; Home 01223 523710




Re: Bond length and angle ESDs

1999-09-23 Thread Jon Wright

> On Thu, 23 Sep 1999, Ed Cussen wrote:

> > We're forced to use CCSL for the superior peak shape description and
> > so simply changing refinement program is not an option.  I'm sure
> > that this problem must have been encountered before and I'm suprised that
> > there's not a 'standard' solution to it.

I wouldn't be so depressed about using CCSL; how many other refinement
programs offer you Pawley refinements and access to the source? There is a
BONDS program included in CCSL which you might find useful. For HRPD data
I'm surprised you can't get away with using GSAS, unless you have
particularly nasty anisotropic peak shape effects.

The following is from the source I'm looking at... give me a shout if you
have trouble compiling it.

C *** BONDS updated by PJB  26-Aug-1998 *** 
... 
CD The program will calculate bond lengths, with optional standard
CD deviations, and requested bond angles.
...
CI The crystal data file (CDF) must contain:
CI S cards giving the symmetry
CI a  C card with the cell dimensions
CI  and if bond esd's are required
CI a C SD card giving the esd's in the cell parameters
CI A cards defining the atom names and positions
CI  and if bond esd's are required
CI A SD cards giving the esd's in the atomic parameters
CI B cards as described above and in the Users' manual
...

Best of luck!

Jon


 Dept. of Chemistry, Lensfield Road, Cambridge, CB2 1EW
   Phone-Office 01223 (3)36396; Lab 01223 (3)36305; Home 01223 523710



Re: RIET: TOF Neutron Web based programming resources?

1999-09-20 Thread Jon Wright

Lachlan,

You can get hold of the freely available CCSL code from your site and
compile programs tf112m etc. for doing TOF neutron Rietveld fits with
magnetism too! It compiles with the free g77 compiler after some minor
changes. All the code is there; I have compiled and run it on a PC and
presented the ensuing results at IUCr and ECNS, although only one person
seemed to notice.

It is the old RAL code from about 1995. It does pattern simulation as well as
refinement.

Cheers,

Jon

On Mon, 20 Sep 1999, L. Cranswick wrote:

> 
> Re: lack of any dedicated powder pattern calculation
> software that can handle TOF Flight neutrons:
> 
> Are there any Time of Flight Neutron internet/web
> based links giving hints and code for programming
> TOF capability into Rietveld and Powder Pattern
> calculation software.  It would be far easier to convince
> programmers to support this form of diffraction 
> if some easy to get "example" code and hints could 
> be made available.
> 
> Something of the ilk of the Le Bail method page which
> has hints on where to insert changes into Rietveld
> code and some example Fortran for getting over some
> of the nuances that may result:
> http://www.ccp14.ac.uk/solution/lebail/
> 
> (Telling programmers using the internet, who are not
> full or part time TOF users, to "read the book"
> could look like telling them to "go to hell" -
> especially as equations described in the literature 
> may not be directly translatable into robust code)
> 
> Lachlan.
> 
> -- 
> Lachlan M. D. Cranswick
> 
> Collaborative Computational Project No 14 (CCP14)
> for Single Crystal and Powder Diffraction
> Daresbury Laboratory, Warrington, WA4 4AD U.K
> Tel: +44-1925-603703  Fax: +44-1925-603124
> E-mail: [EMAIL PROTECTED]  Ext: 3703  Room C14
>http://www.ccp14.ac.uk
> 
> 


 Dept. of Chemistry, Lensfield Road, Cambridge, CB2 1EW
   Phone-Office 01223 (3)36396; Lab 01223 (3)36305; Home 01223 740198



Re:

1999-09-20 Thread Jon Wright

There are two programs which spring to mind. GSAS and Fullprof both allow
you to simulate neutron data. GSAS only does commensurate magnetic
structures. See www.ccp14.ac.uk for links to the programs.

Jon


 Dept. of Chemistry, Lensfield Road, Cambridge, CB2 1EW
   Phone-Office 01223 (3)36396; Lab 01223 (3)36305; Home 01223 740198



Re: RIET: Are Spherical Harmonics for Preferred OrientationCorrection Evil?

1999-09-16 Thread Jon Wright

On Thu, 16 Sep 1999, Dr.Joerg Bergmann wrote:

> (high systematic error, bad difference plot, but low e.s.d.'s) 
> (low systematic error, good difference plot, but high e.s.d.'s).

Apologies for diving off onto the general case, but does this make sense?
You improve your model and the quality of your parameter estimation gets
worse? The esd's in the first case seem underestimated (to me).

I assume your esd's are coming directly from the LSQ matrix. 
Conventionally people seem to multiply them by a chi^2 factor to degrade
the ones with the high systematic error (Giacovazzo's book has a nifty
explanation of this in terms of assuming the systematic errors are really 
an underestimation of experimental errors). If a procedure like that is
carried out then I would (naively) expect you to find your fit *and*
your parameter estimation to improve with the use of a better model. 

That's if the extra parameters improve your fit, if not then you'd be
introducing extra degrees of freedom and correlations which might degrade
the esd's.

Try thinking of using a nice peakshape versus a crappy peakshape as
opposed to preferred orientation corrections. As usual, one should only
introduce parameters if they *significantly* improve the fit (Hamilton's
test etc.).

Hope I'm making sense,

Jon


 Dept. of Chemistry, Lensfield Road, Cambridge, CB2 1EW



RE: Combined neutron/x-ray refinements

1999-05-25 Thread Jon Wright

On Tue, 25 May 1999, Alan Hewat, ILL Grenoble wrote:

> >>I guess the degradation which is found would come from parameters which
> >>are determined by both datasets and come out with different values in each
> >>separate refinement. 
> 
> If they come out differently it is because they are differently biased by 
> different systematic errors in the data not described by the model.  

I was thinking of C-H (D) bond lengths from x-ray and neutron data. Don't
they come out differently if you use spherical form factors for the x-ray
data? I guess a neutron expert might look on this as a systematic error
in the x-ray model :) Maybe not if one looks on the x-ray refinement as
fitting of the electron density function, rather than the nuclear 
positions. For bonding studies it is the differences which are of 
interest!

Jon Wright.

PS: Any offers other than GSAS and multipattern Fullprof for actually
doing these fits?



RE: Combined neutron/x-ray refinements

1999-05-25 Thread Jon Wright

On Tue, 25 May 1999 [EMAIL PROTECTED] wrote:

> Not necessarily.  In order to get the ESD, the variance-covariance matrix is
> multiplied by chi^2, and the roots of the diagonal elements are taken.

The justification for multiplying by chi^2 is to assume that the
systematic errors are really just due to an over-weighting of the data
points (underestimated counting errors), so you can rescale the weight of
each data point accordingly. A question arises as to whether you should
rescale each pattern's esds according to the individual pattern's chi^2, or
whether you have to use the overall chi^2 for both together?

Thinking of an (over-determined) D20 dataset and an (under-determined) lab
x-ray dataset, it makes sense to rescale errors for the D20 data but
not the x-ray (common sense?!). It seems as if the method for calculating
the esd's is nonsense - surely one can only justify rescaling the weights
on a per-dataset basis. The systematic errors which are being accounted
for in each dataset are different. Fullprof (multipattern) does give a
chi^2 per pattern, although I don't know how it gets the esd's; GSAS
doesn't, so I assume it degrades the esd's. (I read that the multiplication
by chi^2 has no basis in statistics anyway :)

So is it compulsory to multiply by the overall chi^2? If not then I see no
reason for a degradation unless the individual fits get worse due to a
disagreement over a parameter. 

> Therefore, if the chi^2 of the combined refinement is worse than that of the
> individual ones, the ESD will automatically be worsened.  I think this is by
> far the commonest case. 

Agreed although I'm interpreting it as an odd method for estimating an
error. Is it set in stone?

>  Also, by adding reflections that are insensitive to
> a given parameter my feeling is that you increase the esd on that parameter
> even if chi^2=1, but the proof of this is too tedious.

Can you direct me to a text with this tedious proof? My feeling is that if
the derivative of a data point w.r.t a parameter is small or zero then it
does not affect the LSQ calculation unless it alters the chi^2.  If the
chi^2 is 1 then how does an extra bunch of zero derivatives affect an
esd??? For example adding or excluding background regions shouldn't alter
the esd's on positions provided the chi^2 is unchanged. 

Is there anything other than GSAS for doing combined fits anyway?
Apologies to the list if I am displaying my ignorance; sometimes it's the
quickest way to learn.

Jon Wright

PS: Sorry to pick at your comments Paolo, it's a shame I'm not at RAL at
the moment. Could have discussed it out over a coffee...




Combined neutron/x-ray refinements

1999-05-25 Thread Jon Wright

Hi all,

Am I right in thinking there are roughly two camps in this discussion?
Those who think that adding more data degrades the refinement if that data
is not useful, and those who think it makes no difference. (I say 'not
useful' in the context of vanadium in neutron data or deuterium in x-ray
data, for example.)

My understanding is that the least-squares matrix is made up of the
derivatives of the differences between data and model w.r.t. the parameters
you are refining. So if a dataset has no useful information for a
parameter then the relevant derivatives are zero. So there should be no
degradation in the esd on the parameter, right? This would imply both
datasets should (always) be used simultaneously.
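
(A toy numerical check of the "zero derivative" point, everything made up:
two parameters, plus an extra dataset that only sees the first of them.)

# Sketch: adding observations whose derivatives w.r.t. p2 are all zero
# cannot make the e.s.d. of p2 worse.
import numpy as np

def esds(design, weights):
    # parameter e.s.d.s from the inverse of the weighted normal matrix
    normal = design.T @ (weights[:, None] * design)
    return np.sqrt(np.diag(np.linalg.inv(normal)))

x1 = np.linspace(0.0, 1.0, 20)
design1 = np.column_stack([np.ones_like(x1), x1])        # sensitive to p1 and p2
w1 = np.ones_like(x1)

design2 = np.column_stack([np.ones(20), np.zeros(20)])   # carries p1 only
w2 = np.ones(20)

print("dataset 1 alone:", esds(design1, w1))
print("datasets 1 + 2: ", esds(np.vstack([design1, design2]),
                               np.concatenate([w1, w2])))
# The e.s.d. of p2 is not degraded; here it even improves, because the
# extra points pin down p1 and so reduce the p1/p2 correlation.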

I guess the degradation which is found would come from parameters which
are determined by both datasets and come out with different values in each
separate refinement. The higher esd reflects the disagreement between the
neutrons and x-rays. (Now enter the questions about precision and
accuracy.) Is this the area where there is a disagreement? What should one
do when the datasets disagree with each other? Ideally work out why and
model that in a combined fit! In practice either 

-Do a combined fit and report positions with higher esd's (as you aren't
 too sure where the atoms really are.)

-Do two separate fits, each having lower esd's but disagreeing with each
 other.

Which is better? 

Enough from me now; it's time to do some work today. Incidentally, does
anyone have an example of a refinement where parameters are degraded by
the combined fit *and* they agree with each other when two separate fits
are carried out?

Jon Wright

PhD Student, Chemistry Dept, Cambridge Uni, UK.

On 25 May 1999, Andrew Wills wrote:

> Alan,
> 
> I am not suggesting removing reflections. But, I think that we should make
> sure that we are combining the data in the best possible way. If we now have
> strong information on a vanadium position from X-rays and (extrapolate again)
> have only noise from neutrons, then statistically introducing the neutron data
> whilst not changing the best fit will degrade the least-squares approach to
> it. The final structure should fit all data, but are we approaching it
> optimally? I know that this is a can of worms, but it is good to think about
> what we are doing as combined refinements will continue to become less exotic.
> 
> -Andrew
> --
> Andrew Wills
> Centre D'Études Nucléaires de Grenoble
> 
> 
> "Alan Hewat, ILL Grenoble" <[EMAIL PROTECTED]> wrote:
> >If we have an atom that is seen by one
> >radiation and not by the other there will be a degradation in the quality of
> >the parameters by combining the refinement in the current fashion. 
> 
> Do you mean for example that we might degrade the parameters of a V atom 
> by introducing neutron data ?
> 
> I don't think this is true, but it is an interesting question.  If we were to
> extrapolate this argument "ad absurdum" we could say that because some
> reflections (for a given radiation) do not give any information about some 
> parameters (easy to demonstrate) then we would obtain better estimates 
> for those parameters by removing those reflections from the least squares 
> process.  (Surely untrue :-)



Re: data format for GSAS using ISIS Polaris

1999-05-19 Thread Jon Wright

Renyang,

Your best bet is to log into ISIS and read the data back into Genie, then
write it out using gsasgen.
Something like the following will read your data in (check with the
instrument scientist; it depends on your login.com file as to which
commands you need and what works):
>> lo w1 myfile.dat g_axp:loader
Then check it looks OK
>> display w1
Write it out in gsas format
>> gsasgen w1
Remember to pick up the correct instrument parameter file and make sure
you check the data looks the same in GSAS as it does in Genie. My own
experience is that the bin type changes from linear to logarithmic and
this screws up the very low TOF data so you have to rebin it by hand.

You really should get in touch with whoever your local contact for that
experiment was, and they should be happy to help you get the data into
GSAS. I haven't used Polaris for a while and they may have upgraded
everything to use Open Genie, in which case the commands will be
different.

Hope this helps,

Jon


On Wed, 19 May 1999, YangRen wrote:

> 
> Dear all, 
> 
> We are working on structure refinement using the Time-of Flight
> data collected in the ISIS Polaris, in UK. We are using the GSAS
> software. Could anyone tell us what is the correct format for the
> TOF data input file for the GSAS refinement? And how to make it if
> it is different from the data format we have which is:
> 
>  Bin no.x y e
>   1  200.6388  3.964094 0.1950043E-01
>   2  201.8044  3.906375 0.1908529E-01
>   3  203.0007  3.911664 0.1934239E-01
> ...
> 
> Thanks a lot!
> 
> Renyang
> 


 Dept. of Chemistry, Lensfield Road, Cambridge, CB2 1EW
   Phone-Office 01223 (3)36396; Lab 01223 (3)36305; Home 01223 740198



Re: Combined neutron/x-ray refinements

1999-05-11 Thread Jon Wright

On Tue, 11 May 1999, Armel Le Bail wrote:

> I already suggested to install such an "automatic" powder diffractometer
> at ILL. As Alan wrote recently, this could be a question of manpower.
> I think that this is rather a local political question : it is not a very
> interesting job for a human being to be a simple sample loader...

A scheme like this could open up NPD to a wider range of users which would
inevitably mean new users...? The simple loader might get involved in a
lot of collaborations by returning refinement results instead of just raw
data. They would also be in a unique position to offer the odd favour here
and there. So long as it doesn't mean getting up every six hours then I
think it sounds like a fantastic job. 

In any case - since simultaneous fitting is being discussed - I have
a question about multipattern Fullprof, which is now available as a beta
version. It appears you can assign the statistical weight to each pattern,
and in the example x-rays and neutrons are weighted equally. I'm wondering
what the best choice is. Each observation comes with a statistical weight
from the experiment? Isn't it a sin to alter that?

Jon Wright


  PhD Student, Dept. of Chemistry, Lensfield Road, Cambridge, CB2 1EW, UK



Re: Errors in Rietveld refinement

1999-04-13 Thread Jon Wright

Michael

Did this get posted twice? I thought I was waiting to read an explanation
myself, but I could just be going mad. I guess I'll have a go myself.

> I have a question concerning the errors calculated in a rietveld
> refinement. As far as I understand, the error of a parameter is calculated
> as
> 
> sigma(p_i) = sqrt(c_ii) * sqrt(chi^2/N-P) (*)
> 
> where c_ii is the i-th diagonal element of the inverse of the
> curvature matrix, chi^2 = sum_i w_i*(yobs_i - ycalc_i), N number of
> observations, P number of parameters.

Are you sure about these? I thought chi^2 should include the division by N-P,
and it is usually (obs-calc)^2. Pedantry aside, you have something like
the statistical error * chi.

> But I thought the statistical error is just sqrt(c_ii)! 

Yup, it is.

> The second factor
> is always << 1, because N is a large number, and so the calculated errors
> are much too small! Even more, I can minimize them by increasing the number
> of steps, although, after a certain point, I gain no more information.

No! Remember that the chi^2 as you have defined it increases with the
number of observations you make - as you sum over i. Using
 chi^2 = [sum_i w_i*(yobs_i - ycalc_i)^2]/(N-P),
you can see that sqrt(chi^2) is approximately the average number of esd's you
are wrong by at each point in the pattern (N >> P). So the second factor
is just the square root of your familiar reduced chi^2; very rarely << 1!

The correction you are applying to the diagonal elements of your matrix
is to try to allow for the systematic (i.e. non-random) errors in your
model. This appears to be your source of confusion. From Young's book (The
Rietveld Method), E. Prince says, "It is customary to multiply the e.s.d's
by the chi factor, on the assumption that the lack of fit is due to an
over-weighting of all data points. While this practice is conservative, it
should be understood that it has no basis in statistics." 

If you look at a difference curve there will be random noise and
systematic problems, such as peak shape or intensity or position. This fix
says that the systematic problems are really just random noise, and
effectively inflates the assumed errors on the y_obs (i.e. reduces their
weights) accordingly, giving larger esd's.
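
(A made-up numerical illustration of that convention: fit a deliberately
wrong model, then compare the purely statistical sqrt(c_ii) with the values
scaled by sqrt(chi^2/(N-P)).)

# Sketch: straight-line fit to gently curved data; statistical vs chi-scaled e.s.d.s.
import numpy as np

x = np.linspace(0.0, 1.0, 200)
sigma = np.full_like(x, 0.05)
rng = np.random.default_rng(1)
y = 1.0 + 2.0 * x + 0.5 * x ** 2 + rng.normal(0.0, sigma)   # "reality" is slightly curved

design = np.column_stack([np.ones_like(x), x])   # straight-line model p1 + p2*x
w = 1.0 / sigma ** 2
cov = np.linalg.inv(design.T @ (w[:, None] * design))
params = cov @ (design.T @ (w * y))

n, p = len(x), 2
chi2_red = np.sum(w * (y - design @ params) ** 2) / (n - p)

print("reduced chi^2      :", round(float(chi2_red), 2))
print("statistical e.s.d.s:", np.sqrt(np.diag(cov)))
print("chi-scaled e.s.d.s :", np.sqrt(np.diag(cov) * chi2_red))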

Hope this helps,

Jon Wright

PS: See also the GSAS/Fullprof manuals and "Rietveld refinement guidelines",
McCusker et al., J. Appl. Cryst. 32 (1999), 36-50. Also the non-Rietveld-
specific "Statistical descriptors in crystallography", Acta Cryst. A45
(1989), 63-75.


 Dept. of Chemistry, Lensfield Road, Cambridge, CB2 1EW, UK
   Phone-Office 01223 (3)36396; Lab 01223 (3)36305; Home 01223 740198





Re: Graphics

1999-03-31 Thread Jon Wright

Hongwu,

Assuming your problem is just to get graphics over a network you can
probably use an X server. This works for looking at TOF data collected at
the ISIS source in the UK, which also uses VAX computers. In that case you
connect with telnet and utter the magic words:
"set display/create/trans=tcpip/node=your.ip.address", sometimes appending
":0.0". When you try to run a program which uses graphics you need to make
sure an X server is running on your PC/Mac, and the pictures might appear.

The difficult part is getting hold of a free X server for a PC/Mac; you
could try MI/X. The URL to download MI/X from MicroImages is
http://www.microimages.com/ in the "Free Downloads" section.

Hope this helps,

Jon Wright

On Tue, 30 Mar 1999, Hongwu Xu wrote:

> Hi, Everyone,
> 
> We have some neutron diffraction data collected and stored in the VAX
> computer at Argonne, and we want to remotely get access to those data
> from our Windows-PC or Mac here. However, we were unable to display the
> diffraction patterns on our computers. Do we need an emulation program for
> doing this? If yes, what are the possible programs, especially those that
> could be downloaded from websites? Any suggestions will be apreciated.
> 
> Hongwu
> 
> 
> 
> 


 Dept. of Chemistry, Lensfield Road, Cambridge, CB2 1EW
   Phone-Office 01223 (3)36396; Lab 01223 (3)36305; Home 01223 740198




Re: Comments on "Importance of matching software with hardware"

1999-03-26 Thread Jon Wright

On Thu, 25 Mar 1999, Brian H. Toby wrote:

> Plotting in Q sounds good to me.
> 

Good for comparing instruments for resolution and d-spacing range; no
wonder the suggestion came from ISIS :-) However, it is not so good for
examining fits from long-d-spacing TOF diffractometers such as OSIRIS and
IRIS (ISIS TOF instruments for longer d-spacings). While I agree with Paolo
that a standard unit would be useful, please don't make me plot an OSIRIS
dataset with range 0.8-8 Angstroms in Q! The region of interest is the
longer d-spacings (incommensurate magnetic structures), and it gets
crunched up into some spikes.
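
(To put numbers on the crunching, purely for illustration:)

# Sketch: how a 0.8 - 8 Angstrom d-spacing range maps onto Q = 2*pi/d.
import numpy as np

for d in (0.8, 1.0, 2.0, 4.0, 8.0):
    print(f"d = {d:4.1f} A   Q = {2.0 * np.pi / d:5.2f} 1/A")
# The whole magnetic region d = 4 - 8 A collapses into Q = 0.8 - 1.6 1/A,
# while d = 0.8 - 1.0 A alone spans Q = 6.3 - 7.9 1/A.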

Horses for courses... shouldn't a fit show information which all has 
approximately equal weighting in the minimisation algorithm? In the same
way that difference/esd is often more illuminating than just a
plain difference curve.

Jon Wright

PhD Student 


   Dept. of Chemistry, Lensfield Road, Cambridge, CB2 1EW, UK
   Phone-Office 01223 (3)36396; Lab 01223 (3)36305; Home 01223 740198