It is a bit of a mystery to me why two structures of supposedly similar resolution with equally acceptable maps can give very different R factors - one sticks in the low 20s and another gives a smug 17%...

I guess one could go back and analyse the model against the data and time..
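For the record, the number in question is just the conventional R factor,
R = sum(||Fobs| - |Fcalc||) / sum(|Fobs|), over whichever reflection set
(working or free) you look at. A minimal sketch of that comparison, assuming
you already have matched amplitude arrays from your refinement output and
that Fcalc is on the same scale as Fobs:

import numpy as np

def r_factor(f_obs, f_calc):
    # Conventional crystallographic R factor:
    #   R = sum(| |Fobs| - |Fcalc| |) / sum(|Fobs|)
    # Assumes f_calc has already been scaled to f_obs.
    f_obs = np.abs(np.asarray(f_obs, dtype=float))
    f_calc = np.abs(np.asarray(f_calc, dtype=float))
    return np.sum(np.abs(f_obs - f_calc)) / np.sum(f_obs)

Run it separately on the working and free reflections and you get the Rwork
and Rfree values as usually quoted.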
Eleanor

 On 01/25/2012 09:05 AM, Miguel Ortiz Lombardía wrote:
On 24/01/12 21:18, Dale Tronrud wrote:
On 01/24/12 11:52, Miguel Ortiz Lombardia wrote:
On 24/01/12 18:56, Greg Costakes wrote:
Whoops, I misspoke... I meant Rsym and Rmerge increase with higher
redundancies.
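Worth remembering why: Rmerge/Rsym as usually defined carries no correction
for multiplicity, whereas Rmeas (Diederichs & Karplus) weights each unique
reflection by sqrt(n/(n-1)) and is therefore roughly redundancy-independent.
A minimal sketch, assuming 'groups' is simply a list of replicate-intensity
lists, one per unique hkl (no particular data-reduction package implied):

import numpy as np

def _multiples(groups):
    # Keep only reflections measured more than once; conventions differ
    # between programs, this is just one common choice.
    return [np.asarray(g, dtype=float) for g in groups if len(g) > 1]

def r_merge(groups):
    # Rmerge = sum_hkl sum_i |I_i - <I>_hkl| / sum_hkl sum_i I_i
    # No multiplicity correction, so it tends to creep up as redundancy rises.
    gs = _multiples(groups)
    num = sum(np.sum(np.abs(g - g.mean())) for g in gs)
    den = sum(np.sum(g) for g in gs)
    return num / den

def r_meas(groups):
    # Redundancy-independent version: each hkl term carries sqrt(n/(n-1)).
    gs = _multiples(groups)
    num = sum(np.sqrt(len(g) / (len(g) - 1.0)) * np.sum(np.abs(g - g.mean()))
              for g in gs)
    den = sum(np.sum(g) for g in gs)
    return num / den

With the same underlying measurement errors, r_merge() drifts upward as you
add replicates while r_meas() stays roughly flat, which is why quoting Rmeas
(or Rpim) is the fairer comparison across redundancies.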


But then suppose that one merges data from a crystal that is degrading
while exposed, so the data get degraded. This is not at all unusual. In
the absence of a deep understanding of refinement, intuition suggests
that degraded data should produce degraded models. If Rwork and Rfree
are measuring anything useful, they should go up with redundancy in those
not-so-unusual cases. Or intuition is misguiding me again.


    Yes, if one has a poorer quality data set one expects the Rw and Rf to
be higher, but that does not necessarily correlate with high redundancy.
Surely if you have high redundancy and know the crystal is decaying you
have the flexibility to not use the decayed data in the merge.  I would
expect that decayed data would only be merged with the early data if
the redundancy was so low that you had to, just to get a complete data set.
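One schematic way to do that, assuming you have already pulled per-frame (or
per-batch) Rmerge values out of the scaling log (the function name and inputs
here are purely illustrative, not from any particular program): stop merging
at the first frame whose statistic climbs well above the early-frame baseline.

def last_good_frame(per_frame_rmerge, baseline_frames=20, factor=1.5):
    # per_frame_rmerge: list of Rmerge values, one per frame/batch, taken
    # from whatever your data-reduction log reports (hypothetical input).
    # Returns the index of the last frame to keep: the frame before the
    # first one whose Rmerge exceeds 'factor' times the early baseline.
    head = per_frame_rmerge[:baseline_frames]
    baseline = sum(head) / float(len(head))
    for i, r in enumerate(per_frame_rmerge):
        if i >= len(head) and r > factor * baseline:
            return i - 1
    return len(per_frame_rmerge) - 1

The threshold is of course a matter of judgement, and in practice one would
look at the same per-batch plots the processing program already produces
rather than trust a single number.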

Dale Tronrud


I agree. I would also expect so... unless the user simply runs the data
reduction software and does not check the log files to see, among other
important things, at what point the data start degrading because the
crystal is decaying. If the software is clever enough to decide by itself,
it will be all right, or sort of, which is, I suppose, a point in favour of
automation. Unfortunately, there are many users of black boxes, which is,
I presume, a danger of automation. My answer was kind of a caveat for
that type of user.

