This issue has come up from time to time, and I don't think
anyone has been convinced to change their mind by the
theoretical discussions.
But isn't this amenable to experimental testing?
Given the ready availability of CPU time . . .

Take a particular structure, preferably a deposited PDB structure
so that it is a fixed starting point available to everyone.
Select a refinement strategy (which will determine the radius of convergence?).

1. Refine the structure to convergence with the original free set.
R-free may differ from the original depositors' value because of a
different strategy, refinement target, and bulk solvent model, so this
result will be the reference against which subsequent results are compared.

2. Take the structure from (1), select a new R-free set, and again refine
to convergence. R-free will differ from that in (1), but should not differ
significantly. One might need to refine against 5 or 10 different R-free
choices to see what counts as a significant difference (see the sketch
after this list for one way to generate such sets).

3. Take the structure from (2), refine using the original free set.
According to one school, R-free is hopelessly corrupted and will never
rise to the original level (1) unless drastic refinement steps are
taken to "shake out" the bias.
The other school would predict R-free to converge on the same value (1),
provided drastic steps are *not* taken, as they might allow the refinement
to jump into a different local maximum.
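
For step 2, a minimal sketch (plain Python, a hypothetical reflection
list, and an assumed 5% test fraction) of how several independent
free-set choices could be generated; in practice one would of course use
a program such as CCP4's freerflag:

    import random

    def assign_free_flags(hkls, fraction=0.05, seed=0):
        # Flag roughly `fraction` of the reflections as the free (test)
        # set, reproducibly for the given random seed.
        rng = random.Random(seed)
        return [rng.random() < fraction for _ in hkls]

    # Hypothetical reflection list of (h, k, l) tuples; in reality these
    # would come from the reflection file.
    hkls = [(h, k, l) for h in range(10) for k in range(10) for l in range(10)]

    # Five independent free-set choices, one per random seed.
    free_sets = {seed: assign_free_flags(hkls, seed=seed) for seed in range(5)}

    for seed, flags in free_sets.items():
        print("seed %d: %d of %d reflections flagged free"
              % (seed, sum(flags), len(flags)))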

Then measure the RMS and maximum atomic deviations between the models
and see whether there are any differences that a PDB user would care about.
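
A minimal sketch of that comparison, assuming the two refined models are
PDB files containing the same atoms in the same order (the file names
are placeholders; real models would need atom matching and handling of
alternate conformations):

    import math

    def read_coords(pdb_path):
        # (x, y, z) from every ATOM/HETATM record, in file order.
        coords = []
        with open(pdb_path) as fh:
            for line in fh:
                if line.startswith(("ATOM  ", "HETATM")):
                    coords.append((float(line[30:38]),
                                   float(line[38:46]),
                                   float(line[46:54])))
        return coords

    def compare_models(pdb_a, pdb_b):
        # RMS and maximum atomic deviation, assuming both files contain
        # the same atoms in the same order.
        a, b = read_coords(pdb_a), read_coords(pdb_b)
        assert len(a) == len(b), "models must have matching atom lists"
        dists = [math.dist(p, q) for p, q in zip(a, b)]
        rms = math.sqrt(sum(d * d for d in dists) / len(dists))
        return rms, max(dists)

    # Placeholder file names for the models from steps (1) and (3).
    rms, dmax = compare_models("model_step1.pdb", "model_step3.pdb")
    print("RMS deviation %.3f A, maximum deviation %.3f A" % (rms, dmax))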

This does not directly address the original poster's question, in which
a new set of data is being used. However I think we would agree that
if there is no bias when exactly the same data is being used, there
would be none in the case of different data.

Even if R-free turns out to be biased in this test, it may not be in the
case of a new dataset: the "noise" that is being "overfit" in the
new dataset could be completely independent of that in the old.
However, I think it is generally agreed that there is a component
of "noise" (in the most general sense, meaning the difference
between what is calculated from our best model and what is observed)
that is common between different crystals.

Ed

Ian Tickle wrote:
-----Original Message-----
From: Dale Tronrud [mailto:det...@uoxray.uoregon.edu]
Sent: 24 September 2009 17:21
To: Ian Tickle
Cc: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Rfree in similar data set

>    While I agree with Ian on the theoretical level, in practice
> people use free Rs to make decisions before the ultimate model
> is finished, and our refinement programs are still limited in
> their ability to find even a local minimum.

I wasn't saying that Rfree is only useful for the ultimate finished
model.  My argument also applies to all intermediate models; the
criterion is that the refinement has converged against the current
working set, even if it is only an incomplete model, or if it is only to
a local optimum.  So it's perfectly possible to use Rfree for
overfitting & other tests on intermediate models.  The point is that it
doesn't matter how you arrived at that optimum (whether local or
global): Rfree is a function only of the parameters at that point, not
of any previous history.  If you arrived at that same local or global
optimum via a path which didn't involve switching datasets midway, you
must get the same answer for Rfree, so I just don't see how it can be
biased one way and not biased the other.  Note that this is meant as a
'thought experiment'; I'm not necessarily saying that it's possible to
perform this experiment in practice!

>    On the automated level the test set is used, sometimes, to
> determine bulk solvent parameters and, more importantly, to calibrate
> the likelihood calculations in refinement.  If the test set is
> not "free", the likelihood calculation will overestimate the
> reliability of the model, and I'm not confident that error will not
> become a self-fulfilling prophecy.  It is not useful to divine
> meaning from the free R until convergence is achieved, but the test
> set is used from the first cycle.

That is indeed a fair point, but I would maintain that the test set
becomes 'free', i.e. free of the memory of all previous models, the
first time you reach convergence, so the question of the effect on
sigmaA calculations, which use the test set, is only relevant to the
first refinement after switching test sets; thereafter it should be
irrelevant.  Converging to a local or global optimum wipes out all
memory of previous models because the parameter values at that optimum
are independent of any previous history, and so Rfree must be the same
for that optimum no matter what path you took to get there.

>    Perhaps I'm in one of my more persnickety moods, but every
> paper I've read about optimization algorithms says that the method
> requires a number of iterations many times the number of parameters
> in the model.  The methods used in refinement programs are pretty
> amazing in their ability to drop the residuals in a small number
> of cycles, but we are violating the mathematical warranty on
> each and every one of them.  A refinement program will produce
> a model that is close to optimal, but cannot be expected to be
> optimal.  Since we haven't seen an optimal model yet, it's hard
> to say how far we are off.

I thought that for a quadratic approximation CG requires a number of
iterations that is not more than the number of parameters (not that we
ever use even that many iterations!)?  Anyway that's a problem in
theory, but it's possible to refine until nothing more 'interesting'
happens, i.e. further changes appear to be purely random and at the
level of rounding errors.  Plotting the maximum shift of the atom
positions or B factors from one iteration to the next is a very
sensitive way of detecting whether convergence has been achieved;
looking at changes in R factors or in RMSDs of bonds etc. is a bad way,
since R factors are not sensitive to small changes and atoms can move in
concert without affecting bond lengths etc. (or it may just be the
waters that are moving!).
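
As an illustration of that kind of convergence check (my sketch, not
Ian's actual procedure), assuming coordinates have been written out
after each refinement cycle to files named cycle_*.pdb:

    import glob
    import math

    def read_coords(pdb_path):
        # (x, y, z) from every ATOM/HETATM record, in file order.
        return [(float(l[30:38]), float(l[38:46]), float(l[46:54]))
                for l in open(pdb_path)
                if l.startswith(("ATOM  ", "HETATM"))]

    # Hypothetical coordinate snapshots, one per refinement cycle.
    cycles = sorted(glob.glob("cycle_*.pdb"))

    for prev, curr in zip(cycles, cycles[1:]):
        a, b = read_coords(prev), read_coords(curr)
        max_shift = max(math.dist(p, q) for p, q in zip(a, b))
        print("%s -> %s: max atomic shift %.4f A" % (prev, curr, max_shift))
    # Convergence: the maximum shift drops to the level of rounding error
    # and stops changing systematically from one cycle to the next.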

As a final point I would note that cell parameters frequently vary by
several % between crystals, even from the same batch, due to unavoidable
variations in rates of freezing etc., so what you think are independent
test-set reflections may in reality overlap significantly in reciprocal
space with working-set reflections from another dataset anyway!
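
To put a rough number on that, here is a small worked example with
made-up orthorhombic cell dimensions (my illustration, not real data):
it compares the shift in d-spacing of one reflection caused by a 2% cell
change with the distance in d to the neighbouring reflection along the
same axis.

    import math

    def d_spacing(h, k, l, a, b, c):
        # Resolution (d, in Angstrom) of reflection hkl for an orthorhombic
        # cell with edges a, b, c; real cells need the full metric tensor.
        return 1.0 / math.sqrt((h / a) ** 2 + (k / b) ** 2 + (l / c) ** 2)

    cell1 = (60.0, 70.0, 80.0)              # made-up cell
    cell2 = tuple(x * 1.02 for x in cell1)  # the same cell, 2% larger

    hkl = (30, 0, 0)                        # a ~2 Angstrom reflection
    d1 = d_spacing(*hkl, *cell1)
    d2 = d_spacing(*hkl, *cell2)
    d_next = d_spacing(hkl[0] + 1, hkl[1], hkl[2], *cell1)

    print("d(cell 1) = %.3f A, d(cell 2) = %.3f A, shift = %.3f A"
          % (d1, d2, abs(d1 - d2)))
    print("distance in d to the neighbouring reflection along h: %.3f A"
          % abs(d1 - d_next))

With these particular numbers the shift is a sizeable fraction of the
spacing to the neighbouring reflection, which is the sort of overlap
described above.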

-- Ian

