One other question (for both key issues described): what exactly is the
problem the committees are aiming to address?
Because I can't help noticing that Tom's email did not spark an on-list
discussion; do people actually feel either of these is an issue? Isn't the more
burning problem how best to use the 10,000s of structures we're churning
out? In the grand scheme of things, they're pretty inaccurate anyway:
static snapshots of crippled fragments of proteins far from their many
interaction partners. So do we need 100,000s of structures instead? If
so, we may soon (collectively) stop being able to care about the
original dataset or how to reproduce analysis number 2238 from 2 years ago.
(No, I'm not convinced this question is relevant only to structural
genomics.)
phx.
On 16/10/2011 19:38, Frank von Delft wrote:
On the deposition of raw data:
I recommend to the committee that before it convenes again, every
member should go collect some data on a beamline with a Pilatus
detector [feel free to join us at Diamond]. Because by the time any
recommendations actually emerge, most beamlines will probably have one of
those (or similar), we'll be generating more data than the LHC, and users
will be happy just to have it integrated, never mind worrying about its
fate.
That's not an endorsement, btw, just an observation/prediction.
phx.
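For a rough sense of the scale behind that prediction, assuming ballpark
figures for a Pilatus 6M class detector (on the order of 6 MB per
compressed frame at up to 25 Hz; both numbers are assumptions, not from
the thread):

    # Back-of-envelope sustained data rate, Pilatus 6M class detector.
    # Assumed figures: ~6 MB per compressed frame, 25 frames per second.
    frame_mb = 6.0
    rate_hz = 25.0
    mb_per_s = frame_mb * rate_hz              # ~150 MB/s sustained
    tb_per_day = mb_per_s * 86400 / 1e6        # ~13 TB/day running flat out
    print(f"{mb_per_s:.0f} MB/s, roughly {tb_per_day:.0f} TB/day per beamline")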
On 14/10/2011 23:56, Thomas C. Terwilliger wrote:
For those who have strong opinions on what data should be deposited...
The IUCr is just starting a serious discussion of this subject. Two
committees, the "Data Deposition Working Group", led by John Helliwell,
and the Commission on Biological Macromolecules (chaired by Xiao-Dong Su),
are working on this.
Two key issues are (1) the feasibility and importance of deposition of raw
images and (2) the deposition of sufficient information to fully reproduce
the crystallographic analysis.
I am on both committees and would be happy to hear your ideas (off-list).
I am sure the other members of the committees would welcome your thoughts
as well.
-Tom T
Tom Terwilliger
terwilli...@lanl.gov
This is a follow-up (or a digression) to James's comparison of the test
set to missing reflections. I have also heard this issue mentioned before
but was always too lazy to actually pursue it.
So.
The role of the test set is to prevent overfitting. Let's say I have the
final model, I have monitored the Rfree at every step of the way, and I
can conclude that there is no overfitting. Should I then do the final
refinement against the complete dataset?
IMCO, I absolutely should. The test set reflections contain information,
and the "final" model is actually biased towards the working set.
Refining using all the data can only improve the accuracy of the model,
if only slightly.
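To make the work/free bookkeeping concrete, here is a minimal sketch of
how the split enters the R-factor calculation; the synthetic amplitudes
and the ~5% test-set fraction are illustrative assumptions, not anyone's
real data:

    import numpy as np

    def r_factor(f_obs, f_calc):
        # Standard crystallographic R-factor: sum|Fo - Fc| / sum(Fo).
        return np.sum(np.abs(f_obs - f_calc)) / np.sum(f_obs)

    rng = np.random.default_rng(0)
    n = 10000
    f_obs = rng.uniform(10.0, 100.0, size=n)       # synthetic "observed" amplitudes
    f_calc = f_obs + rng.normal(0.0, 5.0, size=n)  # a model with some error

    test = rng.random(n) < 0.05                    # ~5% random test set, held out
    r_work = r_factor(f_obs[~test], f_calc[~test])
    r_free = r_factor(f_obs[test], f_calc[test])

    # Overfitting shows up as Rwork dropping while Rfree stalls or climbs.
    print(f"Rwork = {r_work:.3f}  Rfree = {r_free:.3f}  gap = {r_free - r_work:.3f}")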
The second question is practical. Let's say I want to deposit the results
of the refinement against the full dataset as my final model. Should I
omit the Rfree and instead insert a remark explaining the situation? If I
report the Rfree from before the test set was folded back in, it is
certain that every validation tool will report a mismatch. The PDB does
not seem to have a mechanism to deal with this.
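For what it's worth, the deposited mmCIF does carry separate items for the
work and free R values, plus a free-text _refine.details field where such
a remark could live; the values below are placeholders, not from any real
entry:

    _refine.ls_R_factor_R_work        0.180
    _refine.ls_R_factor_R_free        0.210
    _refine.ls_percent_reflns_R_free  5.00
    _refine.details
    ; Rfree was monitored throughout refinement; the final cycles were run
    against the complete dataset, so the tabulated Rfree predates that last
    pass and is not expected to be reproduced by revalidation.
    ;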
Cheers,
Ed.
--
Oh, suddenly throwing a giraffe into a volcano to make water is crazy?
    Julian, King of Lemurs