This is probably an idea that has already been tried (or discarded as
unsuitable for reasons that don't occur to me at the moment) - but why
not start with good crystals (such as lysozyme) and deliberately make
them worse? Exactly how would depend on what kind of methods you were
trying to develop - but I'd imagine "titrating in" organic
solvents/detergents would be able to turn a well-diffracting crystal
into a poor one (with a known, or at least knowable, answer).
Deliberately causing radiation damage, or using known poor
cryo-conditions, would also work - though the type of "badness" in the
data would probably be different.

I don't think you'd be able to tune solvent content or the number of
anomalous scatterers by damaging good crystals. This would also require
a decent number of crystals (but lysozyme is reasonably inexpensive).
Still, making good crystals from bad ones is difficult - making bad
ones from good ones shouldn't be.

Any ideas why this wouldn't work (or citations where it did)?

Pete

James Holton wrote:
I go to a lot of methods meetings, and it pains me to see the most
brilliant minds in the field starved for "interesting" data sets. The
problem is that it is very easy to get people to send you data that is
so bad that it can't be solved by any software imaginable (I've got
piles of that!). As a developer, what you really need is a "right
answer" so you can come up with better metrics for how close you are to
it. Ironically, bad, unsolvable data that is connected to a right
answer (aka a PDB ID) is very difficult to obtain. The explanations
usually involve protestations about being in the middle of writing up
the paper, the student graduated and we don't understand how he/she
labeled the tapes, or the RAID crashed and we lost it all, etc. etc.

Then again, just finding someone who has a data set with the kind of
problem you are interested in is a lot of work! So is figuring out
which problem affects the most people, and is therefore "interesting".
Is this not exactly the kind of thing that publicly-accessible
centralized scientific databases are created to address?

-James Holton
MAD Scientist
On 10/16/2011 11:38 AM, Frank von Delft wrote:
On the deposition of raw data:
I recommend to the committee that before it convenes again, every
member should go collect some data on a beamline with a Pilatus
detector [feel free to join us at Diamond]. Because by the time any
recommendations actually emerge, most beamlines will probably have one
of those (or something similar); we'll be generating more data than the
LHC, and users will be happy just to have it integrated, never mind
worrying about its fate.
That's not an endorsement, btw, just an observation/prediction.
phx.
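[To put Frank's prediction in rough numbers, here is a back-of-envelope
sketch. The frame rate, image size, duty cycle, and LHC figure are
assumptions chosen for illustration, not measured or quoted values.]

# Rough estimate of raw-image output from one Pilatus-class beamline.
# All inputs are assumed round numbers, not vendor or facility specs.

frame_rate_hz = 25    # assumed shutterless collection rate
image_size_mb = 6     # assumed compressed image size, MB
duty_cycle = 0.25     # assumed fraction of beamtime spent collecting

mb_per_day = frame_rate_hz * image_size_mb * duty_cycle * 86400
tb_per_year = mb_per_day * 365 / 1e6

print(f"~{mb_per_day/1e6:.1f} TB/day, ~{tb_per_year:.0f} TB/year per beamline")
# With dozens of such beamlines worldwide, petabytes per year is
# plausible - the same order as the ~15 PB/year the LHC experiments
# were recording around 2011 (again, an assumed figure).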
On 14/10/2011 23:56, Thomas C. Terwilliger wrote:
For those who have strong opinions on what data should be deposited...
The IUCr is just starting a serious discussion of this subject. Two
committees, the "Data Deposition Working Group", led by John Helliwell,
and the Commission on Biological Macromolecules (chaired by Xiao-Dong
Su), are working on this.

Two key issues are (1) the feasibility and importance of depositing raw
images and (2) deposition of sufficient information to fully reproduce
the crystallographic analysis.

I am on both committees and would be happy to hear your ideas
(off-list). I am sure the other members of the committees would welcome
your thoughts as well.
-Tom T
Tom Terwilliger
terwilli...@lanl.gov
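[As a purely illustrative strawman for Tom's issue (2), the information
needed to reproduce an analysis might be bundled roughly like this.
Every field name and value below is invented for illustration; this is
not an existing or proposed deposition schema.]

# Hypothetical sketch of the metadata a "fully reproducible" deposition
# might carry alongside the raw images. All names/values are invented.
reproducibility_record = {
    "raw_images": ["image_0001.cbf", "..."],   # the deposited frames
    "beamline": {"detector": "PILATUS 6M", "wavelength_A": 0.9795},
    "processing": {"program": "XDS", "space_group": "P43212",
                   "unit_cell": [79.1, 79.1, 38.0, 90.0, 90.0, 90.0]},
    "refinement": {"program": "REFMAC5", "free_set_seed": 13,
                   "free_fraction": 0.05},
    "result": {"pdb_id": "xxxx", "r_work": 0.18, "r_free": 0.21},
}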
This is a follow-up (or a digression) to James's comparison of the test
set to missing reflections. I have also heard this issue mentioned
before, but was always too lazy to actually pursue it.

So. The role of the test set is to prevent overfitting. Let's say I
have the final model, I monitored the Rfree every step of the way, and
I can conclude that there is no overfitting. Should I do the final
refinement against the complete dataset?

IMCO, I absolutely should. The test set reflections contain
information, and the "final" model is actually biased towards the
working set. Refining against all the data can only improve the
accuracy of the model, if only slightly.
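[For concreteness, a minimal sketch of the work/free bookkeeping behind
Rwork and Rfree, using toy invented amplitudes rather than real
structure factors; real refinement programs handle this internally.]

import numpy as np

rng = np.random.default_rng(0)

# Toy data: observed and model structure-factor amplitudes (invented).
f_obs = rng.uniform(10, 100, size=10000)
f_calc = f_obs * rng.normal(1.0, 0.1, size=f_obs.size)

# Flag ~5% of reflections as the free (test) set, excluded from refinement.
free_flags = rng.random(f_obs.size) < 0.05

def r_factor(fo, fc):
    # Standard crystallographic R = sum|Fo - Fc| / sum(Fo)
    return np.sum(np.abs(fo - fc)) / np.sum(fo)

r_work = r_factor(f_obs[~free_flags], f_calc[~free_flags])
r_free = r_factor(f_obs[free_flags], f_calc[free_flags])
print(f"Rwork = {r_work:.3f}, Rfree = {r_free:.3f}")
# Overfitting shows up as Rfree rising well above Rwork; Ed's point is
# that once that gap is stable, the free reflections still carry signal
# that a final all-data refinement could use.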
The second question is practical. Let's say I want to deposit the
result of refinement against the full dataset as my final model. Should
I omit the Rfree and instead insert a remark explaining the situation?
If I report the Rfree from the last refinement that still excluded the
test set, it is certain that every validation tool will report a
mismatch. The PDB does not seem to have a mechanism to deal with this.
Cheers,
Ed.
--
Oh, suddenly throwing a giraffe into a volcano to make water is crazy?
Julian, King of Lemurs