On the deposition of raw data:
Committees, wherever you are!
I guess that abstaining from storing raw diffraction data in the form of
frames is not very wise, whatever their size. I regret that for some PDB entries I do not have
diffraction data (needless to say, the authors
did not submit even structure factors).
I maintain a bit more than 1.2 TB of diffraction data collected since 2001, and it all
nicely resides on two small WD pocket disks (needless to say, I have several
copies of the data).
Generally I have all the data I ever collected, going back to the beginning of the '80s, but
I am too lazy to re-read the old DAT tapes.
Sure, running a Pilatus for an Olympic record, we will go home with several TB of
data after 24 h (will we?).
But this is an abuse of the system. The final goal is structure
determination, and there are far fewer good crystals in the whole world in a year
than one Pilatus could collect in a week.
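
To put a rough number on "several TB in 24 h": a back-of-envelope sketch in Python, where the frame size (~6 MB per compressed frame), frame rate (25 Hz) and duty cycle are my own assumptions for a Pilatus 6M of that era, not measured figures.

```python
# Back-of-envelope sketch: how much Pilatus data could pile up in 24 h.
# All numbers below are assumptions (roughly a Pilatus 6M, ca. 2011),
# not values taken from any beamline.

frame_size_mb = 6.0    # assumed compressed frame size, MB
frame_rate_hz = 25.0   # assumed maximum frame rate, Hz
duty_cycle = 0.1       # assumed fraction of 24 h actually spent collecting

seconds = 24 * 3600
frames = frame_rate_hz * seconds * duty_cycle
total_tb = frames * frame_size_mb / 1e6   # MB -> TB (decimal)

print(f"{frames:.0f} frames, roughly {total_tb:.1f} TB in 24 h")
# Even at a 10% duty cycle this gives ~1.3 TB; continuous collection
# would be an order of magnitude more, so "several TB" is easily reached.
```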
But to decide quickly whether the diffraction data from a Pilatus are good enough to
store, or even to measure, whatever the speed of data collection, good
data-processing
software is needed. I personally think that there is only one: the one.
Anyhow, I think that if an author wishes to publish a structure, and it is
important, and I am a reviewer, and it is going
to a prestigious journal, I will reprocess the data and check the route to
the final crystal structure solution from the beginning.
It is as in mathematics: if someone claims to have solved a long-standing problem
from the past, he will not get away from his envious colleagues, who
will drop everything and sit and check until they find a mistake.
What a pleasure!!!
And if there are no mistakes - chapeau!!!

FF
 
Dr Felix Frolow   
Professor of Structural Biology and Biotechnology
Department of Molecular Microbiology
and Biotechnology
Tel Aviv University 69978, Israel

Acta Crystallographica F, co-editor

e-mail: mbfro...@post.tau.ac.il
Tel:  ++972-3640-8723
Fax: ++972-3640-9407
Cellular: 0547 459 608

On Oct 16, 2011, at 20:38 , Frank von Delft wrote:

> On the deposition of raw data:
> 
> I recommend to the committee that before it convenes again, every member 
> should go collect some data on a beamline with a Pilatus detector [feel free 
> to join us at Diamond].  Because by the probable time any recommendations 
> actually emerge, most beamlines will have one of those (or similar), we'll be 
> generating more data than the LHC, and users will be happy just to have it 
> integrated, never mind worry about its fate.
> 
> That's not an endorsement, btw, just an observation/prediction.
> 
> phx.
> 
> 
> 
> 
> On 14/10/2011 23:56, Thomas C. Terwilliger wrote:
>> For those who have strong opinions on what data should be deposited...
>> 
>> The IUCR is just starting a serious discussion of this subject. Two
>> committees, the "Data Deposition Working Group", led by John Helliwell,
>> and the Commission on Biological Macromolecules (chaired by Xiao-Dong Su)
>> are working on this.
>> 
>> Two key issues are (1) feasibility and importance of deposition of raw
>> images and (2) deposition of sufficient information to fully reproduce the
>> crystallographic analysis.
>> 
>> I am on both committees and would be happy to hear your ideas (off-list).
>> I am sure the other members of the committees would welcome your thoughts
>> as well.
>> 
>> -Tom T
>> 
>> Tom Terwilliger
>> terwilli...@lanl.gov
>> 
>> 
>>>> This is a follow up (or a digression) to James comparing test set to
>>>> missing reflections.  I also heard this issue mentioned before but was
>>>> always too lazy to actually pursue it.
>>>> 
>>>> So.
>>>> 
>>>> The role of the test set is to prevent overfitting.  Let's say I have
>>>> the final model and I monitored the Rfree every step of the way and can
>>>> conclude that there is no overfitting.  Should I do the final refinement
>>>> against complete dataset?
>>>> 
>>>> IMCO, I absolutely should.  The test set reflections contain
>>>> information, and the "final" model is actually biased towards the
>>>> working set.  Refining using all the data can only improve the accuracy
>>>> of the model, if only slightly.
>>>> 
>>>> The second question is practical.  Let's say I want to deposit the
>>>> results of the refinement against the full dataset as my final model.
>>>> Should I not report the Rfree and instead insert a remark explaining the
>>>> situation?  If I report the Rfree prior to the test set removal, it is
>>>> certain that every validation tool will report a mismatch.  It does not
>>>> seem that the PDB has a mechanism to deal with this.
>>>> 
>>>> Cheers,
>>>> 
>>>> Ed.
>>>> 
>>>> 
>>>> 
>>>> --
>>>> Oh, suddenly throwing a giraffe into a volcano to make water is crazy?
>>>>                                                 Julian, King of Lemurs
>>>> 
