Re: [ccp4bb] IUCr committees, depositing images

2012-01-31 Thread Terwilliger, Thomas C
For those who may not have made it through all the CCP4bb postings in 
October-December 2011 on archiving raw images, I have posted a summary at the 
IUCr Diffraction Data Deposition Working Group forum page 
(http://forums.iucr.org/viewforum.php?f=21), in which I have attempted to list 
the unique points made during the discussion, along with links to some of the 
original postings.

All the best,
Tom T
Tom Terwilliger
terwilli...@lanl.gov


Re: [ccp4bb] IUCr committees, depositing images

2011-10-27 Thread Ashley Buckle
A few responses regarding TARDIS:

While it's true that TARDIS.edu.au is just an index to files hosted elsewhere, 
a new solution, MyTARDIS, has been developed as a repository that holds 
diffraction (and other kinds of) data. The idea is that facilities, labs or 
institutions take responsibility by hosting their own instance of MyTARDIS on 
their own storage, with the eventual view of having entries released publicly 
and indexed by TARDIS.edu.au.

An example of this is the MyTARDIS experiment released here:

http://mytardis.its.monash.edu.au/experiment/view/636/

Also accessible in the tardis.edu.au entry here:

http://hdl.handle.net/102.100.100/6984

Both point to the exact same file location.

We also have a mechanism whereby MyTARDIS has been installed at the Australian 
Synchrotron and linked to the beamlines there to store diffraction data locally 
and automatically. Where applicable, these data are then routed to MyTARDIS at a 
user's home university. From there the workflow to publication takes a similar 
form to that shown above.

It's open source licensed and deployable today:
http://code.google.com/p/mytardis/

With this as the installation documentation:
http://mytardis.readthedocs.org/en/latest/install.html

And here is how new instrument data can reach MyTARDIS:
http://packages.python.org/MyTARDIS/ingesting.html#mets
http://packages.python.org/MyTARDIS/ref/mets-format.html
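
As a flavour of what ingestion involves - this is only a schematic sketch, not 
the exact schema documented at the links above - a harvester essentially wraps 
the frame listing and experiment metadata in a METS-style XML manifest and hands 
it to the MyTARDIS instance (file names below are placeholders):

    import xml.etree.ElementTree as ET

    # Schematic METS-style manifest for one dataset; the real MyTARDIS ingest
    # format (see the links above) carries far richer experiment metadata.
    METS = "http://www.loc.gov/METS/"
    XLINK = "http://www.w3.org/1999/xlink"
    ET.register_namespace("", METS)
    ET.register_namespace("xlink", XLINK)

    mets = ET.Element(f"{{{METS}}}mets", {"LABEL": "Example diffraction dataset"})
    file_grp = ET.SubElement(ET.SubElement(mets, f"{{{METS}}}fileSec"),
                             f"{{{METS}}}fileGrp", {"USE": "original"})
    for i in range(1, 4):  # three placeholder frames with made-up file names
        f = ET.SubElement(file_grp, f"{{{METS}}}file", {"ID": f"frame_{i:04d}"})
        ET.SubElement(f, f"{{{METS}}}FLocat",
                      {"LOCTYPE": "URL", f"{{{XLINK}}}href": f"file:///data/xtal1_{i:04d}.img"})
    ET.SubElement(ET.SubElement(mets, f"{{{METS}}}structMap"),
                  f"{{{METS}}}div", {"LABEL": "dataset"})
    ET.ElementTree(mets).write("manifest.xml", xml_declaration=True, encoding="utf-8")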

This function has seen deployments at the Australian neutron facility ANSTO and 
in Microscopy/Microanalysis labs, too.

In response to James Holton's suggestion of getting the PDB to accept TARDIS ids, 
I spoke with Kim Henrick about 4 years ago about whether it would be possible 
simply to put a TARDIS id/URL/handle in a REMARK, so that the coordinates could 
be associated with the raw data that produced them. Kim was very positive that 
this should happen, and told me that it would be discussed at a wwPDB advisory 
board meeting, but I never heard anything back. In the meantime we have managed 
to get TARDIS URLs into several papers, usually in the supplementary 
information. The editors/publishers didn't really object, but I don't think 
they took much notice either!
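
(Purely as an illustration of how lightweight such a link could be - the REMARK 
number below is hypothetical, not an assigned wwPDB record - the handle above 
could travel with the coordinates as something like:

    REMARK 777 RAW DIFFRACTION IMAGES:
    REMARK 777   http://hdl.handle.net/102.100.100/6984

Anything with a persistent resolver behind it would serve the same purpose.)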

TARDIS development is an active area of work in my group, and now that we are 
contributing to the IUCr efforts (thanks to John Helliwell) I'm optimistic 
that things are moving forward. We discussed some of the points raised here in 
the Acta Cryst TARDIS paper a couple of years ago 
(http://scripts.iucr.org/cgi-bin/paper?S0907444908015540), particularly the 
usefulness of raw data availability for methods development, so I'm really 
happy to see continued active discussion in this area!

Cheers
Ashley Buckle 
Steve Androulakis

On 27/10/2011, at 6:25 PM, Ethan Merritt wrote:

> On Wednesday, 26 October 2011, James Holton wrote:
>> Of course, if we are willing to relax the requirement of validation and 
>> curation, this could be a whole lot easier.  In fact, there is already 
>> an image deposition infrastructure in place!  It is called TARDIS:
>> 
>> http://tardis.edu.au/
>> 
>> Perhaps the best way forward would be for "the PDB" to introduce a new 
>> field for one or more TARDIS ids in a PDB deposition?  It would be 
>> optional at the first, but no doubt required in the future.
> 
> As I understand it, TARDIS is just an indexing system.
> You're still on your own to actually store the images.
> The TARDIS setup will cough up the information that your images
> are stored on a machine named pony.lbl.gov, or at least they were 
> at the time you registered them for indexing, where they supposedly
> can be retrieved using tag #XYZ.
> 
> But so far as I know you would still be at the mercy of pony.lbl.gov
> going up in flames, or being renamed twinkle.lbl.gov, or being 
> decommissioned when the next budget crunch hits.
> 
> For that matter, I don't know what the provision or expectation
> is that anyone outside your institution could see or access the
> machine holding the set of files that TARDIS told them were there.
> 
> If I've got this wrong, perhaps Ashley Buckle can chime in with
> an update on TARDIS.
> 
>   Ethan


Associate Professor Ashley M Buckle
NHMRC Senior Research Fellow
The Department of Biochemistry and Molecular Biology,
Faculty of Medicine
Building 77
Monash University, Clayton, Vic 3800
Australia

http://www.med.monash.edu.au/biochem/staff/abuckle.html
iChat: ashley.buc...@me.com
skype: ashley.buckle
Tel: (613) 9902 9313 (office)
Fax : (613) 9902 9500
mobile: 0430 913031



Re: [ccp4bb] IUCr committees, depositing images

2011-10-27 Thread Ed Pozharski
On Thu, 2011-10-27 at 15:36 +0200, George M. Sheldrick wrote:
> In non-continuous mode, the goniometer has
> to accelerate at the start of a frame and decelerate at the end, then
> wait for
> the frame to be read.

Someone should be able to confirm this, but I was under the impression that
at synchrotrons the acceleration/deceleration occurs outside the
phi_start-phi_end range, to allow for constant speed when the shutter
opens.

 
-- 
"I'd jump in myself, if I weren't so good at whistling."
   Julian, King of Lemurs


Re: [ccp4bb] IUCr committees, depositing images

2011-10-27 Thread George M. Sheldrick
There are two further complications. In non-continuous mode, the goniometer has
to accelerate at the start of a frame and decelerate at the end, then wait for
the frame to be read. So even if the shutter always functions perfectly, my 
intuition tells me that it must be more accurate to rotate at constant speed 
(though my intuition is often wrong). Secondly, in continuous mode, usually 
not all pixels are read out at precisely the same time.

George

On Thu, Oct 27, 2011 at 11:05:01AM +0100, David Waterman wrote:
> I agree with Colin here. Framing is simply a process of sampling an original
> signal at some 'frequency' (related to the phi-width of each frame). At some
> point, delta phi is small enough that the original signal is oversampled, and
> can be reconstructed _within the bounds of noise_. Beyond that point I see no
> advantage to sampling finer - and certainly not going to the limit of
> representing your data in some unframed continuous readout form.
> 
> Perhaps I am missing something, and I realise this is another OT diversion 
> from
> this most fruitful of threads.
> 
> Cheers
> 
> -- David
> 
> 
> On 27 October 2011 08:55, Martin M. Ripoll  wrote:
> 
> Dear Colin,
> 
> I think you understood perfectly what George was saying regarding the loss
> of information, but he will probably answer better than I.
> 
> In any case, and for the ones that did not understand it, what George was
> telling is related to the fact that a data collection made with a
> continuous
> crystal rotation contains more information than when this information is
> transformed into frames... The loss of information that we are referring 
> to
> has the same meaning as when we calculate electron density maps with
> different grid sizes. The finer the grid, the greater is the information 
> on
> the map.
> 
> But you are right saying that the shorter the interval between produced
> frames, the lower the loss of information. However, the procedure that you
> are suggesting should have some limits... otherwise the amount of
> information would grow dramatically.
> 
> All the best,
> Martin
> 
> Dr. Martin Martinez-Ripoll
> Research Professor
> xmar...@iqfr.csic.es
> Department of Crystallography & Structural Biology
> www.xtal.iqfr.csic.es
> Telf.: +34 917459550
> Consejo Superior de Investigaciones Científicas
> Spanish National Research Council
> www.csic.es
> 
> 
> -----Original Message-----
> From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Colin
> Nave
> Sent: Thursday, 27 October 2011 0:49
> To: CCP4BB@JISCMAIL.AC.UK
> Subject: Re: [ccp4bb] IUCr committees, depositing images
> 
> Dear George, Martin
> 
> I don't understand the point that one is throwing away information by
> storing in frames. If the frames have sufficiently fine intervals (given 
> by
> some sampling theorem consideration) I can't see how one loses 
> information.
> Can one of you explain?
> Thanks
>     Colin
> 
> 
> 
> -Original Message-
> From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of
> Martin
> M. Ripoll
> Sent: 26 October 2011 22:50
> To: ccp4bb
> Subject: Re: [ccp4bb] IUCr committees, depositing images
> 
> Dear George, dear all,
> 
> I was just trying to summarize my point of view regarding this important
> issue when I got your e-mail, that reflects exactly my own opinion!
> 
> Martin
> 
> Dr. Martin Martinez-Ripoll
> Research Professor
> xmar...@iqfr.csic.es
> Department of Crystallography & Structural Biology
> www.xtal.iqfr.csic.es
> Telf.: +34 917459550
> Consejo Superior de Investigaciones Científicas
> Spanish National Research Council
> www.csic.es
> 
> 
> 
> -----Original Message-----
> From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of George
> M. Sheldrick
> Sent: Wednesday, 26 October 2011 11:52
> To: CCP4BB@JISCMAIL.AC.UK
> Subject: Re: [ccp4bb] IUCr committees, depositing images
> 
> This raises an important point. The new continuous readout detectors such
> as
> the
> Pilatus for beamlines or the Bruker Photon for in-house use enable the
> crystal to
> be rotated at constant velocity, eliminating the mechanical errors
> associated with
> 'stop and go' data collection. Storing their data in 'frames' is an
> artificial construction that is currently required for the established data
> integration programs but is in fact throwing away information. Maybe in 10
> years time 'frames' will be as obsolete as punched cards!

Re: [ccp4bb] IUCr committees, depositing images

2011-10-27 Thread John R Helliwell
cted there, which they archive
> anyway, and into making those that are eventually associated with a
> publication accessible online in a reasonably interactive manner; this is
> not a description of the void set, as both the Australian synchrotron and
> Diamond expressed interest;
>
>     2. a fraction of the creators of MX pdb entries willing to have the
> images associated to a pdb entry made available to the public in this way;
>
>     3. a participation of the PDB and the IUCr in the supervision, or even
> coordination, of this pilot project, with a view to examining the logistics
> involved and coming up with realistic, "evidence-based" estimates of costs
> and benefits, as the basis for subsequently making the hard decisions about
> gathering the necessary resources to implement the best form of archiving
> (central vs. federated), and deciding whether to make the deposition of raw
> data compulsory or not.
>
>     The expression "evidence-based" is crucial here: without starting to do
> something on a small scale with a subset of willing partners, we will
> continue to discuss things "in abstracto" for ever, as has been happening in
> some of the contributions to this thread. The true costs will only be known
> by actually doing something, and this is even more true about the benefits,
> i.e. some developers showing that re-exploiting the raw images with new
> processing approaches can deliver better results than those originally
> deposited. With such evidence, getting the extra support needed would be
> much more likely than if it was applied for in the present situation, where
> only opinions are available. So the "let's do it" argument seems to me
> irresistible.
>
>     I hope, of course, that I haven't misinterpreted nor overinterpreted
> what was discussed in Madrid. It was a pity that so many of the influential
> people with various commission-related titles were only able to stay for the
> beginning of that meeting, that was quite formal and lacking in any form of
> ground-breaking discussion. I can guarantee that I was *increasingly awake*
> as the meeting proceeded, and that the principles I outlined above were
> actually discussed, even if some of them might have been only partially
> recorded. I do dream about these matters at night, but I don't think that I
> hallucinate (yet).
>
>
>     I hope, finally, that this goes some way towards providing the
> clarification you wanted.
>
>
>     With best wishes,
>
>          Gerard.
>
> --
> On Wed, Oct 26, 2011 at 03:10:21PM +, John Helliwell wrote:
>> Dear Gerard,
>> Thankyou for your CCP4bb detailed message of today, 
>>
>> But I have to come off line re :-
>> The IUCr Forum already has an outline of a feasibility study that would cost 
>> only a small amount of
>> joined-up thinking and book-keeping around already stored information, so 
>> let us not use the inaccessibility of federal or EC funding as a scarecrow 
>> to justify not even trying what is proposed there.
>>
>> and especially:-
>> already stored information
>>
>> Where exactly is this existing store to which you refer?
>>
>>
>> The PDB, it seems, is not going to take on this role for the MX community, so 
>> that leaves us with convincing the Journals. I have started with IUCr Journals 
>> and submitted a Business Plan, already a month ago.
>> It has not instantly led to 'let's do it', but I remain hopeful that that 
>> doesn't mean 'No'.
>>
>> I copy in Tom and Brian.
>>
>> In anticipation of your clarification.
>>
>> Thankyou.
>> Yours sincerely,
>> John
>> Prof John R Helliwell DSc
> ===
>
> (Below, the message John was referring to in his request for clarification)
>
> --
> Date:         Wed, 26 Oct 2011 15:29:32 +0100
> From: Gerard Bricogne 
> Subject: Re: [ccp4bb] IUCr committees, depositing images
> To: CCP4BB@JISCMAIL.AC.UK
>
> Dear John and colleagues,
>
> There seems to be a set of centrifugal forces at play within this thread
> that are distracting us from a sensible path of concrete action by throwing
> decoys in every conceivable direction, e.g.
>
>     * "Pilatus detectors spew out such a volume of data that we can't
> possibly archive it all" - does that mean that because the 5th generation of
> Dectris detectors will be able to write one billion images a second and
> catch every scattered photon individually, we should not try and archive
> more information than is given by the current merged structure factor data?

Re: [ccp4bb] IUCr committees, depositing images

2011-10-27 Thread Francis E Reyes
This discussion of image deposition and archival has certainly been 
illuminating. While there have been clear directions on how the process would 
actually work, I am becoming increasingly curious as to why it should be done 
(outside of threatening non-publication, social acceptance, funding, etc.).  I 
think a huge challenge is to convince users that filling out the metadata 
with as much detail as possible is a worthwhile endeavor.


Specifically, are there documented cases where reinterpretation of an MX 
dataset  (the raw images) has resulted in new biological insight into the 
system being modeled?


If possible I'd like to compile a list (off list) for my own education (though 
I can certainly report my results to the BB if the interest is there).

(OK, I have a selfish reason as well: I will be considering the reanalysis of 
some really old datasets that proved troublesome for a colleague, and am looking 
for small hints of wisdom.)


Some that come to mind for me:

Meyer et al. Structure of the 12-subunit RNA polymerase II refined with the aid 
of anomalous diffraction data. J Biol Chem (2009) vol. 284 (19) pp. 12933-9

Wang. Inclusion of weak high-resolution X-ray data for improvement of a group 
II intron structure. Acta Crystallogr D Biol Crystallogr (2010) vol. 66 (Pt 9) 
pp. 988-1000

F



-
Francis E. Reyes M.Sc.
215 UCB
University of Colorado at Boulder


Re: [ccp4bb] IUCr committees, depositing images

2011-10-27 Thread Martin M. Ripoll
Dear David,

As you have probably read in my previous message to Colin, I also agree with
him.

I just wanted to clarify (and obviously not for him) what George said. It was
just a comment, not a lesson! Sorry if my message was misunderstood!

Martin



Dr. Martin Martinez-Ripoll
Research Professor
xmar...@iqfr.csic.es
Department of Crystallography & Structural Biology
www.xtal.iqfr.csic.es
Telf.: +34 917459550
Consejo Superior de Investigaciones Científicas
Spanish National Research Council
www.csic.es

 

From: David Waterman [mailto:dgwater...@gmail.com]
Sent: Thursday, 27 October 2011 12:05
To: Martin M. Ripoll
CC: CCP4BB@jiscmail.ac.uk
Subject: Re: [ccp4bb] IUCr committees, depositing images

 

I agree with Colin here. Framing is simply a process of sampling an original
signal at some 'frequency' (related to the phi-width of each frame). At some
point, delta phi is small enough that the original signal is oversampled,
and can be reconstructed _within the bounds of noise_. Beyond that point I
see no advantage to sampling finer - and certainly not going to the limit of
representing your data in some unframed continuous readout form.

 

Perhaps I am missing something, and I realise this is another OT diversion
from this most fruitful of threads.

 

Cheers

-- David



On 27 October 2011 08:55, Martin M. Ripoll  wrote:

Dear Colin,

I think you understood perfectly what George was saying regarding the loss
of information, but he will probably answer better than I.

In any case, and for the ones that did not understand it, what George was
telling is related to the fact that a data collection made with a continuous
crystal rotation contains more information than when this information is
transformed into frames... The loss of information that we are referring to
has the same meaning as when we calculate electron density maps with
different grid sizes. The finer the grid, the greater is the information on
the map.

But you are right saying that the shorter the interval between produced
frames, the lower the loss of information. However, the procedure that you
are suggesting should have some limits... otherwise the amount of
information would grow dramatically.

All the best,

Martin

Dr. Martin Martinez-Ripoll
Research Professor
xmar...@iqfr.csic.es
Department of Crystallography & Structural Biology
www.xtal.iqfr.csic.es
Telf.: +34 917459550  
Consejo Superior de Investigaciones Científicas
Spanish National Research Council
www.csic.es


-----Original Message-----
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Colin
Nave
Sent: Thursday, 27 October 2011 0:49
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] IUCr committees, depositing images

Dear George, Martin

I don't understand the point that one is throwing away information by
storing in frames. If the frames have sufficiently fine intervals (given by
some sampling theorem consideration) I can't see how one loses information.
Can one of you explain?
Thanks
Colin



-Original Message-
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Martin
M. Ripoll
Sent: 26 October 2011 22:50
To: ccp4bb
Subject: Re: [ccp4bb] IUCr committees, depositing images

Dear George, dear all,

I was just trying to summarize my point of view regarding this important
issue when I got your e-mail, that reflects exactly my own opinion!

Martin

Dr. Martin Martinez-Ripoll
Research Professor
xmar...@iqfr.csic.es
Department of Crystallography & Structural Biology
www.xtal.iqfr.csic.es
Telf.: +34 917459550  
Consejo Superior de Investigaciones Científicas
Spanish National Research Council
www.csic.es



-----Original Message-----
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of George
M. Sheldrick
Sent: Wednesday, 26 October 2011 11:52
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] IUCr committees, depositing images

This raises an important point. The new continuous readout detectors such as
the
Pilatus for beamlines or the Bruker Photon for in-house use enable the
crystal to
be rotated at constant velocity, eliminating the mechanical errors
associated with
'stop and go' data collection. Storing their data in 'frames' is an
artificial
construction that is currently required for the established data integration
programs but is in fact throwing away information. Maybe in 10 years time
'frames'
will be as obsolete as punched cards!

George

On Wed, Oct 26, 2011 at 09:39:40AM +0100, Graeme Winter wrote:
> Hi James,
>
> Just to pick up on your point about the Pilatus detectors. Yesterday
> in 2 hours of giving a beamline a workout (admittedly with Thaumatin)

Re: [ccp4bb] IUCr committees, depositing images

2011-10-27 Thread Gerard Bricogne
ove were
actually discussed, even if some of them might have been only partially
recorded. I do dream about these matters at night, but I don't think that I
hallucinate (yet). 


 I hope, finally, that this goes some way towards providing the
clarification you wanted.


 With best wishes,
 
  Gerard.

--
On Wed, Oct 26, 2011 at 03:10:21PM +, John Helliwell wrote:
> Dear Gerard,
> Thankyou for your CCP4bb detailed message of today, 
> 
> But I have to come off line re :-
> The IUCr Forum already has an outline of a feasibility study that would cost 
> only a small amount of
> joined-up thinking and book-keeping around already stored information, so let 
> us not use the inaccessibility of federal or EC funding as a scarecrow to 
> justify not even trying what is proposed there.
> 
> and especially:-
> already stored information
> 
> Where exactly is this existing store to which you refer?
> 
> 
> The PDB, it seems, is not going to take on this role for the MX community, so 
> that leaves us with convincing the Journals. I have started with IUCr Journals and 
> submitted a Business Plan, already a month ago. 
> It has not instantly led to 'let's do it', but I remain hopeful that that 
> doesn't mean 'No'.
> 
> I copy in Tom and Brian.
> 
> In anticipation of your clarification.
> 
> Thankyou.
> Yours sincerely,
> John
> Prof John R Helliwell DSc 
===================

(Below, the message John was referring to in his request for clarification)

--
Date: Wed, 26 Oct 2011 15:29:32 +0100
From: Gerard Bricogne 
Subject: Re: [ccp4bb] IUCr committees, depositing images
To: CCP4BB@JISCMAIL.AC.UK

Dear John and colleagues,

 There seems to be a set of centrifugal forces at play within this thread
that are distracting us from a sensible path of concrete action by throwing
decoys in every conceivable direction, e.g.

 * "Pilatus detectors spew out such a volume of data that we can't
possibly archive it all" - does that mean that because the 5th generation of
Dectris detectors will be able to write one billion images a second and
catch every scattered photon individually, we should not try and archive
more information than is given by the current merged structure factor data?
That seems a complete failure of reasoning to me: there must be a sensible
form of raw data archiving that would stand between those two extremes and
would retain much more information than the current merged data but would
step back from the enormous degree of oversampling of the raw diffraction
pattern that the Pilatus and its successors are capable of.

 * "It is all going to cost an awful lot of money, therefore we need a
team of grant writers to raise its hand and volunteer to apply for resources
from one or more funding agencies" - there again there is an avoidance of
the feasible by invocation of the impossible. The IUCr Forum already has an
outline of a feasibility study that would cost only a small amount of
joined-up thinking and book-keeping around already stored information, so
let us not use the inaccessibility of federal or EC funding as a scarecrow
to justify not even trying what is proposed there. And the idea that someone
needs to decide to stake his/her career on this undertaking seems totally
overblown.

 Several people have already pointed out that the sets of images that
would need to be archived would be a very small subset of the bulk of
datasets that are being held on the storage systems of synchrotron sources.
What needs to be done, as already described, is to be able to refer to those
few datasets that gave rise to the integrated data against which deposited
structures were refined (or, in some cases, solved by experimental phasing),
to give them special status in terms of making them visible and accessible
on-line at the same time as the pdb entry itself (rather than after the
statutory 2-5 years that would apply to all the rest, probably in a more
off-line form), and to maintain that accessibility "for ever", with a link
from the pdb entry and perhaps from the associated publication. It seems
unlikely that this would involve the mobilisation of such large resources as
to require either a human sacrifice (of the poor person whose life would be
staked on this gamble) or writing a grant application, with the indefinite
postponement of action and the loss of motivation this would imply.

 Coming back to the more technical issue of bloated datasets, it is a
scientific problem that must be amenable to rational analysis to decide on a
sensible form of compression of overly-verbose sets of thin-sliced, perhaps
low-exposure images that would already retain a large fraction, if not all,
of the extra information on which we would wish future improved versions of
processing programs to cu
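
One crude candidate, purely as an illustration of the kind of scheme that could
be analysed (a sketch of my own, not a description of any existing program;
block size and threshold are arbitrary): keep exactly the pixels that stand
well above the local background, and store the smooth background only as a
coarsely binned average.

    import numpy as np

    def compress(image, block=16, nsigma=4.0):
        """Split an image (dimensions divisible by 'block') into a coarse
        background map plus the exact values of pixels well above it."""
        h, w = image.shape
        coarse = image.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
        bg = np.repeat(np.repeat(coarse, block, axis=0), block, axis=1)
        strong = image > bg + nsigma * np.sqrt(np.maximum(bg, 1.0))  # crude Poisson cut
        return coarse, np.argwhere(strong), image[strong]

    def expand(coarse, idx, values, block=16):
        """Rebuild an approximate image: binned background everywhere, and the
        exact counts wherever a 'strong' pixel was kept."""
        out = np.repeat(np.repeat(coarse, block, axis=0), block, axis=1).copy()
        out[idx[:, 0], idx[:, 1]] = values
        return out

Whether something of this sort retains enough of the diffuse scatter and
weak-spot information for future processing methods is precisely the kind of
question an "evidence-based" pilot study could settle.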

Re: [ccp4bb] IUCr committees, depositing images

2011-10-27 Thread David Waterman
I agree with Colin here. Framing is simply a process of sampling an original
signal at some 'frequency' (related to the phi-width of each frame). At some
point, delta phi is small enough that the original signal is oversampled,
and can be reconstructed _within the bounds of noise_. Beyond that point I
see no advantage to sampling finer - and certainly not going to the limit of
representing your data in some unframed continuous readout form.
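
As a toy illustration of that sampling argument (a sketch with made-up numbers,
not output from any real processing package): integrating a smooth rocking
curve into phi frames of different widths changes how finely the profile is
sampled, but not the total signal recorded.

    import numpy as np

    def frame_counts(delta_phi, centre=0.5, fwhm=0.05, total=1.0e4, phi_range=1.0):
        """Integrate a Gaussian rocking curve (arbitrary units) into frames of
        width delta_phi degrees over a 1-degree scan."""
        sigma = fwhm / 2.3548
        fine = np.linspace(0.0, phi_range, 100001)
        profile = np.exp(-0.5 * ((fine - centre) / sigma) ** 2)
        profile *= total / np.trapz(profile, fine)   # normalise to 'total' counts
        edges = np.arange(0.0, phi_range + 1e-9, delta_phi)
        counts = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            sel = (fine >= lo) & (fine < hi)
            counts.append(np.trapz(profile[sel], fine[sel]))
        return np.array(counts)

    for dphi in (0.5, 0.1, 0.02):
        c = frame_counts(dphi)
        print(f"delta_phi = {dphi:4.2f} deg -> {len(c):3d} frames, total = {c.sum():7.1f}")

The summed intensity is essentially identical for every slicing; once delta phi
is comfortably below the reflection width, finer frames merely oversample a
profile that can already be reconstructed within the noise.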

Perhaps I am missing something, and I realise this is another OT diversion
from this most fruitful of threads.

Cheers

-- David


On 27 October 2011 08:55, Martin M. Ripoll  wrote:

> Dear Colin,
>
> I think you understood perfectly what George was saying regarding the loss
> of information, but he will probably answer better than I.
>
> In any case, and for the ones that did not understand it, what George was
> telling is related to the fact that a data collection made with a
> continuous
> crystal rotation contains more information than when this information is
> transformed into frames... The loss of information that we are referring to
> has the same meaning as when we calculate electron density maps with
> different grid sizes. The finer the grid, the greater is the information on
> the map.
>
> But you are right saying that the shorter the interval between produced
> frames, the lower the loss of information. However, the procedure that you
> are suggesting should have some limits... otherwise the amount of
> information would grow dramatically.
>
> All the best,
> Martin
> 
> Dr. Martin Martinez-Ripoll
> Research Professor
> xmar...@iqfr.csic.es
> Department of Crystallography & Structural Biology
> www.xtal.iqfr.csic.es
> Telf.: +34 917459550
> Consejo Superior de Investigaciones Científicas
> Spanish National Research Council
> www.csic.es
>
>
> -----Original Message-----
> From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Colin
> Nave
> Sent: Thursday, 27 October 2011 0:49
> To: CCP4BB@JISCMAIL.AC.UK
> Subject: Re: [ccp4bb] IUCr committees, depositing images
>
> Dear George, Martin
>
> I don't understand the point that one is throwing away information by
> storing in frames. If the frames have sufficiently fine intervals (given by
> some sampling theorem consideration) I can't see how one loses information.
> Can one of you explain?
> Thanks
> Colin
>
>
>
> -Original Message-----
> From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of
> Martin
> M. Ripoll
> Sent: 26 October 2011 22:50
> To: ccp4bb
> Subject: Re: [ccp4bb] IUCr committees, depositing images
>
> Dear George, dear all,
>
> I was just trying to summarize my point of view regarding this important
> issue when I got your e-mail, that reflects exactly my own opinion!
>
> Martin
> 
> Dr. Martin Martinez-Ripoll
> Research Professor
> xmar...@iqfr.csic.es
> Department of Crystallography & Structural Biology
> www.xtal.iqfr.csic.es
> Telf.: +34 917459550
> Consejo Superior de Investigaciones Científicas
> Spanish National Research Council
> www.csic.es
>
>
>
> -----Original Message-----
> From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of George
> M. Sheldrick
> Sent: Wednesday, 26 October 2011 11:52
> To: CCP4BB@JISCMAIL.AC.UK
> Subject: Re: [ccp4bb] IUCr committees, depositing images
>
> This raises an important point. The new continuous readout detectors such
> as
> the
> Pilatus for beamlines or the Bruker Photon for in-house use enable the
> crystal to
> be rotated at constant velocity, eliminating the mechanical errors
> associated with
> 'stop and go' data collection. Storing their data in 'frames' is an
> artificial
> construction that is currently required for the established data
> integration
> programs but is in fact throwing away information. Maybe in 10 years time
> 'frames'
> will be as obsolete as punched cards!
>
> George
>
> On Wed, Oct 26, 2011 at 09:39:40AM +0100, Graeme Winter wrote:
> > Hi James,
> >
> > Just to pick up on your point about the Pilatus detectors. Yesterday
> > in 2 hours of giving a beamline a workout (admittedly with Thaumatin)
> > we acquired 400 + GB of data*. Now I appreciate that this is not
> > really routine operation, but it does raise an interesting point - if
> > you have loaded a sample and centred it, collected test shots and
> > decided it's not that great, why not collect anyway as it may later
> > prove to be useful?
> >
> > Bzzt. 2 minutes or less later you have a full data set, and barely
> > even time to go get a cup of tea.

Re: [ccp4bb] IUCr committees, depositing images

2011-10-27 Thread Alun Ashton
Hi James,

1) thanks for sending an email in this thread longer than mine; I was 
"worried" I had killed it... ;)

2)  you say:
> Of course, if we are willing to relax the requirement of validation and
> curation, this could be a whole lot easier.  In fact, there is already
> an image deposition infrastructure in place!  It is called TARDIS:
> 
> http://tardis.edu.au/
> 
> Perhaps the best way forward would be for "the PDB" to introduce a new
> field for one or more TARDIS ids in a PDB deposition?  It would be
> optional at the first, but no doubt required in the future.

And Ethan has just beaten me to this point; from the TARDIS web site, first 
paragraph, last sentence:
"Storage was and remains federated, meaning the public index, TARDIS.edu.au 
contains no data itself and merely points to data stored in external labs and 
institutions."

So are the raw data even public? In that sense, I think what we're supposed to 
adopt in European facilities is ODIs, which link accordingly to the facility's 
own suppository, and how that is developed and where you put it is down to 
regional preferences. It can obviously be large, small, publicly accessible, 
shared between friends or private. To be effective it will need to be around 
for a long time.

But to build on the success of the model for the 'PDB', I would agree that someone 
should pay to have someone host this, and at least in the first instance make 
sure PDB structures have their raw data available and accessible to all. TARDIS 
is a step in the right direction as a catalogue; ICAT in the EU is a simpler DB; 
ISPyB would need a bit of work to get data sharable through its interfaces, but I 
think it is a richer data structure.

Standardisation of the data would also be good; it's amazing what a simple word 
checker can do to your emails these days...

BTW, we pay the equivalent of about a week's beamtime on one beamline for 200 TB a 
year of data storage, with access to it for the whole facility, and someone else 
takes the pain of hosting it.

Alun
___
Alun Ashton, alun.ash...@diamond.ac.uk Tel: +44 1235 778404
Scientific Software Team Leader,  http://www.diamond.ac.uk/
Diamond Light Source, Chilton, Didcot, Oxon, OX11 0DE, U.K.


Re: [ccp4bb] IUCr committees, depositing images

2011-10-27 Thread Martin M. Ripoll
Dear Colin,

I think you understood perfectly what George was saying regarding the loss
of information, but he will probably answer better than I can.

In any case, and for those who did not understand it, what George was saying
relates to the fact that a data collection made with continuous crystal
rotation contains more information than when that information is transformed
into frames... The loss of information we are referring to has the same
meaning as when we calculate electron density maps with different grid sizes:
the finer the grid, the greater the information in the map.

But you are right in saying that the shorter the interval between produced
frames, the lower the loss of information. However, the procedure that you
are suggesting would have to have some limits... otherwise the amount of
information would grow dramatically.

All the best,
Martin

Dr. Martin Martinez-Ripoll
Research Professor
xmar...@iqfr.csic.es
Department of Crystallography & Structural Biology
www.xtal.iqfr.csic.es
Telf.: +34 917459550
Consejo Superior de Investigaciones Científicas
Spanish National Research Council
www.csic.es


-----Original Message-----
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Colin
Nave
Sent: Thursday, 27 October 2011 0:49
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] IUCr committees, depositing images

Dear George, Martin

I don't understand the point that one is throwing away information by
storing in frames. If the frames have sufficiently fine intervals (given by
some sampling theorem consideration) I can't see how one loses information.
Can one of you explain?
Thanks
Colin



-Original Message-
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Martin
M. Ripoll
Sent: 26 October 2011 22:50
To: ccp4bb
Subject: Re: [ccp4bb] IUCr committees, depositing images

Dear George, dear all,

I was just trying to summarize my point of view regarding this important
issue when I got your e-mail, that reflects exactly my own opinion!

Martin

Dr. Martin Martinez-Ripoll
Research Professor
xmar...@iqfr.csic.es
Department of Crystallography & Structural Biology
www.xtal.iqfr.csic.es
Telf.: +34 917459550
Consejo Superior de Investigaciones Científicas
Spanish National Research Council
www.csic.es



-----Original Message-----
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of George
M. Sheldrick
Sent: Wednesday, 26 October 2011 11:52
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] IUCr committees, depositing images

This raises an important point. The new continuous readout detectors such as
the
Pilatus for beamlines or the Bruker Photon for in-house use enable the
crystal to 
be rotated at constant velocity, eliminating the mechanical errors
associated with
'stop and go' data collection. Storing their data in 'frames' is an
artificial
construction that is currently required for the established data integration
programs but is in fact throwing away information. Maybe in 10 years time
'frames' 
will be as obsolete as punched cards!

George

On Wed, Oct 26, 2011 at 09:39:40AM +0100, Graeme Winter wrote:
> Hi James,
> 
> Just to pick up on your point about the Pilatus detectors. Yesterday
> in 2 hours of giving a beamline a workout (admittedly with Thaumatin)
> we acquired 400 + GB of data*. Now I appreciate that this is not
> really routine operation, but it does raise an interesting point - if
> you have loaded a sample and centred it, collected test shots and
> decided it's not that great, why not collect anyway as it may later
> prove to be useful?
> 
> Bzzt. 2 minutes or less later you have a full data set, and barely
> even time to go get a cup of tea.
> 
> This does to some extent move the goalposts, as you can acquire far
> more data than you need. You never know, you may learn something
> interesting from it - perhaps it has different symmetry or packing?
> What it does mean is if we can have a method of tagging this data
> there may be massively more opportunity to get also-ran data sets for
> methods development types. What it also means however is that the cost
> of curating this data is then an order of magnitude higher.
> 
> Also moving it around is also rather more painful.
> 
> Anyhow, I would try to avoid dismissing the effect that new continuous
> readout detectors will have on data rates, from experience it is
> pretty substantial.
> 
> Cheerio,
> 
> Graeme
> 
> *by "data" here what I mean is images, rather than information which
> is rather more time consuming to acquire. I would argue you get that
> from processing / analysing the data...
> 
> On 24 October 2011 22:56, James Holton  wrote:
> > The Pilatus is fast, but for decades now we hav

Re: [ccp4bb] IUCr committees, depositing images

2011-10-27 Thread Ethan Merritt
On Wednesday, 26 October 2011, James Holton wrote:
> Of course, if we are willing to relax the requirement of validation and 
> curation, this could be a whole lot easier.  In fact, there is already 
> an image deposition infrastructure in place!  It is called TARDIS:
> 
> http://tardis.edu.au/
> 
> Perhaps the best way forward would be for "the PDB" to introduce a new 
> field for one or more TARDIS ids in a PDB deposition?  It would be 
> optional at the first, but no doubt required in the future.

As I understand it, TARDIS is just an indexing system.
You're still on your own to actually store the images.
The TARDIS setup will cough up the information that your images
are stored on a machine named pony.lbl.gov, or at least they were 
at the time you registered them for indexing, where they supposedly
can be retrieved using tag #XYZ.

But so far as I know you would still be at the mercy of pony.lbl.gov
going up in flames, or being renamed twinkle.lbl.gov, or being 
decommissioned when the next budget crunch hits.

For that matter, I don't know what the provision or expectation
is that anyone outside your institution could see or access the
machine holding the set of files that TARDIS told them were there.

If I've got this wrong, perhaps Ashley Buckle can chime in with
an update on TARDIS.

Ethan


Re: [ccp4bb] IUCr committees, depositing images

2011-10-27 Thread Alun Ashton
Sorry Matt, some large facilities do already keep all their raw and processed 
data. And I think the EU grant you mention to coordinate this is 
http://www.pan-data.eu (soon to be ODI), which includes the ESRF :) - don't your 
computing people tell you anything?! :)

PanData have a meeting in early November, and they (ahem, we) are already in 
touch with the working group and will formalise that soon after the meeting.

Alun
___
Alun Ashton, alun.ash...@diamond.ac.uk Tel: +44 1235 778404
Scientific Software Team Leader,  http://www.diamond.ac.uk/
Diamond Light Source, Chilton, Didcot, Oxon, OX11 0DE, U.K.
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Matthew 
BOWLER
Sent: 26 October 2011 10:03
To: ccp4bb
Subject: Re: [ccp4bb] IUCr committees, depositing images

The archiving of all raw data and subsequently making it public is something 
that the large facilities are currently debating whether to do.  Here at the 
ESRF we store user data for only 6 months (and I believe that it is available 
longer on tape), and we already have trouble with capacity.

My personal view is that facilities should take the lead on this - for MX we 
already have a very good archiving system - ISPyB - also running at Diamond.  
ISPyB stores lots of metadata and JPEGs of the raw images, but not the images 
themselves; rather, it keeps a link to the location of the data, with an option 
to download if still available.  My preferred option would be to store all 
academically funded data and then make it publicly available after say 2-5 
years (this will no doubt spark another debate on time limits, special 
dispensation etc).

What needs to be thought about is how to order the data and how to make sure 
that the correct metadata are stored with each data set - this will rely 
heavily on user input at the time of the experiment rather than gathering 
together data sets for depositions much later.  As already mentioned, this type 
of resource could be extremely useful for developers and also as a general 
scientific resource.  Smells like an EU grant to me. Cheers, Matt.


On 26/10/2011 10:21, Frank von Delft wrote:
Since when has the cost of any project been limited by the cost of hardware?  
Someone has to implement this -- and make a career out of it;  thunderingly 
absent from this thread has been the chorus of volunteers who will write the 
grant.
phx





--
Matthew Bowler
Structural Biology Group
European Synchrotron Radiation Facility
B.P. 220, 6 rue Jules Horowitz
F-38043 GRENOBLE CEDEX
FRANCE
===
Tel: +33 (0) 4.76.88.29.28
Fax: +33 (0) 4.76.88.29.04
http://go.esrf.eu/MX
http://go.esrf.eu/Bowler
===


Re: [ccp4bb] IUCr committees, depositing images

2011-10-26 Thread Colin Nave
James
Thanks - your list is far more comprehensive than mine. I presume MAD/SAD/MIR 
data is implicit.

One doesn't need to progress through the list in this order (I am sure you are 
not suggesting this). One could usefully go to m without some of the 
intermediate stages. 

Your comments about problems the PDB might have are useful. I have been in 
meetings where the PDB has been criticised for not following modern database 
standards and here we are trying to create more anarchy. In principle multiple 
formats would work so long as they are well defined but I can see why the PDB 
might object to this. 

I should go through the documents from the Madrid meeting and see what is 
actually being proposed to get going. If a separate exercise, independent of 
but linked to the PDB, is set up for storing image data (stage m), then that 
would be a good start.

Regards
  Colin

-Original Message-
From: James Holton [mailto:jmhol...@lbl.gov] 
Sent: 27 October 2011 06:04
To: Nave, Colin (DLSLtd,RAL,DIA)
Cc: ccp4bb
Subject: Re: [ccp4bb] IUCr committees, depositing images


In the spirit of supporting a "can do" attitude, I have decided to try 
and frame the binary "images or no images" question as a gradual scale.  
Below is a list of ways to represent crystallographic data, with 
increasing amounts of "information" as you move down.  That is, making 
validation more robust and allowing more and more yet-to-be-developed 
technologies to be applied, but also requiring higher costs, such as 
validation effort.

a) depositing coordinates only
b) coordinates and structure factors
c) coordinates, structure factors, and their sigmas (there was a time 
when we didn't do this!)
d) scaled and merged intensities (before "truncate" or sqrt?)
e) scaled and "unmerged" intensities with combined partials (future 
absorption corrections)
f) scaled spot intensities with partials separate (correct the shutter 
jitter someday?)
g) unscaled individual spot intensities (with geometry for calculating 
Lorentz, polarization, etc corrections)
h) spot intensities with a separate column for the "local background 
level" (this would be an efficient way to "compress" the images)
i) spots, local background, and background levels partway between spots 
(for reconstructing diffuse scatter)
j) all pixels from spot areas (~1000x compression over normal 
"corrected" images)
k) spot pixels plus lossy compression of background (1000-30x over 
corrected images)
l) losslessly compressed images (~2x over corrected images)
m) "corrected" images: all pixels from spatially-corrected 
dark-and-flood applied images
n) uncorrected images with dark, flood and bad-pixel map for relevant 
detector
o) raw output from detector's ADC or counter (no idea how to capture 
this...)
p) cryo-preserved data crystal (for verification of cell, or just 
chemical forensics, such as verifying the identity of the protein)
q) cryo-preserved duplicate crystal, to be held in a vault until you are 
accused of fraud.
r) sample of pure protein with crystallization conditions
s) protein sequence, let the PDB "verify" the experiment by repeating 
it, knowing that it "can be solved".

Now, I think it is clear that both the "benefit to the community" as 
well as the burden on PDB resources increase as we move down this list.  
In such situations one looks for "inflection points" where the next, 
small increase in benefit requires a disproportionately large cost.  
Historically, going from a to b was such an inflection point.  This was 
back when the compact disc was a new thing, and the whole PDB could fit 
on one!  In fact, if you had a "multi-session" drive you could back up 
your hard drive onto the remaining space.  Those days are gone.

Recently, I heard the PDB is seriously considering jumping to somewhere 
between "e" and "g" (unmerged data).  Doing so, I think, will be an 
interesting exercise in file format "standardization" since every major 
data processing package treats partials and postrefinement differently.  
And this is perhaps the main reason why "the PDB" (aka Gerard K) are 
wary of the idea of going all the way to "m" (corrected images).  Do we 
expect PDB staff to re-process our dataset as part of the "validation" 
procedure?

Of course, if we are willing to relax the requirement of validation and 
curation, this could be a whole lot easier.  In fact, there is already 
an image deposition infrastructure in place!  It is called TARDIS:

http://tardis.edu.au/

Perhaps the best way forward would be for "the PDB" to introduce a new 
field for one or more TARDIS ids in a PDB deposition?  It would be 
optional at the first, but no doubt required in the future.


-James Holton
MAD Scientist


On 10/26/2011 4:20 PM, Colin Nave wrote:
> Dear Ge

Re: [ccp4bb] IUCr committees, depositing images

2011-10-26 Thread James Holton
In the spirit of supporting a "can do" attitude, I have decided to try 
and frame the binary "images or no images" question as a gradual scale.  
Below is a list of ways to represent crystallographic data, with 
increasing amounts of "information" as you move down.  That is, making 
validation more robust and allowing more and more yet-to-be-developed 
technologies to be applied, but also requiring higher costs, such as 
validation effort.


a) depositing coordinates only
b) coordinates and structure factors
c) coordinates, structure factors, and their sigmas (there was a time 
when we didn't do this!)

d) scaled and merged intensities (before "truncate" or sqrt?)
e) scaled and "unmerged" intensities with combined partials (future 
absorption corrections)
f) scaled spot intensities with partials separate (correct the shutter 
jitter someday?)
g) unscaled individual spot intensities (with geometry for calculating 
Lorentz, polarization, etc corrections)
h) spot intensities with a separate column for the "local background 
level" (this would be an efficient way to "compress" the images)
i) spots, local background, and background levels partway between spots 
(for reconstructing diffuse scatter)
j) all pixels from spot areas (~1000x compression over normal 
"corrected" images)
k) spot pixels plus lossy compression of background (1000-30x over 
corrected images)

l) losslessly compressed images (~2x over corrected images)
m) "corrected" images: all pixels from spatially-corrected 
dark-and-flood applied images
n) uncorrected images with dark, flood and bad-pixel map for relevant 
detector
o) raw output from detector's ADC or counter (no idea how to capture 
this...)
p) cryo-preserved data crystal (for verification of cell, or just 
chemical forensics, such as verifying the identity of the protein)
q) cryo-preserved duplicate crystal, to be held in a vault until you are 
accused of fraud.

r) sample of pure protein with crystallization conditions
s) protein sequence, let the PDB "verify" the experiment by repeating 
it, knowing that it "can be solved".


Now, I think it is clear that both the "benefit to the community" as 
well as the burden on PDB resources increase as we move down this list.  
In such situations one looks for "inflection points" where the next, 
small increase in benefit requires a disproportionately large cost.  
Historically, going from a to b was such an inflection point.  This was 
back when the compact disc was a new thing, and the whole PDB could fit 
on one!  In fact, if you had a "multi-session" drive you could back up 
your hard drive onto the remaining space.  Those days are gone.
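
To put rough numbers on where those inflection points might sit (a 
back-of-envelope sketch; only the ~8000 depositions/year figure comes from this 
thread, the per-image size and images-per-dataset values are my own guesses):

    # Crude yearly storage estimate for archiving one deposited dataset per PDB entry
    depositions_per_year = 8000      # approximate PDB deposition rate (from above)
    images_per_dataset = 900         # assumption: a "typical" fine-sliced dataset
    mb_per_corrected_image = 12.0    # assumption: one level-(m) "corrected" image

    dataset_mb = images_per_dataset * mb_per_corrected_image
    compression_vs_corrected = {
        "m: corrected images": 1,
        "l: lossless compression": 2,
        "k: lossy background (pessimistic end)": 30,
        "j: spot pixels only": 1000,
    }
    for level, factor in compression_vs_corrected.items():
        tb_per_year = depositions_per_year * dataset_mb / factor / 1e6
        print(f"{level:40s} ~{tb_per_year:7.1f} TB/year")

With these (debatable) numbers, level m comes out at order 100 TB/year, level l 
at about half that, and level j at well under 1 TB/year - which is the range 
over which the cost/benefit trade-off has to be argued.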


Recently, I heard the PDB is seriously considering jumping to somewhere 
between "e" and "g" (unmerged data).  Doing so, I think, will be an 
interesting exercise in file format "standardization" since every major 
data processing package treats partials and postrefinement differently.  
And this is perhaps the main reason why "the PDB" (aka Gerard K) are 
wary of the idea of going all the way to "m" (corrected images).  Do we 
expect PDB staff to re-process our dataset as part of the "validation" 
procedure?


Of course, if we are willing to relax the requirement of validation and 
curation, this could be a whole lot easier.  In fact, there is already 
an image deposition infrastructure in place!  It is called TARDIS:


http://tardis.edu.au/

Perhaps the best way forward would be for "the PDB" to introduce a new 
field for one or more TARDIS ids in a PDB deposition?  It would be 
optional at the first, but no doubt required in the future.



-James Holton
MAD Scientist


On 10/26/2011 4:20 PM, Colin Nave wrote:

Dear Gerard

Yes, perhaps I was getting a bit carried away with the possibilities. Although I believe 
that, with high resolution detectors and low divergence beams, one should be able to 
separate out the various lattices it is not really relevant to the main issue - getting 
the best from existing data.  The point I made about "correcting" data probably 
comes in a similar category - taking the opportunity to air a favourite subject.

Regards
   Colin

PS. While here though I realise one of my points was a bit unclear. Point 5 
should be
"5.  My view is that for data in the PDB the same release rules should apply for the 
images as for the other data. For data not (yet) in the PDB, the funders of the research 
might want to define release rules. However, we can make suggestions!"
The original had "For other data" rather than "For data not (yet) in the PDB"

-Original Message-
From: Gerard Bricogne [mailto:g...@globalphasing.com]
Sent: 26 October 2011 23:23
To: Nave, Colin (DLSLtd,RAL,DIA)
Cc: ccp4bb
Subject: Re: [ccp4bb] 

Re: [ccp4bb] IUCr committees, depositing images

2011-10-26 Thread Colin Nave
Dear Gerard

Yes, perhaps I was getting a bit carried away with the possibilities. Although 
I believe that, with high resolution detectors and low divergence beams, one 
should be able to separate out the various lattices it is not really relevant 
to the main issue - getting the best from existing data.  The point I made 
about "correcting" data probably comes in a similar category - taking the 
opportunity to air a favourite subject.

Regards
  Colin

PS. While here though I realise one of my points was a bit unclear. Point 5 
should be
"5.  My view is that for data in the PDB the same release rules should apply 
for the images as for the other data. For data not (yet) in the PDB, the 
funders of the research might want to define release rules. However, we can 
make suggestions!"
The original had "For other data" rather than "For data not (yet) in the PDB"

-Original Message-
From: Gerard Bricogne [mailto:g...@globalphasing.com] 
Sent: 26 October 2011 23:23
To: Nave, Colin (DLSLtd,RAL,DIA)
Cc: ccp4bb
Subject: Re: [ccp4bb] IUCr committees, depositing images

Dear Colin,

 Thank you for accepting the heavy burden of responsibility your
colleagues have thrown onto your shoulders ;-) . It is great that you are
entering this discussion, and I am grateful for the support you are bringing
to the notion of starting something at ground level and learning from it,
rather than staying in the realm of conjecture and axiomatics, or entering
the virility contest as to whose beamline will make raw data archiving most
impossible.

 One small point, however, about your statement regarding multiple
lattices, that 

 "...  all crystals are, to a greater or lesser extent, subject to this.
 We just might not see it easily as the detector resolution or beam
 divergence is inadequate. Just think we could have several structures
 (one from each lattice) each with less disorder rather than just one
 average structure."

I am not sure that what you describe in your last sentence is a realistic
prospect, nor that it would in any case constitute the main advantage of
better dealing with multiple lattices. The most important consequence of
their multiplicity is that their spots overlap and corrupt each other's
intensities, so that the main benefit of improved processing would be to
mitigate that mutual corruption, first by correctly flagging overlaps, then
by partially trying to resolve those overlaps internally as much as scaling
procedures will allow (one could call that "non-merohedral detwinning" - it
is done e.g. by small-molecule software), and finally by adapting
refinement protocols to recognise that they may have to refine against
measurements that are a mixture of several intensities, to a degree and
according to a pattern that varies from one observation to another (unlike
regular twinning).
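
As a cartoon of the first step - "correctly flagging overlaps" - here is an
illustrative sketch only (toy numbers, not how any integration program actually
does it): predicted spot positions from a second lattice are compared against
those of the main lattice, and any main-lattice spot with a near neighbour is
flagged as liable to corruption.

    import numpy as np

    def flag_overlaps(xy_main, xy_other, min_sep_mm=0.3):
        """Mark predicted spots of the main lattice lying within min_sep_mm
        (a stand-in for a real spot-size criterion) of any spot predicted
        from a second lattice."""
        d = np.linalg.norm(xy_main[:, None, :] - xy_other[None, :, :], axis=-1)
        return (d < min_sep_mm).any(axis=1)

    rng = np.random.default_rng(0)
    main = rng.uniform(0.0, 100.0, size=(500, 2))           # toy detector positions (mm), lattice 1
    other = main + rng.normal(0.0, 0.4, size=main.shape)    # lattice 2, slightly offset
    print(f"{flag_overlaps(main, other).sum()} of {len(main)} main-lattice spots flagged")

Undoing the corruption, rather than merely flagging it, is then the
scaling/"non-merohedral detwinning" problem described above.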

 Currently, if a "main" lattice can be identified and indexed, one tends
to integrate the spots it successfully indexes, and to abstain from worrying
about the accidental corruption of the resulting intensities by accidental
overlaps with spots of the other lattices (whose existence is promptly
forgotten). It is the undoing of that corruption that would bring the main
benefit, not the fact that one could see several variants of the structure
by fitting the data attached to the various lattices: that would be possible
only if overlaps were negligible. The prospects for improving electron
density maps by reprocessing raw images in the future are therefore
considerable for mainstream structures, not just as a way of perhaps teasing
interestingly different structures from each lattice in infrequent cases.

 I apologise if I have laboured this point, but I am concerned that
every slight slip of the pen that makes the benefits of future reprocessing
look as if they will just contribute to splitting hairs does a disservice to
this crucial discussion (and hence, potentially, to the community) by
belittling the importance and urgency of the task.


 With best wishes,
 
Gerard (B.)

--
On Wed, Oct 26, 2011 at 07:58:51PM +, Colin Nave wrote:
> I have been nominated by the IUCr synchrotron commission (thanks colleagues!) 
> to represent them for this issue. However, at the moment, this is a personal 
> view.
> 
> 1. For archiving "raw" diffraction image data for structures in the PDB, it 
> should be the responsibility of the worldwide PDB. They are by far the best 
> place to do it and as Jacob says the space requirements are trivial. Gerard 
> K's negative statement at CCP4-2010 sounds rather ex cathedra (in increasing 
> order of influence/power do we have the Pope, US president, the Bond Market 
> and finally Gerard K?). Did he make the statement in a formal presentation or 
> in the bar? More seriously, I am sure he had good reasons (e.g. PDB 
> pri

Re: [ccp4bb] IUCr committees, depositing images

2011-10-26 Thread Colin Nave
Dear George, Martin

I don't understand the point that one is throwing away information by storing 
in frames. If the frames have sufficiently fine intervals (given by some 
sampling theorem consideration) I can't see how one loses information. Can one 
of you explain?
Thanks
Colin



-Original Message-
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Martin M. 
Ripoll
Sent: 26 October 2011 22:50
To: ccp4bb
Subject: Re: [ccp4bb] IUCr committees, depositing images

Dear George, dear all,

I was just trying to summarize my point of view regarding this important
issue when I got your e-mail, that reflects exactly my own opinion!

Martin

Dr. Martin Martinez-Ripoll
Research Professor
xmar...@iqfr.csic.es
Department of Crystallography & Structural Biology
www.xtal.iqfr.csic.es
Telf.: +34 917459550
Consejo Superior de Investigaciones Científicas
Spanish National Research Council
www.csic.es



-Original Message-
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of George
M. Sheldrick
Sent: Wednesday, 26 October 2011 11:52
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] IUCr committees, depositing images

This raises an important point. The new continuous readout detectors such as the
Pilatus for beamlines or the Bruker Photon for in-house use enable the crystal to
be rotated at constant velocity, eliminating the mechanical errors associated with
'stop and go' data collection. Storing their data in 'frames' is an artificial
construction that is currently required for the established data integration
programs but is in fact throwing away information. Maybe in 10 years' time 'frames'
will be as obsolete as punched cards!

George

On Wed, Oct 26, 2011 at 09:39:40AM +0100, Graeme Winter wrote:
> Hi James,
> 
> Just to pick up on your point about the Pilatus detectors. Yesterday
> in 2 hours of giving a beamline a workout (admittedly with Thaumatin)
> we acquired 400 + GB of data*. Now I appreciate that this is not
> really routine operation, but it does raise an interesting point - if
> you have loaded a sample and centred it, collected test shots and
> decided it's not that great, why not collect anyway as it may later
> prove to be useful?
> 
> Bzzt. 2 minutes or less later you have a full data set, and barely
> even time to go get a cup of tea.
> 
> This does to some extent move the goalposts, as you can acquire far
> more data than you need. You never know, you may learn something
> interesting from it - perhaps it has different symmetry or packing?
> What it does mean is if we can have a method of tagging this data
> there may be massively more opportunity to get also-ran data sets for
> methods development types. What it also means however is that the cost
> of curating this data is then an order of magnitude higher.
> 
> Moving it around is also rather more painful.
> 
> Anyhow, I would try to avoid dismissing the effect that new continuous
> readout detectors will have on data rates, from experience it is
> pretty substantial.
> 
> Cheerio,
> 
> Graeme
> 
> *by "data" here what I mean is images, rather than information which
> is rather more time consuming to acquire. I would argue you get that
> from processing / analysing the data...
> 
> On 24 October 2011 22:56, James Holton  wrote:
> > The Pilatus is fast, but for decades now we have had detectors that can read
> > out in ~1s.  This means that you can collect a typical ~100 image dataset in
> > a few minutes (if flux is not limiting).  Since there are ~150 beamlines
> > currently operating around the world and they are open about 200 days/year,
> > we should be collecting ~20,000,000 datasets each year.
> >
> > We're not.
> >
> > The PDB only gets about 8000 depositions per year, which means either we
> > throw away 99.96% of our images, or we don't actually collect images
> > anywhere near the ultimate capacity of the equipment we have.  In my
> > estimation, both of these play about equal roles, with ~50-fold attrition
> > between ultimate data collection capacity and actual collected data, and
> > another ~50 fold attrition between collected data sets and published
> > structures.
> >
> > Personally, I think this means that the time it takes to collect the final
> > dataset is not rate-limiting in a "typical" structural biology
> > project/paper.  This does not mean that the dataset is of little value.
> >  Quite the opposite!  About 3000x more time and energy is expended preparing
> > for the final dataset than is spent collecting it, and these efforts require
> > experimental feedback.  The trick is figuring out how best t

Re: [ccp4bb] IUCr committees, depositing images

2011-10-26 Thread Gerard Bricogne
n PDB depositions. There seems to be a view that the 
> archiving of this should be the responsibility of the synchrotrons which 
> generated the data. This should be possible for some synchrotrons (e.g. 
> Diamond) where there is pressure in any case from their funders to archive 
> all data generated at the facility. However not all synchrotrons will be able 
> to do this. There is also the issue of data collected at home sources. 
> Presumably it will require a few willing synchrotrons to pioneer this in a 
> coordinated way. Hopefully others will then follow. I don't think we can 
> expect the PDB to archive the 99.96% of the data which did not result in 
> structures.
> 
> 5.  My view is that for data in the PDB the same release rules should apply 
> for the images as for the other data. For other data, the funders of the 
> research might want to define release rules. However, we can make suggestions!
> 
> 6. Looking to the future, there is FEL data coming along, both single 
> molecule and nano-crystals (assuming the FEL delivers for these areas).
> 
> 7. I agree with Gerard B - "as far as I see it, the highest future benefit of 
> having archived raw images will result from being able to reprocess datasets 
> from samples containing multiple lattices" 
> My view is that all crystals are, to a greater or lesser extent, subject to 
> this. We just might not see it easily as the detector resolution or beam 
> divergence is inadequate. Just think we could have several structures (one 
> from each lattice) each with less disorder rather than just one average 
> structure.  Not sure whether Gloria's modulated structures would be as 
> ubiquitous but her argument is along the same lines.
> 
> Regards 
>   Colin
> 
> -Original Message-
> From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Herbert 
> J. Bernstein
> Sent: 26 October 2011 18:55
> To: ccp4bb
> Subject: Re: [ccp4bb] IUCr committees, depositing images
> 
> Dear Colleagues,
> 
>Gerard strikes a very useful note in pleading for a "can-do"
> approach.  Part of going from "can-do" to "actually-done"
> is to make realistic estimates of the costs of "doing" and
> then to adjust plans appropriately to do what can be afforded
> now and to work towards doing as much of what remains undone
> as has sufficient benefit to justify the costs.
> 
>We appear to be in a fortunate situation in which some
> portion of the raw data behind a significant portion of the
> studies released in the PDB could probably be retained for some
> significant period of time and be made available for further
> analysis.  It would seem wise to explore these possibilities
> and try to optimize the approaches used -- e.g. to consider
> moves towards well documented formats, and retention of critical
> metadata with such data to help in future analysis.
> 
>Please do not let the perfect be the enemy of the good.
> 
>Regards,
>  Herbert
> 
> =
>   Herbert J. Bernstein, Professor of Computer Science
> Dowling College, Kramer Science Center, KSC 121
>  Idle Hour Blvd, Oakdale, NY, 11769
> 
>   +1-631-244-3035
>   y...@dowling.edu
> =
> 
> On Wed, 26 Oct 2011, Gerard Bricogne wrote:
> 
> > Dear John and colleagues,
> >
> > There seem to be a set of centrifugal forces at play within this thread
> > that are distracting us from a sensible path of concrete action by throwing
> > decoys in every conceivable direction, e.g.
> >
> > * "Pilatus detectors spew out such a volume of data that we can't
> > possibly archive it all" - does that mean that because the 5th generation of
> > Dectris detectors will be able to write one billion images a second and
> > catch every scattered photon individually, we should not try and archive
> > more information than is given by the current merged structure factor data?
> > That seems a complete failure of reasoning to me: there must be a sensible
> > form of raw data archiving that would stand between those two extremes and
> > would retain much more information than the current merged data but would
> > step back from the enormous degree of oversampling of the raw diffraction
> > pattern that the Pilatus and its successors are capable of.
> >
> > * "It is all going to cost an awful lot of money, therefore we need a
> > team of grant writers to raise its hand and volunteer to apply for resources
> > from one or more funding agencies" - there

Re: [ccp4bb] IUCr committees, depositing images

2011-10-26 Thread Martin M. Ripoll
Dear George, dear all,

I was just trying to summarize my point of view regarding this important
issue when I got your e-mail, which reflects exactly my own opinion!

Martin

Dr. Martin Martinez-Ripoll
Research Professor
xmar...@iqfr.csic.es
Department of Crystallography & Structural Biology
www.xtal.iqfr.csic.es
Telf.: +34 917459550
Consejo Superior de Investigaciones Científicas
Spanish National Research Council
www.csic.es



-Original Message-
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of George
M. Sheldrick
Sent: Wednesday, 26 October 2011 11:52
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] IUCr committees, depositing images

This raises an important point. The new continuous readout detectors such as the
Pilatus for beamlines or the Bruker Photon for in-house use enable the crystal to
be rotated at constant velocity, eliminating the mechanical errors associated with
'stop and go' data collection. Storing their data in 'frames' is an artificial
construction that is currently required for the established data integration
programs but is in fact throwing away information. Maybe in 10 years' time 'frames'
will be as obsolete as punched cards!

George

On Wed, Oct 26, 2011 at 09:39:40AM +0100, Graeme Winter wrote:
> Hi James,
> 
> Just to pick up on your point about the Pilatus detectors. Yesterday
> in 2 hours of giving a beamline a workout (admittedly with Thaumatin)
> we acquired 400 + GB of data*. Now I appreciate that this is not
> really routine operation, but it does raise an interesting point - if
> you have loaded a sample and centred it, collected test shots and
> decided it's not that great, why not collect anyway as it may later
> prove to be useful?
> 
> Bzzt. 2 minutes or less later you have a full data set, and barely
> even time to go get a cup of tea.
> 
> This does to some extent move the goalposts, as you can acquire far
> more data than you need. You never know, you may learn something
> interesting from it - perhaps it has different symmetry or packing?
> What it does mean is if we can have a method of tagging this data
> there may be massively more opportunity to get also-ran data sets for
> methods development types. What it also means however is that the cost
> of curating this data is then an order of magnitude higher.
> 
> Moving it around is also rather more painful.
> 
> Anyhow, I would try to avoid dismissing the effect that new continuous
> readout detectors will have on data rates, from experience it is
> pretty substantial.
> 
> Cheerio,
> 
> Graeme
> 
> *by "data" here what I mean is images, rather than information which
> is rather more time consuming to acquire. I would argue you get that
> from processing / analysing the data...
> 
> On 24 October 2011 22:56, James Holton  wrote:
> > The Pilatus is fast, but for decades now we have had detectors that can read
> > out in ~1s.  This means that you can collect a typical ~100 image dataset in
> > a few minutes (if flux is not limiting).  Since there are ~150 beamlines
> > currently operating around the world and they are open about 200 days/year,
> > we should be collecting ~20,000,000 datasets each year.
> >
> > We're not.
> >
> > The PDB only gets about 8000 depositions per year, which means either we
> > throw away 99.96% of our images, or we don't actually collect images
> > anywhere near the ultimate capacity of the equipment we have.  In my
> > estimation, both of these play about equal roles, with ~50-fold attrition
> > between ultimate data collection capacity and actual collected data, and
> > another ~50 fold attrition between collected data sets and published
> > structures.
> >
> > Personally, I think this means that the time it takes to collect the final
> > dataset is not rate-limiting in a "typical" structural biology
> > project/paper.  This does not mean that the dataset is of little value.
> >  Quite the opposite!  About 3000x more time and energy is expended preparing
> > for the final dataset than is spent collecting it, and these efforts require
> > experimental feedback.  The trick is figuring out how best to compress the
> > "data used to solve a structure" for archival storage.  Do the "previous
> > data sets" count?  Or should the compression be "lossy" about such
> > historical details?  Does the stuff between the spots matter?  After all,
> > h,k,l,F,sigF is really just a form of data compression.  In fact, there is
> > no such thing as "raw" data.  Even "raw" diffraction images are a
> > simplification of the signals 

Re: [ccp4bb] IUCr committees, depositing images

2011-10-26 Thread Colin Nave
I have been nominated by the IUCr synchrotron commission (thanks colleagues!) 
to represent them for this issue. However, at the moment, this is a personal 
view.

1. For archiving "raw" diffraction image data for structures in the PDB, it 
should be the responsibility of the worldwide PDB. They are by far the best 
place to do it and as Jacob says the space requirements are trivial. Gerard K's 
negative statement at CCP4-2010 sounds rather ex cathedra (in increasing order 
of influence/power do we have the Pope, US president, the Bond Market and 
finally Gerard K?). Did he make the statement in a formal presentation or in 
the bar? More seriously, I am sure he had good reasons (e.g. PDB priorities) if 
he did make this statement. It would be nice if Gerard could provide some 
explanation.

2. I agree with the "can do" attitude at Madrid as supported by Gerard B. 
Setting up something as best one can with existing enthusiasts will get the 
ball rolling, provide some immediate benefit and allow subsequent improvements. 

3. Ideally the data to be deposited should include all stages e.g. raw images, 
"corrected" images, MIR/SAD/MAD images, unmerged integrated intensities, 
scaled, merged etc., plus the metadata and the software & versions used for the 
various stages (an illustrative sketch of such a record is appended after this 
message). Worrying too much about all of this should not of course prevent a 
start being made. (An aside: I put "corrected" in quotes because the raw 
images have fewer errors; the subsequent processing for detector distortions 
etc. depends on an imperfect model of the detector. I don't like the phrase 
"data correction".)

4. Doing this for PDB depositions would then provide a basis for other data 
which did not result in PDB depositions. There seems to be a view that the 
archiving of this should be the responsibility of the synchrotrons which 
generated the data. This should be possible for some synchrotrons (e.g. 
Diamond) where there is pressure in any case from their funders to archive all 
data generated at the facility. However not all synchrotrons will be able to do 
this. There is also the issue of data collected at home sources. Presumably it 
will require a few willing synchrotrons to pioneer this in a coordinated way. 
Hopefully others will then follow. I don't think we can expect the PDB to 
archive the 99.96% of the data which did not result in structures.

5.  My view is that for data in the PDB the same release rules should apply for 
the images as for the other data. For other data, the funders of the research 
might want to define release rules. However, we can make suggestions!

6. Looking to the future, there is FEL data coming along, both single molecule 
and nano-crystals (assuming the FEL delivers for these areas).

7. I agree with Gerard B - "as far as I see it, the highest future benefit of 
having archived raw images will result from being able to reprocess datasets 
from samples containing multiple lattices" 
My view is that all crystals are, to a greater or lesser extent, subject to 
this. We just might not see it easily as the detector resolution or beam 
divergence is inadequate. Just think we could have several structures (one from 
each lattice) each with less disorder rather than just one average structure.  
Not sure whether Gloria's modulated structures would be as ubiquitous but her 
argument is along the same lines.

Regards 
  Colin
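As a purely illustrative sketch of the kind of per-data-set record point 3 has
in mind (all field names below are invented for illustration; any real
deposition schema would be for the wwPDB/IUCr to define):

    # Hypothetical per-data-set record covering the stages listed in point 3:
    # raw images, processing steps, software versions and basic metadata.
    # Every field name is invented for illustration, not an agreed standard.
    import json

    dataset_record = {
        "raw_images": {
            "location": "doi-or-handle-of-image-archive",   # placeholder
            "format": "CBF",
            "n_frames": 720,
            "oscillation_deg": 0.1,
        },
        "instrument": {
            "facility": "example synchrotron",               # placeholder
            "beamline": "IDxx",                              # placeholder
            "detector": "PILATUS 6M",
            "wavelength_A": 0.9795,
        },
        "processing": [
            {"stage": "integration", "software": "programX", "version": "1.2.3"},
            {"stage": "scaling/merging", "software": "programY", "version": "4.5"},
        ],
        "annotations": ["diffuse streaks", "possible second lattice"],
        "pdb_entry": None,                                   # filled in on deposition
    }

    print(json.dumps(dataset_record, indent=2))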

-Original Message-
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Herbert 
J. Bernstein
Sent: 26 October 2011 18:55
To: ccp4bb
Subject: Re: [ccp4bb] IUCr committees, depositing images

Dear Colleagues,

   Gerard strikes a very useful note in pleading for a "can-do"
approach.  Part of going from "can-do" to "actually-done"
is to make realistic estimates of the costs of "doing" and
then to adjust plans appropriately to do what can be afforded
now and to work towards doing as much of what remains undone
as has sufficient benefit to justify the costs.

   We appear to be in a fortunate situation in which some
portion of the raw data behind a significant portion of the
studies released in the PDB could probably be retained for some
significant period of time and be made available for further
analysis.  It would seem wise to explore these possibilities
and try to optimize the approaches used -- e.g. to consider
moves towards well documented formats, and retention of critical
metadata with such data to help in future analysis.

   Please do not let the perfect be the enemy of the good.

   Regards,
 Herbert

=
  Herbert J. Bernstein, Professor of Computer Science
Dowling College, Kramer Science Center, KSC 121
 Idle Hour Blvd, Oakdale, NY, 11769

  +1-631-244-3035
  y...@dowling.edu
=

On Wed, 26 Oct 201

Re: [ccp4bb] IUCr committees, depositing images

2011-10-26 Thread James Stroud

On Oct 26, 2011, at 9:59 AM, Patrick Shaw Stewart wrote:

> The principle is that the movie is written to the same area of memory, 
> jumping back to the beginning when it is full (this part is not essential, 
> but it makes the principle clear).  Then, when the photographer takes his 
> finger off the trigger, the last x seconds is permanently stored.  So you 
> keep your wits about you, and press the metaphorical "store" button just 
> after you have got the movie in the can so to speak

This idea seems equivalent to only storing permanently those datasets that 
actually yield structures worthy of deposition.

James



Re: [ccp4bb] IUCr committees, depositing images

2011-10-26 Thread Herbert J. Bernstein

Dear Colleagues,

  Gerard strikes a very useful note in pleading for a "can-do"
approach.  Part of going from "can-do" to "actually-done"
is to make realistic estimates of the costs of "doing" and
then to adjust plans appropriately to do what can be afforded
now and to work towards doing as much of what remains undone
as has sufficient benefit to justify the costs.

  We appear to be in a fortunate situation in which some
portion of the raw data behind a significant portion of the
studies released in the PDB could probably be retained for some
significant period of time and be made available for further
analysis.  It would seem wise to explore these possibilities
and try to optimize the approaches used -- e.g. to consider
moves towards well documented formats, and retention of critical
metadata with such data to help in future analysis.

  Please do not let the perfect be the enemy of the good.

  Regards,
Herbert

=
 Herbert J. Bernstein, Professor of Computer Science
   Dowling College, Kramer Science Center, KSC 121
Idle Hour Blvd, Oakdale, NY, 11769

 +1-631-244-3035
 y...@dowling.edu
=

On Wed, 26 Oct 2011, Gerard Bricogne wrote:


Dear John and colleagues,

There seem to be a set of centrifugal forces at play within this thread
that are distracting us from a sensible path of concrete action by throwing
decoys in every conceivable direction, e.g.

* "Pilatus detectors spew out such a volume of data that we can't
possibly archive it all" - does that mean that because the 5th generation of
Dectris detectors will be able to write one billion images a second and
catch every scattered photon individually, we should not try and archive
more information than is given by the current merged structure factor data?
That seems a complete failure of reasoning to me: there must be a sensible
form of raw data archiving that would stand between those two extremes and
would retain much more information than the current merged data but would
step back from the enormous degree of oversampling of the raw diffraction
pattern that the Pilatus and its successors are capable of.

* "It is all going to cost an awful lot of money, therefore we need a
team of grant writers to raise its hand and volunteer to apply for resources
from one or more funding agencies" - there again there is an avoidance of
the feasible by invocation of the impossible. The IUCr Forum already has an
outline of a feasibility study that would cost only a small amount of
joined-up thinking and book-keeping around already stored information, so
let us not use the inaccessibility of federal or EC funding as a scarecrow
to justify not even trying what is proposed there. And the idea that someone
needs to decide to stake his/her career on this undertaking seems totally
overblown.

Several people have already pointed out that the sets of images that
would need to be archived would be a very small subset of the bulk of
datasets that are being held on the storage systems of synchrotron sources.
What needs to be done, as already described, is to be able to refer to those
few datasets that gave rise to the integrated data against which deposited
structures were refined (or, in some cases, solved by experimental phasing),
to give them special status in terms of making them visible and accessible
on-line at the same time as the pdb entry itself (rather than after the
statutory 2-5 years that would apply to all the rest, probably in a more
off-line form), and to maintain that accessibility "for ever", with a link
from the pdb entry and perhaps from the associated publication. It seems
unlikely that this would involve the mobilisation of such large resources as
to require either a human sacrifice (of the poor person whose life would be
staked on this gamble) or writing a grant application, with the indefinite
postponement of action and the loss of motivation this would imply.

Coming back to the more technical issue of bloated datasets, it is a
scientific problem that must be amenable to rational analysis to decide on a
sensible form of compression of overly-verbose sets of thin-sliced, perhaps
low-exposure images that would already retain a large fraction, if not all,
of the extra information on which we would wish future improved versions of
processing programs to cut their teeth, for a long time to come. This
approach would seem preferable to stoking up irrational fears of not being
able to cope with the most exaggerated predictions of the volumes of data to
archive, and thus doing nothing at all.

I very much hope that the "can do" spirit that marked the final
discussions of the DDDWG (Diffraction Data Deposition Working Group) in
Madrid will emerge on top of all the counter-arguments that consist in
moving the goal posts to prove that the initial goal is unreachable.


With best wishes,

   

Re: [ccp4bb] IUCr committees, depositing images

2011-10-26 Thread Jacob Keller
Touché! But alas, I have no access to the PDB's server, so...

JPK

On Wed, Oct 26, 2011 at 11:54 AM, Frank von Delft
 wrote:
> Cool - we've found our volunteer!!
>
> On 26/10/2011 17:28, Jacob Keller wrote:
>>
>> Is anyone seriously questioning whether we should archive the images
>> used for published structures? That amount of space is trivial, could
>> be implemented just as another link in the PDB website, and would be
>> really helpful in some cases. One person could set it up in a day! You
>> could just make it a policy: no images, no PDB submission, no
>> publishing!
>>
>> Jacob
>>
>>
>> On Wed, Oct 26, 2011 at 11:15 AM, Gloria Borgstahl
>>  wrote:
>>>
>>> I just want to jump in to state that I am ALL FOR the notion of
>>> depositing the images that go with the structure factors and the
>>> refined structure.
>>>
>>> Through the years, I have been interviewing folks about the strange
>>> satellite diffraction they saw but ignored; they used the mains that they
>>> could integrate and deposited that structure, which does not help me to
>>> justify the existence of modulated protein crystals to reviewers.
>>>
>>> But if I could go and retrieve those images, and reanalyze with new
>>> methods.
>>> Dream come true.  Reviewers convinced.
>>>
>>> On Wed, Oct 26, 2011 at 10:59 AM, Patrick Shaw Stewart
>>>   wrote:

 Could you perhaps use the principle of "capture storage" that is used by
 wild-life photographers with high-speed cameras?
 The principle is that the movie is written to the same area of memory,
 jumping back to the beginning when it is full (this part is not
 essential,
 but it makes the principle clear).  Then, when the photographer takes
 his
 finger off the trigger, the last x seconds is permanently stored.  So
 you
 keep your wits about you, and press the metaphorical "store" button just
 after you have got the movie in the can so to speak

 Just a thought
 Patrick

 On Wed, Oct 26, 2011 at 2:18 PM, John R Helliwell
 wrote:
>
> Dear Frank,
> re 'who will write the grant?'.
>
> This is not as easy as it sounds, would that it were!
>
> There are two possible business plans:-
> Option 1. Specifically for MX is the PDB as the first and foremost
> candidate to seek such additional funds for full diffraction data
> deposition for each future PDB deposition entry. This business plan
> possibility is best answered by PDB/EBI (eg Gerard Kleywegt has
> answered this in the negative thus far at the CCP4 January 2010).
>
> Option 2 The Journals that host the publications could add the cost to
> the subscriber and/or the author according to their funding model. As
> an example and as a start a draft business plan has been written by
> one of us [JRH] for IUCr Acta Cryst E; this seemed attractive because
> of its simpler 'author pays' financing. This proposed business plan is
> now with IUCr Journals to digest and hopefully refine. Initial
> indications are that Acta Cryst C would be perceived by IUCr Journals
> as a better place to start considering this in detail, as it involves
> fewer crystal structures than Acta E and would thus be more
> manageable. The overall advantage of the responsibility being with
> Journals as we see it is that it encourages such 'archiving of data
> with literature' across all crystallography related techniques (single
> crystal, SAXS, SANS, Electron crystallography etc) and fields
> (Biology, Chemistry, Materials, Condensed Matter Physics etc) ie not
> just one technique and field, although obviously biology is dear to
> our hearts here in the CCP4bb.
>
> Yours sincerely,
> John and Tom
> John Helliwell  and Tom Terwilliger
>
> On Wed, Oct 26, 2011 at 9:21 AM, Frank von Delft
>   wrote:
>>
>> Since when has the cost of any project been limited by the cost of
>> hardware?  Someone has to implement this -- and make a career out of
>> it;
>> thunderingly absent from this thread has been the chorus of volunteers
>> who
>> will write the grant.
>> phx
>>
>>
>> On 25/10/2011 21:10, Herbert J. Bernstein wrote:
>>
>> To be fair to those concerned about cost, a more conservative estimate
>> from the NSF RDLM workshop last summer in Princeton is $1,000 to
>> $3,000
>> per terabyte per year for long term storage allowing for overhead in
>> moderate-sized institutions such as the PDB.  Larger entities, such
>> as Google are able to do it for much lower annual costs in the range
>> of
>> $100 to $300 per terabyte per year.  Indeed, if this becomes a serious
>> effort, one might wish to consider involving the large storage farm
>> businesses such as Google and Amazon.  They might be willing to help
>> support science partially in exchange for eyeballs going to their
>> sites.
>>

Re: [ccp4bb] IUCr committees, depositing images

2011-10-26 Thread Frank von Delft

Cool - we've found our volunteer!!

On 26/10/2011 17:28, Jacob Keller wrote:

Is anyone seriously questioning whether we should archive the images
used for published structures? That amount of space is trivial, could
be implemented just as another link in the PDB website, and would be
really helpful in some cases. One person could set it up in a day! You
could just make it a policy: no images, no PDB submission, no
publishing!

Jacob


On Wed, Oct 26, 2011 at 11:15 AM, Gloria Borgstahl  wrote:

I just want to jump in to state that I am ALL FOR the notion of
depositing the images that go with the structure factors and the
refined structure.

Through the years, I have been interviewing folks about the strange
satellite diffraction they saw but ignored; they used the mains that they
could integrate and deposited that structure, which does not help me to
justify the existence of modulated protein crystals to reviewers.

But if I could go and retrieve those images, and reanalyze with new methods.
Dream come true.  Reviewers convinced.

On Wed, Oct 26, 2011 at 10:59 AM, Patrick Shaw Stewart
  wrote:

Could you perhaps use the principle of "capture storage" that is used by
wild-life photographers with high-speed cameras?
The principle is that the movie is written to the same area of memory,
jumping back to the beginning when it is full (this part is not essential,
but it makes the principle clear).  Then, when the photographer takes his
finger off the trigger, the last x seconds is permanently stored.  So you
keep your wits about you, and press the metaphorical "store" button just
after you have got the movie in the can so to speak

Just a thought
Patrick

On Wed, Oct 26, 2011 at 2:18 PM, John R Helliwell
wrote:

Dear Frank,
re 'who will write the grant?'.

This is not as easy as it sounds, would that it were!

There are two possible business plans:-
Option 1. Specifically for MX is the PDB as the first and foremost
candidate to seek such additional funds for full diffraction data
deposition for each future PDB deposition entry. This business plan
possibility is best answered by PDB/EBI (eg Gerard Kleywegt has
answered this in the negative thus far at the CCP4 January 2010).

Option 2 The Journals that host the publications could add the cost to
the subscriber and/or the author according to their funding model. As
an example and as a start a draft business plan has been written by
one of us [JRH] for IUCr Acta Cryst E; this seemed attractive because
of its simpler 'author pays' financing. This proposed business plan is
now with IUCr Journals to digest and hopefully refine. Initial
indications are that Acta Cryst C would be perceived by IUCr Journals
as a better place to start considering this in detail, as it involves
fewer crystal structures than Acta E and would thus be more
manageable. The overall advantage of the responsibility being with
Journals as we see it is that it encourages such 'archiving of data
with literature' across all crystallography related techniques (single
crystal, SAXS, SANS, Electron crystallography etc) and fields
(Biology, Chemistry, Materials, Condensed Matter Physics etc) ie not
just one technique and field, although obviously biology is dear to
our hearts here in the CCP4bb.

Yours sincerely,
John and Tom
John Helliwell  and Tom Terwilliger

On Wed, Oct 26, 2011 at 9:21 AM, Frank von Delft
  wrote:

Since when has the cost of any project been limited by the cost of
hardware?  Someone has to implement this -- and make a career out of it;
thunderingly absent from this thread has been the chorus of volunteers
who
will write the grant.
phx


On 25/10/2011 21:10, Herbert J. Bernstein wrote:

To be fair to those concerned about cost, a more conservative estimate
from the NSF RDLM workshop last summer in Princeton is $1,000 to $3,000
per terabyte per year for long term storage allowing for overhead in
moderate-sized institutions such as the PDB.  Larger entities, such
as Google are able to do it for much lower annual costs in the range of
$100 to $300 per terabyte per year.  Indeed, if this becomes a serious
effort, one might wish to consider involving the large storage farm
businesses such as Google and Amazon.  They might be willing to help
support science partially in exchange for eyeballs going to their sites.

Regards,
H. J. Bernstein

At 1:56 PM -0600 10/25/11, James Stroud wrote:

On Oct 24, 2011, at 3:56 PM, James Holton wrote:

The PDB only gets about 8000 depositions per year

Just to put this into dollars. If each dataset is about 17 GB in
size, then that's about 14 TB of storage that needs to come online
every year to store the raw data for every structure. A two second
search reveals that Newegg has a 3 TB Hitachi for $200. So that's
about $1000 / year of storage for the raw data behind PDB deposits.

James





--
Professor John R Helliwell DSc



--
  patr...@douglas.co.ukDouglas Instruments Ltd.
  Douglas House, East Garston, Hungerford, Berkshire, RG17 7HD, UK
  Directors: P

Re: [ccp4bb] IUCr committees, depositing images

2011-10-26 Thread Jacob Keller
Is anyone seriously questioning whether we should archive the images
used for published structures? That amount of space is trivial, could
be implemented just as another link in the PDB website, and would be
really helpful in some cases. One person could set it up in a day! You
could just make it a policy: no images, no PDB submission, no
publishing!

Jacob


On Wed, Oct 26, 2011 at 11:15 AM, Gloria Borgstahl  wrote:
> I just want to jump in to state that I am ALL FOR the notion of
> depositing the images that go with the structure factors and the
> refined structure.
>
> Through the years, I have been interviewing folks about the strange
> satellite diffraction they saw but ignored; they used the mains that they
> could integrate and deposited that structure, which does not help me to
> justify the existence of modulated protein crystals to reviewers.
>
> But if I could go and retrieve those images, and reanalyze with new methods.
> Dream come true.  Reviewers convinced.
>
> On Wed, Oct 26, 2011 at 10:59 AM, Patrick Shaw Stewart
>  wrote:
>>
>> Could you perhaps use the principle of "capture storage" that is used by
>> wild-life photographers with high-speed cameras?
>> The principle is that the movie is written to the same area of memory,
>> jumping back to the beginning when it is full (this part is not essential,
>> but it makes the principle clear).  Then, when the photographer takes his
>> finger off the trigger, the last x seconds is permanently stored.  So you
>> keep your wits about you, and press the metaphorical "store" button just
>> after you have got the movie in the can so to speak
>>
>> Just a thought
>> Patrick
>>
>> On Wed, Oct 26, 2011 at 2:18 PM, John R Helliwell 
>> wrote:
>>>
>>> Dear Frank,
>>> re 'who will write the grant?'.
>>>
>>> This is not as easy as it sounds, would that it were!
>>>
>>> There are two possible business plans:-
>>> Option 1. Specifically for MX is the PDB as the first and foremost
>>> candidate to seek such additional funds for full diffraction data
>>> deposition for each future PDB deposition entry. This business plan
>>> possibility is best answered by PDB/EBI (eg Gerard Kleywegt has
>>> answered this in the negative thus far at the CCP4 January 2010).
>>>
>>> Option 2 The Journals that host the publications could add the cost to
>>> the subscriber and/or the author according to their funding model. As
>>> an example and as a start a draft business plan has been written by
>>> one of us [JRH] for IUCr Acta Cryst E; this seemed attractive because
>>> of its simpler 'author pays' financing. This proposed business plan is
>>> now with IUCr Journals to digest and hopefully refine. Initial
>>> indications are that Acta Cryst C would be perceived by IUCr Journals
>>> as a better place to start considering this in detail, as it involves
>>> fewer crystal structures than Acta E and would thus be more
>>> manageable. The overall advantage of the responsibility being with
>>> Journals as we see it is that it encourages such 'archiving of data
>>> with literature' across all crystallography related techniques (single
>>> crystal, SAXS, SANS, Electron crystallography etc) and fields
>>> (Biology, Chemistry, Materials, Condensed Matter Physics etc) ie not
>>> just one technique and field, although obviously biology is dear to
>>> our hearts here in the CCP4bb.
>>>
>>> Yours sincerely,
>>> John and Tom
>>> John Helliwell  and Tom Terwilliger
>>>
>>> On Wed, Oct 26, 2011 at 9:21 AM, Frank von Delft
>>>  wrote:
>>> > Since when has the cost of any project been limited by the cost of
>>> > hardware?  Someone has to implement this -- and make a career out of it;
>>> > thunderingly absent from this thread has been the chorus of volunteers
>>> > who
>>> > will write the grant.
>>> > phx
>>> >
>>> >
>>> > On 25/10/2011 21:10, Herbert J. Bernstein wrote:
>>> >
>>> > To be fair to those concerned about cost, a more conservative estimate
>>> > from the NSF RDLM workshop last summer in Princeton is $1,000 to $3,000
>>> > per terabyte per year for long term storage allowing for overhead in
>>> > moderate-sized institutions such as the PDB.  Larger entities, such
>>> > as Google are able to do it for much lower annual costs in the range of
>>> > $100 to $300 per terabyte per year.  Indeed, if this becomes a serious
>>> > effort, one might wish to consider involving the large storage farm
>>> > businesses such as Google and Amazon.  They might be willing to help
>>> > support science partially in exchange for eyeballs going to their sites.
>>> >
>>> > Regards,
>>> >    H. J. Bernstein
>>> >
>>> > At 1:56 PM -0600 10/25/11, James Stroud wrote:
>>> >
>>> > On Oct 24, 2011, at 3:56 PM, James Holton wrote:
>>> >
>>> > The PDB only gets about 8000 depositions per year
>>> >
>>> > Just to put this into dollars. If each dataset is about 17 GB in
>>> > size, then that's about 14 TB of storage that needs to come online
>>> > every year to store the raw data for every structure. A two second
>>> > sear

Re: [ccp4bb] IUCr committees, depositing images

2011-10-26 Thread Gloria Borgstahl
I just want to jump in to state that I am ALL FOR the notion of
depositing the images that go with the structure factors and the
refined structure.

Through the years, I have been interviewing folks about the strange
satellite diffraction they saw but ignored; they used the mains that they
could integrate and deposited that structure, which does not help me to
justify the existence of modulated protein crystals to reviewers.

But if I could go and retrieve those images, and reanalyze with new methods.
Dream come true.  Reviewers convinced.

On Wed, Oct 26, 2011 at 10:59 AM, Patrick Shaw Stewart
 wrote:
>
> Could you perhaps use the principle of "capture storage" that is used by
> wild-life photographers with high-speed cameras?
> The principle is that the movie is written to the same area of memory,
> jumping back to the beginning when it is full (this part is not essential,
> but it makes the principle clear).  Then, when the photographer takes his
> finger off the trigger, the last x seconds is permanently stored.  So you
> keep your wits about you, and press the metaphorical "store" button just
> after you have got the movie in the can so to speak
>
> Just a thought
> Patrick
>
> On Wed, Oct 26, 2011 at 2:18 PM, John R Helliwell 
> wrote:
>>
>> Dear Frank,
>> re 'who will write the grant?'.
>>
>> This is not as easy as it sounds, would that it were!
>>
>> There are two possible business plans:-
>> Option 1. Specifically for MX is the PDB as the first and foremost
>> candidate to seek such additional funds for full diffraction data
>> deposition for each future PDB deposition entry. This business plan
>> possibility is best answered by PDB/EBI (eg Gerard Kleywegt has
>> answered this in the negative thus far at the CCP4 January 2010).
>>
>> Option 2 The Journals that host the publications could add the cost to
>> the subscriber and/or the author according to their funding model. As
>> an example and as a start a draft business plan has been written by
>> one of us [JRH] for IUCr Acta Cryst E; this seemed attractive because
>> of its simpler 'author pays' financing. This proposed business plan is
>> now with IUCr Journals to digest and hopefully refine. Initial
>> indications are that Acta Cryst C would be perceived by IUCr Journals
>> as a better place to start considering this in detail, as it involves
>> fewer crystal structures than Acta E and would thus be more
>> manageable. The overall advantage of the responsibility being with
>> Journals as we see it is that it encourages such 'archiving of data
>> with literature' across all crystallography related techniques (single
>> crystal, SAXS, SANS, Electron crystallography etc) and fields
>> (Biology, Chemistry, Materials, Condensed Matter Physics etc) ie not
>> just one technique and field, although obviously biology is dear to
>> our hearts here in the CCP4bb.
>>
>> Yours sincerely,
>> John and Tom
>> John Helliwell  and Tom Terwilliger
>>
>> On Wed, Oct 26, 2011 at 9:21 AM, Frank von Delft
>>  wrote:
>> > Since when has the cost of any project been limited by the cost of
>> > hardware?  Someone has to implement this -- and make a career out of it;
>> > thunderingly absent from this thread has been the chorus of volunteers
>> > who
>> > will write the grant.
>> > phx
>> >
>> >
>> > On 25/10/2011 21:10, Herbert J. Bernstein wrote:
>> >
>> > To be fair to those concerned about cost, a more conservative estimate
>> > from the NSF RDLM workshop last summer in Princeton is $1,000 to $3,000
>> > per terabyte per year for long term storage allowing for overhead in
>> > moderate-sized institutions such as the PDB.  Larger entities, such
>> > as Google are able to do it for much lower annual costs in the range of
>> > $100 to $300 per terabyte per year.  Indeed, if this becomes a serious
>> > effort, one might wish to consider involving the large storage farm
>> > businesses such as Google and Amazon.  They might be willing to help
>> > support science partially in exchange for eyeballs going to their sites.
>> >
>> > Regards,
>> >    H. J. Bernstein
>> >
>> > At 1:56 PM -0600 10/25/11, James Stroud wrote:
>> >
>> > On Oct 24, 2011, at 3:56 PM, James Holton wrote:
>> >
>> > The PDB only gets about 8000 depositions per year
>> >
>> > Just to put this into dollars. If each dataset is about 17 GB in
>> > size, then that's about 14 TB of storage that needs to come online
>> > every year to store the raw data for every structure. A two second
>> > search reveals that Newegg has a 3 TB Hitachi for $200. So that's
>> > about $1000 / year of storage for the raw data behind PDB deposits.
>> >
>> > James
>> >
>> >
>>
>>
>>
>> --
>> Professor John R Helliwell DSc
>
>
>
> --
>  patr...@douglas.co.uk    Douglas Instruments Ltd.
>  Douglas House, East Garston, Hungerford, Berkshire, RG17 7HD, UK
>  Directors: Peter Baldock, Patrick Shaw Stewart
>
>  http://www.douglas.co.uk
>  Tel: 44 (0) 148-864-9090    US toll-free 1-877-225-2034
>  Regd. England 2177994, VAT Reg. GB 480 7371 36
>

Re: [ccp4bb] IUCr committees, depositing images

2011-10-26 Thread Patrick Shaw Stewart
Could you perhaps use the principle of "capture storage" that is used by
wild-life photographers with high-speed cameras?

The principle is that the movie is written to the same area of memory,
jumping back to the beginning when it is full (this part is not essential,
but it makes the principle clear).  Then, when the photographer takes his
finger off the trigger, the last x seconds is permanently stored.  So you
keep your wits about you, and press the metaphorical "store" button just
*after* you have got the movie in the can so to speak

Just a thought

Patrick
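For what it is worth, the idea maps naturally onto a fixed-size circular
buffer; a minimal sketch with hypothetical frame labels, not tied to any real
acquisition system:

    # "Capture storage" sketch: frames stream into a fixed-size ring buffer and
    # are only kept permanently when the experimenter presses "store", at which
    # point the last N frames are flushed to (here, simulated) permanent storage.
    from collections import deque

    class CaptureBuffer:
        def __init__(self, n_frames):
            self.ring = deque(maxlen=n_frames)   # oldest frames fall off the front
            self.archive = []                    # stands in for permanent storage

        def add_frame(self, frame):
            self.ring.append(frame)

        def store(self):
            # the metaphorical "store" button
            self.archive.extend(self.ring)
            self.ring.clear()

    buf = CaptureBuffer(n_frames=5)
    for i in range(12):                          # 12 frames arrive; only the last 5 survive
        buf.add_frame("frame_%03d" % i)
    buf.store()
    print(buf.archive)                           # ['frame_007', ..., 'frame_011']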


On Wed, Oct 26, 2011 at 2:18 PM, John R Helliwell wrote:

> Dear Frank,
> re 'who will write the grant?'.
>
> This is not as easy as it sounds, would that it were!
>
> There are two possible business plans:-
> Option 1. Specifically for MX is the PDB as the first and foremost
> candidate to seek such additional funds for full diffraction data
> deposition for each future PDB deposition entry. This business plan
> possibility is best answered by PDB/EBI (eg Gerard Kleywegt has
> answered this in the negative thus far at the CCP4 January 2010).
>
> Option 2 The Journals that host the publications could add the cost to
> the subscriber and/or the author according to their funding model. As
> an example and as a start a draft business plan has been written by
> one of us [JRH] for IUCr Acta Cryst E; this seemed attractive because
> of its simpler 'author pays' financing. This proposed business plan is
> now with IUCr Journals to digest and hopefully refine. Initial
> indications are that Acta Cryst C would be perceived by IUCr Journals
> as a better place to start considering this in detail, as it involves
> fewer crystal structures than Acta E and would thus be more
> manageable. The overall advantage of the responsibility being with
> Journals as we see it is that it encourages such 'archiving of data
> with literature' across all crystallography related techniques (single
> crystal, SAXS, SANS, Electron crystallography etc) and fields
> (Biology, Chemistry, Materials, Condensed Matter Physics etc) ie not
> just one technique and field, although obviously biology is dear to
> our hearts here in the CCP4bb.
>
> Yours sincerely,
> John and Tom
> John Helliwell  and Tom Terwilliger
>
> On Wed, Oct 26, 2011 at 9:21 AM, Frank von Delft
>  wrote:
> > Since when has the cost of any project been limited by the cost of
> > hardware?  Someone has to implement this -- and make a career out of it;
> > thunderingly absent from this thread has been the chorus of volunteers
> who
> > will write the grant.
> > phx
> >
> >
> > On 25/10/2011 21:10, Herbert J. Bernstein wrote:
> >
> > To be fair to those concerned about cost, a more conservative estimate
> > from the NSF RDLM workshop last summer in Princeton is $1,000 to $3,000
> > per terabyte per year for long term storage allowing for overhead in
> > moderate-sized institutions such as the PDB.  Larger entities, such
> > as Google are able to do it for much lower annual costs in the range of
> > $100 to $300 per terabyte per year.  Indeed, if this becomes a serious
> > effort, one might wish to consider involving the large storage farm
> > businesses such as Google and Amazon.  They might be willing to help
> > support science partially in exchange for eyeballs going to their sites.
> >
> > Regards,
> >H. J. Bernstein
> >
> > At 1:56 PM -0600 10/25/11, James Stroud wrote:
> >
> > On Oct 24, 2011, at 3:56 PM, James Holton wrote:
> >
> > The PDB only gets about 8000 depositions per year
> >
> > Just to put this into dollars. If each dataset is about 17 GB in
> > size, then that's about 14 TB of storage that needs to come online
> > every year to store the raw data for every structure. A two second
> > search reveals that Newegg has a 3 TB Hitachi for $200. So that's
> > about $1000 / year of storage for the raw data behind PDB deposits.
> >
> > James
> >
> >
>
>
>
> --
> Professor John R Helliwell DSc
>



-- 
 patr...@douglas.co.ukDouglas Instruments Ltd.
 Douglas House, East Garston, Hungerford, Berkshire, RG17 7HD, UK
 Directors: Peter Baldock, Patrick Shaw Stewart

 http://www.douglas.co.uk
 Tel: 44 (0) 148-864-9090US toll-free 1-877-225-2034
 Regd. England 2177994, VAT Reg. GB 480 7371 36


Re: [ccp4bb] IUCr committees, depositing images

2011-10-26 Thread Gerard Bricogne
Dear John and colleagues,

 There seem to be a set of centrifugal forces at play within this thread
that are distracting us from a sensible path of concrete action by throwing
decoys in every conceivable direction, e.g.

 * "Pilatus detectors spew out such a volume of data that we can't
possibly archive it all" - does that mean that because the 5th generation of
Dectris detectors will be able to write one billion images a second and
catch every scattered photon individually, we should not try and archive
more information than is given by the current merged structure factor data?
That seems a complete failure of reasoning to me: there must be a sensible
form of raw data archiving that would stand between those two extremes and
would retain much more information than the current merged data but would
step back from the enormous degree of oversampling of the raw diffraction
pattern that the Pilatus and its successors are capable of.

 * "It is all going to cost an awful lot of money, therefore we need a
team of grant writers to raise its hand and volunteer to apply for resources
from one or more funding agencies" - there again there is an avoidance of
the feasible by invocation of the impossible. The IUCr Forum already has an
outline of a feasibility study that would cost only a small amount of
joined-up thinking and book-keeping around already stored information, so
let us not use the inaccessibility of federal or EC funding as a scarecrow
to justify not even trying what is proposed there. And the idea that someone
needs to decide to stake his/her career on this undertaking seems totally
overblown.

 Several people have already pointed out that the sets of images that
would need to be archived would be a very small subset of the bulk of
datasets that are being held on the storage systems of synchrotron sources.
What needs to be done, as already described, is to be able to refer to those
few datasets that gave rise to the integrated data against which deposited
structures were refined (or, in some cases, solved by experimental phasing),
to give them special status in terms of making them visible and accessible
on-line at the same time as the pdb entry itself (rather than after the
statutory 2-5 years that would apply to all the rest, probably in a more
off-line form), and to maintain that accessibility "for ever", with a link
from the pdb entry and perhaps from the associated publication. It seems
unlikely that this would involve the mobilisation of such large resources as
to require either a human sacrifice (of the poor person whose life would be
staked on this gamble) or writing a grant application, with the indefinite
postponement of action and the loss of motivation this would imply.

 Coming back to the more technical issue of bloated datasets, it is a
scientific problem that must be amenable to rational analysis to decide on a
sensible form of compression of overly-verbose sets of thin-sliced, perhaps
low-exposure images that would already retain a large fraction, if not all,
of the extra information on which we would wish future improved versions of
processing programs to cut their teeth, for a long time to come. This
approach would seem preferable to stoking up irrational fears of not being
able to cope with the most exaggerated predictions of the volumes of data to
archive, and thus doing nothing at all.
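 To put a rough, purely illustrative number on how compressible such frames can
be, here is a toy sketch using a synthetic, mostly-empty 16-bit image of roughly
Pilatus-6M dimensions (the background level and compression setting are
arbitrary assumptions, not a statement about any particular detector or file
format):

    # Toy estimate of lossless compressibility for a low-exposure frame: a
    # mostly-empty 16-bit image with a weak Poisson background compresses well,
    # because most pixels hold zeros or very small counts.
    import zlib
    import numpy as np

    rng = np.random.default_rng(2)
    frame = rng.poisson(lam=0.05, size=(2527, 2463)).astype(np.uint16)  # ~Pilatus-6M-sized

    raw = frame.tobytes()
    packed = zlib.compress(raw, level=6)

    print("raw size  : %.1f MB" % (len(raw) / 1e6))
    print("compressed: %.1f MB" % (len(packed) / 1e6))
    print("ratio     : %.0fx" % (len(raw) / len(packed)))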

 I very much hope that the "can do" spirit that marked the final
discussions of the DDDWG (Diffraction Data Deposition Working Group) in
Madrid will emerge on top of all the counter-arguments that consist in
moving the goal posts to prove that the initial goal is unreachable.


 With best wishes,
 
  Gerard.

--
On Wed, Oct 26, 2011 at 02:18:25PM +0100, John R Helliwell wrote:
> Dear Frank,
> re 'who will write the grant?'.
> 
> This is not as easy as it sounds, would that it were!
> 
> There are two possible business plans:-
> Option 1. Specifically for MX is the PDB as the first and foremost
> candidate to seek such additional funds for full diffraction data
> deposition for each future PDB deposition entry. This business plan
> possibility is best answered by PDB/EBI (eg Gerard Kleywegt has
> answered this in the negative thus far at the CCP4 January 2010).
> 
> Option 2 The Journals that host the publications could add the cost to
> the subscriber and/or the author according to their funding model. As
> an example and as a start a draft business plan has been written by
> one of us [JRH] for IUCr Acta Cryst E; this seemed attractive because
> of its simpler 'author pays' financing. This proposed business plan is
> now with IUCr Journals to digest and hopefully refine. Initial
> indications are that Acta Cryst C would be perceived by IUCr Journals
> as a better place to start considering this in detail, as it involves
> fewer crystal structures than Acta E and would thus be more
> manageable. The overall advantage of the responsibility being with
> Journals

Re: [ccp4bb] IUCr committees, depositing images

2011-10-26 Thread John R Helliwell
Dear Frank,
re 'who will write the grant?'.

This is not as easy as it sounds, would that it were!

There are two possible business plans:-
Option 1. Specifically for MX is the PDB as the first and foremost
candidate to seek such additional funds for full diffraction data
deposition for each future PDB deposition entry. This business plan
possibility is best answered by PDB/EBI (eg Gerard Kleywegt has
answered this in the negative thus far at the CCP4 January 2010).

Option 2 The Journals that host the publications could add the cost to
the subscriber and/or the author according to their funding model. As
an example and as a start a draft business plan has been written by
one of us [JRH] for IUCr Acta Cryst E; this seemed attractive because
of its simpler 'author pays' financing. This proposed business plan is
now with IUCr Journals to digest and hopefully refine. Initial
indications are that Acta Cryst C would be perceived by IUCr Journals
as a better place to start considering this in detail, as it involves
fewer crystal structures than Acta E and would thus be more
manageable. The overall advantage of the responsibility being with
Journals as we see it is that it encourages such 'archiving of data
with literature' across all crystallography related techniques (single
crystal, SAXS, SANS, Electron crystallography etc) and fields
(Biology, Chemistry, Materials, Condensed Matter Physics etc) ie not
just one technique and field, although obviously biology is dear to
our hearts here in the CCP4bb.

Yours sincerely,
John and Tom
John Helliwell  and Tom Terwilliger

On Wed, Oct 26, 2011 at 9:21 AM, Frank von Delft
 wrote:
> Since when has the cost of any project been limited by the cost of
> hardware?  Someone has to implement this -- and make a career out of it;
> thunderingly absent from this thread has been the chorus of volunteers who
> will write the grant.
> phx
>
>
> On 25/10/2011 21:10, Herbert J. Bernstein wrote:
>
> To be fair to those concerned about cost, a more conservative estimate
> from the NSF RDLM workshop last summer in Princeton is $1,000 to $3,000
> per terabyte per year for long term storage allowing for overhead in
> moderate-sized institutions such as the PDB.  Larger entities, such
> as Google are able to do it for much lower annual costs in the range of
> $100 to $300 per terabyte per year.  Indeed, if this becomes a serious
> effort, one might wish to consider involving the large storage farm
> businesses such as Google and Amazon.  They might be willing to help
> support science partially in exchange for eyeballs going to their sites.
>
> Regards,
>H. J. Bernstein
>
> At 1:56 PM -0600 10/25/11, James Stroud wrote:
>
> On Oct 24, 2011, at 3:56 PM, James Holton wrote:
>
> The PDB only gets about 8000 depositions per year
>
> Just to put this into dollars. If each dataset is about 17 GB in
> size, then that's about 14 TB of storage that needs to come online
> every year to store the raw data for every structure. A two second
> search reveals that Newegg has a 3 TB Hitachi for $200. So that's
> about $1000 / year of storage for the raw data behind PDB deposits.
>
> James
>
>



-- 
Professor John R Helliwell DSc


Re: [ccp4bb] IUCr committees, depositing images

2011-10-26 Thread Loes Kroon-Batenburg

Dear James,

Good analysis! You bring up important points.

On 10/24/11 23:56, James Holton wrote:

The Pilatus is fast, but for decades now we have had detectors that can read
out in ~1s. This means that you can collect a typical ~100 image dataset in a
few minutes (if flux is not limiting). Since there are ~150 beamlines
currently operating around the world and they are open about 200 days/year,
we should be collecting ~20,000,000 datasets each year.

We're not.

The PDB only gets about 8000 depositions per year, which means either we
throw away 99.96% of our images, or we don't actually collect images anywhere
near the ultimate capacity of the equipment we have. In my estimation, both
of these play about equal roles, with ~50-fold attrition between ultimate
data collection capacity and actual collected data, and another ~50 fold
attrition between collected data sets and published structures.


Your estimate says: we collect 1/50 * 20,000,000 = 400,000 data sets, of which 
only 8,000 get deposited.
An average Pilatus data set (0.1 degree scan) takes about 4 GB (compressed, 
without losing information; EVAL can read those!). Storing the 8,000 deposited 
data sets, as James Stroud mentions, cannot be the problem.
It is the 392,000 other data sets that we have to find a home for. That would be 
1568 TB and would cost $49,000/year. This may be a slight overestimate, but 
it shows us the problem we face if we want to store ALL raw data.
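
As a quick sanity check on these figures, here is a minimal sketch of the arithmetic (Python). The data volume follows directly from the numbers above; the dollar figure depends entirely on which cost per terabyte per year one assumes, and this thread quotes anything from bare-drive prices up to $1,000-3,000/TB/year for curated institutional storage, so the rate below is an explicit assumption, not a recommendation.

# Back-of-the-envelope volume for the ~392,000 data sets per year that are
# collected but never deposited, at roughly 4 GB per compressed Pilatus set.
datasets_per_year = 392_000
gb_per_dataset = 4
total_tb = datasets_per_year * gb_per_dataset / 1000
print(f"undeposited raw-image volume: {total_tb:,.0f} TB per year")   # ~1568 TB

# The annual bill is simply total_tb times whatever $/TB/year figure you adopt.
def annual_cost(usd_per_tb_year):
    """Yearly storage bill under an assumed per-terabyte rate."""
    return total_tb * usd_per_tb_year

print(f"e.g. at an assumed $1,000/TB/year: ${annual_cost(1000):,.0f} per year")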


Even if we found a way to store all these data, how would we set up a 
useful database?  If we store all data only by name, date and beamline, we will 
in the end inevitably be drowning in a sea of information. It is very unlikely 
that the really interesting data sets will ever be found and used.
It would be much more useful if every data set were annotated by the user 
or beamline scientist, e.g. "impossible to index", "bad data from the 
integration step", "overlap", "diffuse streaks", etc. Such information could 
be part of the metadata. This, however, takes time and may not fit the 
eagerness to get results from one of the other data sets recorded during the 
same synchrotron trip.
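
Purely as an illustration of what such annotation might look like (the field names below are invented for this example and follow no existing standard), a tiny per-data-set record written next to the images would already go a long way:

import json

# Hypothetical annotation record for one data set; field names and values are
# made up for illustration only.
annotation = {
    "beamline": "ID23-1",                       # example value
    "date": "2011-10-26",
    "dataset": "xtal7_run2",
    "deposited": False,
    "tags": ["impossible to index", "diffuse streaks"],
    "comment": "split lattice; kept for methods development",
}

with open("xtal7_run2.annotation.json", "w") as handle:
    json.dump(annotation, handle, indent=2)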

I am afraid that just throwing data sets into a big pool will not be very useful.

Loes.
--
__

Dr. Loes Kroon-Batenburg
Dept. of Crystal and Structural Chemistry
Bijvoet Center for Biomolecular Research
Utrecht University
Padualaan 8, 3584 CH Utrecht
The Netherlands

E-mail : l.m.j.kroon-batenb...@uu.nl
phone  : +31-30-2532865
fax: +31-30-2533940
__


Re: [ccp4bb] IUCr committees, depositing images

2011-10-26 Thread George M. Sheldrick
This raises an important point. The new continuous-readout detectors such as the
Pilatus for beamlines or the Bruker Photon for in-house use enable the crystal to
be rotated at constant velocity, eliminating the mechanical errors associated with
'stop and go' data collection. Storing their data in 'frames' is an artificial
construction that is currently required for the established data integration
programs but is in fact throwing away information. Maybe in 10 years' time 'frames'
will be as obsolete as punched cards!

George

On Wed, Oct 26, 2011 at 09:39:40AM +0100, Graeme Winter wrote:
> Hi James,
> 
> Just to pick up on your point about the Pilatus detectors. Yesterday
> in 2 hours of giving a beamline a workout (admittedly with Thaumatin)
> we acquired 400 + GB of data*. Now I appreciate that this is not
> really routine operation, but it does raise an interesting point - if
> you have loaded a sample and centred it, collected test shots and
> decided it's not that great, why not collect anyway as it may later
> prove to be useful?
> 
> Bzzt. 2 minutes or less later you have a full data set, and barely
> even time to go get a cup of tea.
> 
> This does to some extent move the goalposts, as you can acquire far
> more data than you need. You never know, you may learn something
> interesting from it - perhaps it has different symmetry or packing?
> What it does mean is if we can have a method of tagging this data
> there may be massively more opportunity to get also-ran data sets for
> methods development types. What it also means however is that the cost
> of curating this data is then an order of magnitude higher.
> 
> Moving it around is also rather more painful.
> 
> Anyhow, I would try to avoid dismissing the effect that new continuous
> readout detectors will have on data rates, from experience it is
> pretty substantial.
> 
> Cheerio,
> 
> Graeme
> 
> *by "data" here what I mean is images, rather than information which
> is rather more time consuming to acquire. I would argue you get that
> from processing / analysing the data...
> 
> On 24 October 2011 22:56, James Holton  wrote:
> > The Pilatus is fast, but for decades now we have had detectors that can read
> > out in ~1s.  This means that you can collect a typical ~100 image dataset in
> > a few minutes (if flux is not limiting).  Since there are ~150 beamlines
> > currently operating around the world and they are open about 200 days/year,
> > we should be collecting ~20,000,000 datasets each year.
> >
> > We're not.
> >
> > The PDB only gets about 8000 depositions per year, which means either we
> > throw away 99.96% of our images, or we don't actually collect images
> > anywhere near the ultimate capacity of the equipment we have.  In my
> > estimation, both of these play about equal roles, with ~50-fold attrition
> > between ultimate data collection capacity and actual collected data, and
> > another ~50 fold attrition between collected data sets and published
> > structures.
> >
> > Personally, I think this means that the time it takes to collect the final
> > dataset is not rate-limiting in a "typical" structural biology
> > project/paper.  This does not mean that the dataset is of little value.
> >  Quite the opposite!  About 3000x more time and energy is expended preparing
> > for the final dataset than is spent collecting it, and these efforts require
> > experimental feedback.  The trick is figuring out how best to compress the
> > "data used to solve a structure" for archival storage.  Do the "previous
> > data sets" count?  Or should the compression be "lossy" about such
> > historical details?  Does the stuff between the spots matter?  After all,
> > h,k,l,F,sigF is really just a form of data compression.  In fact, there is
> > no such thing as "raw" data.  Even "raw" diffraction images are a
> > simplification of the signals that came out of the detector electronics.
> >  But we round-off and average over a lot of things to remove "noise".
> >  Largely because "noise" is difficult to compress.  The question of how much
> > compression is too much compression depends on which information (aka noise)
> > you think could be important in the future.
> >
> > When it comes to fine-sliced data, such as that from Pilatus, the main
> > reason why it doesn't compress very well is not because of the spots, but
> > the background.  It occupies thousands of times more pixels than the spots.
> >  Yes, there is diffuse scattering information in the background pixels, but
> > this kind of data is MUCH smoother than the spot data (by definition), and
> > therefore is optimally stored in larger pixels.  Last year, I messed around
> > a bit with applying different compression protocols to the spots and the
> > background, and found that ~30 fold compression can be easily achieved if
> > you apply h264 to the background and store the "spots" with lossless png
> > compression:
> >
> > http://bl831.als.lbl.gov/~jamesh/lossy_compression/
> >
> > 

Re: [ccp4bb] IUCr committees, depositing images

2011-10-26 Thread Matthew BOWLER
The archiving of all raw data and subsequently making it public is 
something that the large facilities are currently debating whether to 
do.  Here at the ESRF we store user data for only 6 months (and I 
believe it is available for longer on tape) and we already have trouble 
with capacity.  My personal view is that facilities should take the lead 
on this - for MX we already have a very good archiving system, ISPyB, 
also running at Diamond.  ISPyB stores lots of metadata and JPEGs of the 
raw images, but not the images themselves; instead it keeps a link to 
the location of the data, with an option to download if still available.  
My preferred option would be to store all academically funded data and 
then make it publicly available after, say, 2-5 years (this will no 
doubt spark another debate on time limits, special dispensation, etc.).  
What needs to be thought about is how to order the data and how to make 
sure that the correct metadata are stored with each data set - this will 
rely heavily on user input at the time of the experiment, rather than on 
gathering together data sets for deposition much later.  As already 
mentioned, this type of resource could be extremely useful for developers 
and also as a general scientific resource.  Smells like an EU grant to me. 
Cheers, Matt.



On 26/10/2011 10:21, Frank von Delft wrote:
Since when has the cost of any project been limited by the cost of 
hardware?  Someone has to /implement/ this -- and make a career out of 
it;  thunderingly absent from this thread has been the chorus of 
volunteers who will write the grant.

phx





--
Matthew Bowler
Structural Biology Group
European Synchrotron Radiation Facility
B.P. 220, 6 rue Jules Horowitz
F-38043 GRENOBLE CEDEX
FRANCE
===
Tel: +33 (0) 4.76.88.29.28
Fax: +33 (0) 4.76.88.29.04

http://go.esrf.eu/MX
http://go.esrf.eu/Bowler
===



Re: [ccp4bb] IUCr committees, depositing images

2011-10-26 Thread Graeme Winter
Hi James,

Just to pick up on your point about the Pilatus detectors. Yesterday
in 2 hours of giving a beamline a workout (admittedly with Thaumatin)
we acquired 400 + GB of data*. Now I appreciate that this is not
really routine operation, but it does raise an interesting point - if
you have loaded a sample and centred it, collected test shots and
decided it's not that great, why not collect anyway as it may later
prove to be useful?

Bzzt. 2 minutes or less later you have a full data set, and barely
even time to go get a cup of tea.

This does to some extent move the goalposts, as you can acquire far
more data than you need. You never know, you may learn something
interesting from it - perhaps it has different symmetry or packing?
What it does mean is that if we have a method of tagging these data,
there may be massively more opportunity to get also-ran data sets for
methods-development types. What it also means, however, is that the cost
of curating these data is then an order of magnitude higher.

Moving it around is also rather more painful.

Anyhow, I would try to avoid dismissing the effect that new continuous
readout detectors will have on data rates, from experience it is
pretty substantial.

Cheerio,

Graeme

*by "data" here what I mean is images, rather than information which
is rather more time consuming to acquire. I would argue you get that
from processing / analysing the data...

On 24 October 2011 22:56, James Holton  wrote:
> The Pilatus is fast, but for decades now we have had detectors that can read
> out in ~1s.  This means that you can collect a typical ~100 image dataset in
> a few minutes (if flux is not limiting).  Since there are ~150 beamlines
> currently operating around the world and they are open about 200 days/year,
> we should be collecting ~20,000,000 datasets each year.
>
> We're not.
>
> The PDB only gets about 8000 depositions per year, which means either we
> throw away 99.96% of our images, or we don't actually collect images
> anywhere near the ultimate capacity of the equipment we have.  In my
> estimation, both of these play about equal roles, with ~50-fold attrition
> between ultimate data collection capacity and actual collected data, and
> another ~50 fold attrition between collected data sets and published
> structures.
>
> Personally, I think this means that the time it takes to collect the final
> dataset is not rate-limiting in a "typical" structural biology
> project/paper.  This does not mean that the dataset is of little value.
>  Quite the opposite!  About 3000x more time and energy is expended preparing
> for the final dataset than is spent collecting it, and these efforts require
> experimental feedback.  The trick is figuring out how best to compress the
> "data used to solve a structure" for archival storage.  Do the "previous
> data sets" count?  Or should the compression be "lossy" about such
> historical details?  Does the stuff between the spots matter?  After all,
> h,k,l,F,sigF is really just a form of data compression.  In fact, there is
> no such thing as "raw" data.  Even "raw" diffraction images are a
> simplification of the signals that came out of the detector electronics.
>  But we round-off and average over a lot of things to remove "noise".
>  Largely because "noise" is difficult to compress.  The question of how much
> compression is too much compression depends on which information (aka noise)
> you think could be important in the future.
>
> When it comes to fine-sliced data, such as that from Pilatus, the main
> reason why it doesn't compress very well is not because of the spots, but
> the background.  It occupies thousands of times more pixels than the spots.
>  Yes, there is diffuse scattering information in the background pixels, but
> this kind of data is MUCH smoother than the spot data (by definition), and
> therefore is optimally stored in larger pixels.  Last year, I messed around
> a bit with applying different compression protocols to the spots and the
> background, and found that ~30 fold compression can be easily achieved if
> you apply h264 to the background and store the "spots" with lossless png
> compression:
>
> http://bl831.als.lbl.gov/~jamesh/lossy_compression/
>
> I think these results "speak" to the relative information content of the
> spots and the pixels between them.  Perhaps at least the "online version" of
> archived images could be in some sort of lossy-background format?  With the
> "real images" in some sort of slower storage (like a room full of tapes that
> are available upon request)?  Would 30-fold compression make the storage of
> image data tractable enough for some entity like the PDB to be able to
> afford it?
>
>
> I go to a lot of methods meetings, and it pains me to see the most brilliant
> minds in the field starved for "interesting" data sets.  The problem is that
> it is very easy to get people to send you data that is so bad that it can't
> be solved by any software imaginable (I've got pil

Re: [ccp4bb] IUCr committees, depositing images

2011-10-26 Thread Frank von Delft
Since when has the cost of any project been limited by the cost of 
hardware?  Someone has to /implement/ this -- and make a career out of 
it;  thunderingly absent from this thread has been the chorus of 
volunteers who will write the grant.

phx


On 25/10/2011 21:10, Herbert J. Bernstein wrote:

To be fair to those concerned about cost, a more conservative estimate
from the NSF RDLM workshop last summer in Princeton is $1,000 to $3,000
per terabyte per year for long term storage allowing for overhead in
moderate-sized institutions such as the PDB.  Larger entities, such
as Google are able to do it for much lower annual costs in the range of
$100 to $300 per terabyte per year.  Indeed, if this becomes a serious
effort, one might wish to consider involving the large storage farm
businesses such as Google and Amazon.  They might be willing to help
support science partially in exchange for eyeballs going to their sites.

Regards,
H. J. Bernstein

At 1:56 PM -0600 10/25/11, James Stroud wrote:

On Oct 24, 2011, at 3:56 PM, James Holton wrote:


The PDB only gets about 8000 depositions per year


Just to put this into dollars. If each dataset is about 1.7 GB in
size, then that's about 14 TB of storage that needs to come online
every year to store the raw data for every structure. A two-second
search reveals that Newegg has a 3 TB Hitachi for $200. So that's
about $1000/year of storage for the raw data behind PDB deposits.

James




Re: [ccp4bb] IUCr committees, depositing images

2011-10-25 Thread Nat Echols
On Tue, Oct 25, 2011 at 3:20 PM, Pete Meyer  wrote:

> This is probably an idea that has already been tried (or discarded as
> unsuitable for reasons that don't occur to me at the moment) - but why not
> start with good crystals (such as lysozyme) and deliberately make them
> worse?  Exactly how would depend on what kind of methods you were trying to
> develop - but I'd imagine "titrating in" organic solvents/detergents would
> be able to turn a well-diffracting crystal into a poor one (with a known, or
> at least knowable, answer). Deliberately causing radiation damage, or using
> known poor cryo-conditions would also work - probably the type of "badness"
> in the data would be different.
>
> I don't think you'd be able to tune solvent content or number of anomalous
> scatterers by damaging good crystals.  This would also require a decent
> number of crystals (but lysozyme is reasonably inexpensive). But making good
> crystals from bad ones is difficult - making bad ones from good ones
> shouldn't be.
>
> Any ideas why this wouldn't work (or citations where it did)?


It probably depends on what kind of methods you are trying to develop, but
there are any number of reasons why it wouldn't be a complete substitute for
bad data found in the wild (you've already listed a couple), all of them
essentially coming down to the same problem: you're limiting what kind of
imperfections and pathologies you'll see.  How do you get a crystal that is
just slightly split, for instance?  Or one that diffracts to high resolution
but has a mosaicity of 2.5 degrees?  I can think of plenty of other special
cases, and the data processing people probably have dozens of ideas.  We
could spend a month abusing lysozyme crystals and still not come up with
anywhere near the diversity that the community can generate with their
membrane proteins, glycoproteins, RNA, etc.  This is true for everything
from data processing to refinement.  (Fortunately, for refinement we already
have most of what we need in the PDB.)

That doesn't mean it's not a good idea anyway; I've considered doing
something similar to generate pairs of realistic high-resolution and
low-resolution data.  (Probably not using lysozyme - it's much too small to
be representative of the average 4A structure.)  It's a good summer project
for a capable undergraduate student, if you have synchrotron time to burn.

-Nat


Re: [ccp4bb] IUCr committees, depositing images

2011-10-25 Thread Pete Meyer

James Holton wrote:
I go to a lot of methods meetings, and it pains me to see the most 
brilliant minds in the field starved for "interesting" data sets.  The 
problem is that it is very easy to get people to send you data that is 
so bad that it can't be solved by any software imaginable (I've got 
piles of that!).  As a developer, what you really need is a "right 
answer" so you can come up with better metrics for how close you are to 
it.  Ironically, bad, unsolvable data that is connected to a right 
answer (aka a PDB ID) is very difficult to obtain.  The explanations 


This is probably an idea that has already been tried (or discarded as 
unsuitable for reasons that don't occur to me at the moment) - but why 
not start with good crystals (such as lysozyme) and deliberately make 
them worse?  Exactly how would depend on what kind of methods you were 
trying to develop - but I'd imagine "titrating in" organic 
solvents/detergents would be able to turn a well-diffracting crystal 
into a poor one (with a known, or at least knowable, answer). 
Deliberately causing radiation damage, or using known poor 
cryo-conditions would also work - probably the type of "badness" in the 
data would be different.


I don't think you'd be able to tune solvent content or number of 
anomalous scatterers by damaging good crystals.  This would also require 
a decent number of crystals (but lysozyme is reasonably inexpensive). 
But making good crystals from bad ones is difficult - making bad ones 
from good ones shouldn't be.


Any ideas why this wouldn't work (or citations where it did)?

Pete

usually involve protestations about being in the middle of writing up 
the paper, the student graduated and we don't understand how he/she 
labeled the tapes, or the RAID crashed and we lost it all, etc. etc.  
Then again, just finding someone who has a data set with the kind of 
problem you are interested in is a lot of work!  So is figuring out 
which problem affects the most people, and is therefore "interesting".


Is this not exactly the kind of thing that publicly-accessible 
centralized scientific databases are created to address?


-James Holton
MAD Scientist

On 10/16/2011 11:38 AM, Frank von Delft wrote:

On the deposition of raw data:

I recommend to the committee that before it convenes again, every 
member should go collect some data on a beamline with a Pilatus 
detector [feel free to join us at Diamond].  Because by the probable 
time any recommendations actually emerge, most beamlines will have one 
of those (or similar), we'll be generating more data than the LHC, and 
users will be happy just to have it integrated, never mind worry about 
its fate.


That's not an endorsement, btw, just an observation/prediction.

phx.




On 14/10/2011 23:56, Thomas C. Terwilliger wrote:

For those who have strong opinions on what data should be deposited...

The IUCR is just starting a serious discussion of this subject. Two
committees, the "Data Deposition Working Group", led by John Helliwell,
and the Commission on Biological Macromolecules (chaired by Xiao-Dong 
Su)

are working on this.

Two key issues are (1) feasibility and importance of deposition of raw
images and (2) deposition of sufficient information to fully 
reproduce the

crystallographic analysis.

I am on both committees and would be happy to hear your ideas 
(off-list).
I am sure the other members of the committees would welcome your 
thoughts

as well.

-Tom T

Tom Terwilliger
terwilli...@lanl.gov



This is a follow up (or a digression) to James comparing test set to
missing reflections.  I also heard this issue mentioned before but was
always too lazy to actually pursue it.

So.

The role of the test set is to prevent overfitting.  Let's say I have
the final model and I monitored the Rfree every step of the way and 
can
conclude that there is no overfitting.  Should I do the final 
refinement

against complete dataset?

IMCO, I absolutely should.  The test set reflections contain
information, and the "final" model is actually biased towards the
working set.  Refining using all the data can only improve the 
accuracy

of the model, if only slightly.

The second question is practical.  Let's say I want to deposit the
results of the refinement against the full dataset as my final model.
Should I not report the Rfree and instead insert a remark 
explaining the

situation?  If I report the Rfree prior to the test set removal, it is
certain that every validation tool will report a mismatch.  It does 
not

seem that the PDB has a mechanism to deal with this.

Cheers,

Ed.



--
Oh, suddenly throwing a giraffe into a volcano to make water is crazy?
 Julian, King of 
Lemurs




Re: [ccp4bb] IUCr committees, depositing images

2011-10-25 Thread Herbert J. Bernstein

To be fair to those concerned about cost, a more conservative estimate
from the NSF RDLM workshop last summer in Princeton is $1,000 to $3,000
per terabyte per year for long term storage allowing for overhead in
moderate-sized institutions such as the PDB.  Larger entities, such
as Google are able to do it for much lower annual costs in the range of
$100 to $300 per terabyte per year.  Indeed, if this becomes a serious
effort, one might wish to consider involving the large storage farm
businesses such as Google and Amazon.  They might be willing to help
support science partially in exchange for eyeballs going to their sites.

Regards,
  H. J. Bernstein

At 1:56 PM -0600 10/25/11, James Stroud wrote:

On Oct 24, 2011, at 3:56 PM, James Holton wrote:


The PDB only gets about 8000 depositions per year



Just to put this into dollars. If each dataset is about 1.7 GB in 
size, then that's about 14 TB of storage that needs to come online 
every year to store the raw data for every structure. A two-second 
search reveals that Newegg has a 3 TB Hitachi for $200. So that's 
about $1000/year of storage for the raw data behind PDB deposits.


James



--
=
 Herbert J. Bernstein, Professor of Computer Science
 Dowling College, Brookhaven Campus, B111B
   1300 William Floyd Parkway, Shirley, NY, 11967

 +1-631-244-1328
   Lab: +1-631-244-1935
  Cell: +1-631-428-1397
 y...@dowling.edu
=


Re: [ccp4bb] IUCr committees, depositing images

2011-10-25 Thread James Stroud

On Oct 24, 2011, at 3:56 PM, James Holton wrote:

> The PDB only gets about 8000 depositions per year

Just to put this into dollars. If each dataset is about 1.7 GB in size, then 
that's about 14 TB of storage that needs to come online every year to store the 
raw data for every structure. A two-second search reveals that Newegg has a 3 TB 
Hitachi for $200. So that's about $1000/year of storage for the raw data 
behind PDB deposits.

James



Re: [ccp4bb] IUCr committees, depositing images

2011-10-25 Thread Jrh
Dear James,
This is technically ingenious stuff.

Perhaps it could be applied to help the 'full archive challenge', i.e. the archive 
containing many data sets that will never lead to publication/database deposition?

However, for the latter (publication/deposition) subset you would surely not 
'tamper' with the raw measurements?

The 'grey area' between the two clear-cut cases, i.e. where publication/deposition 
MAY eventually result, then becomes the challenge as to whether to compress or 
not. (I would still prefer no tampering.)

Greetings,
John

Prof John R Helliwell DSc 




On 24 Oct 2011, at 22:56, James Holton  wrote:

> The Pilatus is fast, but for decades now we have had detectors that can read 
> out in ~1s.  This means that you can collect a typical ~100 image dataset in 
> a few minutes (if flux is not limiting).  Since there are ~150 beamlines 
> currently operating around the world and they are open about 200 days/year, 
> we should be collecting ~20,000,000 datasets each year.
> 
> We're not.
> 
> The PDB only gets about 8000 depositions per year, which means either we 
> throw away 99.96% of our images, or we don't actually collect images anywhere 
> near the ultimate capacity of the equipment we have.  In my estimation, both 
> of these play about equal roles, with ~50-fold attrition between ultimate 
> data collection capacity and actual collected data, and another ~50 fold 
> attrition between collected data sets and published structures.
> 
> Personally, I think this means that the time it takes to collect the final 
> dataset is not rate-limiting in a "typical" structural biology project/paper. 
>  This does not mean that the dataset is of little value.  Quite the opposite! 
>  About 3000x more time and energy is expended preparing for the final dataset 
> than is spent collecting it, and these efforts require experimental feedback. 
>  The trick is figuring out how best to compress the "data used to solve a 
> structure" for archival storage.  Do the "previous data sets" count?  Or 
> should the compression be "lossy" about such historical details?  Does the 
> stuff between the spots matter?  After all, h,k,l,F,sigF is really just a 
> form of data compression.  In fact, there is no such thing as "raw" data.  
> Even "raw" diffraction images are a simplification of the signals that came 
> out of the detector electronics.  But we round-off and average over a lot of 
> things to remove "noise".  Largely because "noise" is difficult to compress.  
> The question of how much compression is too much compression depends on which 
> information (aka noise) you think could be important in the future.
> 
> When it comes to fine-sliced data, such as that from Pilatus, the main reason 
> why it doesn't compress very well is not because of the spots, but the 
> background.  It occupies thousands of times more pixels than the spots.  Yes, 
> there is diffuse scattering information in the background pixels, but this 
> kind of data is MUCH smoother than the spot data (by definition), and 
> therefore is optimally stored in larger pixels.  Last year, I messed around a 
> bit with applying different compression protocols to the spots and the 
> background, and found that ~30 fold compression can be easily achieved if you 
> apply h264 to the background and store the "spots" with lossless png 
> compression:
> 
> http://bl831.als.lbl.gov/~jamesh/lossy_compression/
> 
> I think these results "speak" to the relative information content of the 
> spots and the pixels between them.  Perhaps at least the "online version" of 
> archived images could be in some sort of lossy-background format?  With the 
> "real images" in some sort of slower storage (like a room full of tapes that 
> are available upon request)?  Would 30-fold compression make the storage of 
> image data tractable enough for some entity like the PDB to be able to afford 
> it?
> 
> 
> I go to a lot of methods meetings, and it pains me to see the most brilliant 
> minds in the field starved for "interesting" data sets.  The problem is that 
> it is very easy to get people to send you data that is so bad that it can't 
> be solved by any software imaginable (I've got piles of that!).  As a 
> developer, what you really need is a "right answer" so you can come up with 
> better metrics for how close you are to it.  Ironically, bad, unsolvable data 
> that is connected to a right answer (aka a PDB ID) is very difficult to 
> obtain.  The explanations usually involve protestations about being in the 
> middle of writing up the paper, the student graduated and we don't understand 
> how he/she labeled the tapes, or the RAID crashed and we lost it all, etc. 
> etc.  Then again, just finding someone who has a data set with the kind of 
> problem you are interested in is a lot of work!  So is figuring out which 
> problem affects the most people, and is therefore "interesting".
> 
> Is this not exactly the kind of thing that publicly-accessible centralized 
> scie

Re: [ccp4bb] IUCr committees, depositing images

2011-10-24 Thread James Holton
The Pilatus is fast, but for decades now we have had detectors that can 
read out in ~1s.  This means that you can collect a typical ~100 image 
dataset in a few minutes (if flux is not limiting).  Since there are 
~150 beamlines currently operating around the world and they are open 
about 200 days/year, we should be collecting ~20,000,000 datasets each 
year.


We're not.

The PDB only gets about 8000 depositions per year, which means either we 
throw away 99.96% of our images, or we don't actually collect images 
anywhere near the ultimate capacity of the equipment we have.  In my 
estimation, both of these play about equal roles, with ~50-fold 
attrition between ultimate data collection capacity and actual collected 
data, and another ~50 fold attrition between collected data sets and 
published structures.
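
For readers who want to check the arithmetic, a short sketch of the estimate (the "one data set every ~2 minutes" figure is an assumption read off the "few minutes" above, so treat the exact numbers as illustrative):

# Ultimate capacity: ~150 beamlines, open ~200 days/year, one ~100-image
# data set every couple of minutes if collection never stopped.
beamlines = 150
days_per_year = 200
datasets_per_day = 24 * 60 / 2               # one data set every ~2 minutes
capacity = beamlines * days_per_year * datasets_per_day
print(f"theoretical capacity: ~{capacity:,.0f} data sets/year")        # ~21,600,000

# Two successive ~50-fold attrition steps bring that down to roughly the
# PDB's actual intake of ~8000 depositions per year.
collected = capacity / 50                    # data sets actually collected
deposited = collected / 50                   # data sets behind a deposition
print(f"after 50 x 50 attrition: ~{deposited:,.0f} depositions/year")  # ~8,600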


Personally, I think this means that the time it takes to collect the 
final dataset is not rate-limiting in a "typical" structural biology 
project/paper.  This does not mean that the dataset is of little value.  
Quite the opposite!  About 3000x more time and energy is expended 
preparing for the final dataset than is spent collecting it, and these 
efforts require experimental feedback.  The trick is figuring out how 
best to compress the "data used to solve a structure" for archival 
storage.  Do the "previous data sets" count?  Or should the compression 
be "lossy" about such historical details?  Does the stuff between the 
spots matter?  After all, h,k,l,F,sigF is really just a form of data 
compression.  In fact, there is no such thing as "raw" data.  Even "raw" 
diffraction images are a simplification of the signals that came out of 
the detector electronics.  But we round-off and average over a lot of 
things to remove "noise".  Largely because "noise" is difficult to 
compress.  The question of how much compression is too much compression 
depends on which information (aka noise) you think could be important in 
the future.


When it comes to fine-sliced data, such as that from Pilatus, the main 
reason why it doesn't compress very well is not because of the spots, 
but the background.  It occupies thousands of times more pixels than the 
spots.  Yes, there is diffuse scattering information in the background 
pixels, but this kind of data is MUCH smoother than the spot data (by 
definition), and therefore is optimally stored in larger pixels.  Last 
year, I messed around a bit with applying different compression 
protocols to the spots and the background, and found that ~30 fold 
compression can be easily achieved if you apply h264 to the background 
and store the "spots" with lossless png compression:


http://bl831.als.lbl.gov/~jamesh/lossy_compression/

I think these results "speak" to the relative information content of the 
spots and the pixels between them.  Perhaps at least the "online 
version" of archived images could be in some sort of lossy-background 
format?  With the "real images" in some sort of slower storage (like a 
room full of tapes that are available upon request)?  Would 30-fold 
compression make the storage of image data tractable enough for some 
entity like the PDB to be able to afford it?
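
The 30-fold figure comes from the h264/png experiments linked above; the underlying idea - keep candidate spot pixels losslessly and store the smooth background in larger pixels - can be sketched with nothing but numpy. This is an illustration of the principle only, not the actual pipeline used for those results:

import numpy as np

def split_spots_and_background(image, block=8, n_sigma=5.0):
    """Crude illustration: flag likely spot pixels to keep losslessly, and
    keep only a block-averaged (larger-pixel) version of the background."""
    img = image.astype(np.float64)
    level, spread = np.median(img), img.std()
    spot_mask = img > level + n_sigma * spread

    # "Lossless" part: positions and values of the candidate spot pixels.
    idx = np.flatnonzero(spot_mask)
    spots = {"index": idx, "value": img.ravel()[idx]}

    # "Lossy" part: background averaged over block x block pixel tiles.
    h, w = img.shape
    background = np.where(spot_mask, level, img)
    background = background[:h - h % block, :w - w % block]
    background = background.reshape(h // block, block,
                                    w // block, block).mean(axis=(1, 3))
    return spots, background

# Toy frame: flat Poisson background of ~100 counts plus two bright "spots".
rng = np.random.default_rng(0)
frame = rng.poisson(100, size=(512, 512))
frame[100, 200] = frame[300, 400] = 50_000
spots, background = split_spots_and_background(frame)
print(len(spots["index"]), background.shape)    # a couple of pixels, (64, 64)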



I go to a lot of methods meetings, and it pains me to see the most 
brilliant minds in the field starved for "interesting" data sets.  The 
problem is that it is very easy to get people to send you data that is 
so bad that it can't be solved by any software imaginable (I've got 
piles of that!).  As a developer, what you really need is a "right 
answer" so you can come up with better metrics for how close you are to 
it.  Ironically, bad, unsolvable data that is connected to a right 
answer (aka a PDB ID) is very difficult to obtain.  The explanations 
usually involve protestations about being in the middle of writing up 
the paper, the student graduated and we don't understand how he/she 
labeled the tapes, or the RAID crashed and we lost it all, etc. etc.  
Then again, just finding someone who has a data set with the kind of 
problem you are interested in is a lot of work!  So is figuring out 
which problem affects the most people, and is therefore "interesting".


Is this not exactly the kind of thing that publicly-accessible 
centralized scientific databases are created to address?


-James Holton
MAD Scientist

On 10/16/2011 11:38 AM, Frank von Delft wrote:

On the deposition of raw data:

I recommend to the committee that before it convenes again, every 
member should go collect some data on a beamline with a Pilatus 
detector [feel free to join us at Diamond].  Because by the probable 
time any recommendations actually emerge, most beamlines will have one 
of those (or similar), we'll be generating more data than the LHC, and 
users will be happy just to have it integrated, never mind worry about 
its fate.


That's not an endorsement, btw, just an observation/prediction.

phx.




On 14/10/2011 23:56, Thomas C. Terwilliger wrote:

For those who have strong opinions on

Re: [ccp4bb] IUCr committees, depositing images

2011-10-20 Thread Thomas C. Terwilliger
John Helliwell points out to me that it might be useful to know what MX
crystallographic data researchers in different countries are already
expected to deposit or save. He notes that research funding agencies in
the UK expect researchers to preserve their raw experimental data for at
least 5 years.

Can people comment on what data they are already expected to save in their
countries, and what mechanisms they already have for facilitating this
(for example the Australian Research Council TARDIS initiative which helps
store raw diffraction images)?

If you want to see or post comments on this thread on the IUCR Forum you
can do that at:

 http://forums.iucr.org/viewforum.php?f=21

Also of course posts here on this mailing list are fine.

-Tom T


Re: [ccp4bb] IUCr committees, depositing images

2011-10-19 Thread Graeme Winter
Hi Eleanor,

So far I have managed to "lurk" on this one - keeping an eye on things
but not getting involved. However this has prompted me to respond!

> Has anyone raised the point that while archiving is good, it will only be
> generally  useful if the image HEADERS are informative and use a
> comprehensible format - and the data base is documented...

There are a number of issues here:

 - whether to publish data
 - how to publish data
 - how to make the published data useful
 - whether to centrally archive that data
 - whether to standardise the data, and if so how
 - who should pay
 - moving the data around

etc. The image headers issue is clearly one of these; however, properly
resolving it has thus far proved, if not intractable, then at least
challenging. There is at least one "standard" comprehensible format
which is out there and has been for a while, but it is thus far lacking
widespread adoption. However, and this is a huge however, we really
need to ask how much effort it is worth putting into (handling) each
data set. The more effort is needed to make the data available, the
less likely it is that it will ever become available. I know from
experience that even when people want you to have data, the activation
energy is substantial. Making the process more complex will decrease
the likelihood of it occurring. If on the other hand you simply make
available the data you have on the hard disk, exactly as it is, you
lower this barrier some way. We can make this "useful" by also
including the processing logs - i.e. your mosflm / denzo / XDS log
file - so that anyone who really wants to know can
look and figure it out.
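
As an aside, reading whatever metadata is already in a header is the easy part once you can read the format at all; a minimal sketch, assuming the open-source fabio library (a reader for most beamline image formats) and a hypothetical file name:

import fabio  # open-source reader for most synchrotron/lab image formats

# Hypothetical file name; anything fabio understands (CBF, ADSC, MarCCD, ...) works.
frame = fabio.open("thaum_1_0001.cbf")

# Dump whatever key/value metadata the detector or beamline software wrote.
for key, value in sorted(frame.header.items()):
    print(f"{key}: {value}")

print("image dimensions:", frame.data.shape)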

Making the data available as-is, plus the logs, biases the effort in the
right direction. Even if every data set were perfectly published, it is
pretty unlikely that any given data set
would be re-analysed - unless it is really interesting. If it is
really interesting, it is then worth the effort to figure out the
parameters, so make this possible if inconvenient. As I see it, a main
benefit of this is to allow that occasional questionable structure to
be looked at really hard - and simply the need to upload the original
data and processing results would help in reducing the possibility of
depositing anything "fake".

Another factor I am painfully aware of is that disks die - certainly
mine do. This is all well and good - however the time it takes to move
4TB of data from one drive to another is surprisingly long, as even
with things like eSATA we have oceans of data connected by hosepipes.
Moving all of your raw data from a visit to the synchrotron (which
could easily be TB's) home is a challenge enough - subsequently moving
it to some central archive could be a real killer. Equally making the
data public from your own lab is also difficult for many. At least
facility sites are equipped to meet some of these challenges.
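
To put rough numbers on "surprisingly long" (the throughput figures below are illustrative assumptions, not measurements):

# Time to move a synchrotron trip's worth of images over various links.
tb_to_move = 4.0                              # the 4 TB example above
throughput_mb_per_s = {                       # assumed sustained rates
    "USB 2 external disk": 30,
    "eSATA disk": 100,
    "Gigabit ethernet": 100,
}
for link, rate in throughput_mb_per_s.items():
    hours = tb_to_move * 1e6 / rate / 3600
    print(f"{link}: ~{hours:.0f} hours for {tb_to_move:.0f} TB")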

So - as I see it we have much bigger fish to fry than getting the
headers for historical data standardised! Current and future data,
well that's a different pot of worms. And that's from someone who
really does care about image headers ;o)

Cheerio,

Graeme


Re: [ccp4bb] IUCr committees, depositing images

2011-10-19 Thread Eleanor Dodson
Has anyone raised the point that while archiving is good, it will only 
be generally useful if the image HEADERS are informative and use a 
comprehensible format - and the database is documented...


 Eleanor


On 10/19/2011 10:45 AM, Alun Ashton wrote:

‘Short’ bit:


Re: [ccp4bb] IUCr committees, depositing images

2011-10-19 Thread Alun Ashton
Sorry for my boring response………

‘Short’ bit:
Has anyone here considered DOIs for data? Facility sites within Europe are 
planning to make this available; I hope to do a proof of principle this year on 
data from Diamond (volunteers?). As an example, the ISIS neutron site on the 
same campus as us has started to do this: you can go to http://doi.org and put 
in the DOI reference 10.5286/ISIS.E.24079772 (catchy), and it takes you to a 
landing page where you can see some details of the data and an actual citable 
(I think) reference to the data for a publication. There is a link to the data, 
but the data has not yet been made public by the author or facility; at least 
it is (or should be) there and will eventually be public. The responsibility 
for looking after the data and making it available now rests with the facility.
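
Resolving such a DOI programmatically is already straightforward; a minimal sketch using only the Python standard library (it assumes network access and that the example DOI above still resolves):

import urllib.request

# The ISIS example DOI quoted above; doi.org redirects to the facility's
# landing page for that data set.
doi = "10.5286/ISIS.E.24079772"
with urllib.request.urlopen(f"https://doi.org/{doi}") as response:
    print("landing page:", response.geturl())   # final URL after redirects
    print("HTTP status:", response.status)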

This wouldn't suit everyone, and there is also the issue of home sources, but 
tools are under development to make this easy. I could easily imagine that 
within the UK, STFC would host something like this for non-facility 
data (it is actually they who host Diamond data for us)… Maybe at a nominal 
cost, of course…

Long bit:
Something similar happens at Diamond: data goes to 
/dls/$Beamline_name/data/$Year/$proposal-$visit, and permissions are set so that 
only the people on the visit or the PIs of the proposal can see the data therein. 
What happens within that directory is still pretty much the user's choice at the 
moment, though once the data is collected it is read-only and it is all recorded 
in ISPyB (the beamline database with web pages developed at the ESRF and Diamond). 
You can also record details of the sample and link the data collections to it.

There is an EU-funded initiative, which I have made the IUCr DDDwg aware of, 
called PanData (http://www.pan-data.eu/), which includes most of Europe's X-ray 
and neutron sites. Under this initiative the facilities are attempting to 
standardise on authorisation, data formats, some software, access policies 
(making data public), data retention and cataloguing.

Here we've been a bit lucky to get ahead on this, and we have been able to keep 
a copy of all our data from all beamlines, raw and processed, on tape (that's 
just under 200 TB and 53 million catalogued files so far; a lot of data, 
including processed data, is not yet catalogued but is on tape). We are 
currently beta-testing a web interface to the catalogued data, so anyone who 
has collected data at Diamond should be able to get it from 
https://icat.diamond.ac.uk. The data will probably be coming off tape, so it 
can take a while; it is also a little bit clumsy as an interface, but it will 
get better. This is the same technology as is being proposed for the PanData 
facilities, but the back end of the actual data archive is the choice of each 
facility; ours is hosted in a tape robot by STFC at the moment.

This is by no means the only solution out there, but DOIs could help unify the 
solutions?

Alun
___
Alun Ashton, alun.ash...@diamond.ac.uk Tel: +44 1235 778404
Scientific Software Team Leader,  http://www.diamond.ac.uk/
Diamond Light Source, Chilton, Didcot, Oxon, OX11 0DE, U.K.
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Tom Peat
Sent: 18 October 2011 23:29
To: ccp4bb
Subject: Re: [ccp4bb] IUCr committees, depositing images

If we are talking schemes, here is another one that we use that might be 
considered:

Date/person/project/barcode/well#/crystal#

At the Australian synchrotron, a directory is automatically made with the date, 
so that is our starting point.
We sometimes skip the person, but project-barcode-well are always there, as 
then it can correspond to our crystal database.
I imagine that most high throughput centres use barcodes, so barcodes and well 
numbers would be good things to have in the path.

Cheers, tom

From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of 
mjvdwo...@netscape.net
Sent: Wednesday, 19 October 2011 6:03 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] IUCr committees, depositing images

Phoebe,

Just automate the archiving and come up with a reasonable scheme how to. Ours 
is that data sets are called:

userid_yearmonth_projectid_#

Userid is derived from the login into CrystalClear (oops, free advertizing), 
projectid is set by the PI (so she can remember 10 years from now what in the 
world these data are all about) and the users are asked (threatened) to call 
their data sets "projectid_#" (and not the ubiquitous "test"). We have a script 
that automatically archives everything away from our data collection computer 
into an archive - activated by an icon on the desktop - and it adds the userid 
and date to the filename. This has the nice added advantage that the data 
collection disk stays clean. This only breaks when we collect synchrotron data 
(which is all the time) because our synchrotron remote sc

Re: [ccp4bb] IUCr committees, depositing images

2011-10-18 Thread Felix Frolow
Sure they will
There is no irony in what I say
FF
Dr Felix Frolow   
Professor of Structural Biology and Biotechnology
Department of Molecular Microbiology
and Biotechnology
Tel Aviv University 69978, Israel

Acta Crystallographica F, co-editor

e-mail: mbfro...@post.tau.ac.il
Tel:  ++972-3640-8723
Fax: ++972-3640-9407
Cellular: 0547 459 608

On Oct 18, 2011, at 20:01 , Phoebe Rice wrote:

> One more consideration:
> Since organization is not one of my greatest talents, I would be absolutely 
> delighted if a databank took over the burden of archiving my raw data for me. 
>  
>  Phoebe
> 
> =
> Phoebe A. Rice
> Dept. of Biochemistry & Molecular Biology
> The University of Chicago
> phone 773 834 1723
> http://bmb.bsd.uchicago.edu/Faculty_and_Research/01_Faculty/01_Faculty_Alphabetically.php?faculty_id=123
> http://www.rsc.org/shop/books/2008/9780854042722.asp
> 
> 
>  Original message 
>> Date: Tue, 18 Oct 2011 18:17:14 +0100
>> From: CCP4 bulletin board  (on behalf of Gerard 
>> Bricogne )
>> Subject: Re: [ccp4bb] IUCr committees, depositing images  
>> To: CCP4BB@JISCMAIL.AC.UK
>> 
>> Dear Enrico, Frank and colleagues,
>> 
>>I am glad to have suggested that everyone's views on this issue should
>> be aired out on this BB rather than sent off-list to an IUCr committee
>> member: this is much more interactive and thought-provoking. 
>> 
>>There would seem to be clear biases in some of the positions - for
>> instance, the statement that we overvalue individual structures and that
>> there is value only in their ensemble has to be seen to be coming from
>> someone in a structural genomics centre ;-) . However, as Wladek pointed
>> out, when an investigator's project is crucially dependent on a result
>> embodied in a deposited structure, it would be of the greatest value to that
>> investigator to be able to double-check how reliable some features of that
>> structure (especially its ligands) actually are.
>> 
>>On the other hand Enrico, as a specialist of crystallisation and
>> modelling, sees value only in improving those contributors to the task of
>> structure determination. This is forgetting (1) an essential capability of
>> crystallography: that, through experimental phasing, it can show you what a
>> protein looks like even if you have never seen nor modelled one before,
>> through the wondrous process of producing model-free electron-density maps;
>> and (2) an essential aspect of the task of structure determination: that it
>> doesn't aim at producing a model with perfect geometry, but one that best
>> explains the measured data and neither under- nor over-interprets them (I
>> realise, though, that Enrico's statement "Data just introduces experimental
>> errors into what would otherwise be a perfect structure" is likely to be
>> tongue-in-cheek ...). 
>> 
>>When it comes to making explicit the advantages of archiving at least
>> the raw images that yielded the data against which a deposited PDB entry was
>> refined, many good reasons have been given, but I feel that 
>> 
>>(1) there is an over-emphasis on the preservation of diffuse scattering
>> that has a tendency to give this archiving a nuance of "blue-skies" research
>> and thus to detract from its practical urgency; time will come for diffuse
>> scattering to be fully appreciated, but at the moment its mention acts as a
>> bit of a distraction, if not a turn-off in this context for people who do
>> not love it already;
>> 
>>(2) as far as I see it, the highest future benefit of having archived
>> raw images will result from being able to reprocess datasets from samples
>> containing multiple lattices ("non-merohedral twinning"). Numerous
>> structures are determined and refined against data obtained by integrating
>> only the spots from the major lattice, without rejecting those that are
>> corrupted by overlap by a spot from a minor lattice. This leads to
>> systematic errors in these data that may only be incompletely taken out by
>> outlier rejection at the merging stage, and will create noise or confusing
>> residual features in difference maps, if not false features in the main map
>> and therefore its interpretation by the model. In my opinion it will be the
>> development of methods for dealing with overlapped lattices and for the
>> proper treatment of such data in scaling and refinement (as is already
>> possible with small molecules) that will bring about the major possibility
>> of substantially improving deposited

Re: [ccp4bb] IUCr committees, depositing images

2011-10-18 Thread Bosch, Juergen
What do you consider good? An r.m.s.d. of 2.5 Å? Fatal for drug design.

On Oct 18, 2011, at 12:19 PM, Enrico Stura wrote:

Dear Peter,

How many crystallographers does it take to transform bad data into good
data?
None, you need a modeller. Only a modeller can give you a structure with
perfect
geometry. Data just introduces experimental errors into what would
otherwise be a perfect
structure.

If you have good data do you need crystallographers?
No, just the right programs. Which brings us back to the hypothetical question 
of why we care - I mean, more precisely, the question each of us is trying to 
answer in their own research program.

Crystallography is a high-end tool, not quite as simple as a western blot or 
ELISA but in the end it's just a tool to answer some critical questions.

Jürgen


..
Jürgen Bosch
Johns Hopkins University
Bloomberg School of Public Health
Department of Biochemistry & Molecular Biology
Johns Hopkins Malaria Research Institute
615 North Wolfe Street, W8708
Baltimore, MD 21205
Office: +1-410-614-4742
Lab:  +1-410-614-4894
Fax:  +1-410-955-2926
http://web.mac.com/bosch_lab/







Re: [ccp4bb] IUCr committees, depositing images

2011-10-18 Thread mjvdwoerd

 Phoebe,

Just automate the archiving and come up with a reasonable scheme for how to do it. Ours 
is that data sets are called:

userid_yearmonth_projectid_#

Userid is derived from the login into CrystalClear (oops, free advertizing), 
projectid is set by the PI (so she can remember 10 years from now what in the 
world these data are all about) and the users are asked (threatened) to call 
their data sets "projectid_#" (and not the ubiquitous "test"). We have a script 
that automatically archives everything away from our data collection computer 
into an archive - activated by an icon on the desktop - and it adds the userid 
and date to the filename. This has the nice added advantage that the data 
collection disk stays clean. This only breaks when we collect synchrotron data 
(which is all the time) because our synchrotron remote scientist who collects 
the data cannot (should not) be threatened. :-) I then rename all data sets for 
archiving so the naming is consistent and you can actually make (say in pdf) an 
index of all the data you have, organized by user, date, or project. 
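
As a sketch of the kind of desktop archiving script described here (the paths, the user-id source and the archive location are placeholders, not the actual implementation):

import datetime
import getpass
import shutil
from pathlib import Path

# Placeholder locations; the real script's paths and user-id source will differ.
COLLECTION_DIR = Path("/data/collection")
ARCHIVE_DIR = Path("/archive/raw")

def archive_datasets():
    """Move each data-set directory into the archive, renamed to the
    userid_yearmonth_projectid_# convention (the directory name itself is
    assumed to already be 'projectid_#')."""
    userid = getpass.getuser()
    stamp = datetime.date.today().strftime("%Y%m")
    for dataset in sorted(COLLECTION_DIR.iterdir()):
        if not dataset.is_dir():
            continue
        target = ARCHIVE_DIR / f"{userid}_{stamp}_{dataset.name}"
        shutil.move(str(dataset), str(target))
        print(f"archived {dataset.name} -> {target}")

if __name__ == "__main__":
    archive_datasets()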

Our policy is that the PI decides if data should be maintained or if it really 
can go (no diffraction, really a test crystal to see that the crystal is in the 
beam etc). In practice this doesn't happen so someone else makes the decision. 
We tend to err on the side of caution. We tend to think that all results should 
be saved, unless it is blatantly obvious that there is no point. Storage is 
cheap (and cheaper every time you think of it).

After you automate using the previously agreed-upon scheme, it is somewhat easier 
to find things again because, if you can remember who collected it, or 
approximately when it was done, or what the project was, you can find it. The 
pain was up front: to come up with a scheme, to enable a rigorous naming 
convention and to implement it (data collection computer and archive are not 
physically on the same computer etc). 

Maybe the Committee is also thinking about that issue - how are you going to 
keep all the data manageable and searchable? Presumably by something like a PDB 
id (this seems to make sense for published/deposited structures) but for 
"things that did not make it to PDB" one would have to come up with another 
plan.

Mark

 

 

-Original Message-
From: Phoebe Rice 
To: CCP4BB 
Sent: Tue, Oct 18, 2011 12:01 pm
Subject: Re: [ccp4bb] IUCr committees, depositing images


One more consideration:
Since organization is not one of my greatest talents, I would be absolutely 
delighted if a databank took over the burden of archiving my raw data for me.  
  Phoebe

=
Phoebe A. Rice
Dept. of Biochemistry & Molecular Biology
The University of Chicago
phone 773 834 1723
http://bmb.bsd.uchicago.edu/Faculty_and_Research/01_Faculty/01_Faculty_Alphabetically.php?faculty_id=123
http://www.rsc.org/shop/books/2008/9780854042722.asp


 Original message 
>Date: Tue, 18 Oct 2011 18:17:14 +0100
>From: CCP4 bulletin board  (on behalf of Gerard 
>Bricogne 
)
>Subject: Re: [ccp4bb] IUCr committees, depositing images  
>To: CCP4BB@JISCMAIL.AC.UK
>
>Dear Enrico, Frank and colleagues,
>
> I am glad to have suggested that everyone's views on this issue should
>be aired out on this BB rather than sent off-list to an IUCr committee
>member: this is much more interactive and thought-provoking. 
>
> There would seem to be clear biases in some of the positions - for
>instance, the statement that we overvalue individual structures and that
>there is value only in their ensemble has to be seen to be coming from
>someone in a structural genomics centre ;-) . However, as Wladek pointed
>out, when an investigator's project is crucially dependent on a result
>embodied in a deposited structure, it would be of the greatest value to that
>investigator to be able to double-check how reliable some features of that
>structure (especially its ligands) actually are.
>
> On the other hand Enrico, as a specialist of crystallisation and
>modelling, sees value only in improving those contributors to the task of
>structure determination. This is forgetting (1) an essential capability of
>crystallography: that, through experimental phasing, it can show you what a
>protein looks like even if you have never seen nor modelled one before,
>through the wondrous process of producing model-free electron-density maps;
>and (2) an essential aspect of the task of structure determination: that it
>doesn't aim at producing a model with perfect geometry, but one that best
>explains the measured data and neither under- nor over-interprets them (I
>realise, though, that Enrico's statement "Data just introduces experimental
>errors into what would otherwise be a perfect structure" is likely to be
>tongue-in-cheek ...). 

Re: [ccp4bb] IUCr committees, depositing images

2011-10-18 Thread Ed Pozharski
On Tue, 2011-10-18 at 18:17 +0100, Gerard Bricogne wrote:
> it would be of the greatest value to that
> investigator to be able to double-check how reliable some features of
> that
> structure (especially its ligands) actually are. 

Certainly, one could argue here that the current PDB policy that
requires the deposition of the processed data already provides that
option.  I must add, though, that I find it disturbingly easy to
manipulate the Fo's to produce any ligand anywhere and generate a fake
dataset that is essentially impossible to detect as such.  It is only a
matter of time until someone under pressure does this; in all
likelihood, it has already happened.  True, diffraction images
can also ultimately be manipulated, but that requires a much higher level
of sophistication, and the hope is that individuals capable of such a feat
have better things to do with their skills.

It seems to me that storage is so cheap these days that even
if reducing the chances of another retraction disaster is the only
benefit, it is worth it.  There are many other reasons why the benefits
of image deposition outweigh the costs, which reminds me of an old joke:

- Colonel, why your cannons did not fire?
- General, there were five reasons.  First, we had no gunpowder,
second...
- This one is enough!

Cheers,

Ed.

-- 
Oh, suddenly throwing a giraffe into a volcano to make water is crazy?
Julian, King of Lemurs


Re: [ccp4bb] IUCr committees, depositing images

2011-10-18 Thread Phoebe Rice
One more consideration:
Since organization is not one of my greatest talents, I would be absolutely 
delighted if a databank took over the burden of archiving my raw data for me.  
  Phoebe

=
Phoebe A. Rice
Dept. of Biochemistry & Molecular Biology
The University of Chicago
phone 773 834 1723
http://bmb.bsd.uchicago.edu/Faculty_and_Research/01_Faculty/01_Faculty_Alphabetically.php?faculty_id=123
http://www.rsc.org/shop/books/2008/9780854042722.asp


 Original message 
>Date: Tue, 18 Oct 2011 18:17:14 +0100
>From: CCP4 bulletin board  (on behalf of Gerard 
>Bricogne )
>Subject: Re: [ccp4bb] IUCr committees, depositing images  
>To: CCP4BB@JISCMAIL.AC.UK
>
>Dear Enrico, Frank and colleagues,
>
> I am glad to have suggested that everyone's views on this issue should
>be aired out on this BB rather than sent off-list to an IUCr committee
>member: this is much more interactive and thought-provoking. 
>
> There would seem to be clear biases in some of the positions - for
>instance, the statement that we overvalue individual structures and that
>there is value only in their ensemble has to be seen to be coming from
>someone in a structural genomics centre ;-) . However, as Wladek pointed
>out, when an investigator's project is crucially dependent on a result
>embodied in a deposited structure, it would be of the greatest value to that
>investigator to be able to double-check how reliable some features of that
>structure (especially its ligands) actually are.
>
> On the other hand Enrico, as a specialist of crystallisation and
>modelling, sees value only in improving those contributors to the task of
>structure determination. This is forgetting (1) an essential capability of
>crystallography: that, through experimental phasing, it can show you what a
>protein looks like even if you have never seen nor modelled one before,
>through the wondrous process of producing model-free electron-density maps;
>and (2) an essential aspect of the task of structure determination: that it
>doesn't aim at producing a model with perfect geometry, but one that best
>explains the measured data and neither under- nor over-interprets them (I
>realise, though, that Enrico's statement "Data just introduces experimental
>errors into what would otherwise be a perfect structure" is likely to be
>tongue-in-cheek ...). 
>
> When it comes to making explicit the advantages of archiving at least
>the raw images that yielded the data against which a deposited PDB entry was
>refined, many good reasons have been given, but I feel that 
>
> (1) there is an over-emphasis on the preservation of diffuse scattering
>that has a tendency to give this archiving a nuance of "blue-skies" research
>and thus to detract from its practical urgency; time will come for diffuse
>scattering to be fully appreciated, but at the moment its mention acts as a
>bit of a distraction, if not a turn-off in this context for people who do
>not love it already;
>
> (2) as far as I see it, the highest future benefit of having archived
>raw images will result from being able to reprocess datasets from samples
>containing multiple lattices ("non-merohedral twinning"). Numerous
>structures are determined and refined against data obtained by integrating
>only the spots from the major lattice, without rejecting those that are
>corrupted by overlap by a spot from a minor lattice. This leads to
>systematic errors in these data that may only be incompletely taken out by
>outlier rejection at the merging stage, and will create noise or confusing
>residual features in difference maps, if not false features in the main map
>and therefore its interpretation by the model. In my opinion it will be the
>development of methods for dealing with overlapped lattices and for the
>proper treatment of such data in scaling and refinement (as is already
>possible with small molecules) that will bring about the major possibility
>of substantially improving deposited results by reprocessing the raw images
>co-deposited with them;
>
> (3) there is also the more immediate possibility of better removing ice
>rings, or ligand powder rings, from images, than by having to throw away
>certain thin shells of merged data in the structure factor file.
>
> I see the case for raw image deposition as absolutely compelling,
>especially in view of the auto-catalytic process through which their
>availability will speed up the development of precisely the new methods and
>software to extract better data from them and better refine models against
>them. The impact of structure factor deposition on the development of better
>refinement programs is there to prove that this paradigm 

Re: [ccp4bb] IUCr committees, depositing images

2011-10-18 Thread Gerard Bricogne
Dear Enrico, Frank and colleagues,

 I am glad to have suggested that everyone's views on this issue should
be aired out on this BB rather than sent off-list to an IUCr committee
member: this is much more interactive and thought-provoking. 

 There would seem to be clear biases in some of the positions - for
instance, the statement that we overvalue individual structures and that
there is value only in their ensemble has to be seen to be coming from
someone in a structural genomics centre ;-) . However, as Wladek pointed
out, when an investigator's project is crucially dependent on a result
embodied in a deposited structure, it would be of the greatest value to that
investigator to be able to double-check how reliable some features of that
structure (especially its ligands) actually are.

 On the other hand Enrico, as a specialist of crystallisation and
modelling, sees value only in improving those contributors to the task of
structure determination. This is forgetting (1) an essential capability of
crystallography: that, through experimental phasing, it can show you what a
protein looks like even if you have never seen nor modelled one before,
through the wondrous process of producing model-free electron-density maps;
and (2) an essential aspect of the task of structure determination: that it
doesn't aim at producing a model with perfect geometry, but one that best
explains the measured data and neither under- nor over-interprets them (I
realise, though, that Enrico's statement "Data just introduces experimental
errors into what would otherwise be a perfect structure" is likely to be
tongue-in-cheek ...). 

 When it comes to making explicit the advantages of archiving at least
the raw images that yielded the data against which a deposited PDB entry was
refined, many good reasons have been given, but I feel that 

 (1) there is an over-emphasis on the preservation of diffuse scattering
that has a tendency to give this archiving a nuance of "blue-skies" research
and thus to detract from its practical urgency; time will come for diffuse
scattering to be fully appreciated, but at the moment its mention acts as a
bit of a distraction, if not a turn-off in this context for people who do
not love it already;

 (2) as far as I see it, the highest future benefit of having archived
raw images will result from being able to reprocess datasets from samples
containing multiple lattices ("non-merohedral twinning"). Numerous
structures are determined and refined against data obtained by integrating
only the spots from the major lattice, without rejecting those that are
corrupted by overlap by a spot from a minor lattice. This leads to
systematic errors in these data that may only be incompletely taken out by
outlier rejection at the merging stage, and will create noise or confusing
residual features in difference maps, if not false features in the main map
and therefore its interpretation by the model. In my opinion it will be the
development of methods for dealing with overlapped lattices and for the
proper treatment of such data in scaling and refinement (as is already
possible with small molecules) that will bring about the major possibility
of substantially improving deposited results by reprocessing the raw images
co-deposited with them;

 (3) there is also the more immediate possibility of better removing ice
rings, or ligand powder rings, from images, than by having to throw away
certain thin shells of merged data in the structure factor file.

 I see the case for raw image deposition as absolutely compelling,
especially in view of the auto-catalytic process through which their
availability will speed up the development of precisely the new methods and
software to extract better data from them and better refine models against
them. The impact of structure factor deposition on the development of better
refinement programs is there to prove that this paradigm of a chain reaction
makes total sense.

 Various arguments tend to be fired off as decoys - "get better
crystals", why not "get a better post-doc"? - but they are unhelpful in the
way they prolong procrastination when what we need is to bite the bullet.
The IUCr Forum that John Helliwell pointed at already contains draft plans
for a pilot run of a reasonable scheme.


 With best wishes,
 
  Gerard.

--
On Tue, Oct 18, 2011 at 06:19:27PM +0200, Enrico Stura wrote:
> Dear Peter,
>
> How many crystallographers does it take to transform bad data into good 
> data?
> None, you need a modeller. Only a modeller can give you a structure with 
> perfect
> geometry. Data just introduces experimental errors into what would 
> otherwise be a perfect
> structure.
>
> If you have good data do you need crystallographers?
> ...
>
> Of course there are all the cases in between. That ... you are right, is the 
> other half of the story.
>
> From a biological point of view, only borderline cases make "cents" ($+€) 
> to store.
> T

Re: [ccp4bb] IUCr committees, depositing images

2011-10-18 Thread Mark J van Raaij
given that:
- storage is becoming cheaper exponentially,
- computer power is increasing exponentially,
I think there is no reason not to store all images used for solving a structure 
- linked to the pdb entry and properly annotated with beam centre, lambda, 
pixel order and all other necessary processing parameters.
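A minimal sketch, in Python, of the kind of per-dataset annotation described
above; the field names and values here are invented for illustration and are
not any established deposition schema:

# Illustrative only: these keys and values are invented, not an established
# schema, but they cover the parameters listed above (beam centre, lambda,
# pixel layout) plus the obvious extras needed to reprocess a sweep.
dataset_annotation = {
    "linked_pdb_entry": "XXXX",          # placeholder PDB code
    "image_template": "xtal1_####.cbf",  # hypothetical file-name pattern
    "detector": "PILATUS 6M",
    "wavelength_angstrom": 0.9793,
    "beam_centre_px": (1231.5, 1263.0),
    "pixel_size_mm": (0.172, 0.172),
    "detector_distance_mm": 265.0,
    "oscillation_start_deg": 0.0,
    "oscillation_width_deg": 0.1,
    "number_of_images": 1800,
}

if __name__ == "__main__":
    for key, value in dataset_annotation.items():
        print(f"{key:>22}: {value}")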

Possible uses:
- programmers of data processing (and experimental phasing) software can test 
their programs routinely on many sets of images instead of just a few, to 
better judge the result of changes they have made to the code.
- the PDB (or someone else), can periodically reprocess all the data with the 
then state-of-the-art software, remodel where necessary and re-refine all the 
structures, and thus periodically improve all the structures in the pdb. This 
does not mean the original authors did a bad job, just that technical 
improvements inevitably will allow doing a better one in the future.

Sure, this does not lead directly to new biomedical insights, but indirectly it 
will.
The improved data processing (and experimental phasing) programs will allow 
solving at least a few, new, structures that otherwise would not be solvable.
The improved old structures in the pdb may allow a few, new, structures to be 
solved by MR that otherwise could not have been solved.
The dataset of better structures will lead to better performance of structure 
prediction programs (better energy calculations etc), allowing better 
modelling. (I was, like Enrico, very sceptical of modelling, until the first 
modelled structure was successfully used in MR to solve a new structure. I now 
believe modelling will play an increasing role).

I agree with Enrico that storing and annotating "failed" data collections will 
be difficult to enforce - I for one would rather concentrate on the data 
collections that did work in that trip (if any), or spend my time drowning 
sorrows, resting and then trying to get better crystals for the next trip, 
rather than filling in meta-data for a data storage facility, even if it is 
only a two-minute job.

Another matter is the in-between cases: datasets collected, processed and for 
which structure solution has been tried but abandoned because the ratio of 
difficulty to interest was too high (difficulties being irreproducibility of 
crystals, some pathology in the data, etc.). These data I would gladly submit 
for someone else to have a go at. Currently, probably no-one would, but in the 
future, thanks to better software, better understanding and possibly a new 
structure in the pdb that could be used in MR, the difficulty may go down, and 
someone might do it if the potential structure is still of interest. This 
someone could be myself, and the images being stored safely and centrally 
would make it easier also for me to recover them.

Mark J van Raaij
Laboratorio M-4
Dpto de Estructura de Macromoleculas
Centro Nacional de Biotecnologia - CSIC
c/Darwin 3
E-28049 Madrid, Spain
tel. (+34) 91 585 4616
http://www.cnb.csic.es/content/research/macromolecular/mvraaij





On 18 Oct 2011, at 18:19, Enrico Stura wrote:

> Dear Peter,
> 
> How many crystallographers does it take to transform bad data into good data?
> None, you need a modeller. Only a modeller can give you a structure with 
> perfect
> geometry. Data just introduces experimental errors into what would otherwise 
> be a perfect
> structure.
> 
> If you have good data do you need crystallographers?
> ...
> 
> Of course there are all the cases in between. That ... you are right, is the 
> other half of the story.
> 
> From a biological point of view, only borderline cases make "cents" ($+€) to 
> store.
> The experimenter in consultation with a beamline scientist at an SR facility 
> is the best
> small commitee suitable to evaluate what is worth keeping. I am sure that the 
> images
> that are worth storing for a long long time would fit on a few Tb at a 
> reasonable cost.
> Storing everything would make it harder to find something worth improving in 
> the future.
> 
> Enrico.
> 
> 
> On Tue, 18 Oct 2011 17:12:42 +0200, Peter Keller  
> wrote:
> 
>> Dear Enrico,
>> 
>> Please don't get me wrong: what you are saying is not incorrect, but it
>> is only half the story.
>> 
>> On Tue, 2011-10-18 at 15:13 +0200, Enrico Stura wrote:
>>> With improving techniques, we should always be making progress!
>> 
>> Yes, of course!
>> 
>>> If we are trying to answer a biological question that is really important,
>>> we would be better off
>>> improving the purification, the crystallization, the cryo-conditions
>> 
>> You have left X-ray crystallography out of this list. It is a technique
>> like the others, and can also be improved :-)
>> 
>> It may be true that the number of crystallographers that are working on
>> improving instrumental methodology and software is small compared to the
>> number working on improving wet-lab techniques, but that number is not
>> zero, and the contribution is significant. The rest of you benefit from
>> that w

Re: [ccp4bb] IUCr committees, depositing images

2011-10-18 Thread Enrico Stura

Dear Peter,

How many crystallographers does it take to transform bad data into good data?
None, you need a modeller. Only a modeller can give you a structure with perfect
geometry. Data just introduces experimental errors into what would otherwise be
a perfect structure.

If you have good data do you need crystallographers?
...

Of course there are all the cases in between. That ... you are right, is the
other half of the story.

From a biological point of view, only borderline cases make "cents" ($+€) to
store. The experimenter in consultation with a beamline scientist at an SR
facility is the best small committee suitable to evaluate what is worth keeping.
I am sure that the images that are worth storing for a long, long time would fit
on a few Tb at a reasonable cost. Storing everything would make it harder to
find something worth improving in the future.

Enrico.


On Tue, 18 Oct 2011 17:12:42 +0200, Peter Keller  
 wrote:



Dear Enrico,

Please don't get me wrong: what you are saying is not incorrect, but it
is only half the story.

On Tue, 2011-10-18 at 15:13 +0200, Enrico Stura wrote:
> With improving techniques, we should always be making progress!

Yes, of course!

> If we are trying to answer a biological question that is really important,
> we would be better off
> improving the purification, the crystallization, the cryo-conditions

You have left X-ray crystallography out of this list. It is a technique
like the others, and can also be improved :-)

It may be true that the number of crystallographers that are working on
improving instrumental methodology and software is small compared to the
number working on improving wet-lab techniques, but that number is not
zero, and the contribution is significant. The rest of you benefit from
that work!

> instead of having to rely on
> processing old images with new software.
>
> I have 10 years worth of images. I have reprocessed very few of them and
> never made any
> sensational progress using the new software. Poor diffraction is poor
> diffraction.

Maybe so, but certain types of datasets are useful for methods and
software development, even if no new biological insights could be gained
by reprocessing them. These datasets are often hard to get hold of in
practice, especially when they are in someone's lab on a tape that
no-one has a reader for any more.

Obtaining protein, growing crystals and collecting new data in such a
way that the interesting features of those datasets are reproduced can
be much much harder than curating the images would be. This is
especially true for software-oriented people like us who don't have
regular access to wet-lab facilities.

> Money can be better spent buying a wine cellar, storage works for wine.

Images have already been lost that ought to have been kept. The
questions are: how to select the datasets that are potentially of value,
and how to make sure that they don't disappear.

Regards,
Peter.




--
Enrico A. Stura D.Phil. (Oxon) ,Tel: 33 (0)1 69 08 4302 Office
Room 19, Bat.152,   Tel: 33 (0)1 69 08 9449Lab
LTMB, SIMOPRO, IBiTec-S, CE Saclay, 91191 Gif-sur-Yvette,   FRANCE
http://www-dsv.cea.fr/en/institutes/institute-of-biology-and-technology-saclay-ibitec-s/unites-de-recherche/department-of-molecular-engineering-of-proteins-simopro/molecular-toxinology-and-biotechnology-laboratory-ltmb/crystallogenesis-e.-stura
http://www.chem.gla.ac.uk/protein/mirror/stura/index2.html
e-mail: est...@cea.fr Fax: 33 (0)1 69 08 90 71


Re: [ccp4bb] IUCr committees, depositing images

2011-10-18 Thread 苏晓东
Sorry if this message reads a bit out of date; it was sent two days ago 
without success, and I am trying to send it one more time. 

XDS

From: gsw...@pku.edu.cn
To: Felix Frolow , CCP4BB@JISCMAIL.AC.UK, 
x...@pku.edu.cn
Sent: Tue, 18 Oct 2011 10:36:51 +0800 (CST)
Subject: Re: [ccp4bb] IUCr committees, depositing images


Dear All, 

Since John has responded to the issues raised by Tom and several responders, 
and he has just described the overall goal of the IUCr DDD WG (Diffraction Data 
Deposition Working Group), I will just second him on this important issue, 
particularly for the future of biological macromolecular crystallography. 

In my opinion, the reasons for raw data deposition can be summarized as 
follows (overlapping with others):
* Keep permanent experimental records; this may not be done well by all 
crystallographers. 
* Make a rigorous, thorough check by reviewers possible, as FF suggested in 
this mail; in fact this has been a problem for some structures published in 
prestigious journals, particularly for low-resolution results, and such 
low-resolution structures are becoming more and more common. 
* Minimize falsification and other types of mistakes and misconduct in 
structural biology.
* Make validation/collaboration easier; I bet many structures in the PDB 
are wrong, but there is no way to find out from the current data deposition. 
* Good for methods developers. 
 
I also agree that there is essentially no major technical problem with raw 
image (data) deposition; this is especially true if we leave the deposition at 
or near the synchrotron sites.  I hope we will join John's forum on this 
important issue at: 
http://forums.iucr.org/


XDS


Xiao-Dong Su, Professor
 Chairman of the Commission on Biological Macromolecules (CBM),
 International Union of Crystallography (IUCr);
 
 School of Life Sciences, Peking University
 100871 Beijing, China
 Phone:  +86-10-62759743
 FAX: +86-10-62765669
 E-mail: x...@pku.edu.cn

- Original message -
From: Felix Frolow 
To: CCP4BB@JISCMAIL.AC.UK
Sent: Mon, 17 Oct 2011 03:31:52 +0800 (CST)
Subject: Re: [ccp4bb] IUCr committees, depositing images

On the deposition of raw data:
Committees, wherever you are!
I guess that abstaining from storing the raw diffraction data in the form of 
frames is not very wise, whatever its size. I regret that for some PDB entries 
I do not have diffraction data (needless to say, the authors did not submit 
even structure factors). 
I maintain a bit more than 1.2 TB of diffraction data starting from 2001, and 
it all nicely resides on two small WD pocket disks (needless to say, I have 
several copies of the data).
Generally I have all the data I ever collected, going back to the beginning of 
the 80's, but I am too lazy to reform the DAT tapes.
Sure, running a Pilatus for an olympic record, we will go home with several TB 
of data after 24 h (will we?).
But this is an abuse of the system. The final goal is the structure 
determination, and there are far fewer good crystals everywhere in one year 
than one Pilatus could collect in one week.
But to decide fast whether the crystal diffraction data from a Pilatus is good 
for storage or even for measurement, whatever the speed of data collection is, 
good data processing software is needed. I personally think that there is only 
one, the one.
Anyhow, I think that if the author wishes to publish his structure, and it is 
important, and I am a reviewer, and it is going to a prestigious journal, I 
will reprocess his data and will check his way to the final crystal structure 
solution from the beginning.
It is as in mathematics. If someone claims that he has solved a long-standing 
problem from the past, he will not get away from his envious colleagues, who 
will drop everything and sit and check until they find a mistake. 
What a pleasure!!!
And if there are no mistakes - chapeau !!!

FF
 
Dr Felix Frolow   
Professor of Structural Biology and Biotechnology
Department of Molecular Microbiology
and Biotechnology
Tel Aviv University 69978, Israel

Acta Crystallographica F, co-editor

e-mail: mbfro...@post.tau.ac.il
Tel:  ++972-3640-8723
Fax: ++972-3640-9407
Cellular: 0547 459 608

On Oct 16, 2011, at 20:38 , Frank von Delft wrote:

> On the deposition of raw data:
> 
> I recommend to the committee that before it convenes again, every member 
> should go collect some data on a beamline with a Pilatus detector [feel free 
> to join us at Diamond].  Because by the probable time any recommendations 
> actually emerge, most beamlines will have one of those (or similar), we'll be 
> generating more data than the LHC, and users will be happy just to have it 
> integrated, never mind worry about its fate.
> 
> That's not an endorsement, btw, just an observation/prediction.
> 
> phx.
> 
> 
> 
> 
> On 14/10/2011 23:56, Thomas C. Terwilliger wrote:
>> For those who have strong opinions on what data should be deposited...
>> 
>> The

Re: [ccp4bb] IUCr committees, depositing images

2011-10-18 Thread Peter Keller
Dear Enrico,

Please don't get me wrong: what you are saying is not incorrect, but it
is only half the story.

On Tue, 2011-10-18 at 15:13 +0200, Enrico Stura wrote:
> With improving techniques, we should always be making progress!

Yes, of course!

> If we are trying to answer a biological question that is really important,  
> we would be better off
> improving the purification, the crystallization, the cryo-conditions  

You have left X-ray crystallography out of this list. It is a technique
like the others, and can also be improved :-)

It may be true that the number of crystallographers that are working on
improving instrumental methodology and software is small compared to the
number working on improving wet-lab techniques, but that number is not
zero, and the contribution is significant. The rest of you benefit from
that work!

> instead of having to rely on
> processing old images with new software.
> 
> I have 10 years  worth of images. I have reprocessed very few of them and  
> never made any
> sensational progress using the new software. Poor diffraction is poor  
> diffraction.

Maybe so, but certain types of datasets are useful for methods and
software development, even if no new biological insights could be gained
by reprocessing them. These datasets are often hard to get hold of in
practice, especially when they are in someone's lab on a tape that
no-one has a reader for any more.

Obtaining protein, growing crystals and collecting new data in such a
way that the interesting features of those datasets are reproduced can
be much much harder than curating the images would be. This is
especially true for software-oriented people like us who don't have
regular access to wet-lab facilities.

> Money can be better spent buying a wine cellar, storage works for wine.

Images have already been lost that ought to have been kept. The
questions are: how to select the datasets that are potentially of value,
and how to make sure that they don't disappear.

Regards,
Peter.

-- 
Peter Keller Tel.: +44 (0)1223 353033
Global Phasing Ltd., Fax.: +44 (0)1223 366889
Sheraton House,
Castle Park,
Cambridge CB3 0AX
United Kingdom


Re: [ccp4bb] IUCr committees, depositing images

2011-10-18 Thread Richard Gillilan
For the record, the amount of disk storage space per unit cost has doubled 
every 14 months for the last 30 years.  It's an exponential relationship:
www.mkomo.com/cost-per-gigabyte

So data generated at a very high rate today will be trivial to store in the 
near future.  That's not to say it is cost-free, of course ... but it is 
exponentially approaching free. 
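A quick back-of-the-envelope sketch of that trend (the starting price below is
an assumption for illustration, not a measured figure): if capacity per dollar
doubles every 14 months, cost per gigabyte halves on the same schedule.

# Sketch only: extrapolate cost per gigabyte assuming it halves every
# 14 months, as quoted above.  The starting price is an assumption.
def cost_per_gb(start_usd_per_gb, months_elapsed, halving_months=14):
    return start_usd_per_gb * 0.5 ** (months_elapsed / halving_months)

if __name__ == "__main__":
    start = 0.10  # assumed ~2011 price in USD per GB, illustration only
    for years in (1, 3, 5, 10):
        print(f"after {years:2d} years: ${cost_per_gb(start, 12 * years):.4f} per GB")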

I worked at a Supercomputing facility for 7 years. At that time whole rooms 
were filled with state-of-the-art tape archive robots that could hold an 
unimaginable amount of data: a whole terabyte. Today, of course, that same 
volume costs under 100 USD with much faster I/O ... and I have personal copies 
of everything I generated (even digitized, uncompressed analog video).

To keep data backed up and online, of course costs something, but 
distributed/cloud computing is also changing that picture dramatically.

I am curious to know from those who have a Pilatus 6M, for example: how much 
data do you generate in a year? 

I suspect this is limited by beam intensity ... at the moment. 

Richard


On Oct 18, 2011, at 6:52 AM, Chris Morris wrote:

> Some crystals are hard to make, so storing all the data the best way to get 
> reproducibility. On the other hand, no one needs more images of lysozyme. So 
> using the same standard for every deposition doesn't sound right.
> 
> The discussion should be held on the basis of overall cost to the research 
> budget - not on the assumption that some costs can be externalised. It is too 
> easy to say "you should store the images, in case I want to reprocess them 
> sometime". IT isn't free, nor is it always cheaper than the associated 
> experimental work. The key comparison is:
> 
>   Cost of growing new crystals + cost of beam line time
> 
> With:
> 
>   Cost of storing images * probability of processing them again
> 
> At present, detectors are improving more quickly than processing software. 
> Sample preparation methods are also improving. These forces both press 
> downward the probability that a particular image will ever be reprocessed. 
> 
> regards,
> Chris


Re: [ccp4bb] IUCr committees, depositing images

2011-10-18 Thread Enrico Stura

With improving techniques, we should always be making progress!
If we are trying to answer a biological question that is really important, we 
would be better off improving the purification, the crystallization, the 
cryo-conditions instead of having to rely on processing old images with new 
software.

I have 10 years' worth of images. I have reprocessed very few of them and never 
made any sensational progress using the new software. Poor diffraction is poor 
diffraction.

Money can be better spent buying a wine cellar, storage works for wine.

Enrico.

On Tue, 18 Oct 2011 14:51:27 +0200, Boaz Shaanan  
 wrote:



Hi Felix,

Excuse my question, but what have you discovered about lysozyme that we  
haven't already known before which justifies all these efforts?
After all, we're mostly after finding solutions to biological problems,  
aren't we?


  Boaz


Boaz Shaanan, Ph.D.
Dept. of Life Sciences
Ben-Gurion University of the Negev
Beer-Sheva 84105
Israel

E-mail: bshaa...@bgu.ac.il
Phone: 972-8-647-2220  Skype: boaz.shaanan
Fax:   972-8-647-2992 or 972-8-646-1710






From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of Felix  
Frolow [mbfro...@post.tau.ac.il]

Sent: Tuesday, October 18, 2011 1:40 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] IUCr committees, depositing images

I could not agree less. There is constant development of the software  
for refinement that allow to do things that were not
possible or were not necessary  in the past such as intelligent  
refinement of occupancies of mutually exclusive sites, entities and  
conformations.
I frequently remeasure lysozyme crystals. I use them as a test system  
for the beam lines, new detectors, novel software developments,  
refinement improvement etc. Sometimes I am collecting data in quite  
different wavelength than of existing structures. And what about  
diffraction  data from a chemically modified lysozyme molecule?
They are good data that show evolution of the beam line stations if they  
are keeper in historical order.

To store them all, or not to store at all…
Storage of the diffraction data is not a drinking club with muscle-bound  
selectors outside :-)

Felix Frolow
Dr Felix Frolow
Professor of Structural Biology and Biotechnology
Department of Molecular Microbiology
and Biotechnology
Tel Aviv University 69978, Israel

Acta Crystallographica F, co-editor

e-mail: mbfro...@post.tau.ac.il
Tel:  ++972-3640-8723
Fax: ++972-3640-9407
Cellular: 0547 459 608

On Oct 18, 2011, at 12:52 , Chris Morris wrote:

Some crystals are hard to make, so storing all the data the best way to  
get reproducibility. On the other hand, no one needs more images of  
lysozyme. So using the same standard for every deposition doesn't sound  
right.


The discussion should be held on the basis of overall cost to the  
research budget - not on the assumption that some costs can be  
externalised. It is too easy to say "you should store the images, in  
case I want to reprocess them sometime". IT isn't free, nor is it  
always cheaper than the associated experimental work. The key  
comparison is:


  Cost of growing new crystals + cost of beam line time

With:

  Cost of storing images * probability of processing them again

At present, detectors are improving more quickly than processing  
software. Sample preparation methods are also improving. These forces  
both press downward the probability that a particular image will ever  
be reprocessed.


regards,
Chris

Chris Morris
chris.mor...@stfc.ac.uk
Tel: +44 (0)1925 603689  Fax: +44 (0)1925 603634
Mobile: 07921-717915
Skype: chrishgmorris
http://pims.structuralbiology.eu/
http://www.citeulike.org/blog/chrishmorris
Daresbury Lab,  Daresbury,  Warrington,  UK,  WA4 4AD



--
Enrico A. Stura D.Phil. (Oxon) ,Tel: 33 (0)1 69 08 4302 Office
Room 19, Bat.152,   Tel: 33 (0)1 69 08 9449Lab
LTMB, SIMOPRO, IBiTec-S, CE Saclay, 91191 Gif-sur-Yvette,   FRANCE
http://www-dsv.cea.fr/en/institutes/institute-of-biology-and-technology-saclay-ibitec-s/unites-de-recherche/department-of-molecular-engineering-of-proteins-simopro/molecular-toxinology-and-biotechnology-laboratory-ltmb/crystallogenesis-e.-stura
http://www.chem.gla.ac.uk/protein/mirror/stura/index2.html
e-mail: est...@cea.fr Fax: 33 (0)1 69 08 90 71


Re: [ccp4bb] IUCr committees, depositing images

2011-10-18 Thread Boaz Shaanan
Hi Felix,

Excuse my question, but what have you discovered about lysozyme that we didn't 
already know before which justifies all these efforts?
After all, we're mostly after finding solutions to biological problems, aren't 
we?

  Boaz


Boaz Shaanan, Ph.D.
Dept. of Life Sciences
Ben-Gurion University of the Negev
Beer-Sheva 84105
Israel

E-mail: bshaa...@bgu.ac.il
Phone: 972-8-647-2220  Skype: boaz.shaanan
Fax:   972-8-647-2992 or 972-8-646-1710






From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of Felix Frolow 
[mbfro...@post.tau.ac.il]
Sent: Tuesday, October 18, 2011 1:40 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] IUCr committees, depositing images

I could not agree less. There is constant development of the software for 
refinement that allow to do things that were not
possible or were not necessary  in the past such as intelligent refinement of 
occupancies of mutually exclusive sites, entities and conformations.
I frequently remeasure lysozyme crystals. I use them as a test system for the 
beam lines, new detectors, novel software developments, refinement improvement 
etc. Sometimes I am collecting data in quite different wavelength than of 
existing structures. And what about diffraction  data from a chemically 
modified lysozyme molecule?
They are good data that show evolution of the beam line stations if they are 
keeper in historical order.
To store them all, or not to store at all…
Storage of the diffraction data is not a drinking club with muscle-bound 
selectors outside :-)
Felix Frolow
Dr Felix Frolow
Professor of Structural Biology and Biotechnology
Department of Molecular Microbiology
and Biotechnology
Tel Aviv University 69978, Israel

Acta Crystallographica F, co-editor

e-mail: mbfro...@post.tau.ac.il
Tel:  ++972-3640-8723
Fax: ++972-3640-9407
Cellular: 0547 459 608

On Oct 18, 2011, at 12:52 , Chris Morris wrote:

> Some crystals are hard to make, so storing all the data the best way to get 
> reproducibility. On the other hand, no one needs more images of lysozyme. So 
> using the same standard for every deposition doesn't sound right.
>
> The discussion should be held on the basis of overall cost to the research 
> budget - not on the assumption that some costs can be externalised. It is too 
> easy to say "you should store the images, in case I want to reprocess them 
> sometime". IT isn't free, nor is it always cheaper than the associated 
> experimental work. The key comparison is:
>
>   Cost of growing new crystals + cost of beam line time
>
> With:
>
>   Cost of storing images * probability of processing them again
>
> At present, detectors are improving more quickly than processing software. 
> Sample preparation methods are also improving. These forces both press 
> downward the probability that a particular image will ever be reprocessed.
>
> regards,
> Chris
> 
> Chris Morris
> chris.mor...@stfc.ac.uk
> Tel: +44 (0)1925 603689  Fax: +44 (0)1925 603634
> Mobile: 07921-717915
> Skype: chrishgmorris
> http://pims.structuralbiology.eu/
> http://www.citeulike.org/blog/chrishmorris
> Daresbury Lab,  Daresbury,  Warrington,  UK,  WA4 4AD


Re: [ccp4bb] IUCr committees, depositing images

2011-10-18 Thread Felix Frolow
I could not agree less. There is constant development of the software for 
refinement that allows one to do things that were not possible, or not 
necessary, in the past, such as intelligent refinement of occupancies of 
mutually exclusive sites, entities and conformations.
I frequently remeasure lysozyme crystals. I use them as a test system for the 
beam lines, new detectors, novel software developments, refinement improvements 
etc. Sometimes I am collecting data at quite a different wavelength than that 
of existing structures. And what about diffraction data from a chemically 
modified lysozyme molecule?
They are good data that show the evolution of the beam line stations if they 
are kept in historical order.
To store them all, or not to store at all…
Storage of the diffraction data is not a drinking club with muscle-bound 
selectors outside :-)
Felix Frolow
Dr Felix Frolow   
Professor of Structural Biology and Biotechnology
Department of Molecular Microbiology
and Biotechnology
Tel Aviv University 69978, Israel

Acta Crystallographica F, co-editor

e-mail: mbfro...@post.tau.ac.il
Tel:  ++972-3640-8723
Fax: ++972-3640-9407
Cellular: 0547 459 608

On Oct 18, 2011, at 12:52 , Chris Morris wrote:

> Some crystals are hard to make, so storing all the data the best way to get 
> reproducibility. On the other hand, no one needs more images of lysozyme. So 
> using the same standard for every deposition doesn't sound right.
> 
> The discussion should be held on the basis of overall cost to the research 
> budget - not on the assumption that some costs can be externalised. It is too 
> easy to say "you should store the images, in case I want to reprocess them 
> sometime". IT isn't free, nor is it always cheaper than the associated 
> experimental work. The key comparison is:
> 
>   Cost of growing new crystals + cost of beam line time
> 
> With:
> 
>   Cost of storing images * probability of processing them again
> 
> At present, detectors are improving more quickly than processing software. 
> Sample preparation methods are also improving. These forces both press 
> downward the probability that a particular image will ever be reprocessed. 
> 
> regards,
> Chris
> 
> Chris Morris  
> chris.mor...@stfc.ac.uk   
> Tel: +44 (0)1925 603689  Fax: +44 (0)1925 603634
> Mobile: 07921-717915
> Skype: chrishgmorris
> http://pims.structuralbiology.eu/
> http://www.citeulike.org/blog/chrishmorris
> Daresbury Lab,  Daresbury,  Warrington,  UK,  WA4 4AD


Re: [ccp4bb] IUCr committees, depositing images

2011-10-18 Thread Peter Keller
Perhaps this is the kind of discussion that should be continued on the IUCr
forums, once people have had a chance to register. However, to answer
these particular points:

On Tue, 2011-10-18 at 10:52 +, Chris Morris wrote:
> Some crystals are hard to make, so storing all the data the best way to get 
> reproducibility. On the other hand, no one needs more images of lysozyme. So 
> using the same standard for every deposition doesn't sound right.

Yes, good point, but...

> The discussion should be held on the basis of overall cost to the research 
> budget - not on the assumption that some costs can be externalised. It is too 
> easy to say "you should store the images, in case I want to reprocess them 
> sometime". IT isn't free, nor is it always cheaper than the associated 
> experimental work. The key comparison is:
> 
>Cost of growing new crystals + cost of beam line time
>
> With:
>   
>Cost of storing images * probability of processing them again
> 
> At present, detectors are improving more quickly than processing software. 
> Sample preparation methods are also improving. These forces both press 
> downward the probability that a particular image will ever be reprocessed. 

... this analysis assumes that the only value of the images is for an
improved structure determination of that particular sample in order to
get new scientific insights about it. Software and methods development
have different requirements. The kinds of images that are of interest
may include:

 (1) Hard-to-process images of various kinds

 (2) Images collected using non-obvious data collection protocols,
especially if someone has put a lot of time and thought into designing
them.

From that point of view, it doesn't matter if a high-quality structure
of the same protein has since been refined and deposited - the original
images can still be useful.

Regards,
Peter.

-- 
Peter Keller Tel.: +44 (0)1223 353033
Global Phasing Ltd., Fax.: +44 (0)1223 366889
Sheraton House,
Castle Park,
Cambridge CB3 0AX
United Kingdom


Re: [ccp4bb] IUCr committees, depositing images

2011-10-18 Thread Chris Morris
Some crystals are hard to make, so storing all the data is the best way to get 
reproducibility. On the other hand, no one needs more images of lysozyme. So 
using the same standard for every deposition doesn't sound right.

The discussion should be held on the basis of overall cost to the research 
budget - not on the assumption that some costs can be externalised. It is too 
easy to say "you should store the images, in case I want to reprocess them 
sometime". IT isn't free, nor is it always cheaper than the associated 
experimental work. The key comparison is:

   Cost of growing new crystals + cost of beam line time
   
With:
  
   Cost of storing images * probability of processing them again

At present, detectors are improving more quickly than processing software. 
Sample preparation methods are also improving. These forces both press downward 
the probability that a particular image will ever be reprocessed. 
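One way to read that comparison as an expected-cost test (a toy sketch in
Python; every figure below is invented purely for illustration, not a real
cost):

# Toy sketch of the trade-off above, with made-up figures: archiving pays
# off when its expected cost is smaller than the probability-weighted cost
# of regrowing crystals and rebooking beam time to recollect the data.
def worth_archiving(crystal_cost, beamtime_cost,
                    storage_cost_per_year, years_kept, p_reprocess):
    cost_to_repeat = crystal_cost + beamtime_cost
    cost_to_store = storage_cost_per_year * years_kept
    return cost_to_store < p_reprocess * cost_to_repeat

if __name__ == "__main__":
    # All values are assumptions, not real costs.
    print(worth_archiving(crystal_cost=5000.0, beamtime_cost=2000.0,
                          storage_cost_per_year=5.0, years_kept=10,
                          p_reprocess=0.05))   # True with these figures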

regards,
Chris

Chris Morris  
chris.mor...@stfc.ac.uk   
Tel: +44 (0)1925 603689  Fax: +44 (0)1925 603634
Mobile: 07921-717915
Skype: chrishgmorris
http://pims.structuralbiology.eu/
http://www.citeulike.org/blog/chrishmorris
Daresbury Lab,  Daresbury,  Warrington,  UK,  WA4 4AD


Re: [ccp4bb] IUCr committees, depositing images

2011-10-18 Thread John R Helliwell
Dear Colleagues,
Following on from my posting to the CCP4bb of yesterday an IUCr Forum
has been set up for Public input on the diffraction data deposition
future. Thus this Forum will record an organised set of inputs for
future reference. Instructions on how to register at this Forum can be
found there:-

http://forums.iucr.org/index.php?sid=4e83bcc36ec972f5bed1508d5bb7c05a

 and/or via Brian McMahon (b...@iucr.org) in case of difficulty.

As further background information you will find at the Forum the IUCr
Diffraction Data Deposition Working Group Terms of Reference, the
Minutes of the IUCr Madrid Congress inaugural meeting and a paper from
the Commission on Biological Macromolecules (lead author Tom
Terwilliger) .

We look forward to your inputs to the Forum.

Best wishes,
John
Prof John R Helliwell DSc
Chairman of the IUCr Diffraction Data Deposition Working Group (IUCr DDDWG)



On Mon, Oct 17, 2011 at 7:52 AM, Artem Evdokimov
 wrote:
> We overestimate the value of individual structures because we're human :)
>
> If a problem is important enough that one structure makes or breaks the
> case, a sensible thing to do would be to get more structures and strive to
> obtain some other flavor of pertinent information by methods that are
> unlikely to suffer from the same bias as structures.
>
> Objectivity of the experimenter is key.
> I personally would love to see the development of computationally objective
> (i.e. human-free) methods for integrating various kinds of scientific data.
> If we could escape from the brain shackles imposed on us by our
> crunchy-primate ancestors that'd be very nice, since we rarely need to worry
> about quickly counting members of our primate pod (was Cousin Mo eaten by a
> tiger last night, and is the tiger now full), or to make split-second
> decisions regarding striped creepers swinging down from branches (is it a
> hidden leopard, a deadly Striped Death viper, or a harmless vine?) -- but
> these instinctive modes of thought seriously mess up our collective ability
> to perform complex science.
>
> Artem
>
> On Mon, Oct 17, 2011 at 12:14 AM, Frank von Delft
>  wrote:
>>
>> On 17/10/2011 01:52, Wladek Minor wrote:
>>
>> Frank,
>>
>> This is serious problem for biologists. There is a structure with ligand.
>> The same data were re-interpreted and people did not find the ligand. This
>> re-interpretation is not really valid until we will look into diffraction
>> data. Biologist lost tremendous amount of time and effort looking into
>> interpretation and ...
>>
>> NOw this is very important biomedical structure.
>>
>> Yes I know and agree partially.
>>
>> That said:  I reckon we vastly overestimate the value of individual
>> structures;  it's the ensemble that is informative.  A decade from now,
>> depositing a single structure of a protein will be seen to be as just as
>> silly as it is currently not to deposit structure factors.
>>
>> Including those "very important biomedical structures".  Those things tend
>> to become suddenly "important" (in the grand scheme) only after the ligand's
>> biological/clinical effects could be demonstrated.  And even then the
>> structure itself is only important if it helps a chemist.
>>
>> If this sounds extreme:  consider how much other data it now takes to get
>> structures into nature/science/cell.  Or what happened (or rather didn't) to
>> all those patents on structures that were such a big deal a decade ago.
>>
>> phx.
>>
>>
>>
>> Wladek
>>
>>
>> At 02:59 PM 10/16/2011, you wrote:
>>
>> One other question (for both key issues described):  what exactly is the
>> problem the committees are aiming to address?
>>
>> Because I can't help noticing that Tom's email did not spark an on-list
>> discussion;  do people actually feel either are issues?  Isn't the more
>> burning problem how best to use the 10,000s of structures we're churning
>> out?  In the grand scheme of things, they're pretty inaccurate anyway:
>> static snapshots of crippled fragments of proteins far from their many
>> interaction partners.  So do we need 100,000s of structures instead?  If so,
>> we may soon (collectively) stop being able to care about the original
>> dataset or how to reproduce analysis number 2238 from 2 years ago.
>>
>> (No, I'm not convinced this question is relevant only to structural
>> genomics.)
>>
>> phx.
>>
>>
>>
>> On 16/10/2011 19:38, Frank von Delft wrote:
>>
>> On the deposition of raw data:
>>
>> I recommend to the committee that before it convenes again, every member
>> should go collect some data on a beamline with a Pilatus detector [feel free
>> to join us at Diamond].  Because by the probable time any recommendations
>> actually emerge, most beamlines will have one of those (or similar), we'll
>> be generating more data than the LHC, and users will be happy just to have
>> it integrated, never mind worry about its fate.
>>
>> That's not an endorsement, btw, just an observation/prediction.
>>
>> phx.
>>
>>
>>
>>
>> On 14/10/2011 23:56, 

Re: [ccp4bb] IUCr committees, depositing images

2011-10-16 Thread Artem Evdokimov
We overestimate the value of individual structures because we're human :)

If a problem is important enough that one structure makes or breaks the
case, a sensible thing to do would be to get more structures and strive to
obtain some other flavor of pertinent information by methods that are
unlikely to suffer from the same bias as structures.

Objectivity of the experimenter is key.
I personally would love to see the development of computationally objective
(i.e. human-free) methods for integrating various kinds of scientific data.
If we could escape from the brain shackles imposed on us by our
crunchy-primate ancestors that'd be very nice, since we rarely need to worry
about quickly counting members of our primate pod (was Cousin Mo eaten by a
tiger last night, and is the tiger now full), or to make split-second
decisions regarding striped creepers swinging down from branches (is it a
hidden leopard, a deadly Striped Death viper, or a harmless vine?) -- but
these instinctive modes of thought seriously mess up our collective ability
to perform complex science.

Artem

On Mon, Oct 17, 2011 at 12:14 AM, Frank von Delft <
frank.vonde...@sgc.ox.ac.uk> wrote:

>
> On 17/10/2011 01:52, Wladek Minor wrote:
>
>
> Frank,
>
> This is serious problem for biologists. There is a structure with ligand.
> The same data were re-interpreted and people did not find the ligand. This
> re-interpretation is not really valid until we will look into diffraction
> data. Biologist lost tremendous amount of time and effort looking into
> interpretation and ...
>
> NOw this is very important biomedical structure.
>
>
> Yes I know and agree partially.
>
> That said:  I reckon we vastly overestimate the value of individual
> structures;  it's the ensemble that is informative.  A decade from now,
> depositing a single structure of a protein will be seen to be as just as
> silly as it is currently not to deposit structure factors.
>
> Including those "very important biomedical structures".  Those things tend
> to become suddenly "important" (in the grand scheme) only after the ligand's
> biological/clinical effects could be demonstrated.  And even then the
> structure itself is only important if it helps a chemist.
>
> If this sounds extreme:  consider how much other data it now takes to get
> structures into nature/science/cell.  Or what happened (or rather didn't) to
> all those patents on structures that were such a big deal a decade ago.
>
> phx.
>
>
>
> Wladek
>
>
> At 02:59 PM 10/16/2011, you wrote:
>
> One other question (for both key issues described):  what exactly is the
> problem the committees are aiming to address?
>
> Because I can't help noticing that Tom's email did not spark an on-list
> discussion;  do people actually feel either are issues?  Isn't the more
> burning problem how best to use the 10,000s of structures we're churning
> out?  In the grand scheme of things, they're pretty inaccurate anyway:
> static snapshots of crippled fragments of proteins far from their many
> interaction partners.  So do we need 100,000s of structures instead?  If so,
> we may soon (collectively) stop being able to care about the original
> dataset or how to reproduce analysis number 2238 from 2 years ago.
>
> (No, I'm not convinced this question is relevant only to structural
> genomics.)
>
> phx.
>
>
>
> On 16/10/2011 19:38, Frank von Delft wrote:
>
> On the deposition of raw data:
>
> I recommend to the committee that before it convenes again, every member
> should go collect some data on a beamline with a Pilatus detector [feel free
> to join us at Diamond].  Because by the probable time any recommendations
> actually emerge, most beamlines will have one of those (or similar), we'll
> be generating more data than the LHC, and users will be happy just to have
> it integrated, never mind worry about its fate.
>
> That's not an endorsement, btw, just an observation/prediction.
>
> phx.
>
>
>
>
> On 14/10/2011 23:56, Thomas C. Terwilliger wrote:
>
> For those who have strong opinions on what data should be deposited...
>
> The IUCR is just starting a serious discussion of this subject. Two
> committees, the "Data Deposition Working Group", led by John Helliwell,
> and the Commission on Biological Macromolecules (chaired by Xiao-Dong Su)
> are working on this.
>
> Two key issues are (1) feasibility and importance of deposition of raw
> images and (2) deposition of sufficient information to fully reproduce the
> crystallographic analysis.
>
> I am on both committees and would be happy to hear your ideas (off-list).
> I am sure the other members of the committees would welcome your thoughts
> as well.
>
> -Tom T
>
> Tom Terwilliger
> terwilli...@lanl.gov
>
>
>  This is a follow up (or a digression) to James comparing test set to
> missing reflections.  I also heard this issue mentioned before but was
> always too lazy to actually pursue it.
>
> So.
>
> The role of the test set is to prevent overfitting.  Let's say I have
> the final model

Re: [ccp4bb] IUCr committees, depositing images

2011-10-16 Thread Frank von Delft


On 17/10/2011 01:52, Wladek Minor wrote:


Frank,

This is serious problem for biologists. There is a structure with 
ligand. The same data were re-interpreted and people did not find the 
ligand. This re-interpretation is not really valid until we will look 
into diffraction data. Biologist lost tremendous amount of time and 
effort looking into interpretation and ...


NOw this is very important biomedical structure.


Yes I know and agree partially.

That said:  I reckon we vastly overestimate the value of individual 
structures;  it's the ensemble that is informative.  A decade from now, 
depositing a single structure of a protein will be seen to be just as 
silly as it is currently not to deposit structure factors.


Including those "very important biomedical structures".  Those things 
tend to become suddenly "important" (in the grand scheme) only after the 
ligand's biological/clinical effects could be demonstrated.  And even 
then the structure itself is only important if it helps a chemist.


If this sounds extreme:  consider how much other data it now takes to 
get structures into nature/science/cell.  Or what happened (or rather 
didn't) to all those patents on structures that were such a big deal a 
decade ago.


phx.




Wladek


At 02:59 PM 10/16/2011, you wrote:
One other question (for both key issues described):  what exactly is 
the problem the committees are aiming to address?


Because I can't help noticing that Tom's email did not spark an 
on-list discussion;  do people actually feel either are issues?  
Isn't the more burning problem how best to use the 10,000s of 
structures we're churning out?  In the grand scheme of things, 
they're pretty inaccurate anyway:
static snapshots of crippled fragments of proteins far from their 
many interaction partners.  So do we need 100,000s of structures 
instead?  If so, we may soon (collectively) stop being able to care 
about the original dataset or how to reproduce analysis number 2238 
from 2 years ago.


(No, I'm not convinced this question is relevant only to structural 
genomics.)


phx.



On 16/10/2011 19:38, Frank von Delft wrote:

On the deposition of raw data:

I recommend to the committee that before it convenes again, every 
member should go collect some data on a beamline with a Pilatus 
detector [feel free to join us at Diamond].  Because by the probable 
time any recommendations actually emerge, most beamlines will have 
one of those (or similar), we'll be generating more data than the 
LHC, and users will be happy just to have it integrated, never mind 
worry about its fate.


That's not an endorsement, btw, just an observation/prediction.

phx.




On 14/10/2011 23:56, Thomas C. Terwilliger wrote:

For those who have strong opinions on what data should be deposited...

The IUCR is just starting a serious discussion of this subject. Two
committees, the "Data Deposition Working Group", led by John Helliwell,
and the Commission on Biological Macromolecules (chaired by 
Xiao-Dong Su)

are working on this.

Two key issues are (1) feasibility and importance of deposition of raw
images and (2) deposition of sufficient information to fully 
reproduce the

crystallographic analysis.

I am on both committees and would be happy to hear your ideas 
(off-list).
I am sure the other members of the committees would welcome your 
thoughts

as well.

-Tom T

Tom Terwilliger
terwilli...@lanl.gov



This is a follow up (or a digression) to James comparing test set to
missing reflections.  I also heard this issue mentioned before 
but was

always too lazy to actually pursue it.

So.

The role of the test set is to prevent overfitting.  Let's say I have
the final model and I monitored the Rfree every step of the way 
and can
conclude that there is no overfitting.  Should I do the final 
refinement

against complete dataset?

IMCO, I absolutely should.  The test set reflections contain
information, and the "final" model is actually biased towards the
working set.  Refining using all the data can only improve the 
accuracy

of the model, if only slightly.

The second question is practical.  Let's say I want to deposit the
results of the refinement against the full dataset as my final model.
Should I not report the Rfree and instead insert a remark 
explaining the
situation?  If I report the Rfree prior to the test set removal, 
it is
certain that every validation tool will report a mismatch.  It 
does not

seem that the PDB has a mechanism to deal with this.

Cheers,

Ed.



--
Oh, suddenly throwing a giraffe into a volcano to make water is 
crazy?
 Julian, King of 
Lemurs



Dr. Wladek Minor
Professor of Molecular Physiology and Biological Physics
Phone: 434-243-6865
Fax: 434-982-1616
http://krzys.med.virginia.edu/CrystUVa/wladek.htm

US-mail address:
Department of Molecular Physiology and Biological Physics
University of Virginia
PO Box 800736, Charlottesville, VA 22908-0736

Fed-Ex address:
Depart

Re: [ccp4bb] IUCr committees, depositing images

2011-10-16 Thread Craig A. Bingman
I'm simultaneously embarrassed to have not read the thread to completion before 
replying, and also totally pumped that I would answer a question the same way 
as Bernhard Rupp!

On Oct 16, 2011, at 7:34 PM, Bernhard Rupp (Hofkristallrat a.D.) wrote:

> Not if you are interested in scattering that falls between reciprocal 
> lattice maxima, or if you want to preserve the possibility of applying future 
> data reduction packages.
>  
> Yep, this is exactly what I expressed in my original statement:
>  
> “diffuse solvent contributions, commensurate and incommensurate 
> superstructures, split reflections,  and similar stuff that presently just 
> gets indexed away”
>  
> BR



Re: [ccp4bb] IUCr committees, depositing images

2011-10-16 Thread Bernhard Rupp (Hofkristallrat a.D.)
> Not if you are interested in scattering that falls between reciprocal
> lattice maxima, or if you want to preserve the possibility of applying
> future data reduction packages.

Yep, this is exactly what I expressed in my original statement:

“diffuse solvent contributions, commensurate and incommensurate
superstructures, split reflections, and similar stuff that presently just
gets indexed away”

BR



Re: [ccp4bb] IUCr committees, depositing images

2011-10-16 Thread Craig A. Bingman

On Oct 16, 2011, at 4:18 PM, Bosch, Juergen wrote:

> The rest is comparable to a collection of stamps, although with the benefit 
> as BR mentioned of adding an additional hurdle/layer to falsifying structures.

Not if you are interested in scattering that falls between reciprocal lattice 
maxima, or if you want to preserve the possibility of applying future data 
reduction packages.  


Re: [ccp4bb] IUCr committees, depositing images

2011-10-16 Thread Bernhard Rupp (Hofkristallrat a.D.)
> Do you mean to reprocess the determination of I and sig(I) from the
> diffraction images automatically??? Or just to get access to the raw
> data?

 

Reprocessing the images with the to-be-developed new software that will
process the information in the data in full, and then using the
new-and-improved refinement software to automatically refine a physically
more realistic model. Sounds far-fetched now, but it will happen soon, in
the cloud of bored petaflop iPhones 8, each with 32 TB of memory….

 

Cheers, BR 
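Purely as an illustration of the shape such a REPROCESS_PDB run might take (emphatically not an existing tool), here is a minimal dry-run sketch in Python; every path, entry code and program name is a hypothetical placeholder for software that, as noted above, is still to be developed:

from pathlib import Path

ARCHIVE = Path("/archive/raw_images")   # hypothetical location of deposited frames
ENTRIES = ["1abc", "2xyz"]              # hypothetical PDB codes to revisit

for code in ENTRIES:
    image_dir = ARCHIVE / code
    if not image_dir.is_dir():
        print(f"{code}: no archived images, skipping")
        continue
    workdir = Path("reprocess") / code
    # A real driver would hand these to subprocess.run(); here they are only
    # printed, because the integration and refinement programs that would use
    # the full information content of the images do not exist yet.
    steps = [
        ["future_integrate", str(image_dir), "-o", "integrated.mtz"],
        ["future_refine", f"{code}_deposited.pdb", "integrated.mtz"],
    ]
    for cmd in steps:
        print(f"[{workdir}] would run: {' '.join(cmd)}")

The loop itself is trivial; all the hard work hides inside the two placeholder programs.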

Re: [ccp4bb] IUCr committees, depositing images

2011-10-16 Thread Bosch, Juergen

On Oct 16, 2011, at 4:42 PM, Guenter Fritz wrote:

> In the end, for the determination of the structure, only a few datasets
> will be necessary. This means maybe 0.5 TB of compressed data to
> deposit. I don't think this is too much.

I think those 0.5 TB will not be essential; the more interesting datasets are
those which failed to produce a structure. That's where our processing gurus
and the SHARPest thoughts should be, I guess :-)
Seriously, the bad data are the data developers need (if somebody ships me a
disk I can provide you with tons of bad data), so that we can even get a
structure out of something you wouldn't even have mounted to begin with. The
rest is comparable to a collection of stamps, although with the benefit as BR
mentioned of adding an additional hurdle/layer to falsifying structures.

Jürgen

..
Jürgen Bosch
Johns Hopkins University
Bloomberg School of Public Health
Department of Biochemistry & Molecular Biology
Johns Hopkins Malaria Research Institute
615 North Wolfe Street, W8708
Baltimore, MD 21205
Office: +1-410-614-4742
Lab:  +1-410-614-4894
Fax:  +1-410-955-2926
http://web.mac.com/bosch_lab/







Re: [ccp4bb] IUCr committees, depositing images

2011-10-16 Thread Felix Frolow


On Oct 16, 2011, at 22:38 , Bernhard Rupp (Hofkristallrat a.D.) wrote:

> As in reprocess completely from images, I meant.


Do you mean to reprocess the determination of I and sig(I) from the diffraction
images automatically???
Or just to get access to the raw data?

FF


Dr Felix Frolow   
Professor of Structural Biology and Biotechnology
Department of Molecular Microbiology
and Biotechnology
Tel Aviv University 69978, Israel

Acta Crystallographica F, co-editor

e-mail: mbfro...@post.tau.ac.il
Tel:  ++972-3640-8723
Fax: ++972-3640-9407
Cellular: 0547 459 608



Re: [ccp4bb] IUCr committees, depositing images

2011-10-16 Thread Bernhard Rupp (Hofkristallrat a.D.)
As in reprocess completely from images, I meant. 


Re: [ccp4bb] IUCr committees, depositing images

2011-10-16 Thread Guenter Fritz

On the technical feasibility of storage of original data:

> Sure, running a Pilatus for an Olympic record, we will go home with several TB
> of data after 24 h (will we?).

Yes, we do already. I just checked the number of images from PILATUS 6M
we have collected so far this year: ca. 1.7 million. By the end of the year
it will be more than 2 million.
I compress everything and store it on RAID systems. Disk space is
meanwhile so cheap, and storage on disk is so easy compared to tapes, so
why bother with the large number of images?

In the end, for the determination of the structure, only a few datasets
will be necessary. This means maybe 0.5 TB of compressed data to
deposit. I don't think this is too much.
> But this is an abuse of the system. The final goal is the structure
> determination, and there are far fewer good crystals everywhere in one year
> than one Pilatus could collect in one week.
> But to decide quickly whether crystal diffraction data from a Pilatus are good
> for storage, or even for measurement, whatever the speed of data collection,
> good data-processing software is needed.

The special properties of the PILATUS force you to think more about your
data collection AND (!) give you the time to think at the beamline
about data collection. Fine phi slicing and high redundancy give much
better data (important if your crystals are not that good).  The people
from Dectris may want to add a note here.

Best,
Guenter
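For what it is worth, a minimal sketch (Python, with hypothetical directory names) of the compress-and-archive bookkeeping Guenter describes above; frames already written with lossless CBF compression gain little from an extra gzip pass, so treat this as an illustration of the workflow rather than a recommendation:

import gzip
import shutil
from pathlib import Path

SRC = Path("/data/pilatus/2011/run_042")   # hypothetical collection directory
DST = Path("/raid/archive/2011/run_042")   # hypothetical RAID archive directory
DST.mkdir(parents=True, exist_ok=True)

total_in = total_out = 0
for frame in sorted(SRC.glob("*.cbf")):
    target = DST / (frame.name + ".gz")
    with frame.open("rb") as fin, gzip.open(target, "wb", compresslevel=6) as fout:
        shutil.copyfileobj(fin, fout)          # stream the frame into a gzipped copy
    total_in += frame.stat().st_size
    total_out += target.stat().st_size

if total_in:
    print(f"{total_in / 1e9:.1f} GB archived as {total_out / 1e9:.1f} GB "
          f"({100 * total_out / total_in:.0f}% of original size)")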


Re: [ccp4bb] IUCr committees, depositing images

2011-10-16 Thread Bosch, Juergen
Wasn't that already implemented in Phenix ?

Jürgen

On Oct 16, 2011, at 4:20 PM, Bernhard Rupp (Hofkristallrat a.D.) wrote:

 REPROCESS_PDB

..
Jürgen Bosch
Johns Hopkins University
Bloomberg School of Public Health
Department of Biochemistry & Molecular Biology
Johns Hopkins Malaria Research Institute
615 North Wolfe Street, W8708
Baltimore, MD 21205
Office: +1-410-614-4742
Lab:  +1-410-614-4894
Fax:  +1-410-955-2926
http://web.mac.com/bosch_lab/







Re: [ccp4bb] IUCr committees, depositing images

2011-10-16 Thread Bernhard Rupp (Hofkristallrat a.D.)
Hi Fellows,

I attended the inaugural meeting of the Data Deposition Working Group
in Madrid. They are aware of the various points raised, and a
document/recommendation has been prepared that I assume will soon be made
public (John?). The amount of data does not seem to be an insurmountable
technical problem. The most important point is that the data are getting
consistently better and contain more information than we currently make use
of. Just think of diffuse solvent contributions, commensurate and
incommensurate superstructures, split reflections, and similar stuff that
presently just gets indexed away. Better data processing software will
certainly be developed and will provide a better data model, ultimately
allowing better structure models. At the same time, almost all forgery
issues disappear automatically as an added (but imho minor) bonus.

Cheers, BR

PS: on a cynical note, what makes one believe that data processing is
carried out at any higher level of competency than say the refinement? The
only comfort here is that when it is done immediately by the beamline
fellows, it is probably done well. In any case, maybe REPROCESS_PDB is the
thing of the future.
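As a toy illustration of the point above about information between the Bragg peaks (a made-up 1-D trace in Python, not real data reduction): integrating only in windows around the predicted reflection positions keeps the Bragg intensities but discards the diffuse signal in between, which survives only if the frames themselves are kept.

import numpy as np

x = np.linspace(0.0, 10.0, 2001)                        # arbitrary detector coordinate
bragg_pos = np.arange(1.0, 10.0, 1.0)                   # idealised reciprocal-lattice points
bragg = sum(100.0 * np.exp(-((x - p) / 0.02) ** 2) for p in bragg_pos)
diffuse = 5.0 * (1.0 + np.sin(2.0 * np.pi * x / 3.0))   # slowly varying diffuse term
trace = bragg + diffuse                                  # what the detector records

# "Integration": keep only narrow windows around each predicted peak position.
window = 0.1
in_windows = np.zeros_like(x, dtype=bool)
for p in bragg_pos:
    in_windows |= (np.abs(x - p) < window)

print(f"total counts          : {trace.sum():.0f}")
print(f"counts inside windows : {trace[in_windows].sum():.0f}")
print(f"counts thrown away    : {trace[~in_windows].sum():.0f}  (the diffuse part)")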

-Original Message-
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Frank
von Delft
Sent: Sunday, October 16, 2011 12:00 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] IUCr committees, depositing images

One other question (for both key issues described):  what exactly is the
problem the committees are aiming to address?

Because I can't help noticing that Tom's email did not spark an on-list
discussion;  do people actually feel either are issues?  Isn't the more
burning problem how best to use the 10,000s of structures we're churning
out?  In the grand scheme of things, they're pretty inaccurate anyway:  
static snapshots of crippled fragments of proteins far from their many
interaction partners.  So do we need 100,000s of structures instead?  If so,
we may soon (collectively) stop being able to care about the original
dataset or how to reproduce analysis number 2238 from 2 years ago.

(No, I'm not convinced this question is relevant only to structural
genomics.)

phx.



On 16/10/2011 19:38, Frank von Delft wrote:
> On the deposition of raw data:
>
> I recommend to the committee that before it convenes again, every 
> member should go collect some data on a beamline with a Pilatus 
> detector [feel free to join us at Diamond].  Because by the probable 
> time any recommendations actually emerge, most beamlines will have one 
> of those (or similar), we'll be generating more data than the LHC, and 
> users will be happy just to have it integrated, never mind worry about 
> its fate.
>
> That's not an endorsement, btw, just an observation/prediction.
>
> phx.

Re: [ccp4bb] IUCr committees, depositing images

2011-10-16 Thread Felix Frolow
On the deposition of raw data:
Committees, wherever you are!
I guess that abstaining from storing the raw diffraction data in the form of
frames is not very wise, whatever their size. I regret that for some PDB
entries I do not have diffraction data (needless to say, the authors did not
even submit structure factors).
I maintain a bit more than 1.2 TB of diffraction data going back to 2001, and
it all resides nicely on two small WD pocket disks (needless to say, I have
several copies of the data).
Generally I have all the data I ever collected, going back to the beginning of
the 80s, but I am too lazy to re-read the DAT tapes.
Sure, running a Pilatus for an Olympic record, we will go home with several TB
of data after 24 h (will we?).
But this is an abuse of the system. The final goal is the structure
determination, and there are far fewer good crystals everywhere in one year
than one Pilatus could collect in one week.
But to decide quickly whether crystal diffraction data from a Pilatus are good
for storage, or even for measurement, whatever the speed of data collection,
good data-processing software is needed. I personally think that there is only
one, the one.
Anyhow, I think that if the author wishes to publish his structure, and it is
important, and I am a reviewer, and it is going to a prestigious journal, I
will reprocess his data and check his way to the final crystal structure
solution from the beginning.
It is as in mathematics: if someone claims to have solved a long-standing
problem from the past, he will not get away from his envious colleagues, who
will drop everything and sit and check until they find a mistake.
What a pleasure!!!
And if there are no mistakes - chapeau!!!

FF
 
Dr Felix Frolow   
Professor of Structural Biology and Biotechnology
Department of Molecular Microbiology
and Biotechnology
Tel Aviv University 69978, Israel

Acta Crystallographica F, co-editor

e-mail: mbfro...@post.tau.ac.il
Tel:  ++972-3640-8723
Fax: ++972-3640-9407
Cellular: 0547 459 608

On Oct 16, 2011, at 20:38 , Frank von Delft wrote:

> On the deposition of raw data:
> 
> I recommend to the committee that before it convenes again, every member 
> should go collect some data on a beamline with a Pilatus detector [feel free 
> to join us at Diamond].  Because by the probable time any recommendations 
> actually emerge, most beamlines will have one of those (or similar), we'll be 
> generating more data than the LHC, and users will be happy just to have it 
> integrated, never mind worry about its fate.
> 
> That's not an endorsement, btw, just an observation/prediction.
> 
> phx.


Re: [ccp4bb] IUCr committees, depositing images

2011-10-16 Thread Frank von Delft
One other question (for both key issues described):  what exactly is the 
problem the committees are aiming to address?


Because I can't help noticing that Tom's email did not spark an on-list 
discussion;  do people actually feel either are issues?  Isn't the more 
burning problem how best to use the 10,000s of structures we're churning 
out?  In the grand scheme of things, they're pretty inaccurate anyway:  
static snapshots of crippled fragments of proteins far from their many 
interaction partners.  So do we need 100,000s of structures instead?  If 
so, we may soon (collectively) stop being able to care about the 
original dataset or how to reproduce analysis number 2238 from 2 years ago.


(No, I'm not convinced this question is relevant only to structural 
genomics.)


phx.



Re: [ccp4bb] IUCr committees, depositing images

2011-10-16 Thread Frank von Delft

On the deposition of raw data:

I recommend to the committee that before it convenes again, every member 
should go collect some data on a beamline with a Pilatus detector [feel 
free to join us at Diamond].  Because by the probable time any 
recommendations actually emerge, most beamlines will have one of those 
(or similar), we'll be generating more data than the LHC, and users will 
be happy just to have it integrated, never mind worry about its fate.


That's not an endorsement, btw, just an observation/prediction.

phx.




On 14/10/2011 23:56, Thomas C. Terwilliger wrote:

For those who have strong opinions on what data should be deposited...

The IUCR is just starting a serious discussion of this subject. Two
committees, the "Data Deposition Working Group", led by John Helliwell,
and the Commission on Biological Macromolecules (chaired by Xiao-Dong Su)
are working on this.

Two key issues are (1) feasibility and importance of deposition of raw
images and (2) deposition of sufficient information to fully reproduce the
crystallographic analysis.

I am on both committees and would be happy to hear your ideas (off-list).
I am sure the other members of the committees would welcome your thoughts
as well.

-Tom T

Tom Terwilliger
terwilli...@lanl.gov



This is a follow up (or a digression) to James comparing test set to
missing reflections.  I also heard this issue mentioned before but was
always too lazy to actually pursue it.

So.

The role of the test set is to prevent overfitting.  Let's say I have
the final model and I monitored the Rfree every step of the way and can
conclude that there is no overfitting.  Should I do the final refinement
against the complete dataset?

IMCO, I absolutely should.  The test set reflections contain
information, and the "final" model is actually biased towards the
working set.  Refining using all the data can only improve the accuracy
of the model, if only slightly.

The second question is practical.  Let's say I want to deposit the
results of the refinement against the full dataset as my final model.
Should I not report the Rfree and instead insert a remark explaining the
situation?  If I report the Rfree prior to the test set removal, it is
certain that every validation tool will report a mismatch.  It does not
seem that the PDB has a mechanism to deal with this.

Cheers,

Ed.



--
Oh, suddenly throwing a giraffe into a volcano to make water is crazy?
 Julian, King of Lemurs
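Since Ed's question recurs throughout the thread, a minimal numerical sketch (Python, with made-up amplitudes rather than real data) of what the two statistics measure may be useful: R-work and R-free are the same quantity evaluated over disjoint reflection subsets, and once the free reflections are merged into the working set for a final round of refinement there is no longer an unbiased R-free to report for the deposited model.

import numpy as np

def r_factor(f_obs, f_calc, selection):
    """Linear R factor, sum|Fo - k*Fc| / sum|Fo|, over the selected reflections."""
    fo, fc = f_obs[selection], f_calc[selection]
    k = fo.sum() / fc.sum()                      # simple overall scale factor
    return np.abs(fo - k * fc).sum() / fo.sum()

# Toy data: observed amplitudes, an imperfect model, and a ~5% free set.
rng = np.random.default_rng(0)
f_obs = rng.uniform(10.0, 1000.0, size=20000)
f_calc = f_obs * rng.normal(1.0, 0.05, size=f_obs.size)
free = rng.random(f_obs.size) < 0.05

print(f"R-work = {r_factor(f_obs, f_calc, ~free):.4f}")
print(f"R-free = {r_factor(f_obs, f_calc, free):.4f}")

Here the two values agree because the toy "model" errors are random; in real refinement the working-set reflections are fitted preferentially, which is exactly the bias the free set is meant to detect.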