Hi Alexis,

A brief summary of the relevant points in the paper that Pavel mentioned 
(https://journals.iucr.org/d/issues/2020/03/00/ba5308):

The paper is about how to estimate the amount of information gained by making a 
diffraction measurement, not really about defining resolution limits.  Based on 
the results presented, I would argue that it's not a good idea to define a 
traditional resolution cutoff based on average numbers in a resolution shell, 
because there will almost certainly be useful data beyond that point (as long 
as you account properly for the effect of the measurement error, which is 
something that needs work for a lot of programs!).  In our program Phaser, we 
use all of the data provided to scale the data set and refine parameters that 
define the systematic variation in diffraction intensity (and hence signal).  
In this step, knowing which reflections are weak helps to define the parameters 
characterising the systematic variation due to effects like anisotropy and 
translational non-crystallographic symmetry (tNCS).  However, after this point 
the information gained by measuring any one of these reflections tells you how 
much power that observation will have in subsequent likelihood-based 
calculations.  As the information gain drops, the usefulness of the observation 
in determining refineable parameters with the likelihood also drops.  In the 
context of Phaser, we've found that there's a small amount of benefit from 
including reflections down to an information gain of 0.01 bit, but below that 
the observations can safely be ignored (after the scaling, anisotropy and tNCS 
steps).
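
Roughly speaking (and glossing over details that matter in practice), the 
information gain for a single reflection can be thought of as the 
Kullback-Leibler divergence, in bits, between the posterior for the true 
intensity J given the measurement and the prior expected from the Wilson 
distribution (after accounting for anisotropy and tNCS):

  I_gain = ∫ p(J | Iobs, sigma(Iobs)) log2[ p(J | Iobs, sigma(Iobs)) / p(J) ] dJ

A precise measurement moves the posterior far from the prior and conveys many 
bits; a very noisy measurement of a weak reflection leaves the posterior close 
to the prior and conveys almost nothing.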

However, it's possible that average information content is a useful way to 
think about *nominal* resolution.  We should probably do this systematically, 
but our impression from looking at a variety of deposited diffraction data is 
that the average information gain in the highest resolution shell is typically 
around 0.5 to 1 bit per reflection.  So it looks like the half-bit level agrees 
reasonably well with how people currently choose their resolution limits.

For the future, what I would like to see is, first, that everyone adopts 
something like our LLGI target that accounts very well for the effect of 
intensity measurement error:  the current Rice likelihood target using French & 
Wilson posterior amplitudes breaks down for very weak data with very low 
information gain.  Second, I would like to see people depositing at least their 
unpruned intensity data:  not just derived amplitudes, because the conversion 
from intensities to amplitudes cannot be reversed effectively, and not 
intensity data prescaled to remove anisotropy.  Third, I would like to see 
people distinguishing between nominal resolution (which is a useful number to 
make a first guess about which structures are likely to be most accurate) and 
the actual resolution of the data deposited.  There are diminishing returns to 
including weaker and weaker data, but the resolution cutoff shouldn't exclude a 
substantial number of observations conveying more than, say, 0.1 bit of 
information.

Best wishes,

Randy

> On 9 Mar 2020, at 04:06, Alexis Rohou <a.ro...@gmail.com> wrote:
> 
> Hi Colin,
> 
> It sounds to me like you are mostly asking about whether 1/2-bit is the 
> "correct" target to aim for, the "correct" criterion for a resolution claim. 
> I have no view on that. I have yet to read Randy's work on the topic - it 
> sounds very informative. 
> 
> What I do have a view on is how, once one has decided one likes 1/2-bit 
> information content (equivalent to SNR 0.207) or C_ref = 0.5, a.k.a. 
> FSC = 0.143 (equivalent to SNR 0.167), as a criterion, one should turn that 
> into an FSC threshold.
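> 
> (These equivalences follow from the usual half-dataset relation 
> FSC = SNR / (SNR + 1), i.e. SNR = FSC / (1 - FSC): FSC = 0.143 gives 
> SNR ≈ 0.167, and the large-n asymptote of the half-bit curve, FSC ≈ 0.1716, 
> gives SNR ≈ 0.207.)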
> 
> You say you were not convinced by Marin's derivation in 2005. Are you 
> convinced now and, if not, why?
> 
> No. I was unable to follow Marin's derivation then, and last I tried (a 
> couple of years back), I was still unable to follow it. This is despite being 
> convinced that Marin is correct that fixed FSC thresholds are not desirable. 
> To be clear, my objections have nothing to do with whether 1/2-bit is an 
> appropriate criterion, they're entirely about how you turn a target SNR into 
> an FSC threshold.
> 
> A few years ago, an equivalent thread on 3DEM/CCPEM (I think CCP4BB was 
> spared) led me to re-examine the foundations of the use of the FSC in 
> general. You can read more details in the manuscript I posted to bioRxiv a 
> few days ago (https://www.biorxiv.org/content/10.1101/2020.03.01.972067v1), but 
> essentially I conclude that:
> 
> (1) fixed-threshold criteria are not always appropriate, because they rely on 
> a biased estimator of the SNR, and in cases where n (the number of 
> independent samples in a Fourier shell) is low, this bias is significant
> (2) thresholds in use today do not involve a significance test; they simply 
> ignore the variance of the FSC as an estimator of SNR; to caricature, it is 
> as if the whole field were satisfied with p-values of ~0.5 (a toy simulation 
> of this bias and spread is sketched just after this list)
> (3) as far as I can tell, ignoring the bias and variance of the FSC as an 
> estimator of SNR is _mostly OK_ when doing global resolution estimates, when 
> the estimated resolution is pretty high (large n) and when the FSC curve has 
> a steep falloff. That's a lot of hand-waving, which I think we should aim to 
> dispense with.
> (4) when doing local resolution estimation using small sub-volumes in low-res 
> parts of maps, I'm convinced the fixed thresholds are completely off.
> (5) I see no good reason to keep using fixed FSC thresholds, even for global 
> resolution estimates, but I still don't know whether Marin's 1/2-bit-based 
> FSC criterion is correct (if I had to bet, I'd say not). Aiming for 1/2-bit 
> information content per Fourier component may be the correct target, and 
> fixed thresholds are definitely not the way to go, but I am not convinced 
> that the 2005 proposal is the correct way forward
> (6) I propose a framework for deriving non-fixed FSC thresholds based on 
> desired SNR and confidence levels. Under some conditions, my proposed 
> thresholds behave similarly to Marin's 1/2-bit-based curve, which convinces 
> me further that Marin really is onto something.
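> 
> As a concrete toy illustration of points (1) and (2) -- this is not code from 
> the manuscript, just a minimal simulation under the usual additive Gaussian 
> noise assumptions -- one can draw two half-maps that share a common signal 
> and watch the per-shell FSC scatter around SNR/(SNR+1) as n shrinks:
> 
> import numpy as np
> 
> rng = np.random.default_rng(0)
> 
> def shell_fsc(n, snr, trials=2000):
>     """Sample the FSC of one shell of n independent Fourier components for
>     two half-maps sharing a signal of variance snr, each carrying
>     independent unit-variance complex Gaussian noise."""
>     out = np.empty(trials)
>     for t in range(trials):
>         s = np.sqrt(snr / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
>         e1 = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
>         e2 = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
>         f1, f2 = s + e1, s + e2
>         out[t] = np.sum(f1 * np.conj(f2)).real / np.sqrt(
>             np.sum(np.abs(f1) ** 2) * np.sum(np.abs(f2) ** 2))
>     return out
> 
> # For snr = 0.207, the large-n expectation is about 0.207/1.207 ~ 0.17.
> for n in (10, 100, 1000):
>     fsc = shell_fsc(n, snr=0.207)
>     print(f"n={n:4d}  mean FSC={fsc.mean():.3f}  sd={fsc.std():.3f}")
> 
> Roughly, at n = 1000 the mean sits near 0.17 with a small spread, while at 
> n = 10 the spread is large enough that a single FSC value tells you very 
> little about the true SNR -- which is exactly the regime where fixed 
> thresholds and the lack of a significance test hurt.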
> 
> To re-iterate: the choice of target SNR (or information content) is 
> independent of the choice of SNR estimator and of statistical testing 
> framework.
> 
> Hope this helps,
> Alexis
> 
> 
> 
> On Sat, Feb 22, 2020 at 2:06 AM Nave, Colin (DLSLtd,RAL,LSCI) 
> <colin.n...@diamond.ac.uk> wrote:
> Alexis
> 
> This is a very useful summary.
> 
>  
> 
> You say you were not convinced by Marin's derivation in 2005. Are you 
> convinced now and, if not, why?
> 
>  
> 
> My interest in this is that the FSC with a half-bit threshold is in danger of 
> being adopted elsewhere because it is becoming standard for protein structure 
> determination (by EM or MX). If it is used for these mature techniques, it 
> must be right!
> 
>  
> 
> It is the adoption of the ½-bit threshold I worry about. I gave a rather weak 
> example for MX, which consisted of partial occupancy of side chains, 
> substrates, etc. For X-ray imaging a wide range of contrasts can occur and, 
> if you want to see features with only a small contrast above their 
> surroundings, then I think the half-bit threshold would be inappropriate.
> 
>  
> 
> It would be good to see a clear message from the MX and EM communities as to 
> why an information content threshold of ½ a bit is generally appropriate for 
> these techniques and an acknowledgement that this threshold is 
> technique/problem dependent.
> 
>  
> 
> We might then progress from the bronze age to the iron age.
> 
>  
> 
> Regards
> 
> Colin
> 
>  
> 
>  
> 
>  
> 
> From: CCP4 bulletin board <CCP4BB@JISCMAIL.AC.UK> On Behalf Of Alexis Rohou
> Sent: 21 February 2020 16:35
> To: CCP4BB@JISCMAIL.AC.UK
> Subject: Re: [ccp4bb] [3dem] Which resolution?
> 
>  
> 
> Hi all,
> 
>  
> 
> For those bewildered by Marin's insistence that everyone's been messing up 
> their stats since the bronze age, I'd like to offer my understanding of the 
> situation. More details are in this thread from a few years ago on the exact 
> same topic: 
> 
> https://mail.ncmir.ucsd.edu/pipermail/3dem/2015-August/003939.html
> https://mail.ncmir.ucsd.edu/pipermail/3dem/2015-August/003944.html
>  
> 
> Notwithstanding notational problems (e.g. strict equations as opposed to 
> approximation symbols, or omission of symbols to denote estimation), I 
> believe Frank & Al-Ali and "descendant" papers (e.g. the appendix of 
> Rosenthal & Henderson 2003) are fine. The cross terms that Marin is agitated 
> about do indeed have an expectation value of 0.0 (in the ensemble, i.e. if 
> the experiment were performed an infinite number of times with different 
> realizations of the noise). I don't believe Pawel or Jose Maria or any of the 
> other authors really think the cross-terms are exactly orthogonal in any 
> single realization.
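> 
> (Concretely: writing the two half-map Fourier components in a shell as 
> F1 = S + N1 and F2 = S + N2, the FSC numerator is the real part of 
> sum F1 F2* = sum |S|^2 + sum S N2* + sum N1 S* + sum N1 N2*. The last three 
> sums have expectation zero, and their typical magnitude relative to 
> sum |S|^2 shrinks roughly as 1/sqrt(N), which is why treating them as zero 
> is safe only when N is large.)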
> 
>  
> 
> When N (the number of independent Fourier voxels in a shell) is large enough, 
> mean(Signal x Noise) ~ 0.0 is only an approximation, but a pretty good one, 
> even for a single FSC experiment. This is why, in my book, derivations that 
> depend on Frank & Al-Ali are OK, under the strict assumption that N is large. 
> Numerically, this becomes apparent when Marin's half-bit criterion is 
> plotted: asymptotically it has the same behavior as a constant threshold.
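> 
> (If I recall the 2005 paper correctly, the half-bit threshold curve is 
> T(n) = (0.2071 + 1.9102/sqrt(n)) / (1.2071 + 0.9102/sqrt(n)), which tends to 
> 0.2071/1.2071 ≈ 0.17 as n grows -- numerically close to the fixed thresholds 
> at high resolution, but rising sharply towards 1 when n is small.)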
> 
>  
> 
> So, is Marin wrong to worry about this? No, I don't think so. There are 
> indeed cases where the assumption of large N breaks down, and under those 
> circumstances any fixed threshold (0.143, 0.5, whatever) is dangerous. This 
> is illustrated in the figures of van Heel & Schatz (2005). Small boxes, high 
> symmetry, small objects in large boxes, and a number of other conditions can 
> all put you in that regime.
> 
>  
> 
> It would indeed be better to use a non-fixed threshold. So why am I not using 
> the 1/2-bit criterion in my own work? While numerically it behaves well over 
> most resolution ranges, I was not convinced by Marin's derivation in 2005. 
> Philosophically, though, I think he's right: we should aim for FSC thresholds 
> that are more robust to the kinds of edge cases mentioned above. It would be 
> the right thing to do.
> 
>  
> 
> Hope this helps,
> 
> Alexis 
> 
>  
> 
>  
> 
>  
> 
> On Sun, Feb 16, 2020 at 9:00 AM Penczek, Pawel A 
> <pawel.a.penc...@uth.tmc.edu> wrote:
> 
> Marin,
> 
>  
> 
> The statistics in the 2010 review are fine. You may disagree with the 
> assumptions, but I can assure you that the “statistics” (as you call it) are 
> fine. A careful reading of the paper would make this clear. 
> 
> Regards,
> 
> Pawel
> 
> 
> 
> 
> On Feb 16, 2020, at 10:38 AM, Marin van Heel 
> <marin.vanh...@googlemail.com> wrote:
> 
> Dear Pawel and All others ....
> 
> This 2010 review is - unfortunately - largely based on the flawed statistics 
> I mentioned before, namely on the a priori assumption that the inner product 
> of a signal vector and a noise vector is ZERO (an orthogonality assumption). 
> We have refuted the (Frank & Al-Ali 1975) paper on a number of occasions (for 
> example in 2005, and most recently in our bioRxiv paper), but you still take 
> it as the correct relation between SNR and FRC (and you never cite the 
> criticism...). 
> 
> Sorry
> 
> Marin
> 
>  
> 
> On Thu, Feb 13, 2020 at 10:42 AM Penczek, Pawel A 
> <pawel.a.penc...@uth.tmc.edu> wrote:
> 
> Dear Teige,
> 
>  
> 
> I am wondering whether you are familiar with
> 
>  
> Resolution measures in molecular electron microscopy.
> Penczek PA. Methods Enzymol. 2010;482:73-100. doi: 10.1016/S0076-6879(10)82003-8.
> 
> You will find there the answers to all the questions you asked, and much more. 
> 
>  
> 
> Regards,
> 
> Pawel Penczek
> 
> 
------
Randy J. Read
Department of Haematology, University of Cambridge
Cambridge Institute for Medical Research
The Keith Peters Building, Hills Road
Cambridge CB2 0XY, U.K.
Tel: +44 1223 336500    Fax: +44 1223 336827
E-mail: rj...@cam.ac.uk    Web: www-structmed.cimr.cam.ac.uk

