On 05/23/12 08:06, Nicholas M Glykos wrote:
> Hi Ed,
> 
> 
>> I may be wrong here (and please by all means correct me), but I think
>> it's not entirely true that experimental errors are not used in modern
>> map calculation algorithms.  At the very least, the 2mFo-DFc maps are
>> calibrated to the model error (which can be ideologically seen as the
>> "error of experiment" if you include model inaccuracies into that).
> 
> This is an amplitude modification. It does not change the fact that the 
> sigmas are not being used in the inversion procedure [and also does not 
> change the (non) treatment of missing data]. A more direct and relevant 
> example to discuss (with respect to Francisco's question) would be the 
> calculation of a Patterson synthesis (where the phases are known and 
> fixed).
> 
> 
>> I have not done extensive (or any for that matter) testing, but my 
>> evidence-devoid gut feeling is that maps not using experimental errors 
>> (which in REFMAC can be done either via a GUI button or by excluding SIGFP 
>> from LABIN in a script) will for a practicing crystallographer be 
>> essentially indistinguishable from those that do.
> 
> It seems that although you are not doubting the importance of maximum 
> likelihood for refinement, you do seem to doubt the importance of closely 
> related probabilistic methods (such as maximum entropy methods) for map 
> calculation. I think you can't have it both ways ... :-)
> 
> 
> 
>> The reason for this is that "model errors" as estimated by various
>> maximum likelihood algorithms tend to exceed experimental errors.  It
>> may be that these estimates are inflated (a heretical thought, but when
>> you think about it, uniform inflation of SIGMA_wc may have only a
>> proportional impact on the log-likelihood, or even less when they
>> correlate with experimental errors).  Or it may be that the experimental
>> errors are underestimated (another heretical thought).
> 
> My experience from comparing conventional (FFT-based) and maximum-entropy- 
> related maps is that the main source of differences between the two maps 
> has more to do with missing data (especially low-resolution overloaded 
> reflections) and putative outliers (for difference Patterson maps), but in 
> certain cases (with very accurate or very inaccurate data) standard 
> deviations do matter.

   In a continuation of this torturous diversion from the original question...

   Since your concern is not how sigma(Fo) plays out in refinement but how
uncertainties are dealt with in the map calculation itself (where an FFT
calculates the most probable density values and maximum entropy would
calculate the "best", or centroid, density values), I believe the most
relevant measure of the uncertainty of the Fourier coefficients would be
sigma(2mFo-DFc).  This would be estimated from a complex calculation
involving sigma(sigmaA), sigma(Fo), sigma(Fc) and sigma(Phic).  I expect
sigma(Fo) would be one of the smaller contributors to that calculation, as
long as Fo is "observed", so I wouldn't expect the loss of sigma(Fo) to be
catastrophic.
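
   As a back-of-the-envelope illustration of that propagation (a purely
hypothetical sketch in Python; the functional form and every number below
are assumptions, with sig_m and sig_D standing in for the effect of
sigma(sigmaA) on m and D):

def sigma_map_coeff(m, D, Fo, Fc, sig_m, sig_D, sig_Fo, sig_Fc, sig_phic):
    """First-order propagation of uncertainty into a 2mFo-DFc coefficient.

    Amplitude and phase contributions are treated as independent; sig_m and
    sig_D are simplified stand-ins for the effect of sigma(sigmaA) on m and D.
    """
    amp = 2.0 * m * Fo - D * Fc
    # Variance of the amplitude from each input, assuming independence.
    var_amp = ((2.0 * Fo * sig_m) ** 2 +   # via sigma(sigmaA) -> sigma(m)
               (Fc * sig_D) ** 2 +         # via sigma(sigmaA) -> sigma(D)
               (2.0 * m * sig_Fo) ** 2 +   # experimental sigma(Fo)
               (D * sig_Fc) ** 2)          # model-amplitude uncertainty
    # A phase error rotates the complex coefficient; to first order it adds
    # a perpendicular component of magnitude |amp| * sigma(phi_c).
    var_phase = (amp * sig_phic) ** 2
    return (var_amp + var_phase) ** 0.5

# With these made-up numbers the sigma(Fo) term is one of the smaller ones.
print(sigma_map_coeff(m=0.9, D=0.85, Fo=100.0, Fc=95.0,
                      sig_m=0.05, sig_D=0.05, sig_Fo=3.0, sig_Fc=2.0,
                      sig_phic=0.3))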

   Wouldn't sigma(sigmaA) be the largest component, since sigmaA is a function
of resolution and is based only on the test set?
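
   For scale, a toy calculation (all counts are hypothetical) of how noisy a
per-bin sigmaA estimate could be when it is fit against the test set alone:

# Hypothetical numbers: a free set of ~2000 reflections split into 20 bins.
n_test_total = 2000
n_bins = 20
n_per_bin = n_test_total // n_bins            # ~100 test reflections per bin
# The statistical error of an estimate from n observations scales roughly
# as 1/sqrt(n).
relative_error = 1.0 / n_per_bin ** 0.5
print(f"~{100 * relative_error:.0f}% relative uncertainty per bin")   # ~10%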

Dale Tronrud


> 
> 
> All the best,
> Nicholas
> 
> 
