On 1 June 2012 03:22, Edward A. Berry <ber...@upstate.edu> wrote:
> Leo will probably answer better than I can, but I would say I/SigI counts
> only the present reflection, so eliminating noise by anisotropic
> truncation should improve it, raising the average I/SigI in the last
> shell.

We always include unmeasured reflections with I/sigma(I) = 0 in the
calculation of the mean I/sigma(I) (i.e. we divide the sum of
I/sigma(I) for the measured reflections by the predicted total number
of reflections, including unmeasured ones), since for an unmeasured
reflection I is (almost) completely unknown and therefore sigma(I) is
effectively infinite (or at least finite but large, since you do have
some idea of the range I must fall in).  A shell with <I/sigma(I)> = 2
at 50% completeness clearly doesn't carry the same information content
as one with the same <I/sigma(I)> at 100% completeness; therefore IMO
it's very misleading to quote <I/sigma(I)> including only the measured
reflections.  This also means we can use a single cut-off criterion
(we use mean I/sigma(I) > 1) and don't need a second arbitrary cut-off
criterion for completeness.  Like many others now, we don't use
Rmerge, Rpim etc. as criteria for estimating resolution; they're just
too unreliable - Rmerge is indeed dead and buried!

Actually a mean value of I/sigma(I) of 2 is highly statistically
significant, i.e. very unlikely to have arisen by chance variation,
and the significance threshold for the mean must be much closer to 1
than to 2.  Taking an average always increases the statistical
significance; therefore it's not valid to compare an _average_ value
of I/sigma(I) = 2 with a _single_ value of I/sigma(I) = 3 (taking 3
sigma as the threshold of statistical significance of an individual
measurement): that's a case of "comparing apples with pears".  In
other words, in the outer shell you would need a lot of highly
significant individual values >> 3 to attain an overall average of 2,
since the majority of individual values will be < 1.
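
To put rough numbers on that (a back-of-the-envelope sketch in plain
Python; the reflection count is invented, and the null hypothesis is
that every true intensity in the shell is zero, so each measured
I/sigma(I) behaves like a unit-variance random variable):

import math

n_refl = 1000        # hypothetical number of reflections in the shell
shell_mean = 2.0     # observed <I/sigma(I)> for the shell

# Under the null, the mean of n_refl unit-variance values has a
# standard error of 1/sqrt(n_refl), so the observed mean lies at:
z = shell_mean / (1.0 / math.sqrt(n_refl))
print(f"~{z:.0f} standard errors from zero")   # ~63: wildly significant

That is why the significance threshold for the shell mean sits far
below the 3-sigma threshold appropriate to a single measurement.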

> F/sigF is expected to be better than I/sigI because d(x^2) = 2x dx,
> so d(x^2)/x^2 = 2 dx/x, i.e. dI/I = 2 dF/F  (or approaches that in
> the limit . . .)

That depends on what you mean by 'better': every metric must be
compared with a criterion appropriate to that metric.  So if we are
comparing I/sigma(I) with a criterion value of 3, then we must compare
F/sigma(F) with a criterion value of 6 ('in the limit' of zero I), in
which case the test is no 'better' (in terms of information content)
with I than with F: they are entirely equivalent.  It's meaningless to
compare F/sigma(F) with the criterion value appropriate to
I/sigma(I): again, that's "comparing apples and pears"!
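
In case the factor of two isn't obvious, here's the first-order error
propagation spelt out (made-up numbers, plain Python):

# With I = F^2, first-order propagation gives sigma(I) ~= 2*F*sigma(F),
# so F/sigma(F) ~= 2 * I/sigma(I) for reasonably strong reflections.
F, sig_F = 10.0, 1.0
I = F ** 2                 # 100.0
sig_I = 2 * F * sig_F      # 20.0

print(I / sig_I)           # 5.0  -> I/sigma(I)
print(F / sig_F)           # 10.0 -> F/sigma(F), exactly twice as large

# So testing I/sigma(I) against 3 is the same test as F/sigma(F)
# against 6; testing F/sigma(F) against 3 is the apples-and-pears
# comparison.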

Cheers

-- Ian
