Well, I tend to think Ian is probably right that doing things the "proper" way (vs. French-Wilson) will not make much of a difference in the end.
Nevertheless, I don't think refining against the (possibly negative) intensities is a good solution to dealing with negative intensities; that just ignores the problem, and will end up overweighting large negative intensities. Wouldn't it be better to correct the negative intensities with French-Wilson and then refine against that?

On Jun 20, 2013, at 3:38 PM, Kay Diederichs <kay.diederi...@uni-konstanz.de> wrote:

> Douglas,
>
> as soon as you come up with an algorithm that gives accurate, unbiased intensity estimates together with their standard deviations, everybody will be happy. But I'm not aware of progress in this question (Poisson signal with background) in the last decades - I'd be glad to be proven wrong!
>
> Kay
>
> On 20.06.13 21:27, Douglas Theobald wrote:
>> Kay, I understand the French-Wilson way of currently doing things, as you outline below. My point is that it is not optimal (we could do things better), since even French-Wilson accepts the idea of negative intensity measurements. I am trying to dispel the (very stubborn) view that when the background is more than the spot, the only possible estimate of the intensity is a negative value. This is untrue, and unjustified by the physics involved. In principle, there is no reason to use French-Wilson, as we should never have reported a negative integrated intensity to begin with.
>>
>> I also understand that (Iobs-Icalc)^2 is not the actual refinement target, but the same point applies, and the actual target is based on a fundamental Gaussian assumption for the Is.
>>
>> On Jun 20, 2013, at 2:13 PM, Kay Diederichs <kay.diederi...@uni-konstanz.de> wrote:
>>
>>> Douglas,
>>>
>>> the intensity is negative if the integrated spot has a lower intensity than the estimate of the background under the spot. So yes, we are not _measuring_ negative intensities; rather, we are estimating intensities, and that estimate may turn out to be negative. In a later step we try to "correct" for this, because it is non-physical, as you say. At that point, the "proper statistical model" comes into play. Essentially we use it as a "prior". In order of increasing information, we can have more or less informative priors for weak reflections:
>>> 1) I > 0
>>> 2) I has a distribution looking like the right half of a Gaussian, and we estimate its width from the variance of the intensities in a resolution shell
>>> 3) I follows a Wilson distribution, and we estimate its parameters from the data in a resolution shell
>>> 4) I must be related to Fcalc^2 (i.e. once the structure is solved, we re-integrate using the Fcalc as a prior)
>>> For a given experiment, the problem is chicken-and-egg, in the sense that only if you know the characteristics of the data can you choose the correct prior.
>>> I guess that using prior 4) would be heavily frowned upon because there is a danger of model bias. You could say: a Bayesian analysis done properly should not suffer from model bias. This is probably true, but the theory needed to back up the word "properly" is not available at the moment.
>>> Crystallographers usually use prior 3) which, as I tried to point out, also has its weak spots, namely if the data do not behave like those of an ideal crystal - and today's projects often result in data that would have been discarded ten years ago, so they are far from ideal.
>>> Prior 2) is available as an option in XDSCONV.
>>> Prior 1) seems to be used, or is available, in ctruncate in certain cases (I don't know the details).
>>>
>>> Using intensities instead of amplitudes in refinement would avoid having to choose a prior, and refinement would therefore not be compromised in case of data violating the assumptions underlying the prior.
>>>
>>> By the way, it is not (Iobs-Icalc)^2 that would be optimized in refinement against intensities, but rather the corresponding maximum-likelihood formula (which I seem to remember is more complicated than the amplitude ML formula, or is not an analytical formula at all - but maybe somebody knows better).
>>>
>>> best,
>>>
>>> Kay
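For concreteness, here is a minimal sketch (Python with numpy/scipy; the function name and all numbers are illustrative) of what prior 3) amounts to for an acentric reflection. Assuming an exponential Wilson prior p(J) = exp(-J/Sigma)/Sigma for the true intensity J >= 0 and a Gaussian measurement model Iobs ~ N(J, sigma^2), the posterior is a Gaussian truncated at zero, and its mean is always positive. This is the textbook French & Wilson (1978) acentric result, not the actual code of ctruncate or XDSCONV:

import numpy as np
from scipy.stats import norm

def posterior_intensity_acentric(i_obs, sigma, big_sigma):
    """Posterior mean and sd of the true intensity J, given an observed
    (possibly negative) intensity i_obs with standard deviation sigma,
    under the acentric Wilson prior with mean big_sigma."""
    # Completing the square: the posterior is N(mu, sigma^2) truncated
    # to J >= 0, the prior pulling the mean down by sigma^2/big_sigma.
    mu = i_obs - sigma**2 / big_sigma
    t = mu / sigma
    # Truncation ("hazard") term; a real implementation would switch to
    # an asymptotic form for very negative t, where norm.cdf underflows.
    lam = norm.pdf(t) / norm.cdf(t)
    j_mean = mu + sigma * lam                      # always > 0
    j_sd = sigma * np.sqrt(1.0 - lam * (lam + t))  # truncated-normal sd
    return j_mean, j_sd

print(posterior_intensity_acentric(i_obs=-5.0, sigma=10.0, big_sigma=50.0))
# -> roughly (5.9, 4.9)

Even a clearly negative observation comes back as a small positive estimate with a reduced standard deviation, which is exactly the "correction" being debated in this thread.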
>>> On Thu, 20 Jun 2013 13:14:28 -0400, Douglas Theobald <dtheob...@brandeis.edu> wrote:
>>>
>>>> I still don't see how you get a negative intensity from that. It seems you are saying that in many cases of a low-intensity reflection, the integrated spot will be lower than the background. That is not equivalent to having a negative measurement (as the measurement is actually positive, and sometimes things are randomly less positive than background). If you are using a proper statistical model, after background correction you will end up with a positive (or zero) value for the integrated intensity.
>>>>
>>>> On Jun 20, 2013, at 1:08 PM, Andrew Leslie <and...@mrc-lmb.cam.ac.uk> wrote:
>>>>
>>>>> The integration programs report a negative intensity simply because that is the observation.
>>>>>
>>>>> Because of noise in the X-ray background, in a large sample of intensity estimates for reflections whose true intensity is very, very small, one will inevitably get some measurements that are negative. These must not be rejected, because rejection leads to bias (measurement errors go both ways, so discarding the negative estimates keeps only those symmetry mates that randomly came out too large rather than too small). It is not unusual for the intensity to remain negative even after averaging symmetry mates.
>>>>>
>>>>> Andrew
>>>>>
>>>>> On 20 Jun 2013, at 11:49, Douglas Theobald <dtheob...@brandeis.edu> wrote:
>>>>>
>>>>>> Seems to me that the negative Is should be dealt with early on, in the integration step. Why exactly do integration programs report negative Is to begin with?
>>>>>>
>>>>>> On Jun 20, 2013, at 12:45 PM, Dom Bellini <dom.bell...@diamond.ac.uk> wrote:
>>>>>>
>>>>>>> Wouldn't it be possible to take advantage of negative Is to extrapolate/estimate the decay of the scattering background (a kind of Wilson plot of background scattering), to flatten out the background and push all the Is to positive values?
>>>>>>>
>>>>>>> More of a question than a suggestion ...
>>>>>>>
>>>>>>> D
>>>>>>>
>>>>>>> From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Ian Tickle
>>>>>>> Sent: 20 June 2013 17:34
>>>>>>> To: ccp4bb
>>>>>>> Subject: Re: [ccp4bb] ctruncate bug?
>>>>>>>
>>>>>>> Yes, higher R factors are the usual reason people don't like I-based refinement!
>>>>>>>
>>>>>>> Anyway, refining against Is doesn't solve the problem, it only postpones it: you still need the Fs for maps! (though errors in Fs may be less critical then).
>>>>>>>
>>>>>>> -- Ian
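Andrew's point is easy to verify with a small simulation, assuming independent Poisson counts for the spot box and for the background estimate (all numbers are made up). For a truly weak reflection, spot minus background comes out negative almost half the time; the average over repeated measurements (or symmetry mates) is unbiased only if those negative estimates are kept, and discarding them inflates the mean badly:

import numpy as np

rng = np.random.default_rng(0)
true_i, background, n = 2.0, 100.0, 100_000  # photons; signal << background

spot = rng.poisson(true_i + background, n)   # counts in the spot box
bg = rng.poisson(background, n)              # background estimate for the box
i_est = spot - bg.astype(float)              # integrated intensity estimate

print(f"fraction negative:        {(i_est < 0).mean():.2f}")         # ~0.44
print(f"mean keeping negatives:   {i_est.mean():.2f}")               # ~2, unbiased
print(f"mean rejecting negatives: {i_est[i_est >= 0].mean():.2f}")   # ~12, badly biased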
>>>>>>> On 20 June 2013 17:20, Dale Tronrud <det...@uoxray.uoregon.edu> wrote:
>>>>>>> If you are refining against F's you have to find some way to avoid calculating the square root of a negative number. That is why people have historically rejected negative I's, and why Truncate and cTruncate were invented.
>>>>>>>
>>>>>>> When refining against I, the calculation of (Iobs - Icalc)^2 couldn't care less if Iobs happens to be negative.
>>>>>>>
>>>>>>> As for why people still refine against F... When I was distributing a refinement package, it could refine against I, but no one wanted to do that. The "R values" ended up higher, but they were looking at R values calculated from F's. Of course the F-based R values are lower when you refine against F's; that means nothing.
>>>>>>>
>>>>>>> If we could get the PDB to report both the F-based and I-based R values for all models, maybe we could get a start toward moving to intensity refinement.
>>>>>>>
>>>>>>> Dale Tronrud
>>>>>>>
>>>>>>> On 06/20/2013 09:06 AM, Douglas Theobald wrote:
>>>>>>> Just trying to understand the basic issues here. How could refining directly against intensities solve the fundamental problem of negative intensity values?
>>>>>>>
>>>>>>> On Jun 20, 2013, at 11:34 AM, Bernhard Rupp <hofkristall...@gmail.com> wrote:
>>>>>>> As a maybe better alternative, we should (once again) consider refining against intensities (and I guess George Sheldrick would agree here).
>>>>>>>
>>>>>>> I have a simple question - what exactly, short of some sort of historic inertia (or memory lapse), is the reason NOT to refine against intensities?
>>>>>>>
>>>>>>> Best, BR
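Dale's square-root problem fits in a few lines (made-up numbers): the intensity residual (Iobs - Icalc)^2 is perfectly well defined for a negative Iobs, whereas the amplitude route first needs Fobs = sqrt(Iobs), which simply does not exist for a negative observation:

import numpy as np

i_obs = np.array([120.0, 3.5, -4.2])   # one weak reflection came out negative
i_calc = np.array([115.0, 2.0, 1.0])   # model intensities

# Intensity-based residual: well defined even for the negative Iobs.
print("sum (Iobs - Icalc)^2 =", np.sum((i_obs - i_calc) ** 2))

# Amplitude-based refinement needs Fobs = sqrt(Iobs) first; numpy yields
# nan for the negative entry, i.e. the observation is lost unless it is
# "corrected" (e.g. by French-Wilson) beforehand.
with np.errstate(invalid="ignore"):
    f_obs = np.sqrt(i_obs)
print("Fobs =", f_obs)

So refining against I sidesteps the square root entirely; as Ian notes above, the Fs are then needed only for map calculation, where errors in weak reflections may matter less.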