James, 

I am saying that my answer to "what is the expectation and variance if I 
observe a 10x10 patch of pixels with zero counts?" is Iobs = 0.01 and 
sigIobs = 0.01 (and Iobs = sigIobs = 1 if there is only one pixel), IF the 
uniform prior applies. I agree with Gergely and others that this prior (with 
its high expectation value and variance) appears unrealistic.

In your posting of Sat, 16 Oct 2021 12:00:30 -0700 you calculate a Ppix that 
looks to me like a more suitable expectation value for a prior. A suitable 
prior might then be 1/Ppix * e^(-l/Ppix) (D'Agostini §7.7.1). The Bayesian 
argument is, if I understand it correctly, that the prior plays a minor role 
if you do repeated measurements of the same quantity, because you use the 
posterior of the first measurement as the prior for the second, and so on. 
This means that your Ppix must play the role of a scale factor if you 
consider the 100-pixel experiment. For the 1-pixel experiment, however, 
having a more suitable prior should be more important.
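
A minimal sketch of that update, assuming the exponential prior above (sympy 
keeps Ppix symbolic, since I am not plugging in your actual number):

    # posterior for the rate l after n pixels with zero counts, starting
    # from the exponential prior (1/Ppix)*e^(-l/Ppix)
    import sympy as sp

    l, n, Ppix = sp.symbols('l n Ppix', positive=True)
    prior = sp.exp(-l / Ppix) / Ppix          # 1/Ppix * e^(-l/Ppix)
    like = sp.exp(-n * l)                     # Poisson(k=0|l) for n pixels
    post = prior * like / sp.integrate(prior * like, (l, 0, sp.oo))
    mean = sp.simplify(sp.integrate(l * post, (l, 0, sp.oo)))
    print(mean)                               # Ppix/(Ppix*n + 1) = 1/(n + 1/Ppix)

For large n this expectation tends to 1/n, so the prior scale drops out - the 
scale-factor behaviour I mean; for n=1 it is Ppix/(Ppix+1), so the prior 
dominates.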

best,
Kay




On Mon, 18 Oct 2021 12:40:45 -0700, James Holton <jmhol...@lbl.gov> wrote:

>Thank you very much for this Kay!
>
>So, to summarize, you are saying the answer to my question "what is the
>expectation and variance if I observe a 10x10 patch of pixels with zero
>counts?" is:
>Iobs = 0.01
>sigIobs = 0.01     (defining sigIobs = sqrt(variance(Iobs)))
>
>And for the one-pixel case:
>Iobs = 1
>sigIobs = 1
>
>but in both cases the distribution is NOT Gaussian, but rather
>exponential. And that means adding variances may not be the way to
>propagate error.
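>
>(A quick check that this is indeed exponential rather than Gaussian - a
>sketch using scipy, assuming the posterior f(l)=n*e^(-n*l) derived below:)
>
>    # an exponential posterior has mean = sd and is strongly skewed;
>    # a Gaussian with the same mean and sd would put ~16% of its mass
>    # below zero, which is impossible for a count rate
>    from scipy import stats
>
>    n = 100                                 # 10x10 patch, all zeros
>    post = stats.expon(scale=1.0/n)         # f(l) = n*exp(-n*l)
>    print(post.mean(), post.std())          # both 0.01
>    print(stats.norm(post.mean(), post.std()).cdf(0.0))  # ~0.16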
>
>Is that right?
>
>-James Holton
>MAD Scientist
>
>
>
>On 10/18/2021 7:00 AM, Kay Diederichs wrote:
>> Hi James,
>>
>> I'm a bit behind ...
>>
>> My answer to the basic question you ask ("a patch of 100 pixels, each with 
>> zero counts - what is the variance?") is the following:
>>
>> 1) we all know the Poisson PDF (Probability Distribution Function) P(k|l) = 
>> l^k*e^(-l)/k! (where k stands for an integer >=0 and l is lambda), which 
>> tells us the probability of observing k counts if we know l. The PDF is 
>> normalized: SUM_over_k=0..infinity P(k|l) = 1.
>> 2) you don't know before the experiment what l is, and you assume it is some 
>> number x with 0<=x<=xmax (the xmax limit can be calculated by looking at the 
>> physics of the experiment; it is finite and less than the overload value of 
>> the pixel, otherwise you should do a different experiment). Since you don't 
>> know that number, all the x values are equally likely - you use a uniform 
>> prior.
>> 3) what is the PDF P(l|k) of l if we observe k counts?  That can be found 
>> with Bayes' theorem, and it turns out that (due to the uniform prior) the 
>> right-hand side of the formula looks the same as in 1): P(l|k) = 
>> l^k*e^(-l)/k! (again, the ! stands for the factorial; it is not an 
>> exclamation mark). This is eqs. 7.42 and 7.43 in D'Agostini, "Bayesian 
>> Reasoning in Data Analysis".
>> 3a) side note: if we calculate the expectation value of l by multiplying 
>> with l and integrating over l from 0 to infinity, we obtain E(l|k) = k+1, 
>> and similarly the variance is k+1 (D'Agostini eqs. 7.45 and 7.46)
>> 4) for k=0 (zero counts observed in a single pixel), this reduces to 
>> P(l|0)=e^(-l) (this is basic math; see also §7.4.1 of D'Agostini).
>> 5) since we have n=100 independent pixels, we must multiply the individual 
>> PDFs to get the overall PDF f, and normalize to make the integral over that 
>> PDF equal to 1: the result is f(l|all n pixels are 0)=n*e^(-n*l) (basic 
>> math). A more Bayesian procedure would be to realize that the posterior PDF 
>> P(l|0)=e^(-l) of the first pixel should be used as the prior for the second 
>> pixel, and so forth up to the 100th pixel. This gives the same result 
>> f(l|all n pixels are 0)=n*e^(-n*l) (D'Agostini §7.7.2)!
>> 6) the expectation value INTEGRAL_0^infinity l*n*e^(-n*l) dl is 1/n. This 
>> is 1 if n=1, as we know from 3a), and 1/100 for 100 pixels with 0 counts.
>> 7) the variance is then INTEGRAL_0^infinity (l-1/n)^2*n*e^(-n*l) dl, which 
>> is 1/n^2 (both integrals are checked numerically right after this list).
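>>
>> A minimal numerical check of 6) and 7) (scipy's quad for the two 
>> integrals; nothing here beyond the formulas above):
>>
>>    # posterior f(l) = n*exp(-n*l) under the uniform prior; the
>>    # expectation should come out as 1/n and the variance as 1/n^2
>>    import numpy as np
>>    from scipy.integrate import quad
>>
>>    for n in (1, 100):
>>        f = lambda l, n=n: n * np.exp(-n * l)        # posterior PDF
>>        mean = quad(lambda l: l * f(l), 0, np.inf)[0]
>>        var = quad(lambda l: (l - mean) ** 2 * f(l), 0, np.inf)[0]
>>        print(n, mean, var)   # n=1: 1.0, 1.0 ; n=100: 0.01, 0.0001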
>>
>> I find these results quite satisfactory. Please note that they deviate from 
>> the MLE result: expectation value=0, variance=0. The problem appears to be 
>> that a Maximum Likelihood Estimator may give wrong results for small n - 
>> something I have read a couple of times but which appears not to be 
>> universally known/taught. Clearly, the results in 6) and 7) converge 
>> towards 0 for large n, as they should.
>> What this also means is that one should really work out the PDF instead of 
>> just adding expectation values and variances (and arriving at 100 if all 
>> 100 pixels have zero counts), because it is contradictory to give each 
>> pixel its own uniform prior if, on the other hand, all the pixels agree 
>> perfectly in being 0!
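>>
>> In numbers, the contrast is (a sketch; only the results from above):
>>
>>    # naive route: every zero-count pixel gets its own uniform prior,
>>    # so E=1 and var=1 per pixel; adding over the patch gives 100.
>>    # joint route: one common l for the whole patch, E=1/n, var=1/n^2
>>    n = 100
>>    print(n * 1, n * 1)            # naive: expectation 100, variance 100
>>    print(1.0 / n, 1.0 / n ** 2)   # joint: 0.01, 0.0001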
>>
>> What this means for zero-dose extrapolation I have not thought about. At 
>> least it prevents infinite weights!
>>
>> Best,
>> Kay
>>
>>
>>
>>
>