Dear All,

This is way over my head, but did Jaynes not say something about getting different results depending on whether you assume both outcomes are possible? I think his example was that if you observe the sun rise 100 days in a row, the probability that it will rise on day 101 is no greater than about 99% - *provided* you allow that there may be some days when the sun does not rise. In your case, could it make a difference if you need to consider the possibility that the synchrotron is powered down or the shutter is closed, so that there will never be any photons?

Yours,
Rasmus

On 18/10/2021 20:48, James Holton wrote:
HDF5 is still "framing", but with better compression than the "byte offset" scheme implemented in Pilatus CBFs, which needs a minimum of one byte per pixel. The latter is very fast, but not designed for near-blank images.

Assuming entropy-limited compression, the ultimate data rate is the number of photons/s hitting the detector multiplied by log2(Npix), where Npix is the number of pixels. It is log2() because that is the number of bits needed to store the address of the pixel that got the photon, and since the arrival of each photon is essentially random, further compression is generally not possible without loss of information. There might be some additional bits for the time interval, but it might be more efficient to store that implicitly in the framing. As long as storing "no photons" takes up only one bit, that would probably be more efficient.

So, for a 100 micron thick sample, flux = 1e12 photons/s and ~4000 pixels you get ~3.4 GB/s of perfectly and losslessly compressed data. Making it smaller than that requires throwing away information.
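A rough back-of-the-envelope check of that figure (a sketch, not James's actual arithmetic: the 4000 x 4000 pixel detector and omega = 0.5 are my assumptions, and the photons-per-pixel rule of thumb is the one quoted later in this thread):

from math import log2

flux = 1e12          # incident photons/s
thickness = 100.0    # sample thickness, microns
omega = 0.5          # fraction of scattered photons that hit the detector
npix = 4000 * 4000   # assumed number of detector pixels

# photons/s scattered onto the detector (1.2e-5 per micron rule of thumb)
rate = 1.2e-5 * flux * thickness * omega

bits_per_photon = log2(npix)            # address of the struck pixel
gbytes = rate * bits_per_photon / 8 / 1e9
print(f"{rate:.2g} photons/s x {bits_per_photon:.1f} bits = {gbytes:.1f} GB/s")

Depending on the assumed geometry and any extra per-photon timing bits, this lands within a factor of ~2 of the 3.4 GB/s quoted above.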

I'm starting to think this might be the best prior. If you start out assuming nothing (not even uniform), then the variance of 0 photons may well be infinite. However, it is perhaps safe to assume that the dataset as a whole has at least one photon in it. And then, if you happen to know the whole data set contains N photons and you have F images of Q pixels, then maybe a reasonable prior distribution is Poissonian with mean = variance = N/F/Q photons/pixel?
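As a sketch of what such a prior would do to a zero-photon pixel (my own illustration, with made-up dataset totals: matching mean = variance = N/F/Q with a conjugate gamma distribution, which a Poisson count updates in closed form):

# gamma(shape=m, rate=1) has mean m and variance m, mimicking the
# Poissonian mean = variance = N/F/Q prior suggested above.
# Conjugacy: observing k counts updates gamma(a, b) -> gamma(a + k, b + 1).
N, F, Q = 1_000_000, 3600, 16_000_000   # assumed dataset totals
m = N / F / Q                           # prior photons/pixel

k = 0                                   # the observed count
post_shape, post_rate = m + k, 1 + 1    # posterior gamma parameters
post_mean = post_shape / post_rate
post_var = post_shape / post_rate**2
print(f"prior {m:.2g} -> posterior {post_mean:.2g} +/- {post_var**0.5:.2g}")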

-James Holton
MAD Scientist

On 10/17/2021 11:30 PM, Frank von Delft wrote:
Thanks, I learnt two things now - one of them being that I'm credited with coining that word!  Stap me vittals...

If it's single photon events you're after, isn't it quantum statistics where you need to go find that prior?  (Or is that what you're doing in this thread - I wouldn't be able to tell.)

Also:  should the detectors change how they read out things, then? Just write out the events with timestamps, rather than dumping all pixels all the time into these arbitrary containers called "images". Or is that what's already happening in HDF5 (which I don't understand one bit, I should add).

Frank




On 17/10/2021 18:12, James Holton wrote:

Well Frank, I think it comes down to something I believe you were the first to call "dose slicing".

Like fine phi slicing, collecting a larger number of weaker images records the same photons, but with more information about the sample before it dies. In fine phi slicing the extra information allows you to do better background rejection, and in "dose slicing" the extra information is about radiation damage. We lose that information when we use longer exposures per image, and if you burn up the entire useful life of your crystal in one shot, then all information about how the spots decayed during the exposure is lost. Your data are also rather incomplete.

How much information is lost? Well, how much more disk space would be taken up, even after compression, if you collected only 1 photon per image?  And kept collecting all the way out to 30 MGy of dose? That's about 1 million photons (and therefore images) per cubic micron of crystal.  So, I'd say the amount of information lost is "quite a bit".

But what makes matters worse is that if you did collect this data set and preserved all information available from your crystal, you'd have no way to process it. This is not because it's impossible; it's just that we don't have the software. Your only choice would be to go find images with the same "phi" value and add them together until you have enough photons/pixel to index. Once you've got an indexing solution you can map every photon hit to a position in reciprocal space, as well as give it a time/dose stamp. What do you do with that?  You can do zero-dose extrapolation, of course!  Damage-free data! Wouldn't that be nice. Or can you?  The data you will have in hand for each reciprocal-space pixel might look something like: tic tic .. tic . tic ... tic tic........tic ........... tic............................tic.

So. Eight photons.  With time-of-arrival information.  How do you fit a straight line to that?  You could "bin" the data or do some kind of smoothing thing, but then you are losing information again. Perhaps also making ill-founded assumptions. You need error bars of some kind, and, better yet, the shape of the distribution implied by those error bars.

And all this makes me think somebody must have already done this - I'm willing to bet some time in the late 1700s or early 1800s. All we're really talking about here is extending maximum-likelihood estimation of an average value to maximum-likelihood estimation of a straight line. That is, slope and intercept, with sigmas on both. I suspect the proper approach is to first bring everything down to the exact information content of a single photon (or lack of a photon), and build up from there.  If you are lucky enough to have a large number of photons then linear regression will work, and you are back to Diederichs (2003). But when you're photon-starved the statistics of single photons become more and more important.  This led me to the question: is it k, or k+1?  When k=0, getting this wrong could introduce a factor of infinity.
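As a sketch of what that single-photon-level fit could look like, here is maximum likelihood for an inhomogeneous Poisson process with a linear rate lambda(t) = a + b*t, fitted directly to the photon arrival times (the eight made-up time stamps and the use of scipy's generic minimizer are my assumptions, not an established protocol):

import numpy as np
from scipy.optimize import minimize

# eight photon arrival times, as dose fractions of the full 0..1 window
t = np.array([0.02, 0.05, 0.11, 0.19, 0.30, 0.46, 0.71, 0.98])
T = 1.0

def neg_log_likelihood(params):
    a, b = params
    lam = a + b * t                       # rate at each photon arrival
    if a <= 0 or a + b * T <= 0 or np.any(lam <= 0):
        return np.inf                     # keep the rate positive on [0, T]
    # inhomogeneous Poisson: sum(log lam(t_i)) - integral of lam over [0, T]
    return -(np.sum(np.log(lam)) - (a * T + 0.5 * b * T**2))

fit = minimize(neg_log_likelihood, x0=[8.0, 0.0], method="Nelder-Mead")
a_hat, b_hat = fit.x
print(f"intercept {a_hat:.2f} photons/dose, slope {b_hat:.2f}")

Sigmas on the slope and intercept would then come from the curvature of this likelihood around its maximum - which is exactly where the k versus k+1 question bites when the counts are this low.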

So, perhaps the big "consequence of getting it wrong" is embarrassing myself by re-making a 200-year-old mistake I am not currently aware of. I am confident a solution exists, but I only recently started working on this.  So, I figured ... ask the world?

-James Holton
MAD Scientist


On 10/17/2021 1:51 AM, Frank Von Delft wrote:
James, I've been watching the thread with fascination, but also the confusion of wild ignorance. I've finally realised why.

What I've missed is: what exactly makes the question so important? I've understood what brought it up, of course, but not the consequence of getting it wrong.

Frank

Sent from tiny silly touch screen
------------------------------------------------------------------------
*From:* James Holton <jmhol...@lbl.gov>
*Sent:* Saturday, 16 October 2021 20:01
*To:* CCP4BB@JISCMAIL.AC.UK
*Subject:* Re: [ccp4bb] am I doing this right?

Thank you everyone for your thoughtful and thought-provoking responses!

But, I am starting to think I was not as clear as I could have been
about my question.  I am actually concerning myself with background, not
necessarily Bragg peaks.  With Bragg photons you want the sum, but for
background you want the average.

What I'm getting at is: how does one properly weight a zero-photon
observation when it comes time to combine it with others?  Hopefully
they are not all zero.  If they are, check your shutter.

So, ignoring Bragg photons for the moment (let us suppose it is a
systematic absence) what I am asking is: what is the variance, or,
better yet, what is the WEIGHT one should assign to the observation of
zero photons in a patch of 10x10 pixels?

In the absence of any prior knowledge this is a difficult question, but
a question we kind of need to answer if we want to properly measure data
from weak images.  So, what do we do?

Well, with the "I have no idea" uniform prior, it would seem that
expectation (Epix) and variance (Vpix) would be k+1 = 1 for each pixel,
and therefore the sum of Epix and Vpix over the 100 independent pixels is:

Epatch=Vpatch=100 photons

I know that seems weird to assume 100 photons should have hit when we
actually saw none, but consider what that zero-photon count, all by
itself, is really telling you:
a) Epix > 20? No way. That is "right out". Given we know it's Poisson
distributed, and that the background is flat, it is VERY unlikely you have E
that big when you saw zero. Cross all those E values off your list.
b) Epix=0 ? Well, that CAN be true, but other things are possible and
all of them are E>0. So, most likely E is not 0, but at least a little
bit higher.
c) Epix=1e-6 ?  Yeah, sure, why not?
d) Epix= -1e-6 ?  No. Don't be silly.
e) If I had to guess? Meh. 1 photon per pixel?  That would be k+1
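That intuition matches the textbook result: with a flat prior on the true rate, the posterior after observing k counts is a gamma distribution with shape k+1 and scale 1, whose mean and variance are both k+1. A quick numerical check (my sketch, using scipy):

from scipy.stats import gamma

# flat prior on the Poisson rate + k observed counts
# => posterior is gamma(shape=k+1, scale=1)
for k in (0, 1, 5):
    post = gamma(a=k + 1, scale=1.0)
    print(f"k={k}: posterior mean {post.mean():.1f}, variance {post.var():.1f}")

# each pixel of the blank 10x10 patch contributes E = V = 1,
# so the patch as a whole gets Epatch = Vpatch = 100, as above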

I suppose my objection to E=V=0 is because V=0 implies infinite
confidence in the value of E, and that we don't have. Yes, it is true
that we are quite confident in the fact that we did not see any photons
this time, but remember that E and V are the mean and variance that
you would see if you did a million experiments under the same
conditions. We are trying to guess those from what we've got. Just
because you've seen zero a hundred times doesn't mean the 101st
experiment won't give you a count.  If it does, then maybe Epatch=0.01
and Epix=0.0001?  But what do you do before you see your first photon?
All you can really do is bracket it.

But what if you come up with a better prior than "I have no idea" ?
Well, we do have other pixels on the detector, and presuming the
background is flat, or at least smooth, maybe the average counts/pixel
is a better prior?

So, let us consider an ideal detector with 1e6 independent pixels. Let
us further say that 1e5 background photons have hit that detector.  I
want to still ignore Bragg photons because those have a very different
prior distribution to the background.  Let us say we have masked off all
the Bragg areas.

The average overall background is then 0.1 photons/pixel. Let us assign
that to the prior probability Ppix = 0.1.  Now let us look again at that
patch of 10x10 pixels with zero counts on it.  We expected to see 10,
but got 0.  What are the odds of that?  Pretty remote: exp(-10), or about
1 in 22,000.

I suspect in this situation where such an unlikely event has occurred it
should perhaps be given a variance larger than 100. Perhaps quite a bit
larger?  Subsequent "sigma-weighted" summation would then squash its
contribution down to effectively 0. So, relative to any other
observation with even a shred of merit it would have no impact. Giving
it V=0, however? That can't be right.

But what if Ppix=0.01 ?  Then we expect to see zero counts on our
100-pixel patch about 1/3 of the time. Same for 1-photon observations.
Giving these two kinds of observations the same weight seems more
sensible, given the prior.
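Both numbers are easy to check against the Poisson probability mass function (a quick sketch, using scipy for convenience):

from scipy.stats import poisson

npix = 100
for ppix in (0.1, 0.01):
    mu = npix * ppix                 # expected photons in the patch
    print(f"Ppix={ppix}: P(0 photons)={poisson.pmf(0, mu):.2e}, "
          f"P(1 photon)={poisson.pmf(1, mu):.2e}")

# Ppix=0.1 : P(0) = 4.5e-05, about 1 in 22,000
# Ppix=0.01: P(0) = P(1) = 0.37, so blank and one-photon patches are
#            equally common, about 1/3 of the time each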

Another prior might be to take the flux and sample thickness into
account.  Given the cross section of light elements the expected
photons/pixel on most any detector would be:

Ppix = 1.2e-5*flux*exposure*thickness*omega/Npixels
where:
Ppix = expected photons/pixel
Npixels = number of pixels on the detector
omega  = fraction of scattered photons that hit it (about 0.5)
thickness = thickness of sample and loop in microns
exposure = exposure time in seconds
flux = incident beam flux in photons/s
1.2e-5 = 1e-4 cm/um * 1.2 g/cm^3 * 0.2 cm^2/g (cross section of oxygen)

If you don't know anything else about the sample, you can at least know
that.
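In code form, a direct transcription of the formula above (the example numbers in the call are mine):

def expected_photons_per_pixel(flux, exposure, thickness,
                               omega=0.5, npixels=16_000_000):
    # 1.2e-5 = 1e-4 cm/um * 1.2 g/cm^3 * 0.2 cm^2/g (oxygen cross section)
    # flux in photons/s, exposure in s, thickness in microns
    return 1.2e-5 * flux * exposure * thickness * omega / npixels

# e.g. 1e12 photons/s, 10 ms exposure, 100 um of sample + loop:
print(expected_photons_per_pixel(1e12, 0.01, 100))   # ~0.375 photons/pixel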

Or am I missing something?

-James Holton
MAD Scientist


On 10/16/2021 12:47 AM, Kay Diederichs wrote:
> Dear Gergely,
>
> with "10 x 10 patch of pixels", I believe James means that he observes 100 neighbouring pixels, each with 0 counts. Thus the frequentist view can be taken, and results in 0 as the variance, right?
>
> best,
> Kay
>
>
> On Fri, 15 Oct 2021 21:07:26 +0000, Gergely Katona <gergely.kat...@gu.se> wrote:
>
>> Dear James,
>>
>> Uniform distribution sounds like “I have no idea”, but a uniform distribution does not go from -inf to +inf. If I believe that every count from 0 to 65535 has the same probability, then I also expect counts with an average of 32768 on the image. That is not an objective belief in the end, and probably not a very good idea for an X-ray experiment if the number of observations is small. Concerning which variance is the right one, the frequentist view requires frequencies to be observed. In the absence of frequencies, there is no error estimate. Bayesians at least can determine a single distribution as an answer without observations, and that will be their prior belief about the variance. Again, I would avoid a uniform a priori distribution for the variance. For a Poisson distribution the convenient conjugate prior is the gamma distribution. It can control the magnitude of k and the strength of belief with its shape and rate parameters, respectively.
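To make the gamma-prior suggestion concrete, a minimal sketch of the gamma-Poisson conjugate update (the parameter values are my own illustration): the prior's two parameters set the initial guess and how strongly it is held, and counting data update both in closed form.

from scipy.stats import gamma

# gamma(shape=a0, rate=b0) prior on the Poisson mean per pixel:
# prior mean is a0/b0; b0 acts like a number of pseudo-observations.
a0, b0 = 0.1, 1.0            # weak belief in ~0.1 photons/pixel
k, n = 0, 100                # observed: zero photons over 100 pixels

a1, b1 = a0 + k, b0 + n      # conjugate update
post = gamma(a=a1, scale=1.0 / b1)
print(f"posterior mean {post.mean():.4f} +/- {post.std():.4f} photons/pixel")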
>>
>> Best wishes,
>>
>> Gergely
>>
>> Gergely Katona, Professor, Chairman of the Chemistry Program Council
>> Department of Chemistry and Molecular Biology, University of Gothenburg
>> Box 462, 40530 Göteborg, Sweden
>> Tel: +46-31-786-3959 / M: +46-70-912-3309 / Fax: +46-31-786-3910
>> Web: http://katonalab.eu, Email: gergely.kat...@gu.se
>>
>> From: CCP4 bulletin board <CCP4BB@JISCMAIL.AC.UK> On Behalf Of James Holton
>> Sent: 15 October, 2021 18:06
>> To: CCP4BB@JISCMAIL.AC.UK
>> Subject: Re: [ccp4bb] am I doing this right?
>>
>> Well I'll be...
>>
>> Kay Diederichs pointed out to me off-list that the k+1 expectation and variance from observing k photons is in "Bayesian Reasoning in Data Analysis: A Critical Introduction" by Giulio D'Agostini.  Granted, that is with a uniform prior, which I take as the Bayesian equivalent of "I have no idea".
>>
>> So, if I'm looking to integrate a 10 x 10 patch of pixels on a weak detector image, and I find that area has zero counts, what variance shall I put on that observation?  Is it:
>>
>> a) zero
>> b) 1.0
>> c) 100
>>
>> Wish I could say there are no wrong answers, but I think at least two of those are incorrect,
>>
>> -James Holton
>> MAD Scientist
>> On 10/13/2021 2:34 PM, Filipe Maia wrote:
>> I forgot to add probably the most important thing. James is correct: the expected value of u, the true mean, given a single observation k is indeed k+1, and k+1 is also the mean squared error of using k+1 as the estimator of the true mean.
>>
>> Cheers,
>> Filipe
>>
>> On Wed, 13 Oct 2021 at 23:17, Filipe Maia <fil...@xray.bmc.uu.se> wrote:
>> Hi,
>>
>> The maximum likelihood estimator for a Poisson-distributed variable is equal to the mean of the observations. In the case of a single observation, it will be equal to that observation. As Graeme suggested, you can calculate the probability mass function for a given observation with different Poisson parameters (i.e. true means) and see that the function peaks when the parameter matches the observation.
>>
>> The root mean squared error of the estimate of the true mean from a single observation k seems to be sqrt(k+2). Or, to put it another way, the mean squared error - that is, the expected value of (k-u)**2 for an observation k and a true mean u - is equal to k+2.
>>
>> You can see some example calculations at https://colab.research.google.com/drive/1eoaNrDqaPnP-4FTGiNZxMllP7SFHkQuS?usp=sharing
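Filipe's k+1 and sqrt(k+2) figures are quick to reproduce numerically, along the same general lines as his notebook (a rough sketch, assuming a flat prior over a wide range of true means):

import numpy as np

rng = np.random.default_rng(0)
true_means = rng.uniform(0, 50, size=5_000_000)  # flat prior on the true mean
counts = rng.poisson(true_means)

for k in (0, 1, 5):
    mu = true_means[counts == k]          # true means that produced k counts
    rmse = np.mean((k - mu) ** 2) ** 0.5  # error of using k as the estimate
    print(f"k={k}: E[u|k]={mu.mean():.2f} (k+1={k + 1}), "
          f"RMSE={rmse:.2f} (sqrt(k+2)={np.sqrt(k + 2):.2f})")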
>>
>> Cheers,
>> Filipe
>>
>> On Wed, 13 Oct 2021 at 17:14, Winter, Graeme (DLSLtd,RAL,LSCI) <00006a19cead4548-dmarc-requ...@jiscmail.ac.uk> wrote:
>> This rang a bell with me last night, and I think you can derive this from first principles
>>
>> If you assume an observation of N counts, you can calculate the probability of such an observation for a given Poisson rate constant X. If you then integrate over all possible values of X to work out the central value of the rate constant most likely to result in an observation of N, I think you get X = N+1
>>
>> I think it is the kind of calculation you can perform on a napkin, if memory serves
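For the record, the napkin version, with a flat prior on the rate X, is (in LaTeX):

E[X \mid N]
  = \frac{\int_0^\infty X \,\frac{X^N e^{-X}}{N!}\, dX}
         {\int_0^\infty \frac{X^N e^{-X}}{N!}\, dX}
  = \frac{(N+1)!}{N!}
  = N + 1

using \int_0^\infty X^N e^{-X}\, dX = N!.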
>>
>> All the best Graeme
>>
>>
>> On 13 Oct 2021, at 16:10, Andrew Leslie - MRC LMB <and...@mrc-lmb.cam.ac.uk> wrote:
>>
>> Hi Ian, James,
>>
>>                       I have a strong feeling that I have seen this result before, and that it was due to Andy Hammersley at ESRF. I've done a literature search and there is a paper relating to errors in the analysis of counting statistics (see below), but I had a quick look at it and could not find the (N+1) correction, so it must have been somewhere else. I have cc'd Andy on this email (hoping that this email address from 2016 still works) and maybe he can throw more light on this. What I remember from the time I saw this was the simplicity of the correction.
>>
>> Cheers,
>>
>> Andrew
>>
>> Reducing bias in the analysis of counting statistics data
>> Hammersley, A.P. and Antoniadis, A.
>> Nuclear Instruments & Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, Vol. 394, Issues 1-2, pp. 219-224, 11 July 1997. DOI: 10.1016/S0168-9002(97)00668-2
>>
>>
>> On 12 Oct 2021, at 18:55, Ian Tickle <ianj...@gmail.com> wrote:
>>
>>
>> Hi James
>>
>> What the Poisson distribution tells you is that if the true count is N then the expectation and variance are also N.  That's not the same thing as saying that for an observed count N the expectation and variance are N.  Consider all those cases where the observed count is exactly zero.  That can arise from any number of true counts, though as you noted larger values become increasingly unlikely.  However those true counts are all >= 0 which means that the mean and variance of those true counts must be positive and non-zero.  From your results they are both 1 though I haven't been through the algebra to prove it.
>>
>> So what you are saying seems correct: for N observed counts we should be taking the best estimate of the true value and variance as N+1.  For reasonably large N the difference is small but if you are concerned with weak images it might start to become significant.
>>
>> Cheers
>>
>> -- Ian
>>
>>
>> On Tue, 12 Oct 2021 at 17:56, James Holton <jmhol...@lbl.gov> wrote:
>> All my life I have believed that if you're counting photons then the
>> error of observing N counts is sqrt(N). However, a calculation I just
>> performed suggests it's actually sqrt(N+1).
>>
>> My purpose here is to understand the weak-image limit of data
>> processing. Question is: for a given pixel, if one photon is all you
>> got, what do you "know"?
>>
>> I simulated millions of 1-second experiments. For each I used a "true"
>> beam intensity (Itrue) between 0.001 and 20 photons/s. That is, for
>> Itrue= 0.001 the average over a very long exposure would be 1 photon
>> every 1000 seconds or so. For a 1-second exposure the observed count (N)
>> is almost always zero. About 1 in 1000 of them will see one photon, and
>> roughly 1 in a million will get N=2. I do 10,000 such experiments and
>> put the results into a pile.  I then repeat with Itrue=0.002,
>> Itrue=0.003, etc. All the way up to Itrue = 20. At Itrue > 20 I never
>> see N=1, not even in 1e7 experiments. With Itrue=0, I also see no N=1
>> events.
>> Now I go through my pile of results and extract those with N=1, and
>> count up the number of times a given Itrue produced such an event. The
>> histogram of Itrue values in this subset is itself Poisson-like (in fact
>> a gamma distribution), but with mean = 2! If I similarly count up events
>> where 2 and only 2 photons were seen, the mean Itrue is 3. And if I look
>> at only zero-count events, the mean and standard deviation are both unity.
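That numerical experiment is quick to reproduce (a condensed sketch of the procedure described above, on a coarser Itrue grid to keep it fast; numpy and the grid spacing are my choices):

import numpy as np

rng = np.random.default_rng(1)

# sweep Itrue over 0.01..20 photons/s, 1000 one-second experiments each
itrue = np.repeat(np.arange(0.01, 20.01, 0.01), 1000)
counts = rng.poisson(itrue)

for n in (0, 1, 2):
    sel = itrue[counts == n]              # Itrue values that produced N=n
    print(f"N={n}: mean Itrue={sel.mean():.2f}, sd={sel.std():.2f}")

# N=0 -> mean ~1, sd ~1;  N=1 -> mean ~2;  N=2 -> mean ~3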
>>
>> Does that mean the error of observing N counts is really sqrt(N+1) ?
>>
>> I admit that this little exercise assumes that the distribution of Itrue
>> is uniform between 0.001 and 20, but given that one photon has been
>> observed Itrue values outside this range are highly unlikely. The
>> Itrue=0.001 and N=1 events are only a tiny fraction of the whole.  So, I
>> would say that even if the prior distribution is not uniform, it is
>> certainly bracketed. Now, Itrue=0 is possible if the shutter didn't
>> open, but if the rest of the detector pixels have N=~1, doesn't this
>> affect the prior distribution of Itrue on our pixel of interest?
>>
>> Of course, two or more photons are better than one, but these days with
>> small crystals and big detectors N=1 is no longer a trivial situation.
>> I look forward to hearing your take on this. And no, this is not a trick.
>>
>> -James Holton
>> MAD Scientist
>>


--
Rasmus H. Fogh                                   Tel.: +44 (0)1223 353033
Global Phasing Ltd.,                             Fax.: +44 (0)1223 366889
Sheraton House,
Castle Park,
Cambridge CB3 0AX
United Kingdom
