Re: [ccp4bb] number of cycles in refmac

2011-09-01 Thread James Holton

I noticed this kind of thing myself a long time ago, and wondered what
refmac was doing to make things worse, so I let it keep going. And
going and going. I was delighted to discover that although R and/or
Rfree could rise for up to hundreds of cycles, they almost invariably
turn around again, usually getting much better than the blip I saw
after a measly 2-3 cycles, leaving me to wonder why I was so greedy in
the first place. Then I got even more greedy and let it keep running.
There can be several hills and valleys when you do this, usually
corresponding to larger-scale wiggle motions of backbone and the like,
or even side chains like lysine flipping rotamers. It is perhaps
understandable that these don't happen right away. After all, you are
performing a search in a ~10,000-dimensional space. But eventually you
get to the point where refmac no longer moves any of the atoms and the
input and output files are essentially identical (rmsd < 0.002 Å or
so). This is what I call convergence, and this seems to be one of
those rare occasions where Ian and I are in total agreement!
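James's stopping rule (keep going until the model stops moving) can be sketched as a check between successive coordinate sets. A minimal sketch in Python; the `rmsd` and `converged` helpers are made up for illustration and are not part of any refinement package:

```python
import math

def rmsd(coords_a, coords_b):
    """RMS deviation (in Å) between two equal-length lists of (x, y, z) atoms."""
    assert len(coords_a) == len(coords_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

def converged(before, after, threshold=0.002):
    """The stopping rule above: the model has essentially stopped moving."""
    return rmsd(before, after) < threshold
```

In practice `before` and `after` would be the matched atoms of the input and output PDB files of one refinement run.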

After all, if you think about it, if you stop refinement just before R
and Rfree go on a significant upswing, how are you supposed to
interpret a significant upswing in your next round of refinement? Is
it due to the rebuild you just did? Or is it something far away from the
atoms you moved that was about to wiggle? The only way to really know
is to let the whole thing settle down first. Then you can be sure that
the subsequent change in R is due to something you just did, and not
something you did a hundred cycles ago.

Oh, and BTW, as far as I know, it is still a good idea to let refmac
itself run for only ~5 cycles and just keep re-cycling the input file.
In the old days, you wanted to do this because after the atoms moved a
bit you had to re-run PROTIN to re-determine the restraints. If you
didn't, you could get what I called water cannons where a
poorly-modeled water would fly unrestrained through the unit cell,
knocking side chains out of their density. Admittedly, this was a long
time ago, and I'm sure it has been fixed by now. Garib can correct me if
I'm wrong about that.
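The re-cycling workflow described above can be sketched as a small driver script. This is a sketch only, assuming CCP4's refmac5 is on the PATH with its standard XYZIN/HKLIN command-line syntax and ncyc keyword; the file names and the `recycle`/`run_refmac` helpers are invented for illustration:

```python
import subprocess

def refmac_keywords(ncyc=5):
    """Keyword input for one short run; 'ncyc' and 'end' are standard Refmac keywords."""
    return f"ncyc {ncyc}\nend\n"

def run_refmac(pdb_in, pdb_out, mtz_in, mtz_out, ncyc=5):
    """One short Refmac run; keywords are fed on stdin, CCP4-style."""
    subprocess.run(
        ["refmac5", "XYZIN", pdb_in, "XYZOUT", pdb_out,
         "HKLIN", mtz_in, "HKLOUT", mtz_out],
        input=refmac_keywords(ncyc), text=True, check=True)

def recycle(pdb, mtz, rounds=20, ncyc=5):
    """Feed each run's output model back in as the next run's input."""
    current = pdb
    for i in range(rounds):
        out_pdb = f"cycle_{i:03d}.pdb"
        run_refmac(current, out_pdb, mtz, f"cycle_{i:03d}.mtz", ncyc=ncyc)
        current = out_pdb
    return current
```

Combining this loop with a convergence test on the coordinates would automate the "let it settle down" strategy.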

Anyway, to answer the OP's question, I say let refmac, phenix.refine,
shelx or whatever refinement program you are using do its thing. You
wouldn't inject your protein until the column had a stable baseline,
would you?


-James Holton
MAD Scientist


On Aug 24, 2011, at 5:24 PM, Tim Gruene t...@shelx.uni-ac.gwdg.de wrote:


Dear all,

especially at the beginning of model building and/or at low resolution,
both Rfree and -LLfree as reported in the refmac logfile show a
minimum at some cycle before rising again.

I am certainly not the only one tempted to first run refmac with a large
number of refinement cycles, determine that minimum and rerun refmac
with ncyc set to that minimum.

Of course I want the resulting model and phases/map to be as close to
the what's in the crystal as possible in order to facilitate model building.

Is it therefore good practice to interrupt refmac wherever it finds a
minimum (if so, the minimum w.r.t. which number reported in the log-file)?

Thanks for everyone's opinion and experience,

Tim

--
Dr Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A



Re: [ccp4bb] number of cycles in refmac

2011-08-26 Thread Ian Tickle
Frank,

Point #1 - fair point;  the reason Rfree is popular, though, is because it
 is a *relative* metric, i.e. by now we have a sense of what good is.  So
 I predict an uphill fight for LLfree.


Why? I don't see any difference.  As you say Rfree is a relative metric so
your sense of what 'good' is relies on comparisons with other Rfrees (i.e.
it can only be 'better' or 'worse' not 'good' or 'bad'), but then the same
is true of LLfree (note that they both assume that exactly the same data
were used and that only the model has changed).  So when choosing between
alternative model parameterisations in order to minimise over-fitting we
compare their Rfrees and choose the lower one - same with LLfree, or we
compare the observed Rfree with the expected Rfree based on Rwork and the
obs/param ratio to check for problems with the model - same with LLfree.  In
fact you can do it better because the observations in LLfree are weighted in
exactly the same way as those in the target function.


 Point #2 would hold if we routinely let our refinements run to
 convergence;  seems common though to run 10 cycles or 50 cycles instead
 and draw conclusions from the behaviour of the metrics.  Are the conclusions
 really much different from the comparison-at-convergence you advocate?
 Which is in practice often less convenient.

 You might do 10 cycles for a quick optimisation of the coordinates, but
then I wouldn't place much faith in the R factors!  How can you draw any
conclusions from their behaviour: there's no way of predicting how they will
change in further cycles, the only way to find out is to do it.  I'm not
saying that you need to refine exhaustively on every run, that would be
silly since you don't need to know the correct value of the R factors for
every run; but certainly on the final run before PDB submission I would
regard stopping the refinement early based on Rfree as implied in Tim's
original posting as something akin to 'cheating'.

Cheers

-- Ian


Re: [ccp4bb] number of cycles in refmac

2011-08-26 Thread protein chemistry
Dear Dr Ian

from your argument I could not understand how many cycles to refine before
submitting the coordinates to the PDB. What is the upper limit: 100, a
thousand, or a million? According to my understanding, it's more logical to
stop the refinement when over-refinement is taking place (when R and Rfree
are going in opposite directions and the LLG is stabilized).

AR


Re: [ccp4bb] number of cycles in refmac

2011-08-26 Thread Robbie Joosten
Dear Protein Chemistry (?),

When R and R-free drift off you are probably refining with suboptimal weights. 
If anything, it proves you still have work to do. At convergence R and R-free 
do not really change anymore so neither does the difference. If you have 
already done a lot of rebuilding and refinement 20 cycles is usually enough 
(but more cycles shouldn't hurt).

Cheers,
Robbie


Re: [ccp4bb] number of cycles in refmac

2011-08-26 Thread Ian Tickle
Hi AR

Please define what you mean by 'over-refinement' as it's not a term I use:
does it mean 'convergence', or 'over-fitting', or 'over-optimisation'
(whatever that means) or something else?

If by 'LLG is stabilized' you mean it has converged, then I agree that's a
possible stopping criterion, but then so must all the refinement indicators,
including R and Rfree (& RMSDs etc.), since by definition at convergence there
can be no further significant changes in the parameters to cause R and Rfree
to change further.  You say R & Rfree are going in opposite directions
when LLG has stabilized.  It's not possible for R and Rfree to continue to
change if the refinement has converged: continued change clearly implies
that it hasn't converged.

Cheers

-- Ian

PS 3 copies of your email is 2 too many (or if this is the list server
acting up again, my apologies).



Re: [ccp4bb] number of cycles in refmac

2011-08-25 Thread Tim Gruene

Hello Ian,

I dare say that the goal is to get phases which match as well as
possible with what is inside the crystal. If this coincides with
maximising the likelihood, why don't we run refinement until the LL
stabilises?

@Garib: I have seen runs where Refmac actually does stop prematurely,
i.e. before the number of cycles set in the input script. It seems that
it stops when LL does not drop between two cycles, but according to Ian
this would be the point to reach.

My question: does this indeed provide the best interpretable map (be it
cheating or not)?

Cheers, Tim

On 08/24/2011 05:36 PM, Ian Tickle wrote:
 Hi Tim
 
 The answer is a definite NO, the goal of refinement is to maximise the
 likelihood from the working set and the restraints.  It is definitely not to
 optimise Rfree or LLfree.  The correct value of the latter is whatever you
 get at convergence of refinement, i.e. at maximum of the likelihood,
 anything else is 'cheating'.
 
 Cheers
 
 -- Ian
 


--
Dr Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A



Re: [ccp4bb] number of cycles in refmac

2011-08-25 Thread Garib N Murshudov
Hello Tim

It was in one or two versions, and I did not get consistent results. However,
the code is there and I can activate it if you want. If you know what criteria
you would like to use, I can code that also.

In some cases it happens that R/Rfree go up and then start coming down. It
may be the case when the starting model does not have very nice geometry.
We are minimising a total function, and there are several hidden parameters in
the total function (weights between X-ray and geometry etc.); they may cause
problems.



Cheers
Garib


Garib N Murshudov 
Structural Studies Division
MRC Laboratory of Molecular Biology
Hills Road 
Cambridge 
CB2 0QH UK
Email: ga...@mrc-lmb.cam.ac.uk 
Web http://www.mrc-lmb.cam.ac.uk





Re: [ccp4bb] number of cycles in refmac

2011-08-25 Thread Ian Tickle
Tim,

I dare say that the goal is to get phases which match as well as
 possible with what is inside the crystal. If this coincides with
 maximising the likelihood, why don't we run refinement until the LL
 stabilises?


That's exactly what you should do: any optimisation procedure attains the
correct solution only when changes in the parameters become 'insignificant'
i.e. the optimisation is deemed to have converged.  This corresponds to the
extremum in the target function being attained so that the gradients w.r.t.
the refined parameters become zero (within some margin of numerical
precision).

A very simple 1-parameter analogy is Hero's method to obtain the square root
of a number (named after the 1st century Greek mathematician Hero of
Alexandria though attributed to the Babylonians:
http://en.wikipedia.org/wiki/Babylonian_method).  This uses 'fixed-point
iteration' to optimise the initial estimate X of sqrt(A) (X can be any
positive number, the method has infinite radius of convergence): just
replace X by the new estimate (X+A/X)/2 and iterate to convergence.  So
let's say I want sqrt(2): I make an initial guess of 1, so (1+2/1)/2 = 1.5,
then (1.5+2/1.5)/2 = 1.4167 and keep iterating until the result doesn't
change: only then will you have the correct answer.
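The iteration Ian describes can be written out directly, with the same "run until it stops changing" convergence test:

```python
def hero_sqrt(a, x0=1.0, tol=1e-12):
    """Fixed-point iteration x <- (x + a/x)/2, run until it stops changing."""
    x = x0
    while True:
        x_new = (x + a / x) / 2.0
        if abs(x_new - x) < tol:  # converged: further cycles change nothing
            return x_new
        x = x_new
```

Starting from 1, the iterates for sqrt(2) are 1 → 1.5 → 1.4167 → ..., settling on 1.41421356... only at convergence.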

Since the target function in MX refinement is the total likelihood (working
set + restraints), there's no reason whatsoever why any other function,
such as Rfree or LLfree, should have an extremum at the same point in
parameter space as the target function.  Rfree in particular is problematic
because it is unweighted, so poorly measured reflections in the test set are
going to have a disproportionate influence on the result (e.g. see *Acta
Cryst.* (1970). A*26*, 162).

Cheers

-- Ian


Re: [ccp4bb] number of cycles in refmac

2011-08-25 Thread Frank von Delft
Hmm, I used to think I understood this, but I'm feeling a bit dim right 
now.


On 25/08/2011 11:07, Ian Tickle wrote:
Since the target function in MX refinement is the total likelihood
(working set + restraints), there's no reason whatsoever why any
other function, such as Rfree or LLfree, should have an extremum at
the same point in parameter space as the target function.
This is self-evident;  what is not obvious is why the target function 
should be having the final word.  Wasn't the word over-refinement 
introduced to describe exactly this:  that the target function was 
wrong?  Isn't this the purpose of cross-validation, to use an 
independent measure to judge when the refinement is /not/ producing the 
best model?



Rfree in particular is problematic because it is unweighted, so poorly
measured reflections in the test set are going to have a
disproportionate influence on the result (e.g. see /Acta Cryst./
(1970). A*26*, 162).
This may be true;  but as it is independent of refinement, is it not 
nevertheless the only measure I should trust?



Or maybe what you intended to say:  only trust refinements for which 
Rfree decreases monotonically, because only then do you have a valid 
choice of parameters.


phx.


Re: [ccp4bb] number of cycles in refmac

2011-08-25 Thread Ian Tickle
Hi Frank

 This is self-evident;  what is not obvious is why the target function
should be having the final word.  Wasn't the word over-refinement
introduced to describe exactly this:  that the target function was wrong?

I assumed people were confusing 'over-refinement' with 'over-fitting'; there
is no such thing as over-refinement (you won't find 'over-optimisation'
mentioned in any textbooks on optimisation!), since by definition refining
beyond convergence is a waste of time: it cannot result in any further
significant changes in the parameters.  Refinement is simply a method of
solving a set of equations, and clearly we want to solve those equations as
accurately as possible.  If we could obtain an exact solution without
iteration we would use that; the mechanics of refinement and the path that
the program by dint of numerical quirks happens to take to reach the
solution are irrelevant as far as the user is concerned.  Stopping the
refinement before convergence will not produce more accurate parameters, it
will just introduce random and unquantifiable errors.

 Isn't this the purpose of cross-validation, to use an independent measure
to judge when the refinement is *not* producing the best model?

If the value of your chosen X-validation metric at convergence indicates a
problem with the model, parameterisation, weighting etc then clearly the
target function is not indeed the final word: the solution is to fix
whatever is wrong and do the refinement again, until you get a satisfactory
value for your metric.


 This may be true;  but as it is independent of refinement, is it not
 nevertheless the only measure I should trust?

 No, there are several possible functions of the test set (e.g. Hamilton Rfree,
LLfree) that you could use, all potentially equally valid X-validation
metrics.  I would have more faith in a function such as LLfree in which the
contributions of the reflections are at least weighted according to their
reliability.  It just seems bizarre that important decisions are being based
on measurements that may have gross errors without taking those errors into
account.
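A toy numerical illustration of this point, with all numbers invented: a single badly measured test-set reflection dominates an unweighted R-style residual but is properly downweighted in a sigma-weighted, likelihood-style one.

```python
# |Fobs|, |Fcalc| and sigma(Fobs) for four hypothetical test-set
# reflections; the last one is poorly measured (huge sigma) and way off.
fobs  = [100.0, 80.0, 60.0, 40.0]
fcalc = [ 98.0, 81.0, 59.0, 90.0]
sigma = [  2.0,  2.0,  2.0, 50.0]

def r_factor(fo, fc):
    """Unweighted R = sum|Fo - Fc| / sum Fo: every reflection counts equally."""
    return sum(abs(o - c) for o, c in zip(fo, fc)) / sum(fo)

def chi2(fo, fc, sig):
    """Sigma-weighted residual, the kind of term a likelihood target uses."""
    return sum(((o - c) / s) ** 2 for o, c, s in zip(fo, fc, sig))

# The bad reflection supplies 50 of the 54 units of |Fo - Fc| misfit (~93%),
# but only 1.0 of the 2.5 units of the weighted residual (40%).
```

Here the unweighted R is essentially a statement about one unreliable measurement, whereas the weighted residual spreads the influence according to measurement precision.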


 Or maybe what you intended to say:  only trust refinements for which Rfree
 decreases monotonically, because only then do you have a valid choice of
 parameters.


No, as I indicated above, what Rfree does before convergence is attained is
totally meaningless, only the value obtained _at_ convergence is meaningful
as a X-validation statistic.  We wouldn't be having this discussion if the
refinement program omitted the meaningless intermediate values and only
printed out the final Rfree or LLfree.  I'm saying that Rfree is not the
best X-validation metric because poorly measured data are not properly
weighted: this is what the Acta paper I referenced is saying.

Cheers

-- Ian


Re: [ccp4bb] number of cycles in refmac

2011-08-25 Thread Frank von Delft

Hi Ian

(Yes, your technical point about semantics is correct, I meant 
over-fitting.)


To pin down your points, though, you're saying:
1) Don't use Rfree, instead look at LLfree or Hamilton Rfree.
2) Compare only the final values at convergence when choosing 
between different parametrizations (=models)


Point #1 - fair point;  the reason Rfree is popular, though, is because 
it is a /relative/ metric, i.e. by now we have a sense of what good 
is.  So I predict an uphill fight for LLfree.


Point #2 would hold if we routinely let our refinements run to 
convergence;  seems common though to run 10 cycles or 50 cycles 
instead and draw conclusions from the behaviour of the metrics.  Are the 
conclusions really much different from the comparison-at-convergence you 
advocate?  Which is in practice often less convenient.


Cheers
phx








Re: [ccp4bb] number of cycles in refmac

2011-08-25 Thread Pavel Afonine
 Point #2 would hold if we routinely let our refinements run to
 convergence;  seems common though to run 10 cycles or 50 cycles instead
 and draw conclusions from the behaviour of the metrics.  Are the conclusions
 really much different from the comparison-at-convergence you advocate?
 Which is in practice often less convenient.


Maybe run refinement many times with slightly different starting points and
consider looking at an ensemble? Convergence seems to have a vague meaning
when it comes to macromolecular crystallographic structure refinement (p.
75-76 here:
http://www.phenix-online.org/presentations/latest/pavel_refinement_general.pdf
)

Pavel
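A toy sketch of this multi-start idea, using gradient descent on a one-dimensional function with two minima as a stand-in for a real refinement program (everything here is invented for illustration):

```python
import random

def grad_descent(x0, steps=5000, lr=0.01):
    """Minimise the toy target f(x) = (x^2 - 1)^2, which has minima at x = ±1."""
    x = x0
    for _ in range(steps):
        x -= lr * 4.0 * x * (x * x - 1.0)  # f'(x) = 4x(x^2 - 1)
    return x

# 'Refine' from several jittered starting points and look at the ensemble.
random.seed(0)
starts = [0.5 + random.uniform(-0.6, 0.6) for _ in range(8)]
ensemble = [round(grad_descent(x0), 3) for x0 in starts]
# Each run converges to one of the minima; the spread of the ensemble
# shows how strongly the answer depends on the starting point.
```

With a rugged, high-dimensional target like a crystallographic likelihood, such an ensemble gives a feel for how well-determined the "converged" model actually is.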


Re: [ccp4bb] number of cycles in refmac

2011-08-24 Thread Garib N Murshudov
Dear Tim

At the moment there is no option to stop refmac prematurely. I can add one if it
is necessary. I can only give my experience.
After molecular replacement, before running ARP/wARP or buccaneer, I usually run
60-100 cycles of refinement with jelly body with sigma set to 0.01.
Then automatic model building works better. I have seen this behaviour in
several cases (if there is more than one copy I would also use local NCS
restraints).
Using jelly body slows down convergence, but the shifts seem to make more sense.
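For reference, this recipe corresponds to Refmac keyword input along these lines; the exact keyword spellings and defaults should be checked against the Refmac documentation:

```
ncyc 100
ridge distance sigma 0.01
ncsr local
end
```

Here `ncyc` sets the number of cycles, the ridge-restraint sigma of 0.01 switches on tight jelly-body restraints, and `ncsr local` requests local NCS restraints for multiple copies.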

regards
Garib


Garib N Murshudov 
Structural Studies Division
MRC Laboratory of Molecular Biology
Hills Road 
Cambridge 
CB2 0QH UK
Email: ga...@mrc-lmb.cam.ac.uk 
Web http://www.mrc-lmb.cam.ac.uk