Re: [ccp4bb] Coot in i2

2019-06-05 Thread Huw Jenkins
> On 5 Jun 2019, at 20:40, Paul Emsley  wrote:
> 
> Ctrl-S will do Quick save as, saving your state files and any unsaved models 
> with file-name increments.  That should put your mind to rest about unsaved 
> changes.  Also, investigate the coot-backup directory - coot should be saving 
> models there when you make modifications. 

Please note that this only applies if you have turned off the CCP4i2 option 
'Delete interactive jobs (such as Coot) that have no output files'; otherwise 
the unsaved changes will be lost.

Best wishes,


Huw




Re: [ccp4bb] Coot in i2

2019-06-05 Thread Paul Emsley

On 05/06/2019 20:31, Jonathan Cooper wrote:
> For the past few weeks I have noticed that when you start Coot from
> the i2 gui it works fine for up to an hour or so but then it hangs
> completely and any unsaved changes go missing. It seems more stable if
> I start it outside the i2 gui. Is this just me or my linux box?
>

I don't know the answer to your question - it sounds like it might be
filling up a buffer in ccp4i2. I haven't used --ccp4-mode for more than
5 minutes or so. I have some related information:

Ctrl-S will do Quick save as, saving your state files and any unsaved
models with file-name increments.  That should put your mind to rest
about unsaved changes.  Also, investigate the coot-backup directory -
coot should be saving models there when you make modifications. 
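
For anyone digging through those backups after a crash, a minimal sketch of
how you might list and recover them (this assumes the default ./coot-backup
directory of timestamped, gzipped PDB snapshots relative to the working
directory - adjust the path and pattern if your installation differs):

import glob
import gzip
import os

# Coot normally drops a gzipped PDB snapshot into ./coot-backup whenever a
# model is modified.  Sort newest-first so the latest unsaved state is on top.
backup_dir = "coot-backup"
backups = sorted(glob.glob(os.path.join(backup_dir, "*.pdb*")),
                 key=os.path.getmtime, reverse=True)

for path in backups[:10]:
    print(path)

# Decompress the most recent snapshot for inspection, if it is gzipped.
if backups and backups[0].endswith(".gz"):
    with gzip.open(backups[0], "rt") as fh:
        with open("recovered_model.pdb", "w") as out:
            out.write(fh.read())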


Paul.







[ccp4bb] Coot in i2

2019-06-05 Thread Jonathan Cooper
For the past few weeks I have noticed that when you start Coot from the i2 gui 
it works fine for up to an hour or so but then it hangs completely and any 
unsaved changes go missing. It seems more stable if I start it outside the i2 
gui. Is this just me or my linux box?




[ccp4bb] PhD Position in Grenoble, France

2019-06-05 Thread Carlo Petosa
Dear  all,


A PhD position is available in my group at the Institute for Structural Biology 
in Grenoble.


We are looking for a highly motivated student to work on a chromatin-binding 
complex as a potential target for developing new antifungal drugs. The project 
involves protein expression and purification, biochemical and biophysical 
assays, and structure determination by cryo-EM and crystallography.


Candidates with a background in chemistry or life sciences and a strong 
interest in structural biology are encouraged to apply.


More details and application instructions are here: http://ow.ly/Cs6I50uwxDo

Lab web site: http://ow.ly/rSWb50uwxNJ

Feel free to contact me for informal inquiries.

Best regards,

Carlo

--

Carlo Petosa

Institut de Biologie Structurale
71 Avenue des Martyrs, CS 10090
38044 Grenoble Cedex 9 France
Tel. +33 (0)4 57 42 86 17

email: carlo.pet...@ibs.fr






Re: [ccp4bb] (EXTERNAL) Re: [ccp4bb] Does ncs bias R-free? And if so, can it be avoided by special selection of the free set?

2019-06-05 Thread Edward A. Berry

On 06/05/2019 10:07 AM, Randy Read wrote:

> Dear Ian,
> 
> I think the missing ingredient in your argument is an assumption that may be 
> implicit in what others have written: if you have NCS in your crystal, you 
> should be restraining that NCS in your model.  If you do that, then the 
> NCS-related Fcalcs will be similar (especially in the particularly problematic 
> case where the NCS is nearly crystallographic), and if the working reflections 
> are over-fit to match the Fobs values, then the free reflections that are 
> related by the same NCS will also be overfit.  So the measurement errors don't 
> have to be correlated, just the modelling errors.



Randy,
"overfit" is a rather vague term, at least for me. I would prefer to consider 
definite quantities, like reduction in |Fo-Fc| of a free reflection as a result of 
refining against a quasi-sym-related working reflection (quasi because in cases of real 
ncs the operator does not directly elate reflections).
If (as Ian is assuming) the errors in Fobs are random, and IFF that implies 
that (Fo-Fc) are uncorrelated, then it wouldn't matter that the changes in Fc 
are correlated:

Say the model is pretty close but error in Fobs makes Fobs greater than Fc for the 
working reflection. Refining against the working reflections will tend to change the 
structure in a way that inappropriately increases Fcalc for the working reflection to 
more closely match the erroneously high Fobs ("fitting the noise"). And if we 
are constraining symmetry, it will equally increase Fcalc for the sym-related free 
reflection. But will this increase or decrease Rfree?

If Fobs for the free reflection is too low due to the random error, Fo-Fc for 
it will be negative, and increasing Fc will make that difference larger in 
magnitude, _increasing_ Rfree.

To get ncs bias, you need BOTH ncs-correlation in the dFc's from a step of 
refinement and in the (Fo-Fc) values. If either of these fails, there is no 
explanation for NCS-bias. (And since no counter-examples have been brought 
forward, and the results of Jonathan's experiments complement those of mine 
nicely, there doesn't seem to be any such phenomenon to be explained!) There 
seems to be no evidence for ncs bias, at least when ncs is not restrained, 
which is what Dirk Kostrewa was maintaining.
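
To put rough numbers on that sign argument, a toy numerical sketch (pure
simulation, nothing from a real refinement program; the step size alpha, the
noise level and the shared model-error term below are made-up illustrative
values):

import numpy as np

rng = np.random.default_rng(0)
n_pairs = 100_000   # NCS-related reflection pairs: one working, one free mate
alpha = 0.3         # size of the "refinement" step of Fc towards Fobs(work)
sigma = 1.0         # sd of the independent measurement error in each Fobs

def free_residual_before_after(shared_model_error):
    # shared_model_error is the NCS-symmetric part of Fo-Fc (identical for both
    # mates); the measurement errors n_work, n_free are independent.
    s = shared_model_error
    n_work = rng.normal(0.0, sigma, n_pairs)
    n_free = rng.normal(0.0, sigma, n_pairs)
    d_work = s + n_work                 # Fo-Fc for the working reflection
    d_free = s + n_free                 # Fo-Fc for its NCS-related free mate
    # The model obeys the NCS, so the shift applied to Fc(work) is applied to
    # Fc(free) as well: both residuals change by -alpha * d_work.
    d_free_new = d_free - alpha * d_work
    return np.mean(np.abs(d_free)), np.mean(np.abs(d_free_new))

# Independent measurement errors only: mean |Fo-Fc| of the free mates goes UP
# slightly, i.e. no downward bias in Rfree.
print(free_residual_before_after(shared_model_error=0.0))
# Fo-Fc dominated by a shared (NCS-symmetric) model error: the free residual
# drops along with the working one, i.e. Rfree is pulled down too.
print(free_residual_before_after(shared_model_error=3.0))

The free-set residual only drops when both ingredients above are present, which
is the point being made.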


> Best wishes,
> 
> Randy


> On 5 Jun 2019, at 13:58, Ian Tickle <ianj...@gmail.com> wrote:
> 
> Hi Jon
> 
> Sorry I didn't intend for my response to be interpreted as saying that anyone 
> has suggested directly that the measurement errors of NCS-related reflection 
> amplitudes are correlated.  In fact the opposite is almost certainly true since 
> the only obvious way in practice that errors in Fobs could be correlated is via 
> errors in the batch scale factors which would introduce correlations between 
> errors in Fobs for reflections in the same or adjacent images, but that has 
> nothing to do with NCS.  That's the 'elephant in the room': no-one has 
> suggested that reflections on the same or adjacent images should not be split 
> between the working and test sets, yet that's easily the biggest contributor to 
> CV bias with or without NCS!  I think taking that effect into account would be 
> much more productive than worrying about NCS, but performing the test-set 
> sampling in shells can't possibly address that, since the images obviously cut 
> across all shells.
> 
> The point I was making was that correlation of errors in NCS-related Fobs would 
> appear to be the inevitable _implication_ of what certainly has been claimed, 
> namely that NCS can introduce bias into CV statistics if the test-set sampling 
> is not done correctly, i.e. by splitting NCS-related Fobs between the working 
> and test sets.  Unless there's something I've missed, that's the only possible 
> explanation for that claim.  This is because overfitting results from fitting 
> the model to the errors in Fobs, and the CV bias arises from correlation of 
> those errors if the NCS-related Fobs are split up, thus causing the degree of 
> overfitting to be underestimated and giving a too-rosy picture of the structure 
> quality.  Indeed you seem to be saying that because the NCS-related Fobs are 
> correlated (a patently true statement), then it follows that the errors in 
> those Fobs are also correlated, or at least no more correlated than for 
> non-NCS-related Fobs, but I just don't see how that can be true.
> 
> Rfree is not unbiased: as a measure of the agreement it is biased upwards by 
> overfitting (otherwise how could it be used to detect overfitting?), by failing 
> to fit with the uncorrelated errors in the test-set Fobs, just as Rwork is 
> biased downwards by fitting to the errors in the working-set Fobs.  Overfitting 
> becomes immediately apparent whenever you perform any refinement, so the only 
> point at which there is no overfitting is for the initial model when Rwork and 
> Rfree are equal, apart from a small difference arising from random sampling of 
> the test-set (that sampling error could be reduced by performing refinements 
> with all 20 working/test set combinations and averaging the R values).

[ccp4bb] Symposium on Structure Biology for Drug Discovery@SwissFEL

2019-06-05 Thread Schertler Gebhard (PSI)
Dear colleagues

I would like to invite you to an interesting event focusing on XFEL and Cryo-EM 
science. We have an amazing line-up of speakers. Also, Switzerland is always 
worth a visit!

I am excited to announce the final agenda for the International Symposium on 
Structure Biology for Drug Discovery@SwissFEL taking place from June 25th-27th 
in Villigen, Switzerland!

Check out the program at http://indico.psi.ch/e/ISDD
Online registration is open until 9 Jun 2019.

Kind regards
Gebhard

Prof. Gebhard F.X. Schertler
Structural Biology ETH Zürich D-BIOL

Head of Biology and Chemistry Division
Paul Scherrer Institut
Laboratory of Biomolecular Research (LBR)
OFLC 109
CH-5232 Villigen PSI
gebhard.schert...@psi.ch
phone +41 56 310 4265








Re: [ccp4bb] Does ncs bias R-free? And if so, can it be avoided by special selection of the free set?

2019-06-05 Thread Randy Read
Dear Ian,

I think the missing ingredient in your argument is an assumption that may be 
implicit in what others have written: if you have NCS in your crystal, you 
should be restraining that NCS in your model.  If you do that, then the 
NCS-related Fcalcs will be similar (especially in the particularly problematic 
case where the NCS is nearly crystallographic), and if the working reflections 
are over-fit to match the Fobs values, then the free reflections that are 
related by the same NCS will also be overfit.  So the measurement errors don't 
have to be correlated, just the modelling errors.

Best wishes,

Randy

> On 5 Jun 2019, at 13:58, Ian Tickle  wrote:
> 
> 
> Hi Jon
> 
> Sorry I didn't intend for my response to be interpreted as saying that anyone 
> has suggested directly that the measurement errors of NCS-related reflection 
> amplitudes are correlated.  In fact the opposite is almost certainly true 
> since the only obvious way in practice that errors in Fobs could be 
> correlated is via errors in the batch scale factors which would introduce 
> correlations between errors in Fobs for reflections in the same or adjacent 
> images, but that has nothing to do with NCS.  That's the 'elephant in the 
> room': no-one has suggested that reflections on the same or adjacent images 
> should not be split between the working and test sets, yet that's easily the 
> biggest contributor to CV bias with or without NCS!  I think taking that 
> effect into account would be much more productive than worrying about NCS, 
> but performing the test-set sampling in shells can't possibly address that, 
> since the images obviously cut across all shells.
> 
> The point I was making was that correlation of errors in NCS-related Fobs 
> would appear to be the inevitable _implication_ of what certainly has been 
> claimed, namely that NCS can introduce bias into CV statistics if the 
> test-set sampling is not done correctly, i.e. by splitting NCS-related Fobs 
> between the working and test sets.  Unless there's something I've missed, 
> that's the only possible explanation for that claim.  This is because 
> overfitting results from fitting the model to the errors in Fobs, and the CV 
> bias arises from correlation of those errors if the NCS-related Fobs are 
> split up, thus causing the degree of overfitting to be underestimated and 
> giving a too-rosy picture of the structure quality.  Indeed you seem to be 
> saying that because the NCS-related Fobs are correlated (a patently true 
> statement), then it follows that the errors in those Fobs are also 
> correlated, or at least no more correlated than for non-NCS-related Fobs, but 
> I just don't see how that can be true.
> 
> Rfree is not unbiased: as a measure of the agreement it is biased upwards by 
> overfitting (otherwise how could it be used to detect overfitting?), by 
> failing to fit with the uncorrelated errors in the test-set Fobs, just as 
> Rwork is biased downwards by fitting to the errors in the working-set Fobs.  
> Overfitting becomes immediately apparent whenever you perform any refinement, 
> so the only point at which there is no overfitting is for the initial model 
> when Rwork and Rfree are equal, apart from a small difference arising from 
> random sampling of the test-set (that sampling error could be reduced by 
> performing refinements with all 20 working/test sets combinations and 
> averaging the R values).  From there on the 'gap' between Rwork and Rfree is 
> a measure of the degree of overfitting, so we should really be taking some 
> average of Rwork and Rfree as the true measure of agreement (though the 
> biases are not exactly equal and opposite so it's not a simple arithmetic 
> mean).  The goal of choosing the appropriate refinement parameters, 
> restraints and weights is to _minimise_ overfitting, not eliminate it.  It is 
> not possible to eliminate it completely: if it were then Rwork and Rfree 
> would become equal (apart from that small effect from random sampling).
> 
> I don't follow your argument about correlation of Fobs from NCS.  
> Overfitting, and therefore CV bias, arises from the _errors_ in the Fobs not 
> from the Fobs themselves, and there's no reason to believe that the Fobs 
> should be correlated with their errors.  You say "any correlation between the 
> test-set and the working-set F's due to NCS would be expected to reduce 
> R-free".  If the working and test sets are correlated by NCS that would mean 
> that Rwork is correlated with Rfree so they would be reduced equally!  There 
> are two components of the Fobs - Fcalc difference: Fcalc - Ftrue (the model 
> error) and Fobs - Ftrue (the data error).  The former is completely 
> correlated between the working and test sets (obviously since it's the same 
> model) so what you do to one you must do to the other.  The latter can only 
> be correlated by NCS if NCS has an effect on errors in the Fobs, which it 
> doesn't, or by some other effect such as errors in batch scales that are 
> unrelated to NCS.

[ccp4bb] Reminder: CCP4/BCA Protein Crystallography Summer School at York, 1-7 September 2019

2019-06-05 Thread Johan Turkenburg
Just a reminder that the deadline for the summer school (14 June) is fast
approaching.


This year the University of York will be hosting the CCP4/BCA Protein
Crystallography Summer School, which will take place 1-7 September and is
generously supported by CCP4, BCA, Alpha Biotech, Bruker, Hampton Research,
MiTeGen, Molecular Dimensions and Rigaku. (This school has previously been
held biennially in St Andrews.) Next year's will be in Norwich, organised
by David Lawson and Clare Stevenson.



The course aims to cover the theoretical and practical aspects of protein
crystallography from protein expression and purification, through crystal
growth to data collection on in-house and synchrotron sources, phasing
methods, automated model building and phase extension, refinement, and
validation. Hands-on sessions in data processing (XIA2, DIALS), molecular
replacement, MAD/SAD phasing, refinement and electron density map
interpretation (in COOT) will be included.



The School is intended for postgraduates or postdocs new to
crystallography. The week is very intensive, and gives a concentrated
overview of the methods which many find useful.



As in the past, priority is given to UK applicants because of the funding
arrangements, but the school is open to a limited number of overseas applicants.



For further details and the application form please see
https://synergy.st-andrews.ac.uk/proteincrystallography/



Tracey Gloster (University of St Andrews)

Johan Turkenburg (University of York)

-- 
Dr. Johan P. Turkenburg X-ray facilities manager
York Structural Biology Laboratory
University of York
York YO10 5DD   UK  Phone (+) 44 1904 328251
http://orcid.org/-0001-6992-6838
EMAIL DISCLAIMER http://www.york.ac.uk/docs/disclaimer/email.htm






Re: [ccp4bb] Does ncs bias R-free? And if so, can it be avoided by special selection of the free set?

2019-06-05 Thread Ian Tickle
Hi Jon

Sorry I didn't intend for my response to be interpreted as saying that
anyone has suggested directly that the measurement errors of NCS-related
reflection amplitudes are correlated.  In fact the opposite is almost
certainly true since the only obvious way in practice that errors in Fobs
could be correlated is via errors in the batch scale factors which would
introduce correlations between errors in Fobs for reflections in the same
or adjacent images, but that has nothing to do with NCS.  That's the
'elephant in the room': no-one has suggested that reflections on the same
or adjacent images should not be split between the working and test sets,
yet that's easily the biggest contributor to CV bias with or without NCS!
I think taking that effect into account would be much more productive than
worrying about NCS, but performing the test-set sampling in shells can't
possibly address that, since the images obviously cut across all shells.

The point I was making was that correlation of errors in NCS-related Fobs
would appear to be the inevitable _implication_ of what certainly has been
claimed, namely that NCS can introduce bias into CV statistics if the
test-set sampling is not done correctly, i.e. by splitting NCS-related Fobs
between the working and test sets.  Unless there's something I've missed, that's
the only possible explanation for that claim.  This is because overfitting
results from fitting the model to the errors in Fobs, and the CV bias
arises from correlation of those errors if the NCS-related Fobs are split
up, thus causing the degree of overfitting to be underestimated and giving
a too-rosy picture of the structure quality.  Indeed you seem to be saying
that because the NCS-related Fobs are correlated (a patently true
statement), then it follows that the errors in those Fobs are also
correlated, or at least no more correlated than for non-NCS-related Fobs,
but I just don't see how that can be true.

Rfree is not unbiased: as a measure of the agreement it is biased upwards
by overfitting (otherwise how could it be used to detect overfitting?), by
failing to fit with the uncorrelated errors in the test-set Fobs, just as
Rwork is biased downwards by fitting to the errors in the working-set
Fobs.  Overfitting becomes immediately apparent whenever you perform any
refinement, so the only point at which there is no overfitting is for the
initial model when Rwork and Rfree are equal, apart from a small difference
arising from random sampling of the test-set (that sampling error could be
reduced by performing refinements with all 20 working/test sets
combinations and averaging the R values).  From there on the 'gap' between
Rwork and Rfree is a measure of the degree of overfitting, so we should
really be taking some average of Rwork and Rfree as the true measure of
agreement (though the biases are not exactly equal and opposite so it's not
a simple arithmetic mean).  The goal of choosing the appropriate refinement
parameters, restraints and weights is to _minimise_ overfitting, not
eliminate it.  It is not possible to eliminate it completely: if it were
then Rwork and Rfree would become equal (apart from that small effect from
random sampling).
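
For reference, the two statistics being compared here are, with W the working
set and T the test set,

R_{\mathrm{work}} = \frac{\sum_{hkl \in W} \bigl| |F_{\mathrm{obs}}| - |F_{\mathrm{calc}}| \bigr|}{\sum_{hkl \in W} |F_{\mathrm{obs}}|},
\qquad
R_{\mathrm{free}} = \frac{\sum_{hkl \in T} \bigl| |F_{\mathrm{obs}}| - |F_{\mathrm{calc}}| \bigr|}{\sum_{hkl \in T} |F_{\mathrm{obs}}|},

so the 'gap' discussed above is simply R_free - R_work.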

I don't follow your argument about correlation of Fobs from NCS.
Overfitting, and therefore CV bias, arises from the _errors_ in the Fobs
not from the Fobs themselves, and there's no reason to believe that the
Fobs should be correlated with their errors.  You say "any correlation
between the test-set and the working-set F's due to NCS would be expected
to reduce R-free".  If the working and test sets are correlated by NCS that
would mean that Rwork is correlated with Rfree so they would be reduced
equally!  There are two components of the Fobs - Fcalc difference: Fcalc -
Ftrue (the model error) and Fobs - Ftrue (the data error).  The former is
completely correlated between the working and test sets (obviously since
it's the same model) so what you do to one you must do to the other.  The
latter can only be correlated by NCS if NCS has an effect on errors in the
Fobs, which it doesn't, or by some other effect such as errors in batch
scales that are unrelated to NCS.
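
In symbols, that decomposition is just

F_{\mathrm{obs}} - F_{\mathrm{calc}} = (F_{\mathrm{obs}} - F_{\mathrm{true}}) - (F_{\mathrm{calc}} - F_{\mathrm{true}}),

where the first bracket is the data error (uncorrelated across NCS mates) and
the second is the model error (fully correlated between working and test
reflections, because it comes from the same model).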

Overfitting is related to the data/parameter ratio so you don't observe the
effects of overfitting until you change the model, the parameter set or the
restraints.  If there were no errors there would be no overfitting and no
CV bias (actually there would be no need for cross-validation!).

Of course as you say, your tests suggest that there is no CV bias from NCS,
in which case there's absolutely nothing to explain!

Cheers

-- Ian


On Tue, 4 Jun 2019 at 21:33, Jonathan Cooper <
0c2488af9525-dmarc-requ...@jiscmail.ac.uk> wrote:

> Ian, statistics is not my forte, but I don't think anyone is suggesting
> that the measurement errors of NCS-related reflection amplitudes are
> correlated. In simple terms, since NCS-related F's should be correlated,
> the working-set reflection amplitudes could be correlated with those in the
> test-set, if the latter is chosen randomly, rather than in shells.

Re: [ccp4bb] Does ncs bias R-free? And if so, can it be avoided by special selection of the free set?

2019-06-05 Thread Tim Gruene
Dear Jonathan,

If you could change your model parameters in the PDB file such that the 
intensity of exactly one reflection changed, and all others stayed the same, 
then you could say your free R set was unbiased by the model. I have read 
somewhere that intensities are mutually orthogonal, which would imply that such 
a modification is possible. However, I find this very hard to imagine, 
especially since you have in general many more reflections than parameters.
Therefore your Rfree value is as much biased by the model as the R1 value is.
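
For context, the reason such a one-reflection modification is hard to picture
is that every model parameter enters every structure factor,

F_{\mathrm{calc}}(\mathbf{h}) = \sum_{j} f_j \exp\bigl(2\pi i\,\mathbf{h}\cdot\mathbf{x}_j\bigr),

so moving any atom (or changing its B factor or occupancy) perturbs all the
F_calc(h) at once; there is no move in parameter space that changes exactly one
intensity and leaves the rest untouched.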

You have to distinguish between bias and model bias. The difference between R1 
and Rfree is a measure of bias, i.e. of how much R1 has been pushed down by 
fitting the errors in the data.

Best,
Tim


On Tuesday, June 4, 2019 10:33:08 PM CEST Jonathan Cooper wrote:
>  Ian, statistics is not my forte, but I don't think anyone is suggesting
> that the measurement errors of NCS-related reflection amplitudes are
> correlated. In simple terms, since NCS-related F's should be correlated,
> the working-set reflection amplitudes could be correlated with those in the
> test-set, if the latter is chosen randomly, rather than in shells. Am I
> right in saying that R-free not only indicates over-fitting but also acts
> as an unbiased measure of the agreement between Fo and Fc? During a
> well-behaved refinement run, in the cycles before any over-fitting becomes
> apparent, the decrease in R-free value will indicate that the changes being
> made to the model are making it more consistent with Fo's. In these stages,
> any correlation between the test-set and the working-set F's due to NCS
> would be expected to affect the R-free (cross-validation bias), making it
> lower than it would be if the test set had been chosen in resolution
> shells? However, you are always right and, as you know, I failed to detect
> any such effect in my limited tests. Thanks to you and others for
> replying. 
> 
> On Tuesday, 4 June 2019, 02:07:10 BST, Edward A. Berry
>  wrote:
> 
>  On 05/19/2019 08:21 AM, Ian Tickle wrote:
> ~~~
> 
> >> So there you have it: what matters is that the _errors_ in the
> >> NCS-related amplitudes are uncorrelated, or at least no more correlated
> >> than the errors in the non-NCS-related amplitudes, NOT the amplitudes
> >> themselves.
> Thanks, Ian!
> 
> I would like to think that it is the errors in Fobs that matter (as may be
> the case), because then: 1. ncs would not bias R-free even if you _do_ use
> ncs constraints/restraints. (changes in Fcalc due to a step of refinement
> would be positively correlated between sym-mates, but if the sign of
> (Fo-Fc) is opposite at the sym-mate, what improves the working reflection
> would worsen the free) 2. There would be no need to use the same free set
> when you refine the structure against a new dataset (as for ligand studies)
> since the random errors of measurement in Fobs in the two sets would be
> unrelated.
> 
> However when I suggested that in a previous post, I was reminded that errors
> in Fobs account for only a small part of the difference (Fo-Fc). The
> remainder must be due to inability of our simple atomic models to represent
> the actual electron density, or its diffraction; and for a symmetric
> structure and a symmetric model, that difference is likely to be
> symmetric.  Whether that difference represents "noise" that we want to
> avoid fitting is another question, but it is likely that (Fo-Fc) will be
> correlated with sym-mates. So I settled for convincing myself that the
> changes in Fc brought about by refinement would be uncorrelated, and thus
> the _changes_ in (Fo-Fc) at each step would be uncorrelated.
> 
> Below are some of the ideas I came up with in trying to think about this,
> and about bias in general. (Not very well organized and not the best of
> prose, but if one is a glutton for punishment, or just wants to see how the
> mind of a madman works . . .)
> 
> Warning- some of this is contrary to current consensus opinion and the
> conclusions may be, in the words of a popular autobuilding program, partly
> WRONG!  In particular, the idea that coupling by the G-function does not
> bias R-free, but rather is the only reason that R-free works at all! - - -
> - - - - - - -
> 
> The differences (Fo-Fc) can be divided between (1) errors in measurement
> of reflection intensities and (2) failure of the model to represent the
> true structure. The first can be considered "noise" and we would expect
> it to be random, with no correlation between symm mates.
> However most of the difference between Fc and Fobs is not due to random
> noise in the data, but to failures of our model to accurately represent
> the real thing. These differences are likely to be ncs-symmetric.
> Leaving aside the question of whether or not we want to fit this kind of
> "noise" (bringing the model closer to the real structure?), we conclude
> that (Fo-Fc) is likely to be correlated between ncs-mates.
> 
> But for refinement 

[ccp4bb] PSDI 2019 meeting - registration open!

2019-06-05 Thread Debreczeni, Judit
Dear all,

Registration for the 27th Protein Structure Determination in Industry (PSDI) 
meeting, 3-5 November 2019, is now open:

https://psdi2019.org/

The meeting will be held at the Wellcome Genome Campus, Hinxton, 
Cambridgeshire, UK. Workshops will take place on Sunday the 3rd of November, 
followed by a welcome reception. Scientific sessions and exhibition will be 
held on Monday the 4th and Tuesday the 5th of November, with a conference 
dinner in central Cambridge on the 4th.

Important dates:
  - Early Bird registration closes 22nd July 2019
  - Registration closes: 28th October 2019

Scientific talks will cluster around the following major themes:
  - structure and biophysics based drug design stories
  - cryoEM in a pharmaceutical setting
  - popular biophysics techniques, e.g. HDX-MS
  - popular drug discovery trends, e.g. degraders

Workshops will include three consecutive sessions about biophysics, 
crystallography and cryoEM.

If you have an exciting scientific story to share in the form of a talk (especially 
on HDX and degraders), please feel free to get in touch with the organisers: 
Judit Debreczeni (judit dot debreczeni at astrazeneca dot com) or Chris 
Phillips (chris dot phillips at astrazeneca dot com). Exhibitors or sponsors: 
please use the registration pages to book your place.

Confirmed speakers include:
  - Gerd Bader (Boehringer Ingelheim)
  - Celine Be (Novartis)
  - Gerard Bricogne (Global Phasing)
  - Ana Casañal (MRC-LMB)
  - Chun-wa Chung (GSK)
  - Paul Emsley (MRC-LMB)
  - Richard Henderson (keynote lecture, MRC-LMB)
  - Rob van Montfort (ICR)
  - Djordje Musil (Merck)
  - Herbert Nar (Boehringer Ingelheim)
  - Colin Palmer (CCP-EM)
  - Alexey Rak (Sanofi)
  - Armin Ruf (Roche)
  - Kasim Sader (ThermoFisher)
  - Martina Schaefer (Bayer)
  - Stacey Southall (Sosei Heptares)
  - Pamela Williams (Astex)

This year's meeting is organised by AstraZeneca UK, supported by Hg3 
Conferences.

We are looking forward to seeing you at the PSDI 2019!

Judit Debreczeni and Chris Phillips (AZ UK)

---
Judit Debreczeni
Associate Principal Scientist
SBF, Discovery Sciences
AstraZeneca

AstraZeneca
Darwin (310) Building
Cambridge Science Park
Milton Road
Cambridge
CB4 0WG
UK









[ccp4bb] Postdoctoral Position at NNF-CPR University of Copenhagen

2019-06-05 Thread Guillermo Montoya
Dear colleagues,
We are seeking an excellent and highly motivated postdoctoral candidate to lead 
a project combining biochemistry, SPA cryoEM and crystallography to study 
protein-DNA complexes that allow the insertion of large DNA segments into a 
genome. The position is funded through an NNF distinguished investigator grant 
for 3 years (2+1), with a possible extension for a 4th year. The ideal candidate 
will be a structural biologist with experience in protein-DNA biochemistry and 
expertise in cryoEM and/or crystallography. For informal inquiries you can 
contact me or Lotta Avesson (in cc).
Our institute provides state-of-the-art infrastructure and instrumentation, 
including a Titan Krios TEM equipped with a state-of-the-art direct electron 
detector and 3 auxiliary TEMs for sample characterisation. We expect to 
incorporate another 200 kV system with a Falcon III camera by the end of the 
year. In addition, the selected person will benefit from an intellectually 
stimulating, supportive and collaborative environment, including access to soft 
skills workshops run by our centre and the University of Copenhagen (project 
writing courses, career development and leadership workshops), and a range of 
enabling core facilities in the centre including crystallisation, protein 
production, molecular biophysics, high performance computing, bioinformatics, 
and mass spectrometry.

best regards


G.

Guillermo Montoya, Prof., Dr.
Research Director, Protein Structure and Function Programme
Novo Nordisk Foundation Center for Protein Research
Faculty of Health and Medical Sciences, University of Copenhagen,
Blegdamsvej 3B, DK-2200 Copenhagen, Denmark
web: www.cpr.ku.dk
PC: Lotta Avesson lotta.aves...@cpr.ku.dk


