Re: [ccp4bb] Tricks to solubilize protein

2008-03-17 Thread M T
Hi

The low pH and the interaction with PIP1 may be in conflict: the low pH
can modify the interaction site.
Nevertheless, one good way to study protein stability is the thermal
shift assay. You could also read this publication: PubMed ID 16604423
(Rapid determination of protein solubility and stability conditions for
NMR studies using incomplete factorial design).

Michel.


Re: [ccp4bb] question about processing data

2008-03-17 Thread James Stroud
I think the answer to your question depends on why the data is  
incomplete.


James

On Mar 17, 2008, at 3:06 AM, Melody Lin wrote:


Hi all,

I have always been wondering... for a data set diffracting to, say,
2.15 Angstrom, but where in the highest resolution shell (2.25-2.15) the
completeness is 74%, should I merge all the data and call it a 2.15 A
dataset, or should I cut the data set to, say, 2.25 A, where the highest
resolution shell has better completeness (85%)? What is an acceptable
completeness value for the highest resolution shell?


Thank you.

Best,
Melody


--
James Stroud
UCLA-DOE Institute for Genomics and Proteomics
Box 951570
Los Angeles, CA  90095

http://www.jamesstroud.com


[ccp4bb] question about processing data

2008-03-17 Thread Melody Lin
Hi all,

I have always been wondering... for a data set diffracting to, say,
2.15 Angstrom, but where in the highest resolution shell (2.25-2.15) the
completeness is 74%, should I merge all the data and call it a 2.15 A
dataset, or should I cut the data set to, say, 2.25 A, where the highest
resolution shell has better completeness (85%)? What is an acceptable
completeness value for the highest resolution shell?

Thank you.

Best,
Melody


Re: [ccp4bb] question about processing data

2008-03-17 Thread Partha Chakrabarti
Hi Melody,

There was a nice discussion at this year's CCP4 Study Weekend. In
general, one needs to consider several factors. If you were at 3 A or in
a low-symmetry space group, you would of course try to get the maximum
out of the data; on the other hand, experimental phasing has its own
requirements. In general, judge it from:

1. Completeness
2. Redundancy
3. I / Sigma
4. R merge statistics

Not just one of them. If you push it too far, you will see the effect in
the later refinement steps.
With 74% completeness, how do the other parameters look?

HTH, Partha
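
If, after weighing these criteria together, you do decide to cut, the
high-resolution limit can simply be applied at the merging stage. A
minimal Scala sketch (the 2.25 A limit and the file names are only
placeholders for this example):

# scale and merge with an explicit high-resolution cutoff
scala HKLIN sorted.mtz HKLOUT scaled_cut.mtz << eof
resolution high 2.25
run 1 all
eof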


On Mon, Mar 17, 2008 at 10:06 AM, Melody Lin [EMAIL PROTECTED] wrote:
 Hi all,

 I have always been wondering... for a data set diffracting to, say, 2.15
 Angstrom, but where in the highest resolution shell (2.25-2.15) the
 completeness is 74%, should I merge all the data and call it a 2.15 A
 dataset, or should I cut the data set to, say, 2.25 A, where the highest
 resolution shell has better completeness (85%)? What is an acceptable
 completeness value for the highest resolution shell?

 Thank you.

 Best,
 Melody




-- 
MRC National Institute for Medical Research
Division of Molecular Structure
The Ridgeway, NW7 1AA, UK
Email: [EMAIL PROTECTED]
Phone: + 44 208 816 2515


Re: [ccp4bb] question about processing data

2008-03-17 Thread Melody Lin
Well, the redundancy for the highest shell is 4.8, I/sigma is 3, and
Rmerge is 0.08 overall and 0.336 for the highest shell. The I/sigma and
Rmerge don't seem very nice...

thanks.

On Mon, Mar 17, 2008 at 11:51 AM, Partha Chakrabarti [EMAIL PROTECTED]
wrote:

 Hi Melody,

 There was a nice discussion at this year's CCP4 Study Weekend. In
 general, one needs to consider several factors. If you were at 3 A or in
 a low-symmetry space group, you would of course try to get the maximum
 out of the data; on the other hand, experimental phasing has its own
 requirements. In general, judge it from:

 1. Completeness
 2. Redundancy
 3. I / Sigma
 4. R merge statistics

 Not just one of them. If you push it too far, you will see the effect in
 the later refinement steps.
 With 74% completeness, how do the other parameters look?

 HTH, Partha


 On Mon, Mar 17, 2008 at 10:06 AM, Melody Lin [EMAIL PROTECTED] wrote:
  Hi all,
 
  I have always been wondering... for a data set diffracting to, say, 2.15
  Angstrom, but where in the highest resolution shell (2.25-2.15) the
  completeness is 74%, should I merge all the data and call it a 2.15 A
  dataset, or should I cut the data set to, say, 2.25 A, where the highest
  resolution shell has better completeness (85%)? What is an acceptable
  completeness value for the highest resolution shell?
 
  Thank you.
 
  Best,
  Melody
 



 --
 MRC National Institute for Medical Research
 Division of Molecular Structure
 The Ridgeway, NW7 1AA, UK
 Email: [EMAIL PROTECTED]
 Phone: + 44 208 816 2515



Re: [ccp4bb] question about processing data

2008-03-17 Thread Partha Chakrabarti
Looks OK, I guess. For the highest shell, if Rmerge is less than 0.45
and I/sigma is about 2, it is worth a try. As James said, what matters is
why the data are incomplete: is it something like C2?

Experts might tell us more.
Best, Partha

On Mon, Mar 17, 2008 at 11:03 AM, Melody Lin [EMAIL PROTECTED] wrote:
 Well, the redundancy for the highest shell is 4.8, I/sigma is 3, and
 Rmerge is 0.08 overall and 0.336 for the highest shell. The I/sigma and
 Rmerge don't seem very nice...

 thanks.



 On Mon, Mar 17, 2008 at 11:51 AM, Partha Chakrabarti [EMAIL PROTECTED]
 wrote:

  Hi Melody,
 
  There was a nice discussion at this year's CCP4 Study Weekend. In
  general, one needs to consider several factors. If you were at 3 A or in
  a low-symmetry space group, you would of course try to get the maximum
  out of the data; on the other hand, experimental phasing has its own
  requirements. In general, judge it from:
 
  1. Completeness
  2. Redundancy
  3. I / Sigma
  4. R merge statistics
 
  Not just one of them. If you push it too far, you will see the effect in
  the later refinement steps.
  With 74% completeness, how do the other parameters look?
 
  HTH, Partha
 
 
 
 
 
  On Mon, Mar 17, 2008 at 10:06 AM, Melody Lin [EMAIL PROTECTED] wrote:
   Hi all,
  
   I have always been wondering... for a data set diffracting to, say, 2.15
   Angstrom, but where in the highest resolution shell (2.25-2.15) the
   completeness is 74%, should I merge all the data and call it a 2.15 A
   dataset, or should I cut the data set to, say, 2.25 A, where the highest
   resolution shell has better completeness (85%)? What is an acceptable
   completeness value for the highest resolution shell?
  
   Thank you.
  
   Best,
   Melody


Re: [ccp4bb] Missing reflections

2008-03-17 Thread Ian Tickle
 
Can anyone explain the rationale for treating the test set reflections
as 'unobserved' for the maps, even though they have perfectly good
observed Fo values?  This doesn't make a great deal of sense to me!
Looking at the mtzdump output for the MTZ file output by Refmac, I
indeed note that for these reflections, the amplitude for the weighted
difference Fourier (i.e. column labelled DELFWT) is set to zero, and the
amplitude for the weighted Fourier (i.e. column labelled FWT) is set to
D.Fc (which is at least logical since if we set Fo such that mFo-DFc = 0
then mFo = DFc and so 2mFo-DFc = DFc).  But it would seem more sensible
to compute the coefficients based on the observed Fo since we have them!
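
(For anyone who wants to look at this in their own files, a minimal
sketch with mtzdump; the file name is a placeholder. The FreeR flag, FWT
and DELFWT columns can then be compared reflection by reflection.)

# print the column contents of the first 50 reflections of the Refmac output file
mtzdump HKLIN refmac_out.mtz << eof
nref 50
end
eof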

My rationale for this would be as follows: the maps obtained from a
refinement with the test set excluded do not reflect the 'final' density
for publication, which I think it is generally agreed should be obtained
from a final refinement using all data (working + test set); the maps
using the observed mFo instead of DFc for the test set would therefore
reflect the true differences between this refinement and the final maps,
i.e. they would indicate the true effect of omitting the test set.

I accept there is a practical problem here in that the sigmaA values
which are needed to compute the map coefficients are only unbiased if
computed from the test set, but a way around this would be to carry
forward the sigmaA values from the penultimate refinement excluding the
test set and use those in the final refinement using all data.  This is
not a perfect solution, but it's better than nothing.

-- Ian

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of George M. Sheldrick
 Sent: 12 March 2008 16:01
 To: Simon Kolstoe
 Cc: CCP4BB@JISCMAIL.AC.UK
 Subject: Re: [ccp4bb] Missing reflections
 
 All these programs only refine against reflections that were actually
 measured. REFMAC, but not SHELXL, provides the 'Sigma-A' weight
 coefficients for Coot to use DFc instead of 2mFo-DFc for the reflections
 for which Fo is not known (or is reserved for the free R) to calculate a
 map. This will in general improve the appearance of the map at the cost
 of introducing a little model bias. As far as I know these 'unobserved'
 reflections are not used in calculating the difference map. CNS is
 probably like SHELXL, I'm not sure what phenix.refine does.
 
 George
 
 Prof. George M. Sheldrick FRS
 Dept. Structural Chemistry,
 University of Goettingen,
 Tammannstr. 4,
 D37077 Goettingen, Germany
 Tel. +49-551-39-3021 or -3068
 Fax. +49-551-39-2582
 
 
 On Wed, 12 Mar 2008, Simon Kolstoe wrote:
 
  Dear CCP4bb,
  
  I was looking through the REFMAC manual today and found the following
  advice:
  
  Completing the data to include all possible hkls. Should do this after
  data reduction, and certainly before using REFMAC. This is now done
  with the uniqueify script. It is best done using CCP4i.
  
  http://www.ccp4.ac.uk/dist/html/refmac5/usage/examples.html#exam0
  
  Is it a good idea to always run uniqueify on data before running
  REFMAC - what about other refinement programs such as SHELX, CNS or
  phenix.refine?
  
  Simon
  
 
 




[ccp4bb] 2008 Gordon Research Conference on Diffraction Methods in Structural Biology

2008-03-17 Thread Elspeth Garman

Gordon Research Conference on Diffraction Methods in Structural Biology
 July 13-18, 2008, Bates College, Lewiston, Maine, USA

  Co-Chairs: Elspeth Garman & Andrew Leslie

The 2008 Gordon Research Conference on Diffraction Methods in Structural
Biology will encompass advances in the methodology for macromolecular
X-ray crystallography, and other diffraction/scattering applications.

The full confirmed programme and timetable for the meeting can now be found at:

http://www.grc.org/programs.aspx?year=2008&program=diffrac

as well as details on how to register (registration closes on 22nd June 2008 but 
attendance is limited to 125 researchers).


All macromolecular crystallographers interested in methods development are
encouraged to consider taking part in this meeting. Participants are
expected to contribute to discussion, to present a poster, and to attend
the entire conference.



-
 Dr. Elspeth F. Garman,
 Reader in Molecular Biophysics, University of Oxford
 Visiting Professor in Chemistry, University of Durham
 Postal address:
 Department of Biochemistry,
 Rex Richards Building,
 University of Oxford,  Tel: (44)-1865-275398
 South Parks Road,  FAX: (44)-1865-275182
 OXFORD, OX1 3QU, U.K.  E-mail: [EMAIL PROTECTED]
-


[ccp4bb] problem installing arp/warp

2008-03-17 Thread Mario Sanches

Dear all,

I am trying to install arp/warp and I am stuck with the following error:

--
Checking refmac5 installation - refmac5: Command not found.
*** ERROR ***
Cannot execute refmac5

   *** INSTALLATION OF ARP/wARP 7.0.1 FAILED ***
--

CCP4 is installed and refmac5 is running. Some more information:

The system is Kubuntu 7.10

My shell is bash (tried tcsh as well and got the same error)

Typing refmac5 -i returns:

--
CCP4 software suite: library version 6.0
CCP4 software suite: patch level 6.0.2
Program: refmac5; version 5.2.0019
--

when I type refmac5 I get:


--
###
###
###
### CCP4 6.0: Refmac_5.2.0019version 5.2.0019  : 06/09/05##
###
User: manager  Run date: 17/ 3/2008 Run time: 09:45:33


Please reference: Collaborative Computational Project, Number 4. 1994.
The CCP4 Suite: Programs for Protein Crystallography. Acta Cryst. 
D50, 760-763.

as well as any specific reference in the program write-up.

!--SUMMARY_END--/FONT/B
$TEXT:Reference1: $$ comment $$
  Refinement of Macromolecular Structures by the  Maximum-Likelihood 
Method:

  G.N. Murshudov, A.A.Vagin and E.J.Dodson,(1997)
  Acta Crystallogr. D53, 240-255
  EU  Validation contract: BIO2CT-92-0524

$$
$SUMMARY :Reference1:  $$ Refmac: $$
:TEXT:Reference1: $$
--


Thank you all in advance for any help.


Re: [ccp4bb] exclude range within data in scala , discontinuous run in scala problem

2008-03-17 Thread hari jayaram
Hi Phil, thanks for your email. I did get it to work with the options you
suggested, i.e.:
 Toggle ON Override automatic definition of runs to mark discontinuities in
data
 Toggle ON Define Runs
 Toggle OFF Use run 1 as reference run.

 Set up runs as follows (intended to exclude batches 200 to 400 from a
total dataset of 1 to 720; the equivalent keyword input is sketched below):

 Run 1 from  1 to 200
 Run 2 from 400 to 720
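
For reference, a minimal Scala keyword sketch of the same setup (file
names are placeholders; batch ranges as in the runs above):

# scale with two explicitly defined runs so the bad wedge between them is left out
scala HKLIN sorted.mtz HKLOUT scaled.mtz << eof
run 1 batch 1 to 200
run 2 batch 400 to 720
eof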

This combination worked as you said it would.
Thank you for your help

Hari Jayaram
Postdoc
Brandeis University







On Sat, Mar 15, 2008 at 3:44 AM, Phil Evans [EMAIL PROTECTED] wrote:

 You are right: this is a bug I should fix sometime

 Assigning two runs should work though

 You should not assign a reference run, and you shouldn't need to
 reassign the datasets

 Best wishes
 Phil

 On 14 Mar 2008, at 21:45, hari jayaram wrote:

  Hi.
  I did try that beforehand  when I tried excluding  a range of
  batches with the ccp4i gui
 
  But I got an error
 
   Scala:  *** Gap in time (rotation) ***
 
   Sorry... both versions of the protocol for handling a bad internal
   wedge are giving me either a "Gap in time (rotation)" error or a "Run 2
   has not been assigned to a dataset" error.
 
  I am still stuck.
 
  ( error and com file for the exclude data range option is attached
  below)
  Thanks for your help.
 
  Hari Jayaram
 
 
 
  
  Error
  ---
 
    Large gap in time (rotation) coordinate:3.5 
    See WARNING above 
    Smoothed B-factor impossible 
 
 
 ***
  * Information from CCP4Interface script
 
 ***
  The program run with command: scala HKLIN /Users/hari/aps_feb08/
  p2-2/p2-2_A1_1_0001_sorted.mtz HKLOUT /tmp/hari/p2_2_13_2_mtz.tmp
  SCALES /Users/hari/aps_feb08/p2-2/p2_2_13.scala ROGUES /Users/
  hari/aps_feb08/p2-2/p2_2_13_rogues.log NORMPLOT /Users/hari/
  aps_feb08/p2-2/p2_2_13_normplot.xmgr ANOMPLOT /Users/hari/
  aps_feb08/p2-2/p2_2_13_anomplot.xmgr PLOT /Users/hari/aps_feb08/
  p2-2/p2_2_13_surface_plot.plt CORRELPLOT /Users/hari/aps_feb08/
  p2-2/p2_2_13_correlplot.xmgr
  has failed with error message
   Scala:  *** Gap in time (rotation) ***
 
 ***
 
 
  #CCP4I TERMINATION STATUS 0  Scala:  *** Gap in time (rotation)
  ***
  #CCP4I TERMINATION TIME 14 Mar 2008  17:38:34
  #CCP4I TERMINATION OUTPUT_FILES  /Users/hari/aps_feb08/p2-2/
  p2-2_A1_1_0001_sorted.mtz p2_2
  #CCP4I MESSAGE Task failed
 
  
  The com script was
  
  ***
  /tmp/hari/p2_2_13_3_com.tmp
  ***
   title Scala anon and deleted batches 200_400b try with two run
  definitions
  name project p2_2 crystal p2-2_A1_1 dataset p2_2
  exclude EMAX -
  10.0
  exclude batch -
  400 to 540
  partials -
  check -
  test 0.95 1.05 -
  nogap
  intensities PROFILE -
  PARTIALS
  final PARTIALS
  scales -
  rotation SPACING 5 -
  secondary 6 -
  bfactor ON -
  BROTATION SPACING 20
  UNFIX V
  FIX A0
  UNFIX A1
  initial MEAN
  tie surface 0.001
  tie bfactor 0.3
  cycles 10 converge 0.3 reject 2
  anomalous on
  output AVERAGE
  print cycles nooverlap
  RSIZE 80
 
 
  ## This script run with the command   ##
  # scala HKLIN /Users/hari/aps_feb08/p2-2/p2-2_A1_1_0001_sorted.mtz
  HKLOUT /tmp/hari/p2_2_13_2_mtz.tmp SCALES /Users/hari/aps_feb08/
  p2-2/p2_2_13.scala ROGUES /Users/hari/aps_feb08/p2-2/
  p2_2_13_rogues.log NORMPLOT /Users/hari/aps_feb08/p2-2/
  p2_2_13_normplot.xmgr ANOMPLOT /Users/hari/aps_feb08/p2-2/
  p2_2_13_anomplot.xmgr PLOT /Users/hari/aps_feb08/p2-2/
  p2_2_13_surface_plot.plt CORRELPLOT /Users/hari/aps_feb08/p2-2/
  p2_2_13_correlplot.xmgr
  
 
 
  On Fri, Mar 14, 2008 at 5:11 PM, Phil Evans [EMAIL PROTECTED]
  wrote:
  In ccp4i Scala task, click to open the Excluded data panel, click on
  Exclude selected batches
 
  There you can define one or more ranges of batches or lists to exclude
 
  If you just want to exclude the last part you can define a range eg
  301 to 999
 
  You don't need to explicitly define runs
 
  Phil
 
  On 14 Mar 2008, at 18:57, hari jayaram wrote:
 
   Hi I am trying to exclude a bad wedge  of data during scaling in
   scala in the newest ccp4
  
   ( fink install this morning from W.G Scotts sage.ucsc Binaries ..so
   they should be version 6)
  
   The batches I need are
   1 to 200 and 400-720
   I have clicked the Override automatic definition of runs to mark
   discontinuities in data button as well as created two runs to
   contain the required data
  
   But I get a  Run 2 has not been assigned to a dataset error.
    How can I exclude a bad wedge ...

Re: [ccp4bb] question about processing data

2008-03-17 Thread Bart Hazes

Melody Lin wrote:

Hi all,

I have always been wondering... for a data set diffracting to, say,
2.15 Angstrom, but where in the highest resolution shell (2.25-2.15) the
completeness is 74%, should I merge all the data and call it a 2.15 A
dataset, or should I cut the data set to, say, 2.25 A, where the highest
resolution shell has better completeness (85%)? What is an acceptable
completeness value for the highest resolution shell?


Thank you.

Best,
Melody


Hi Melody,

This reply is not aimed at you directly as this situation seems to have 
become systemic in the field. So thanks for bringing it up!



We can have a long, and mostly aimless, discussion on what resolution 
you should claim for your data set but DON'T throw away good data to 
make the statistics look better. At high resolution the statistics are 
supposed to get worse! What matters is if the data still contain useful 
information. The fact that 26% of the data is missing does not normally 
mean that anything is wrong with the 74% that you did measure. Perhaps 
you used a square detector and didn't place it close enough to capture 
the full resolution, or perhaps your diffraction pattern is anisotropic.


The only reason to throw out data is if they are too inaccurate for your 
purpose. When your data is used for phasing, especially anomalous 
phasing, there is reason to focus on data quality, but I see far too 
many native data sets that make poor use of the diffraction potential of 
the crystal. I thought this was due to people not properly collecting 
the data, but now it seems that people are simply throwing away good 
data because they don't like the statistics.


So my advice: if your high-resolution shell has poor completeness,
then check why this happened. If you did not collect the data properly,
then let it be a lesson for the next data collection trip. If it
resulted from some issue with the crystal, then decide whether the
measured data are messed up as well. If not, then use all the data you
trust, which means there is useful signal (I/SigI > 1.5 or 2.0,
depending on who you talk to) and no problems leading to systematic
errors or outliers.


Bart

==

Bart Hazes (Assistant Professor)
Dept. of Medical Microbiology & Immunology
University of Alberta
1-15 Medical Sciences Building
Edmonton, Alberta
Canada, T6G 2H7
phone:  1-780-492-0042
fax:1-780-492-7521

==


Re: [ccp4bb] question about processing data

2008-03-17 Thread Ed Pozharski
On Mon, 2008-03-17 at 10:51 +, Partha Chakrabarti wrote:
 Not just one of them. If you push it too far, you will see the effect in
 the later refinement steps.

And the effect in the later refinement steps will be a slight increase in
the R-factor?  IMHO, this does not justify throwing away data (which
ultimately reduces the quality of your model).


-- 
Edwin Pozharski, PhD, Assistant Professor
University of Maryland, Baltimore
--
When the Way is forgotten duty and justice appear;
Then knowledge and wisdom are born along with hypocrisy.
When harmonious relationships dissolve then respect and devotion arise;
When a nation falls to chaos then loyalty and patriotism are born.
--   / Lao Tse /


Re: [ccp4bb] question about processing data

2008-03-17 Thread Jim Pflugrath
I would use all the data myself and report that the model was built from
a dataset with 74% completeness in the 2.25 to 2.15 Angstrom shell.  I
would not put the number 2.15 A in the manuscript title nor in the poster
title.


For me the acceptable completeness is 90% in the highest resolution shell 
for the number to get in the title.  You will know I reviewed your 
paper if you see my telltale reviewer comment.  You can put whatever you 
want in the PDB deposition field.


Jim

On Mon, 17 Mar 2008, Melody Lin wrote:


Hi all,

I have always been wondering... for a data set diffracting to, say,
2.15 Angstrom, but where in the highest resolution shell (2.25-2.15) the
completeness is 74%, should I merge all the data and call it a 2.15 A
dataset, or should I cut the data set to, say, 2.25 A, where the highest
resolution shell has better completeness (85%)? What is an acceptable
completeness value for the highest resolution shell?

Thank you.

Best,
Melody



Re: [ccp4bb] problem installing arp/warp

2008-03-17 Thread Gerrit Langer

Dear Mario,

it seems that inside the install script the path variable is not set and
therefore the call to refmac5 fails. We are not sure why this happened in
your case. The install script was tested with both (t)csh and bash, and
something in your machine setup was not anticipated by us. Have you run
the install.sh utility in the same terminal/shell in which you later
verified your refmac installation by typing 'refmac5 -i'? Could it be
that you need to run some setup utility for ccp4 manually first?


To get you started with ARP/wARP we can send you a tailored install
script with some diagnostic output to find a fix. In the meantime, can
you try to change the line

alias runtestrefmac 'refmac5 -i'   (line 16 in install_csh.sh)

to incorporate the absolute path to the refmac5 binary? 'which refmac5'
should provide you with this.
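
For example (csh syntax; the path below is just a placeholder, substitute
whatever 'which refmac5' reports on your system):

# line 16 of install_csh.sh, with the absolute path filled in
alias runtestrefmac '/your/ccp4-6.0/bin/refmac5 -i'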


Regards,
Gerrit and Victor.

Mario Sanches wrote:


Dear all,

I am trying to install arp/warp and I am stuck with the following error:

-- 


Checking refmac5 installation - refmac5: Command not found.
*** ERROR ***
Cannot execute refmac5

   *** INSTALLATION OF ARP/wARP 7.0.1 FAILED ***
-- 



CCP4 is installed and refmac5 is running. Some more information:

The system is Kubuntu 7.10

My shell is bash (tried tcsh as well and got the same error)

Typing refmac5 -i returns:

-- 


CCP4 software suite: library version 6.0
CCP4 software suite: patch level 6.0.2
Program: refmac5; version 5.2.0019
-- 



when I type refmac5 I get:


-- 


###
###
###
### CCP4 6.0: Refmac_5.2.0019version 5.2.0019  : 06/09/05##
###
User: manager  Run date: 17/ 3/2008 Run time: 09:45:33


Please reference: Collaborative Computational Project, Number 4. 1994.
The CCP4 Suite: Programs for Protein Crystallography. Acta Cryst. 
D50, 760-763.

as well as any specific reference in the program write-up.

!--SUMMARY_END--/FONT/B
$TEXT:Reference1: $$ comment $$
  Refinement of Macromolecular Structures by the  Maximum-Likelihood 
Method:

  G.N. Murshudov, A.A.Vagin and E.J.Dodson,(1997)
  Acta Crystallogr. D53, 240-255
  EU  Validation contract: BIO2CT-92-0524

$$
$SUMMARY :Reference1:  $$ Refmac: $$
:TEXT:Reference1: $$
-- 




Thank you all in advance for any help.




[ccp4bb] Post-doctoral job opportunity

2008-03-17 Thread mjvdwoerd
All,

Below you will find the pertinent information for a job opening at the Howard 
Hughes Medical Research Institute. All information can be found at this site: 
http://www.hhmi.org/jobs/main?action=job&job_id=548.


If you are interested, please follow the instructions in the
advertisement and do NOT e-mail applications to me.

Thanks,

Mark

Mark van der Woerd, PhD
Research Scientist II
Dept. of Biochemistry and Molecular Biology
Colorado State University
Fort Collins, CO 80523
Phone (970) 491-0469


Job Summary:

Looking for a highly motivated individual with a strong interest in
integrated approaches to problems in structural biology. The lab has
extensive crystallographic and spectroscopic resources, and is part of
the W.M. Keck Center for Chromatin Structure and Function.

Principal Responsibilities:

Investigate the structure, function, and dynamic properties of eukaryotic
chromatin, using a wide variety of biochemical, biophysical and in vivo
approaches.

Investigate how cellular or viral factors interact with histones,
nucleosomes, or chromatin and how these interactions may lead to cancer
or other diseases.

Use multipronged approaches such as x-ray crystallography, small angle
x-ray scattering, atomic force microscopy, fluorescence spectroscopy,
analytical ultracentrifugation, and methods in mechanistic
biochemistry/molecular biology as well as genetic approaches to
understand the mechanism by which structural transitions in chromatin
occur.

Preferred Qualifications:

Ph.D. in Molecular Biology, Biochemistry, Biophysics or an appropriately
related field.

Researchers with a strong background in biochemistry preferred.

Previous experience with the structure determination of protein/protein
and/or nucleic-acid complexes preferred.

Extensive biochemical experience with protein purification, functional
characterization, and yeast genetics would be highly valued.

Applicants should be strongly motivated, ambitious and function well in a
highly collaborative environment.

Additional Information:

Please send cover letter, CV, and names of three references to Dr.
Karolin Luger. Be sure to reference job #099100-01.

To Apply:

To apply for this position, please email or send your resume to:

Dr. Karolin Luger, PhD
Investigator
HHMI at Colorado State University
Dept of Biochemistry & Molecular Biology
1870 Campus Delivery, 383 MRB
Fort Collins, Colorado 80523-1870
E-mail: [EMAIL PROTECTED]

Application Deadline: Open Until Filled

We are an Equal Opportunity Employer


[ccp4bb] Crystallization and Biophysics Courses

2008-03-17 Thread Anastassis Perrakis

Dear all,

We would like to announce that we are organizing two courses on:

Biophysical Characterisation of Macromolecules (20-23 May)

HTP Crystallization and Information Management (18-20 June)

Both will take place at the NKI at Amsterdam, at our brand new lab  
space (we have not even moved yet ...)!


More information on the respective course goals and preliminary
schedules can be found at http://xtal.nki.nl


... and for even more events and conferences do not forget to look at:

http://strucbio.biologie.uni-konstanz.de/ccp4wiki/index.php/Current_events


Tassos


Re: [ccp4bb] problem installing arp/warp

2008-03-17 Thread Mario Sanches

Dear Gerrit,

Changing the alias to include the full path for refmac5 worked. Thank 
you very much for your help.


I don't know if it helps you somehow, but I was running it on the same 
terminal where I typed refmac5 -i. Actually, the paths for the CCP4 
programs are configured in /etc/bash.bashrc so that they will be visible 
everywhere for all users, so I think that was not the issue here.


Thank you again for the quick reply,

Mario Sanches


Gerrit Langer wrote:


Dear Mario,

it seems that inside the install script the path variable is not set
and therefore the call to refmac5 fails. We are not sure why this
happened in your case. The install script was tested with both (t)csh
and bash, and something in your machine setup was not anticipated by
us. Have you run the install.sh utility in the same terminal/shell in
which you later verified your refmac installation by typing 'refmac5
-i'? Could it be that you need to run some setup utility for ccp4
manually first?


To get you started with ARP/wARP we can send you a tailored install
script with some diagnostic output to find a fix. In the meantime,
can you try to change the line

alias runtestrefmac 'refmac5 -i'   (line 16 in install_csh.sh)

to incorporate the absolute path to the refmac5 binary? 'which refmac5'
should provide you with this.


Regards,
Gerrit and Victor.

Mario Sanches wrote:


Dear all,

I am trying to install arp/warp and I am stuck with the following error:

-- 


Checking refmac5 installation - refmac5: Command not found.
*** ERROR ***
Cannot execute refmac5

   *** INSTALLATION OF ARP/wARP 7.0.1 FAILED ***
-- 



CCP4 is installed and refmac5 is running. Some more information:

The system is Kubuntu 7.10

My shell is bash (tried tcsh as well and got the same error)

Typing refmac5 -i returns:

-- 


CCP4 software suite: library version 6.0
CCP4 software suite: patch level 6.0.2
Program: refmac5; version 5.2.0019
-- 



when I type refmac5 I get:


-- 


###
###
###
### CCP4 6.0: Refmac_5.2.0019version 5.2.0019  : 06/09/05##
###
User: manager  Run date: 17/ 3/2008 Run time: 09:45:33


Please reference: Collaborative Computational Project, Number 4. 1994.
The CCP4 Suite: Programs for Protein Crystallography. Acta Cryst. 
D50, 760-763.

as well as any specific reference in the program write-up.

!--SUMMARY_END--/FONT/B
$TEXT:Reference1: $$ comment $$
  Refinement of Macromolecular Structures by the  Maximum-Likelihood 
Method:

  G.N. Murshudov, A.A.Vagin and E.J.Dodson,(1997)
  Acta Crystallogr. D53, 240-255
  EU  Validation contract: BIO2CT-92-0524

$$
$SUMMARY :Reference1:  $$ Refmac: $$
:TEXT:Reference1: $$
-- 




Thank you all in advance for any help.







Re: [ccp4bb] question about processing data

2008-03-17 Thread James Stroud
Redundancy of 4.8 for a 74% complete shell (if I understand which  
shell these stats are for) suggests you have assumed too much symmetry  
and are rejecting a lot of reflections during scaling. Is this the  
case? The I/sigma suggests you could drop the symmetry and re-scale  
without losing a lot of data if this is the case.
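
One way to check the assumed symmetry, if you have it installed, is to
let POINTLESS score the possible Laue/point groups directly from the
unmerged intensities (the file name below is a placeholder):

# score candidate point groups from the unmerged, sorted file
pointless HKLIN unmerged_sorted.mtz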



On Mar 17, 2008, at 4:03 AM, Melody Lin wrote:

Well, the redundancy for the highest shell is 4.8, I/sigma is 3, and
Rmerge is 0.08 overall and 0.336 for the highest shell. The I/sigma and
Rmerge don't seem very nice...


thanks.

On Mon, Mar 17, 2008 at 11:51 AM, Partha Chakrabarti  
[EMAIL PROTECTED] wrote:

Hi Melody,

There was a nice discussion at this year's CCP4 Study Weekend. In
general, one needs to consider several factors. If you were at 3 A or in
a low-symmetry space group, you would of course try to get the maximum
out of the data; on the other hand, experimental phasing has its own
requirements. In general, judge it from:

1. Completeness
2. Redundancy
3. I / Sigma
4. R merge statistics

Not just one of them. If you push it too far, you will see the effect in
the later refinement steps.
With 74% completeness, how do the other parameters look?

HTH, Partha


On Mon, Mar 17, 2008 at 10:06 AM, Melody Lin [EMAIL PROTECTED]  
wrote:

 Hi all,

 I have always been wondering... for a data set diffracting to, say,
 2.15 Angstrom, but where in the highest resolution shell (2.25-2.15) the
 completeness is 74%, should I merge all the data and call it a 2.15 A
 dataset, or should I cut the data set to, say, 2.25 A, where the highest
 resolution shell has better completeness (85%)? What is an acceptable
 completeness value for the highest resolution shell?

 Thank you.

 Best,
 Melody




--
MRC National Institute for Medical Research
Division of Molecular Structure
The Ridgeway, NW7 1AA, UK
Email: [EMAIL PROTECTED]
Phone: + 44 208 816 2515



--
James Stroud
UCLA-DOE Institute for Genomics and Proteomics
Box 951570
Los Angeles, CA  90095

http://www.jamesstroud.com


Re: [ccp4bb] question about processing data

2008-03-17 Thread Anastassis Perrakis

Hi -

I would tend to argue as follows:

An I/sigI of 3 and an Rmerge of 33.6% are most definitely acceptable
values with a redundancy of 4.8. Thus, despite the 74% completeness,
the data are most definitely useful and should be included in
refinement.


A good question now is why the data are only 74% complete.

I can think of a few reasons, e.g.:

1. Not enough 'degrees' collected in total: too bad, better do better
next time, but that's not likely to be your problem.
2. Overlaps at high resolution: again, be more careful next time, but
could you play with the mosaicity to decrease the overlaps a bit?
3. High resolution collected in the corners of the detector: put the
detector closer next time and don't collect data at the corners (see the
note below).
4. Severe anisotropy: tough luck, have to live with it ... or try to
deal better with it during data collection (adjust exposure).
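
A quick way to see how much point 3 can cost: for a square detector of
half-width r at crystal-to-detector distance D and wavelength lambda, the
resolution limit at the edge of the detector is

   d(edge) = lambda / ( 2 sin( 0.5 * arctan(r/D) ) )

whereas the corners lie at a radius of r*sqrt(2), so they reach noticeably
higher resolution than the edges. Data recorded only in the corners are
therefore systematically incomplete in the outermost shells.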


Whatever the case, I would use the data and clearly report in the M&M
section of my paper not only what the numbers are, but also WHY they are
like that. And, of course, if it's trivial to do a better data collection
experiment and get the best data, as it often is, then do a better data
collection experiment ...

My main point is that you should know clearly WHY your high  
resolution shell is incomplete and then decide.

The numbers alone do not always tell the full story.

Best , Tassos

Well, the redundancy for the highest shell is 4.8, I/sigma is 3, and
Rmerge is 0.08 overall and 0.336 for the highest shell. The I/sigma and
Rmerge don't seem very nice...


[ccp4bb] Coot (OS X) unable to read CRYST1 line

2008-03-17 Thread Pavan
Hello all,
I apologize for the off-topic post. I have a problem with Coot being
unable to read the space group from the CRYST1 line in the PDB file.
Although the space group is specified correctly, Coot seems unable to
read it. It reads the unit cell dimensions and angles just fine - it
seems to have a problem with just the space group.
I use coot 0.4-pre-2 running on OS X Leopard on a Powerbook G4. The
same PDB opens just fine on a Coot 0.4-pre-2 running on Redhat Linux
Enterprise 5.

The terminal window shows the following message when the pdb is opened:

PDB file ABCD.pdb has been read.
No Spacegroup found for this PDB file
Cell: 55.76 118.43 122.38 107.8 98.36 91.4
!! Warning:: No symmetry available for this molecule
No Symmetry for this model
Molecule 0 read successfully

Has anybody else come across this problem?

Thanks!
Pavan


Re: [ccp4bb] question about processing data

2008-03-17 Thread Melody Lin
Dear all,

Thank you very much for the useful suggestions! I definitely learned a lot
from these discussions. Looking back at my datasets, I think the
incompleteness most likely results from the high mosaicity (1.009) and the
anisotropy of the crystal. The detector is square, but the distance is
short enough for the resolution, and the cell dimensions are not huge
(~45 x 90 x 100 A), so there is not too much overlap among the
high-resolution spots. I am confident about the symmetry. Well, now I know
better how to collect good data. Thanks!

Best,
Melody

On Mon, Mar 17, 2008 at 8:29 PM, Anastassis Perrakis [EMAIL PROTECTED]
wrote:

 Hi -

 I would tend to argue as follows:

 An I/sigI of 3 and an Rmerge of 33.6% are most definitely acceptable
 values with a redundancy of 4.8. Thus, despite the 74% completeness,
 the data are most definitely useful and should be included in
 refinement.

 A good question now is why the data are only 74% complete.

 I can think of a few reasons, e.g.:

 1. Not enough 'degrees' collected in total: too bad, better do better
 next time, but that's not likely to be your problem.
 2. Overlaps at high resolution: again, be more careful next time, but
 could you play with the mosaicity to decrease the overlaps a bit?
 3. High resolution collected in the corners of the detector: put the
 detector closer next time and don't collect data at the corners ...
 4. Severe anisotropy: tough luck, have to live with it ... or try to
 deal better with it during data collection (adjust exposure).

 Whatever the case, I would use the data and clearly report in the M&M
 section of my paper not only what the numbers are, but also WHY they are
 like that. And, of course, if it's trivial to do a better data collection
 experiment and get the best data, as it often is, then do a better data
 collection experiment ...

 My main point is that you should know clearly WHY your high
 resolution shell is incomplete and then decide.
 The numbers alone do not always tell the full story.

 Best , Tassos

  Well, the redundancy for the highest shell is 4.8, I/sigma is 3, and
  Rmerge is 0.08 overall and 0.336 for the highest shell. The I/sigma and
  Rmerge don't seem very nice...



Re: [ccp4bb] Coot (OS X) unable to read CRYST1 line

2008-03-17 Thread Tim Gruene
your CRYST1 card is most likely missing the actual space group name. You 
can fix this with e.g. pdbset:


pdbset xyzin your.pdb xyzout pdb-with-spacegroup-name.pdb << eof
spac P21212
end
eof

where you replace P21212 with your actual space group name.
Tim

--
Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A


On Mon, 17 Mar 2008, Pavan wrote:


Hello all,
I apologize for the off-topic post. I have a problem with Coot being
unable to read the space group from the CRYST1 line in the PDB file.
Although the space group is specified correctly, Coot seems unable to
read it. It reads the unit cell dimensions and angles just fine - it
seems to have a problem with just the space group.
I use coot 0.4-pre-2 running on OS X Leopard on a Powerbook G4. The
same PDB opens just fine on a Coot 0.4-pre-2 running on Redhat Linux
Enterprise 5.

The terminal window shows the following message when the pdb is opened:

PDB file ABCD.pdb has been read.
No Spacegroup found for this PDB file
Cell: 55.76 118.43 122.38 107.8 98.36 91.4
!! Warning:: No symmetry available for this molecule
No Symmetry for this model
Molecule 0 read successfully

Has anybody else come across this problem?

Thanks!
Pavan



Re: [ccp4bb] Summary: Calculating R-factor and maps from a Refmac model containing TLS downloaded from the PDB

2008-03-17 Thread Dale Tronrud

Hi again,

   I guess this is only a partial summary, since I still don't understand
all the issues this question raises.

Pavel Afonine reported that his extensive tests of the PDB reveal that
reproducing R values from models with TLS ADP's is a widespread and
serious problem.  The principal problems (IMHO) are

   1) Incorrect or illegal TLS definitions in the REMARK.

   2) Some files list in the ATOM B column the residual B after TLS
  has been accounted for while others list the total B (TLS and
  residual).  There is no clear indication in the PDB file which
  interpretation is being used.

Tassos, Eleanor, and others recommended taking the TLS definition from
the PDB header and running zero cycles of unrestrained refinement in
Refmac to get it to calculate R factors and maps without the need to
define ideal geometry for co-factors.  I have yet to see this work,
however (see below).
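
For concreteness, the sort of job being suggested is roughly the
following (a minimal sketch only; the file names and the LABIN column
labels are placeholders that must match the actual MTZ):

# read the deposited TLS definition and run zero cycles, just to make
# Refmac report R factors and write out map coefficients
refmac5 HKLIN data.mtz XYZIN model.pdb TLSIN groups.tls \
        HKLOUT out.mtz XYZOUT out.pdb << eof
labin FP=FP SIGFP=SIGFP FREE=FreeR_flag
refi type unrestrained
ncyc 0
end
eof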

Ulrich Baumann wrote to tell me of two of his PDB's that he knows will
give back the reported R values.  They are 2qua and 2qub.

I grabbed 2qua from the RCSB server, extracted the TLS groups with CCP4i,
and found that the TLS definitions were incorrect.  There is one polypeptide
in this model and three TLS groups.  The first and third group did not
have a residue range, while the second group defined a residue range in
the middle of the peptide.  I made the assumption that the first and
third TLS groups were intended to cover the beginning and end of the
peptide and corrected the .tls file.

I loaded this into Refmac and asked for zero cycles of unrestrained
refinement and got an R value of 19.4%.  The PDB file says it should
be 17.3%.  I then asked Refmac to run 10 cycles of TLS and 10 cycles
of restrained refinement and got an R value of 17.5%.  Good enough.

From this result I infer that Refmac is unable to calculate the original
ADP's given this PDB file and TLS definition.  It can reconstruct them
via refinement, basically ignoring the B values of the PDB file.

This particular PDB entry appears to contain in its B column the
residual B's.

I also tried entry 2qub, but with less luck.  This model has seven
peptides and 30 TLS groups.  The first seven TLS groups defined in
the header of the PDB cover each of the seven chains, while the other
23 groups had no residue range.  I can guess that the intention was
to have five TLS groups for each of the seven chains, but without
additional information from Dr. Baumann, I'm unable to even start
trying to reproduce R values and calculate maps.

So...  1) Pavel is correct, there are many clear errors in the TLS
REMARKs of PDB entries.  2) It seems necessary to ask Refmac to
recreate the ADP description for a PDB entry from scratch, assuming
the TLS group definition can be deduced from the PDB header.  This,
currently, requires refinement which requires .cif's for the unusual
groups.

If CCP4I could ask Refmac to perform only TLS/B refinement, holding
positions fixed, the need for detailed .cifs would be greatly reduced.
I have no desire to move the atoms anyway.

Better yet, if someone could find out what Refmac is expecting to find
in its starting PDB (what it wants in the B column) one could add
a tool to CCP4I that could convert a PDB entry to what Refmac wants
w/o refinement.  Since there appear to be two varieties of entries
one could try both possibilities and choose the one with the lowest
R value.

I have to close with additional problems, I'm afraid.  I can't run
the required refinement on 1nkz to test TLS/B refinement but
I have tried it on 3bsd, where I have a good .cif for the Bchl-a
groups.  When I pull out the TLS definition, and perform 10 cycles
of TLS and 10 cycles of restrained refinement I get an R value of
20.2% while the entry asserts that the correct value is 17.8%.  The
final TLS parameters look, by eye, pretty similar to the deposited
ones, so I don't know what is going on here.

Dale Tronrud



Dale Tronrud wrote:

Hi,

   I am looking over a number of models from the PDB but have been
unable to reproduce the R-factors for any model that was refined
with Refmac and contains TLS parameters.  I usually can't get within
5% of the reported value.  On the other hand, I usually do pretty
well for models w/o TLS.

   An example is the model 1nkz.  The PDB header gives an R value
of 17% but even when I use tlsanal in CCP4i to generate a PDB with
anisotropic B's that mimic the TLS parameters I get an R value of
22.4% using SFCheck.  (I'm not implying that I suspect any problem
with 1nkz, in fact I have every reason to believe this is the great
model its published stats indicate.)

   I've found a CCP4 BB letter that stated that SFCheck does not
pay attention to anisotropic B's but that letter was dated 2002.
I hope this limitation has been removed, or at least the output
would mention this limitation.

   Setting up a refinement in Refmac involves a large overhead,
since even for zero cycles of refinement the program insists on
a complete 

Re: [ccp4bb] Summary: Calculating R-factor and maps from a Refmac model containing TLS downloaded from the PDB

2008-03-17 Thread Ethan A Merritt
On Monday 17 March 2008 16:20, Dale Tronrud wrote:
 Hi again,
 
 I guess this is only a partial summary, since I still don't understand
 all the issues this question raises.
 
 Pavel Afonine reported that his extensive tests of the PDB reveals that
 reproducing R values from models with TLS ADP's is a wide-spread and
 serious problem.  The principal problems (IMHO) are
 
 1) Incorrect or illegal TLS definitions in the REMARK.

Yes. I have noticed the same. It is unfortunate, and not at all clear at
what point the error creeps in.
 
 2) Some files list in the ATOM B column the residual B after TLS
has been accounted for while others list the total B (TLS and
residual).  There is no clear indication in the PDB file which
interpretation is being used.

That is a fundamental deficiency in the existing PDB standard.  It simply
doesn't specify how to present this critical information.  A draft change
covering this was circulated at the PDB get-together of last summer's ACA
meeting, and I discussed with Garib and Eleanor that we should as a community
decide how we would like it handled.  The consensus as I understand it is
that people would prefer that the B field of individual ATOM records contain
the *net* B rather than the residual B, so that old programs will continue
to work as expected.  However, this puts even more importance on the TLS
description in the header being correct, since the information is otherwise
not recoverable.  We were going to circulate a letter, but I plead guilty
to letting the matter slide.

 Tassos, Eleanor, and others recommended taking the TLS definition from
 the PDB header and running zero cycles of unrestrained refinement in
 Refmac to get it to calculate R factors and Maps w/o the need to define
 ideal geometry for co-factors.  I have yet to see this work, however
 (See below)

Well, it has worked reasonably well for me in the past, for some structures.
But it may well have broken again.

 Ulrich Baumann wrote to tell me of two of his PDB's that he knows will
 give back the reported R values.  They are 2qua and 2qub.
 
 I grabbed 2qua from the RCSB server, extracted the TLS groups with CCP4i,
 and found that the TLS definitions were incorrect.  There is one polypeptide
 in this model and three TLS groups.  The first and third group did not
 have a residue range, while the second group defined a residue range in
 the middle of the peptide.  I made the assumption that the first and
 third TLS groups were intended to cover the beginning and end of the
 peptide and corrected the .tls file.

That is interesting, because the mmCIF file for that structure contains
the following:

#
_pdbx_refine_tls_group.id  1
_pdbx_refine_tls_group.refine_tls_id   2
_pdbx_refine_tls_group.beg_auth_asym_id    A
_pdbx_refine_tls_group.beg_auth_seq_id     250
_pdbx_refine_tls_group.beg_label_asym_id   A
_pdbx_refine_tls_group.beg_label_seq_id    252
_pdbx_refine_tls_group.end_auth_asym_id    A
_pdbx_refine_tls_group.end_auth_seq_id     461
_pdbx_refine_tls_group.end_label_asym_id   A
_pdbx_refine_tls_group.end_label_seq_id    463
_pdbx_refine_tls_group.selection   ?
#

This set of records is also a bit mangled, but does seem to contain
additional traces of the correct residue ranges for each group.
I wonder if the internal PDB database is storing incorrectly formatted XML
descriptions of the groups, and then further corrupting the information
when it generates a PDB format file?

 I also tried entry 2qub, but with less luck.

Indeed. That one has no additional information in the mmCIF file either.
So I don't know what's up.

Here's a recent deposition of ours:  3BJE
This one downloads from today's www.pdb.org with full TLS information.
So the process clearly works at least some of the time.

 I have to close with additional problems, I'm afraid.  I can't run
 the required refinement on 1nkz to test TLS/B refinement but
 I have tried it on 3bsd, where I have a good .cif for the Bchl-a
 groups.  When I pull out the TLS definition, and perform 10 cycles
 of TLS and 10 cycles of restrained refinement I get an R value of
 20.2% while the entry asserts that the correct value is 17.8%.  The
 final TLS parameters look, by eye, pretty similar to the deposited
 ones, so I don't know what is going on here.

The issue of proper TLS description is not the only difficulty in
reproducing R factors from a PDB file.  Another notable omission is
the lack of scattering factors.  If you have refined a SAS data set,
e.g. a Se-edge dataset of a SeMet metallo-protein, then the R factors may
vary by 1% just because of incorrectly reproduced f' terms for the
Se and metal atoms.
 
Ethan Merritt



 I loaded this into Refmac and asked for zero cycles of unrestrained
 refinement and got an R value of 19.4%.  The PDB file says it should
 be 17.3%.  I then asked Refmac to run 10 cycles of TLS and 10 cycles
 of restrained refinement and got an R value of 17.5%.  

Re: [ccp4bb] Summary: Calculating R-factor and maps from a Refmac model containing TLS downloaded from the PDB

2008-03-17 Thread Pavel Afonine



2) Some files list in the ATOM B column the residual B after TLS
   has been accounted for while others list the total B (TLS and
   residual).  There is no clear indication in the PDB file which
   interpretation is being used.



That is a fundamental deficiency in the existing PDB standard.  It simply
doesn't specify how to present this critical information.  A draft change
covering this was circulated at the PDB get-together of last summer's ACA
meeting, and I discussed with Garib and Eleanor that we should as a community
decide how we would like it handled.  The consensus as I understand it is
that people would prefer that the B field of individual ATOM records contain
the *net* B rather than the residual B, so that old programs will continue
to work as expected.  However, this puts even more importance on the TLS
description in the header being correct, since the information is otherwise
not recoverable.  We were going to circulate a letter, but I plead guilty
to letting the matter slide.
  


This is exactly what phenix.refine does (since 2005, I guess): it always
prints out the total B-factor for each atom (Bindividual+Btls+Boverall).
The TLS information (TLS matrices, origin coordinates and TLS group
selections) is reported as well in the PDB file header, so if necessary
one can always extract the individual contributions.


This makes it more straightforward to reproduce the R-factor statistics
without any prior manipulation of the model.



Another notable omission is
the lack of scattering factors.  If you have refined a SAS data set,
e.g. a Se-edge dataset of a SeMet metallo-protein, then the R factors may
vary by 1% just because of incorrectly reproduced f' terms for the
Se and metal atoms.
 
	Ethan Merritt
  


phenix.refine also always reports f' and f'' in the output PDB file if
they were used in refinement. I hope they don't get stripped off when
deposited with the PDB.


Pavel.





[ccp4bb] Postdoctoral Fellowship at NIH Structural Biology of Neurotransmitter Receptors

2008-03-17 Thread Mark Mayer
A postdoctoral position is available to study the structure and function
of neurotransmitter receptors.

Recent papers describing work from the lab include: Nat Struct Mol Biol.
2006 13:1120-7; Nature 2006 440:456-462; Neuron 2007 53:829-841. We are
looking for a highly motivated person who would like to combine
crystallographic studies with biochemical and electrophysiological
experiments to understand the function of glutamate receptor ion
channels, the major mediators of excitatory synaptic transmission in the
brain. A major component of the work involves protein expression and
pre-crystallization screening. Strong skills in biochemistry and
molecular biology are essential. Preference will be given to candidates
with a proven track record using baculovirus, insect cell, and other
eukaryotic expression systems. Must have excellent communication skills
and a superb ability to work as part of a team. The research facilities
at NIH are outstanding, with regular access to the SER-CAT beamline at
APS.

Position available fall 2008. Initial duration 2 years, renewable for up
to 5 years.  J1 visa sponsorship is available for non-US nationals. To be
eligible, candidates must have a PhD awarded no more than 5 years ago.
NIH is an EEO employer.


Re: [ccp4bb] Coot (OS X) unable to read CRYST1 line - Solved!

2008-03-17 Thread Pavan
Once again,  thanks a lot for all your replies.

I just found out the problem. Ultimately it was quite a silly mistake
- I had an old and defunct SYMINFO environment variable from solve/
resolve in my .cshrc file, which clashed with the one that coot was
setting (in /sw/share/coot/setup/coot.sh). Once I removed the old
SYMINFO reference, coot could open the PDB files properly.
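
In case anyone else hits this, a quick check (csh/tcsh, since the stale
setting lived in a .cshrc):

# show what SYMINFO currently points to, then drop the stale definition
echo $SYMINFO
unsetenv SYMINFO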
Thank you very much for all the help.

Pavan

On Mon, Mar 17, 2008 at 8:38 PM, Noinaj, Nicholas [EMAIL PROTECTED] wrote:
 Just a quick question: have you tried opening Coot first, then opening
 the *.pdb file from the File > Open menu?  Also, do you get errors when
 you just open Coot alone? Sometimes even single spaces where they
 shouldn't be make a huge difference here.  My next suggestion was (as
 you alluded to below) to open another file to see if any problems exist
 with it.  Hypothetically speaking, what happens if you simply remove the
 CRYST1 line?  I am not as familiar with the OS version you are running,
 but with WinXP I don't even need the CRYST1 card to view the structure,
 only when trying to view symmetry partners, etc. where it is needed.

  hope some of this helps...best of luck!  I would be happy to look at your 
 file here if you can shoot me the coordinates, or the first 50 lines or so, 
 just enough to open in COOT.  again, good luck!



  Cheers,
  Nick