Re: [ccp4bb] Quote source inquiry

2020-07-15 Thread Jeffrey, Philip D.
:: took a working dataset and increased (only) the error on unit cell
:: dimensions in the instruction file for the final round of full matrix least
:: squares refinement in shelxl. Sure enough, the errors on the bonds and
:: angles shot up. I was more careful

Question: did you change the unit cell dimensions (CELL) or the reported 
standard error in the unit cell dimensions (ZERR)?  If just the latter, isn't 
the error propagation simply a consequence of SHELXL converting from 
fractional to orthogonal coordinates to give you bond lengths and bond angles 
(i.e. the bonds and angles would be numerically the same, but the estimated 
errors associated with them would be higher)?  Did the e.s.d.'s of the actual 
coordinates in fractional space change?
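
For intuition, a minimal error-propagation sketch (an illustration only, not 
SHELXL's full-matrix treatment, which uses the complete covariance matrix and 
handles triclinic cells and their angle esds): an orthorhombic cell, independent 
esds on the fractional coordinate differences and on the cell edges, propagated 
into a bond-length esd. The second call mimics a grossly inflated ZERR.

---
import math

def bond_length_esd(d_frac, cell, s_frac, s_cell):
    """Bond length and esd for an orthorhombic cell (diagonal errors only).

    d_frac : (dx, dy, dz) fractional coordinate differences between two atoms
    cell   : (a, b, c) in Angstrom
    s_frac : esds of (dx, dy, dz)
    s_cell : esds of (a, b, c)  -- the ZERR-like term
    """
    a, b, c = cell
    dx, dy, dz = d_frac
    d = math.sqrt((a * dx) ** 2 + (b * dy) ** 2 + (c * dz) ** 2)
    # partial derivatives of d with respect to (dx, dy, dz) and (a, b, c)
    grads = [a * a * dx / d, b * b * dy / d, c * c * dz / d,
             a * dx * dx / d, b * dy * dy / d, c * dz * dz / d]
    sigmas = list(s_frac) + list(s_cell)
    return d, math.sqrt(sum((g * s) ** 2 for g, s in zip(grads, sigmas)))

# a 1.5 A bond along a in a 50 A cell, coordinate esds ~0.01 A (2e-4 fractional):
# a realistic cell esd (0.005 A) versus a grossly inflated one (0.5 A)
print(bond_length_esd((0.03, 0, 0), (50, 50, 50), (2e-4, 2e-4, 2e-4), (0.005, 0.005, 0.005)))
print(bond_length_esd((0.03, 0, 0), (50, 50, 50), (2e-4, 2e-4, 2e-4), (0.5, 0.5, 0.5)))
---

In the first case the coordinate esds dominate; in the second the cell term 
roughly doubles the bond-length esd while the bond length itself is unchanged, 
which is consistent with the ZERR experiment described below.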

Phil Jeffrey
Princeton

From: CCP4 bulletin board  on behalf of Jeffrey B 
Bonanno 
Sent: Wednesday, July 15, 2020 12:36 PM
To: CCP4BB@JISCMAIL.AC.UK 
Subject: Re: [ccp4bb] Quote source inquiry

Hi Gerard and Bernhard,

As a postdoc in an unnamed small molecule lab, I was instructed by my lab head 
to get better unit cell estimates prior to data collection owing to error 
propagation from the uncertainty on cell dimensions through to the esd on 
atomic bond lengths and angles when refining in shelxl. To verify this (what, 
you believed everything your postdoc advisor told you?), I took a working 
dataset and increased (only) the error on unit cell dimensions in the 
instruction file for the final round of full matrix least squares refinement in 
shelxl. Sure enough, the errors on the bonds and angles shot up. I was more 
careful in determining the unit cell thereafter. That is, until I became a 
macromolecular crystallographer...

After an inciteful (sp? lol) discussion with Wladek about cell dimensions, I 
was directed to read this paper:

Acta Crystallogr D Biol Crystallogr. 2015 Nov 1; 71(Pt 11): 2217–2226.

Have a look, it is interesting.

Having never followed up on these studies to see what happened to bonds and 
angles in proteins and their ligands when varying cell dimensions, I can't say 
with any confidence. However, I would guess that the quality of the refined 
ligand coordinates could only be as good as some combination of factors 
including but not limited to 1) the data (resolution, B factor, etc), 2) the 
actual occupancy of the ligand, and 3) the restraints employed.

jbb

Jeffrey B. Bonanno, Ph.D.
Department of Biochemistry
Albert Einstein College of Medicine
1300 Morris Park Avenue
Bronx, NY 10461
off. 718-430-2452 fax. 718-430-8565
email jeffrey.bona...@einsteinmed.org


-Original Message-
From: CCP4 bulletin board  On Behalf Of Gerard DVD 
Kleywegt
Sent: Wednesday, July 15, 2020 11:49 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Quote source inquiry

Well, I've had this in my CSHL X-ray Course talk for many years.

In the attached 2007 Acta D paper it says (p 95): "Macromolecular X-ray 
crystallography is a notoriously poor method for determining the structure of 
small molecules that are bound to macromolecules [...]" and then goes on to 
explain why this is the case.

In the attached 2003 paper (pooling the wisdom of several of the usual 
suspects, including Eleanor) it says something similar (p 1057):

"Coordinates of molecules that have been determined in complex with 
macromolecules previously can of course also be retrieved from the PDB 
(Bernstein et al., 1977; Berman et al., 2000), HIC-Up (Kleywegt and Jones, 
1998), or CHEMPDB (Boutselakis et al., 2003). However, one should keep in mind 
that these coordinates are the result of refinement against comparatively
low-resolution data where the small molecule constituted only a minute 
fraction of the total scattering matter. This makes these coordinates 
inherently much less accurate than those obtained from the CSD. In addition, 
the coordinates may contain errors due to the use of incorrect restraints.
Hence, such coordinate sets should only be used as a last resort, and only 
after verification that they are reliable. The latter can be facilitated by 
inspection of the electron density for the compound in question, for instance 
at the Uppsala Electron-Density Server (http://fsrv1.bmc.uu.se/eds) (G.J.K.
et al., submitted)."

Happy to be confused with George though!

--Gerard (no, the other one)



On Tue, 14 Jul 2020, Bernhard Rupp wrote:

> Hi Fellows,
>
>
>
> afaicrimps (as far as I can recall in my progressing senility)
> someone once wrote/stated/cursed somewhere that "Macromolecular
> refinement is not a small molecule structure determination method".
>
>
>
> Any citable source - George Sheldrick might be a suspect.
>
>
>
> Thanks & best regards, BR
>
>
>
> --
>
> Bernhard Rupp
>
> hofkristallamt.org
>

Re: [ccp4bb] How many microfocus beamlines are in the world?

2020-06-24 Thread Jeffrey, Philip D.
I'm fairly sure that the 300-ish micron focus on my old (and retired) Rigaku 
RuH3R home system - a perfectly good workhorse - was considered micro-focus by 
precisely nobody.

Phil Jeffrey
Princeton


From: CCP4 bulletin board  on behalf of Gianluca Santoni 

Sent: Wednesday, June 24, 2020 2:06 PM
To: CCP4BB@JISCMAIL.AC.UK 
Subject: Re: [ccp4bb] How many microfocus beamlines are in the world?

Since we commonly consider nanofocus a beamline that can go below 1 micron, I 
would say anything below 1 mm, for the sake of uniformity.



On June 24, 2020 8:02:10 PM GMT+02:00, James Holton  wrote:
Define "micro focus" ?

-James Holton
MAD Scientist

On 6/24/2020 9:18 AM, Murpholino Peligro wrote:
I would like to know: how many MX beamlines are micro-focus?


Thanks.



--
Sent from my Android device with K-9 Mail. Please excuse my brevity.




Re: [ccp4bb] Structure solution - hexapeptide

2018-08-02 Thread Jeffrey, Philip D.
"Very decent" means different things to different people.  Is your Rmerge < 20% 
in the 0.84 Å shell ?  If so that's a small molecule quality data set and 
something like that should solve relatively straightforwardly with e.g. SHELXT. 
 However the classical program would be SHELXD and perhaps a CPU day or three 
(speaking from recent experience with data that did not go quite as far).

If it doesn't solve, then there's probably something interesting about the 
data.  P3x12 is a rare space group in both the protein world and the 
small-molecule world.  I would suggest checking for signs of twinning and 
dropping back to point group 3.

You should not be surprised if your bulk solvent content is almost 
non-existent and you have 5 molecules in the asymmetric unit.

Cheers
Phil Jeffrey
Princeton

From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of Kristof Van 
Hecke [kristofrg.vanhe...@gmail.com]
Sent: Thursday, August 02, 2018 8:53 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] Structure solution - hexapeptide

Dear all,

I’m trying to solve a structure of a (modified) hexapeptide:
- inhouse (very decent) data up to 0.8 Angstrom
- average redundancy = 10
- according to the Matthews coefficient of 1.88 with 34.77% solvent, there 
should be 3 molecules per asymmetric unit
- ‘large’ unit cell of about a=54, b=54, c=12
- SG = P3(1)12 or P3(2)12

As there’s (presumably) only C, H, N and O in the structure, I’m not able to 
solve this via Direct Methods, Charge Flipping, etc.
Trying MR (with Phaser) doesn’t give any results either, as there are hardly any 
homologous models.


Has anyone encountered a similar problem, and could you suggest any possible 
solutions?
(Building in heavy atoms isn’t my first option at the moment.)


Thank you very much

Regards

Kristof




Re: [ccp4bb] Effects of Multiplicity and Fine Phi with Equivalent Count Numbers

2016-11-30 Thread Jeffrey, Philip D.
Jacob,

If you fine slice and everything is then a partial, isn't that *more* sensitive 
to lack of synchronization between the shutter and rotation axis than the 
wide-frame method where there's a larger proportion of fulls that don't 
approach the frame edges (in rotation space) ?  Especially if you're 3D profile 
fitting ?

Is fine slicing more or less beneficial at high resolutions relative to lower 
ones ?

Phil Jeffrey
Princeton

From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of Keller, Jacob 
[kell...@janelia.hhmi.org]
Sent: Wednesday, November 30, 2016 5:44 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Effects of Multiplicity and Fine Phi with Equivalent 
Count Numbers

If the mosaicity is, say, 0.5 deg, and one is measuring 1 deg frames, about 
half the time is spent measuring non-spot background noise under spots in phi, 
which is all lumped into the intensity measurement. Fine slicing reduces this. 
But I am conjecturing that there is also fine-slicing-mediated improvement due 
to averaging out things like shutter jitter, which would also be averaged out 
through plain ol’ multiplicity.
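
As a rough illustration of the first point (a sketch under simplifying 
assumptions: a square rocking profile of fixed width, spot fully contained in 
one frame), the fraction of a frame during which a given spot's footprint 
records only background scales as (frame width - rocking width)/frame width:

---
def background_fraction(frame_width_deg, rocking_width_deg=0.5):
    """Rough fraction of a frame spent recording only background under a spot,
    assuming a square rocking profile narrower than the frame."""
    if frame_width_deg <= rocking_width_deg:
        return 0.0  # fine sliced: the spot is 'live' for essentially the whole frame
    return (frame_width_deg - rocking_width_deg) / frame_width_deg

for dphi in (2.0, 1.0, 0.5, 0.1):
    print(f"{dphi:3.1f} deg frames: {background_fraction(dphi):.0%} background-only")
---

For 1 deg frames and 0.5 deg mosaicity this gives the "about half the time" 
figure above; whether the averaged-out jitter or the background matters more in 
practice is exactly what the proposed experiment would test.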

I guess a third equal-count dataset would be useful as well: one sweep with 
six-fold finer slicing. So it would be:

One sweep, 0.6 deg, 60s
Six sweeps, 0.6 deg, 10s
One sweep, 0.1 deg, 10s

Or something roughly similar. Who will arrange the bets?

JPK


From: Boaz Shaanan [mailto:bshaa...@bgu.ac.il]
Sent: Wednesday, November 30, 2016 5:19 PM
To: Keller, Jacob ; CCP4BB@JISCMAIL.AC.UK
Subject: RE: Effects of Multiplicity and Fine Phi with Equivalent Count Numbers

Hi Jacob,

I may have completely missed your point but as far as my memory goes, the main 
argument in favour of fine slicing has always been reduction of the noise 
arising from incoherent scattering, which in the old days arose from the 
capillary, solvent, air, you name it. The noise reduction in fine slicing is 
achieved by shortening the exposure time per frame. This argument still holds 
today although the sources of incoherent scattering could be different. Of 
course, there are other reasons to go for fine slicing such as long axes and 
others. In any case it's the recommended method these days, and for good 
reasons, isn't it?

  Best regards,

   Boaz

Boaz Shaanan, Ph.D.
Dept. of Life Sciences
Ben-Gurion University of the Negev
Beer-Sheva 84105
Israel

E-mail: bshaa...@bgu.ac.il
Phone: 972-8-647-2220  Skype: boaz.shaanan
Fax:   972-8-647-2992 or 972-8-646-1710




From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of Keller, Jacob 
[kell...@janelia.hhmi.org]
Sent: Wednesday, November 30, 2016 11:37 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] Effects of Multiplicity and Fine Phi with Equivalent Count 
Numbers
Dear Crystallographers,

I am curious whether the observed effects of fine phi slicing might in part or 
in toto be due to simply higher “pseudo-multiplicity.” In other words, under 
normal experimental conditions, does simply increasing the number of 
measurements increase the signal and improve precision, even with the same 
number of total counts in the dataset?

As such, I am looking for a paper which, like Pflugrath’s 1999 paper, compares 
two data sets with equivalent total counts but, in this case, different 
multiplicities. For example, is a single sweep with 0.5 degree 60s exposures 
empirically, in real practice, equivalent statistically to six passes with 0.5 
degree 10s frames? Better? Worse? Our home source has been donated away to 
Connecticut, so I can’t do this experiment myself anymore.

All the best,

Jacob Keller


***
Jacob Pearson Keller, PhD
Research Scientist
HHMI Janelia Research Campus / Looger lab
Phone: (571)209-4000 x3159
Email: kell...@janelia.hhmi.org
***



Re: [ccp4bb] PDB deposition - sequence file

2014-12-30 Thread Jeffrey, Philip D.
Mohamed,

You always list the sequence of what's actually in the crystal, e.g. residues 1-105 
(not what's in the model, nor the sequence of the full-length protein).  
Make sure that if there are any lingering residues from affinity/purification 
tags, they get included in the sequence too.

Phil Jeffrey
Princeton



From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of Mohamed Noor 
[mohamed.n...@staffmail.ul.ie]
Sent: Tuesday, December 30, 2014 3:26 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] PDB deposition - sequence file

Dear all

The protein that was crystallized is only the first 105 residues of a 
230-residue protein. In the structure, I can see density for residues 6-72. For 
deposition, should the whole native/biological sequence be deposited?

Thanks.
Mohamed


Re: [ccp4bb] Rmerge of the last shell is zero

2013-08-14 Thread Jeffrey, Philip D.
Hello Yafang,

The answer lies in the fact that you used HKL2000.  Scalepack has a 
long-standing feature where it reports Rmerge > 100% as zero.  Quite why they do 
that is a mystery, but your Rmerge in the outermost shell is NOT zero - the 
Rmerge for the lower resolution shells shows up as non-zero only because there 
Rmerge < 100%.

That feature is overdue for a fix.

Alternatively export your scaled data with NO MERGE ORIGINAL INDEX and import 
into CCP4 via Pointless and have Scala or Aimless report the correct 
statistics.  Reprocessing the data using XDS or Mosflm will ultimately lead you 
to scaling the data with a program that doesn't have that bug.  If you do this, 
report Rmeas rather than Rmerge, the former being a better measure.
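
For reference, a minimal sketch (not Scala/Aimless code, just the textbook 
formulas) of how Rmerge and the multiplicity-corrected Rmeas are computed from 
unmerged, symmetry-reduced intensities; the toy observations at the bottom are 
made up:

---
from collections import defaultdict
from math import sqrt

def r_merge_and_r_meas(observations):
    """observations: iterable of (hkl, intensity) unmerged, symmetry-reduced data."""
    groups = defaultdict(list)
    for hkl, intensity in observations:
        groups[hkl].append(intensity)
    num_merge = num_meas = denom = 0.0
    for ints in groups.values():
        n = len(ints)
        if n < 2:
            continue  # singly measured reflections contribute to neither statistic
        mean = sum(ints) / n
        dev = sum(abs(i - mean) for i in ints)
        num_merge += dev                          # Rmerge numerator
        num_meas += sqrt(n / (n - 1)) * dev       # Rmeas: multiplicity-corrected
        denom += sum(ints)
    return num_merge / denom, num_meas / denom

# toy data: one well-measured reflection and one weak, poorly agreeing one
obs = [((1, 2, 3), i) for i in (105.0, 95.0, 100.0, 102.0)] + \
      [((2, 0, 0), i) for i in (11.0, 3.0)]
print(r_merge_and_r_meas(obs))
---

Rmeas is always a little larger than Rmerge but, unlike Rmerge, it is not 
artificially flattered by low multiplicity, which is why it is the better 
number to report.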

Phil Jeffrey
Princeton



From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of Yafang Chen 
[yafangche...@gmail.com]
Sent: Wednesday, August 14, 2013 11:32 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Rmerge of the last shell is zero

Dear All,

Here are some more details about the question I asked earlier about Rmerge being 
0 in the last shell. I processed the data using HKL2000. The space group is 
I213. Redundancy is 10.2 (10.3). I/sigma is 34.8 (2.3). Rmerge is 6.5 (0.0). 
Since I/sigmaI is more than 2 in the last shell, I preferred not to cut back 
the resolution any further. But I don't know how to explain the Rmerge in the last 
shell being 0. Besides, I am wondering if these data are publishable (with Rmerge 
being 0 in the last shell). Thank you so much for your help!

Best,
Yafang


On Wed, Aug 14, 2013 at 10:59 AM, Yafang Chen 
<yafangche...@gmail.com> wrote:
Dear All,

I recently processed a dataset, in which I/sigmaI of the last shell is 2.3, 
while Rmerge of the last shell is 0. Does anyone know why the Rmerge is 0? The 
completeness is 100 (100). Thank you so much for your help in advance!

Best,
Yafang

--
Yafang Chen

Graduate Research Assistant
Mesecar Lab
Department of Biological Sciences
Purdue University
Hockmeyer Hall of Structural Biology
240 S. Martin Jischke Drive
West Lafayette, IN 47907





Re: [ccp4bb] mmCIF as working format?

2013-08-07 Thread Jeffrey, Philip D.
Are all the APIs open source ?  I was under the impression that CCP4 had moved 
away from that, which might justifiably reduce interest in any 
limited-availability API.

Phil Jeffrey
Princeton

From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of James Stroud 
[xtald...@gmail.com]
Sent: Wednesday, August 07, 2013 1:51 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] mmCIF as working format?

On Aug 5, 2013, at 4:33 AM, Eugene Krissinel wrote:
 I just hope that one day we all will be discussing a sort of universal API to 
 read/write structural information instead of referencing to raw formats, and 
 routines to query MX data, which would be more appropriate than grep (would 
 many SB students/postdocs use grep these days? but many of them would need to 
 inspect files somehow). This, in essence, is similar to discussing read/write 
 primitives in C/C++/Fortran rather than I/O functions of BIOS and HDD/BUS 
 commands that they drive.

I just want to reinforce this point by quoting it verbatim and also emphasize 
that it was not lost on some of us.

In the long term, the MM structure community should perhaps get its inspiration 
from SQL, which focuses on the scope of the data and the semantics of its 
manipulation, rather than how the data is encoded beneath the surface.
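
A present-day sketch of that idea, assuming the gemmi Python package (not part 
of this thread) is installed and using hypothetical filenames: the same call 
and the same object model regardless of whether the file on disk is PDB or 
mmCIF.

---
import gemmi  # assumption: gemmi is available; the filenames below are hypothetical

for path in ("best_model_ever.pdb", "best_model_ever.cif"):
    structure = gemmi.read_structure(path)         # format detected automatically
    for model in structure:
        for chain in model:
            for residue in chain:
                for atom in residue:
                    atom.b_iso = 30.0              # query/edit without caring about format
    structure.write_pdb("out.pdb")                 # or make_mmcif_document() for mmCIF output
---

The point is the one Eugene makes: the code queries structure semantics, not 
byte layouts.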

James


Re: [ccp4bb] mmCIF as working format?

2013-08-07 Thread Jeffrey, Philip D.
 I.e. programs would look like this

 ---
 GRAB protein FROM FILE best_model_ever.cif;
 SELECT CHAIN A FROM protein AS chA;
 SET chA BFACTORS TO 30.0;
 GRAB data FROM FILE best_data_ever.cif;
 BIND protein TO data;
 REFINE protein USING BUSTER WITH TLS+ANISO;
 DROP protein INTO FILE better_model_yet.cif;
 ---

This brings to mind James Holton's Elves program(s):
http://bl831.als.lbl.gov/~jamesh/elves/

Phil Jeffrey
Princeton


Re: [ccp4bb] mmCIF as working format?

2013-08-07 Thread Jeffrey, Philip D.
 Nat Echols wrote:
 Personally, if I need to change a chain ID, I can use Coot or pdbset or many 
 other tools.  Writing code for
 this should only be necessary if you're processing large numbers of models, 
 or have a spectacularly
 misformatted PDB file.

Problem.  Coot is bad at the chain label aspect.
Create a pdb file containing residues A1-A20 and X101-X120 - non-overlapping 
numbering.
Try to change the chain label of X to A.
I get "WARNING:: CONFLICT: chain id already exists in this molecule".

This is (IMHO) a bizarre feature because this is exactly the sort of thing you 
do when building structures.

Therefore I do one of two things:
1.  Open it in (x)emacs, replace " X " with " A ", and Bob's your uncle.
2.  Start Peek2 - that's my interactive program for doing simple and stupid 
things like this.  I type "read test.pdb" and "chain" and Peek2 prompts me at 
perceived chain breaks (change in chain label, CA-CA breaks, ATOM/HETATM 
transitions &c.) and then "write test.pdb".  Takes less than 10 seconds.  CCP4i 
would probably still be launching, as would Phenix.

The reason I do #1 or #2 is not to be a Luddite, but to do something trivial 
and boring quickly so I can get back to something interesting like building 
structures, or beating subjects to death on CCP4bb.
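
A minimal stand-in for option #1 above, assuming standard fixed-column PDB 
records (the chain identifier lives in column 22); it changes the label without 
touching residue numbers, which is safe here because the numbering does not 
overlap.

---
import sys

def relabel_chain(lines, old="X", new="A"):
    """Rewrite the chain identifier (column 22) on coordinate-style records."""
    records = ("ATOM  ", "HETATM", "TER", "ANISOU")
    out = []
    for line in lines:
        if line.startswith(records) and len(line) > 21 and line[21] == old:
            line = line[:21] + new + line[22:]
        out.append(line)
    return out

if __name__ == "__main__":                        # usage: python relabel.py test.pdb X A
    pdb, old, new = sys.argv[1:4]
    with open(pdb) as handle:
        sys.stdout.writelines(relabel_chain(handle.readlines(), old, new))
---

It will happily create duplicate chain/residue combinations if the numbering 
does overlap, which is precisely the check Coot is (over-)enforcing.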

What's lacking is an interactive (or just plain fast, in any guise) way 
of doing simple PDB manipulations that we do tons of times when building 
protein structures.  I've used Peek2 thousands of times for this purpose, which 
is the only reason it still exists, because it's a fairly stupid program.  A 
truly interactive version of PDBSET would be splendid.  But, again, it always 
runs in batch mode.

mmCIF looked promising, apropos emacs, when I looked at the spec page at:
http://www.iucr.org/__data/iucr/cifdic_html/2/cif_mm.dic/Catom_site.html
because that ATOM data is column-formatted.  Cool.  However looking at 6LYZ.cif 
from RCSB's site revealed that the XYZ's were LEFT-justified: 
http://www.rcsb.org/pdb/files/6LYZ.cif
which makes me recoil in horror and resolve to use PDB format until someone 
puts a gun to my head.

Really, guys, if you can put multiple successive spaces to the RIGHT of the 
number, why didn't you put them to the LEFT of it instead ?  Same parsing, 
better readability.
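
The cosmetic fix being asked for is trivial if you are willing to re-pad the 
loop rows yourself. A throwaway sketch (the data lines below are made up and 
follow the usual atom_site column order only loosely, and a naive split would 
break on quoted mmCIF strings):

---
def right_justify(rows):
    """rows: tokenised data lines from one loop_; pad every field to its column width."""
    widths = [max(len(token) for token in column) for column in zip(*rows)]
    return [" ".join(token.rjust(width) for token, width in zip(row, widths))
            for row in rows]

raw = ["ATOM 1 N N . LYS A 1 1 ? 3.287 10.092 10.329 1.00 7.94",
       "ATOM 2 C CA . LYS A 1 1 ? 2.445 10.457 9.182 1.00 7.43"]
for line in right_justify([line.split() for line in raw]):
    print(line)
---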

Phil Jeffrey
Princeton
(using the vernacular but deathly serious about protein structure)








Re: [ccp4bb] list/library of most commonly co-crystallized ligands/solvents and/or their electron density shapes

2013-05-21 Thread Jeffrey, Philip D.
Top 20 HETNAM entries based on 58,469 PDB entries at better than 2.5 Angstrom 
resolution (arbitrary cut):

Number of entries in histogram: 14864
Total number of instances : 195481

   0 14502 0.0742 GOL (glycerol)
   1 10952 0.0560 SO4
   2  8064 0.0413  ZN
   3  7628 0.0390  MG
   4  6930 0.0355 MSE (SeMet)
   5  6685 0.0342  CA
   6  6555 0.0335 EDO (ethylene glycol)
   7  6315 0.0323  CL
   8  5856 0.0300 HEM
   9  3922 0.0201  NA
  10  3647 0.0187 NAG
  11  3148 0.0161 PO4
  12  2360 0.0121 ACT (acetate)
  13  1874 0.0096  MN
  14  1561 0.0080 NAP
  15  1387 0.0071   K
  16  1338 0.0068 FAD
  17  1277 0.0065 PLP (pyridoxal-5'-phosphate)
  18  1228 0.0063 TRS (Tris buffer)
  19  1205 0.0062 FMN

(numeric columns are ranking; count; frequency)
No electron density, sorry.
Clearly I should be adding more glycerols.
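
Roughly how a table like this can be regenerated (a sketch under obvious 
assumptions: a local directory of PDB-format files at a hypothetical path, 
waters excluded, each het code counted once per entry, and the resolution 
filter left out):

---
import glob
from collections import Counter

counts = Counter()
for path in glob.glob("local_pdb/*.pdb"):         # hypothetical local mirror
    codes = set()
    with open(path) as handle:
        for line in handle:
            if line.startswith("HETATM"):
                codes.add(line[17:20].strip())    # residue name, columns 18-20
    codes.discard("HOH")                          # skip ordered waters
    counts.update(codes)                          # one count per entry, not per atom

total = sum(counts.values())
for rank, (code, n) in enumerate(counts.most_common(20)):
    print(f"{rank:4d} {n:6d} {n / total:6.4f} {code:>3s}")
---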

Phil Jeffrey
Princeton

From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of 孙庆祥 
[baby_ten...@163.com]
Sent: Tuesday, May 21, 2013 3:29 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] list/library of most commonly co-crystallized 
ligands/solvents and/or their electron density shapes

hi all,

Sorry if this has been asked before. I wonder if there is a list or library of 
the most commonly co-crystallized ligands (or solvent molecules) available? Better 
if the electron density maps of the ligands are also shown at different 
resolutions. That could help a lot for an inexperienced crystallographer (like 
me) to quickly identify extra electron density in a new structure, by simply 
comparing the electron density shapes.

I remember a few days ago somebody asked about a PEG electron density, which 
looks like a string of beads. If I had known that earlier, I could have modeled 
it in instead of waters...

Thanks,
Jeremy




Re: [ccp4bb] Unusually low B factors with phenix

2012-11-02 Thread Jeffrey, Philip D.
That sounds like the bug in Phenix.refine v1.8 that a few of us encountered - 
updating to the latest release will help.

Actually it wasn't so much a bug as a feature, albeit not the best one 
imaginable.  Anyone using older v1.8 versions should update.

---
Phil Jeffrey
Princeton

On Nov 2, 2012, at 6:41 AM, Demetres D. Leonidas <ddleoni...@bio.uth.gr> 
wrote:

 Dear Tim,
 
 this affects all atoms and yes I did reset the B-factors to 20.00 prior to 
 refinement. I have not tried REFMAC but now I will give it a try since phenix 
 does not seem to do the job.
 
 best
 
 Demetres
 
 
 On 2/11/2012 1:34 PM, Tim Gruene wrote:
 
 Dear Demetres,
 
 does this affect all atoms, or only a few selected ones?
 
 Did you compare with refmac5 (for e.g. input script errors), or did
 you reset the B-factors to a reasonable value (e.g. 20-30) prior to
 refinement? pdbset can do this conveniently.
 
 Best,
 Tim
 
 On 11/02/2012 11:06 AM, Demetres D. Leonidas wrote:
 Hello,
 
 I am experiencing a weird problem with B factor refinement
 (individual, isotropic) in phenix.refine (1.8.1.-1168) and  a
 structure at 1.9 A resolution. The ADP values after the refinement
 are very very low, less than 2 and sometimes 0. I am getting the
 same result with and without optimization of the X-ray/ADP weight.
 Has anyone else noticed that and is there a workaround ?
 
 Demetres
 
 -- Dr Tim Gruene
 Institut fuer anorganische Chemie
 Tammannstr. 4
 D-37077 Goettingen
 
 GPG Key ID = A46BEE1A
 
 -- 
 ---
 Dr. Demetres D. Leonidas
 Associate Professor of Biochemistry
 Department of Biochemistry & Biotechnology
 University of Thessaly
 26 Ploutonos Str.
 41221 Larissa, Greece
 -
 Tel. +302410 565278
 Tel. +302410 565297 (Lab)
 Fax. +302410 565290
 E-mail: ddleoni...@bio.uth.gr
 http://www.bio.uth.gr
 ---


Re: [ccp4bb] SCALA keywords for merging Scalepack (no merge original index) data ?

2012-07-04 Thread Jeffrey, Philip D.
Hi Tim,

While I use Scalepack2mtz (not from the gui) all the time, Scalepack has the 
unfortunate feature that once Rsym > 1.0 it gets reported as 0.0.  Now that I'm 
exploring higher resolution limits along the lines of the recent Karplus and 
Diederichs paper (Science Vol. 336, pp. 1030-33 (2012)) I actually want a real 
number for this.

Additionally:
Scala reports Rmeas and Rpim but Scalepack does not.
Scala reports CC_IMEAN, which I see from the Materials and Methods of the Karplus & 
Diederichs paper is the same as CC1/2.

Since I do intend to exploit data for which Rsym > 1, at least for this 
quite redundant F432 dataset I'm working with, I think these extra statistics 
are very useful.
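
For anyone wanting the CC1/2 number without rescaling, a minimal sketch of the 
Karplus & Diederichs statistic computed from unmerged intensities: randomly 
split each reflection's observations into two half-datasets and take the 
Pearson correlation of the half-dataset means (an illustration, not the 
Aimless/XDS implementation).

---
import random
from collections import defaultdict

def cc_half(observations, seed=0):
    """CC1/2 from an iterable of (hkl, intensity) unmerged observations."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for hkl, intensity in observations:
        groups[hkl].append(intensity)
    x, y = [], []
    for ints in groups.values():
        if len(ints) < 2:
            continue                      # need at least one observation per half
        rng.shuffle(ints)
        half = len(ints) // 2
        x.append(sum(ints[:half]) / half)
        y.append(sum(ints[half:]) / (len(ints) - half))
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5   # Pearson correlation of half-set means
---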

Thanks
Phil


From: Tim Gruene [t...@shelx.uni-ac.gwdg.de]

Hi Phil,

why do you want to use scala to merge the data?