Dear Juergen,

On Mon, Oct 27, 2008 at 11:37:15AM -0700, Juergen Bosch wrote:
> we've been comparing various programs during our SGPP times (that was
> about 2 years ago): Sharp, Shelx, bp3, mlphare, SnB (BnP). If I remember
> correctly, at that time Sharp was the winner, although site finding was
> the major bottleneck/problem. What we ended up doing was finding the
> sites (SeMet) with an external program (most of the time Shelx) and
> feeding them into Sharp.

I can easily imagine that: the SHELXC/D interface we have in autoSHARP
tries its best at finding the sites - but it can never be as good as
doing it by hand, especially when the number of sites you're looking
for isn't quite clear (high NCS, soaks etc.). But as you said: finding
the sites with an external program, putting them into a so-called
*.hatom file and giving autoSHARP those sites as a starting point is a
very good approach.
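Just to illustrate the conversion involved, here is a rough Python
sketch that pulls the sites out of a SHELXD .res file and writes them
as PDB-style HETATM records. It assumes the usual SHELX atom-line
layout (name, scattering-factor number, fractional x y z, occupancy);
whether PDB-style records are what your autoSHARP version expects in
the *.hatom file is something to check in its documentation, and the
B-factor below is just a placeholder:

  #!/usr/bin/env python
  # Sketch: SHELXD .res sites -> PDB-style HETATM records.
  import math, sys

  def orthogonalise(cell, frac):
      # fractional -> orthogonal Angstrom coordinates (PDB convention)
      a, b, c, al, be, ga = cell
      al, be, ga = (math.radians(x) for x in (al, be, ga))
      ca, cb, cg, sg = math.cos(al), math.cos(be), math.cos(ga), math.sin(ga)
      v = math.sqrt(1.0 - ca*ca - cb*cb - cg*cg + 2.0*ca*cb*cg)
      xf, yf, zf = frac
      return (a*xf + b*cg*yf + c*cb*zf,
              b*sg*yf + c*(ca - cb*cg)/sg*zf,
              c*v/sg*zf)

  keywords = set('TITL CELL ZERR LATT SYMM SFAC UNIT REM HKLF END'.split())
  cell, sites = None, []
  for line in open(sys.argv[1]):                 # the SHELXD .res file
      f = line.split()
      if not f or f[0].upper() in keywords:
          if f and f[0].upper() == 'CELL':       # CELL lambda a b c al be ga
              cell = [float(x) for x in f[2:8]]
          continue
      if cell and len(f) >= 6:                   # name sfac x y z occ [U]
          sites.append((f[0], [float(x) for x in f[2:5]], float(f[5])))

  for i, (name, frac, occ) in enumerate(sites, 1):
      x, y, z = orthogonalise(cell, frac)
      print("HETATM%5d %-4s %3s %1s%4d    %8.3f%8.3f%8.3f%6.2f%6.2f" %
            (i, name[:4], "SE", "A", i, x, y, z, min(occ, 1.0), 20.0))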

> On a side note, we also looked at HKL, Mosflm and XDS for processing
> the data: we observed differences in the ability of e.g. Shelx to
> locate SeMet sites depending on the processing program used. Of course
> with perfect data the results were indistinguishable, but some datasets
> that were trickier to process showed significant differences in
> locating the SeMet positions. In those cases XDS was better. This is of
> course running all programs with their default parameters and not
> tweaking them.

I've done that a few times as well (with d*TREK in the mix but not
HKL) - unfortunately without a clear winner (or maybe that is a good
thing, showing healthy competition?). I have the impression that the
tricky bit when comparing is to look separately at the different
features of those programs (or packages):

 a) how well do they refine parameters like mosaicity, distance,
    missetting angles etc. throughout integration?

    For problematic crystals there seem to be big differences
    between packages.

 b) how well do they integrate the actual spots (which depends
    crucially on step a)?

 c) how well does the scaling program perform? And how easy is it for
    a user to pick the correct scaling protocol, especially when
    dealing with heavy-atom signal?

I tend to use SCALA with any of those programs (you can convert XDS
data with POINTLESS and d*TREK data with DREK2SCALA): mainly because
it's familiar and I understand the output (I hope), but also because
using the same scaling program makes the statistics directly
comparable.
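
For what it's worth, a small sketch of that route (the file names and
the SCALA protocol keywords are just example choices, not a
recommendation - adapt them to your data):

  #!/usr/bin/env python
  # Sketch: scale XDS output with SCALA (via POINTLESS) so the merging
  # statistics stay comparable across integration packages.
  import subprocess

  # POINTLESS reads XDS_ASCII.HKL directly and writes an unmerged MTZ
  subprocess.run(["pointless", "XDSIN", "XDS_ASCII.HKL",
                  "HKLOUT", "unmerged.mtz"], check=True)

  # SCALA takes its keywords on stdin; 'anomalous on' keeps Friedel
  # pairs separate, which matters when judging heavy-atom signal
  keywords = ("run 1 all\n"
              "scales rotation spacing 5 secondary 6\n"
              "anomalous on\n")
  subprocess.run(["scala", "HKLIN", "unmerged.mtz",
                  "HKLOUT", "scaled.mtz"],
                 input=keywords, text=True, check=True)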

In general, finding heavy-atom sites requires much better data than
the phasing step does. And since we usually work with E-values during
site detection, outliers can be very nasty indeed - that's why I find
each integration package's ability to mask the beamstop correctly (to
avoid low-resolution outliers) very important (something where d*TREK
is great, MOSFLM very good and XDS a bit of a problem).
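
To make the E-value point concrete, here is a toy numpy sketch
(simplified: equal-population resolution shells, and the epsilon
factors for special reflection classes are ignored) showing how a
single rogue low-resolution amplitude turns into an extreme E-value:

  import numpy as np

  def e_values(f, shells):
      # E^2(h) = F^2(h) / <F^2>_shell, i.e. amplitudes normalised so
      # that <E^2> = 1 within each resolution shell
      e2 = np.empty_like(f)
      for i in np.unique(shells):
          sel = shells == i
          e2[sel] = f[sel]**2 / np.mean(f[sel]**2)
      return np.sqrt(e2)

  rng = np.random.default_rng(0)
  f = rng.rayleigh(10.0, 1000)               # Wilson-like amplitudes
  shells = np.repeat(np.arange(10), 100)     # 10 shells of 100 reflections
  f[0] *= 50                                 # one bad low-resolution spot,
                                             # e.g. behind the beamstop
  print(e_values(f, shells)[0])              # conspicuously large E

Such a spot sails through the site search looking as if it carried
enormous signal, which is exactly what you don't want.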

A nice task would be to compare the different integration/scaling
packages at various stages: for finding the sites (e.g. in SHELXD)
and separately, using known sites, for giving the best phases (e.g.
in SHARP). There could well be differences.

Cheers

Clemens

-- 

***************************************************************
* Clemens Vonrhein, Ph.D.     vonrhein AT GlobalPhasing DOT com
*
*  Global Phasing Ltd.
*  Sheraton House, Castle Park 
*  Cambridge CB3 0AX, UK
*--------------------------------------------------------------
* BUSTER Development Group      (http://www.globalphasing.com)
***************************************************************
