[ccp4bb] control of nucleation

2010-05-06 Thread zq deng
hello, everybody. Due to excess nucleation, I often get many tiny crystals
instead of a few large crystals. I want to optimize the condition - does anyone
have advice about this?

Best regards.


Re: [ccp4bb] control of nucleation

2010-05-06 Thread Tim Gruene
Dear zq deng,

The standard method, I dare say, in such a case is micro seeding:

reduce the precipitant to a concentration at which you just no longer get
crystals, wait until the drop equilibrates (about one day in case your
precipitant is a salt, up to 5-7 days for high-MW PEGs),

then use a cat's whisker for micro seeding into that drop.

Well, that's micro seeding in a nutshell.

Tim

On Thu, May 06, 2010 at 04:03:46PM +0800, zq deng wrote:
 hello, everybody. Due to excess nucleation, I often get many tiny crystals
 instead of a few large crystals. I want to optimize the condition - does anyone
 have advice about this?
 
 Best regards.

-- 
--
Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A





Re: [ccp4bb] control of nucleation

2010-05-06 Thread Thomas Edwards
Dear Zq,

A few ideas:

1) Vary protein concentration, temperature, or protein : mother liquor ratio.
2) Try dioxane - it is supposed to reduce nucleation.
3) Give your protein a good hard spin before you set up drops to remove
aggregates.
4) Try a seeded factorial screen.
5) Re-purify by gel filtration?

Ed

__
T.Edwards Ph.D.
Garstang 8.53d
Astbury Centre for Structural Molecular Biology
University of Leeds, Leeds, LS2 9JT
Telephone: 0113 343 3031
http://www.bmb.leeds.ac.uk/staff/tae/
-- Nature composes some of her loveliest poems for the microscope and the 
telescope.  ~Theodore Roszak




From: zq deng dengzq1...@gmail.com
Reply-To: zq deng dengzq1...@gmail.com
Date: Thu, 6 May 2010 09:03:46 +0100
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] control of nucleation

hello, everybody. Due to excess nucleation, I often get many tiny crystals
instead of a few large crystals. I want to optimize the condition - does anyone have
advice about this?

Best regards.


Re: [ccp4bb] software to represent raw reflections in hkl zones

2010-05-06 Thread Nicholas M Glykos
 ... only in the [0kl] plane. ...

I'm sure you've already checked, but if during data collection the [0kl] 
axis was nearly perpendicular to the rotation axis, then you may only have 
to superimpose (with ipdisp) a few suitably selected images to obtain a 
small (low resolution) portion of what you are after.



-- 


  Dr Nicholas M. Glykos, Department of Molecular Biology
 and Genetics, Democritus University of Thrace, University Campus,
  Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (office) +302551030620,
Ext.77620, Tel (lab) +302551030615, http://utopia.duth.gr/~glykos/


Re: [ccp4bb] control of nucleation

2010-05-06 Thread Enrico Stura



Dear Zq & CCP4BB readers,

The precipitant is the main component that affects nucleation. In specific cases other
factors can be used to modulate nucleation, as mentioned before by others: protein
concentration, temperature, drop size, initial protein/precipitant ratio, etc. All good
components of a very long list that will give a student years of work ahead.

Just to take the first item: "Protein concentration".
Most will think that the protein concentration should be reduced to reduce nucleation.
Unfortunately, what will happen when the protein concentration is reduced is not so
easily predictable. Let's do the opposite! Just for fun, let's increase the protein
concentration instead. We will increase the protein concentration while severely
reducing the precipitant concentration. The 15-minute lysozyme crystallization is a
good example of this.

Enrico's 15-minute lysozyme recipe:
Lysozyme concentrations of 100-150 mg/ml are used. The high protein concentration
allows crystals to grow rapidly, so each nucleus has a chance to grow before more
nuclei are formed. This is because each growing nucleus ends up depleting its local
environment and making the nucleation of others nearby less likely. Nucleation
requires a higher degree of supersaturation than crystal growth. Getting it to work
and controlling it requires accuracy, but it is great fun to do ... and lysozyme is
cheap. HOT STUFF crystallization is! In the case of lysozyme: do it in a hot room for
better control.

This ambiguity persists for other items. Thomas Edwards suggests that "dioxane - it
is supposed to reduce nucleation". "Supposed to" - when it does not do the opposite:
Ménétrey, J., Perderiset, M., Cicolari, J., Houdusse, A. & Stura, E.A. (2007)
Improving Diffraction from 3 to 2 Å for a Complex between a Small GTPase and Its
Effector by Analysis of Crystal Contacts and Use of Reverse Screening. Cryst. Growth
Des. 7:2140-2146.
This is the one additive that really increases nucleation in the above paper.

The procedures in "reverse screening" are used to identify what each proposed
effector really does, and to use that to achieve better crystals. As screening is
done with smaller and smaller drops, the conditions that will emerge more often are
those where the nucleation rate is very high. Nanodrops will yield "overnucleation".

To conclude, dear Zq: listen to all the advice, but unless you really understand
your protein you will find that you achieve the opposite of what you are trying to do.

Enrico.

On Thu, 06 May 2010 10:10:37 +0200, Thomas Edwards t.a.edwa...@leeds.ac.uk wrote:
 Dear Zq,
 A few ideas: ...

-- 
Enrico A. Stura D.Phil. (Oxon), Tel: 33 (0)1 69 08 4302 Office
Room 19, Bat.152, Tel: 33 (0)1 69 08 9449 Lab
LTMB, SIMOPRO, IBiTec-S, CE Saclay, 91191 Gif-sur-Yvette, FRANCE
http://www-dsv.cea.fr/en/institutes/institute-of-biology-and-technology-saclay-ibitec-s/unites-de-recherche/department-of-molecular-engineering-of-proteins-simopro/molecular-toxinology-and-biotechnology-laboratory-ltmb/crystallogenesis-e.-stura
http://www.chem.gla.ac.uk/protein/mirror/stura/index2.html
e-mail: est...@cea.fr Fax: 33 (0)1 69 08 90 71

[ccp4bb] Processing compressed diffraction images?

2010-05-06 Thread Ian Tickle
All -

No doubt this topic has come up before on the BB: I'd like to ask
about the current capabilities of the various integration programs (in
practice we use only MOSFLM & XDS) for reading compressed diffraction
images from synchrotrons.  AFAICS XDS has limited support for reading
compressed images (TIFF format from the MARCCD detector and CCP4
compressed format from the Oxford Diffraction CCD); MOSFLM doesn't
seem to support reading compressed images at all (I'm sure Harry will
correct me if I'm wrong about this!).  I'm really thinking about
gzipped files here: bzip2 no doubt gives marginally smaller files but
is very slow.  Currently we bring back uncompressed images but it
seems to me that this is not the most efficient way of doing things -
or is it just that my expectation that it's more efficient to read
compressed images and uncompress in memory is not realised in practice?
For example the AstexViewer molecular viewer software currently reads
gzipped CCP4 maps directly and gunzips them in memory; this improves
the response time by a modest factor of ~ 1.5, but this is because
electron density maps are 'dense' from a compression point of view;
X-ray diffraction images tend to have much more 'empty space' and the
compression factor is usually considerably higher (as much as
10-fold).
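
(A rough way to check the compression factor and relative speeds on one's own
frames - a minimal sketch; "frame_0001.img" is just a placeholder for any
uncompressed diffraction image:)

    # compare size and speed of gzip vs bzip2 on a single frame
    ls -l frame_0001.img
    time gzip  -9 -c frame_0001.img > frame_0001.img.gz
    time bzip2 -9 -c frame_0001.img > frame_0001.img.bz2
    ls -l frame_0001.img.gz frame_0001.img.bz2   # compare against the original size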

On a recent trip we collected more data than we anticipated & the
uncompressed data no longer fitted on our USB disk (the data is backed
up to the USB disk as it's collected), so we would have definitely
benefited from compression!  However file size is *not* the issue:
disk space is cheap after all.  My point is that compressed images
surely require much less disk I/O to read.  In this respect bringing
back compressed images and then uncompressing back to a local disk
completely defeats the object of compression - you actually more than
double the I/O instead of reducing it!  We see this when we try to
process the ~150 datasets that we bring back on our PC cluster and the
disk I/O completely cripples the disk server machine (and everyone
who's trying to use it at the same time!) unless we're careful to
limit the number of simultaneous jobs.  When we routinely start to use
the Pilatus detector on the beamlines this is going to be even more of
an issue.  Basically we have plenty of processing power from the
cluster: the disk I/O is the bottleneck.  Now you could argue that we
should spread the load over more disks or maybe spend more on faster
disk controllers, but the whole point about disks is they're cheap, we
don't need the extra I/O bandwidth for anything else, and you
shouldn't need to spend a fortune, particularly if there are ways of
making the software more efficient, which after all will benefit
everyone.

Cheers

-- Ian


Re: [ccp4bb] Processing compressed diffraction images?

2010-05-06 Thread Jim Pflugrath
d*TREK will process compressed images with the following extensions: .gz
.bz2 .Z .pck and .cbf
 

-Original Message-
From: CCP4 bulletin board [mailto:ccp...@jiscmail.ac.uk] On Behalf Of Ian
Tickle
Sent: Thursday, May 06, 2010 6:25 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] Processing compressed diffraction images?

All -

No doubt this topic has come up before on the BB: I'd like to ask about the
current capabilities of the various integration programs (in practice we use
only MOSFLM  XDS) for reading compressed diffraction images from
synchrotrons.  A...

cluster: the disk I/O is the bottleneck.  Now you could argue that we should
spread the load over more disks or maybe spend more on faster disk
controllers, but the whole point about disks is they're cheap, we don't need
the extra I/O bandwidth for anything else, and you shouldn't need to spend a
fortune, particularly if there are ways of making the software more
efficient, which after all will benefit everyone.

Cheers

-- Ian


Re: [ccp4bb] MacBookPro problems

2010-05-06 Thread Joachim Reichelt




Hi all,

the problem connecting the Zalman monitor to newer Macs etc. is this:
From Zalman (j...@zalman.co.kr) I got this mail:

 I can provide the fix - a custom DVI cable specifically for the ZM-M220W.
 It resolves the issue of the swapped pins 15 & 16 of the ZM-M220W, which is
 not an issue with older graphics cards/GPUs (I in fact use the DVI connection
 with my office PC that has an Nvidia GeForce 7600 card), but is an issue with
 newer / notebook GPUs.

 Basically, newer GPUs seem to have acquired a function that actively sends
 and receives a signal to and from the monitor to check if a connection is
 made, and if it does not get the return signal it is looking for, the GPU's
 output shuts down. With the ZM-M220W's swapped pins 15 & 16, the newer GPUs
 don't get the return signal, think that no monitor is connected, and shut
 the output off. On older GPUs this function is not common, so the GPU
 outputs constantly without regard for pins 15 & 16.
...
Jihoon

Jihoon Jo
Manager, 3D Business Development

Zalman Tech Co., Ltd.
#1007 Daeryung Techno Town III
448 Gasan-dong, Gumchun-gu
Seoul, 153-803, Korea
Tel : +82-2-2107-3109
Fax: +82-2-2107-3322


So I'll get a new DVI cable.
-- 
Joachim





Re: [ccp4bb] Processing compressed diffraction images?

2010-05-06 Thread Tim Gruene
Entering "xds gzip" at www.ixquick.com came up with
http://www.mpimf-heidelberg.mpg.de/~kabsch/xds/html_doc/xds_parameters.html:

"To save space it is allowed to compress the images by using the UNIX compress,
gzip, or bzip2 routines. On data processing XDS will automatically recognize and
expand the compressed image files. The file name extensions (.Z, .z, .gz, .bz2)
due to the compression routines should not be included in the generic file name
template."
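
A minimal sketch of that convention (the path and template name here are
hypothetical): the frames are gzipped on disk, but the XDS.INP template keeps
the plain name.

    # compress the frames in place; XDS expands them on the fly when processing
    gzip /data/lyso/frames/lyso_1_????.img
    # in XDS.INP the template is given WITHOUT the .gz extension, e.g.
    #   NAME_TEMPLATE_OF_DATA_FRAMES= /data/lyso/frames/lyso_1_????.img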

I seem to remember that mosflm also supports gzipped images, but didn't find a
reference within 2 minutes.

I'm surprised to hear that you get such a high compression rate with mccd
images. 

Cheers, Tim


On Thu, May 06, 2010 at 12:24:47PM +0100, Ian Tickle wrote:
 All -
 
 No doubt this topic has come up before on the BB: I'd like to ask
 about the current capabilities of the various integration programs ...

-- 
--
Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A





Re: [ccp4bb] Processing compressed diffraction images?

2010-05-06 Thread Ian Tickle
Jim, thanks for the info.  At present we use d*TREK mostly only for
in-house data (Saturn, Jupiter & R-axis), so the data collection rate
is much lower, and in any case we would gain nothing by compressing
them since the I/O is the same whether it's gzip reading in the images
or d*TREK.  Our problem is that our people bring back a large number of
datasets (> 150) from each synchrotron trip, dump them all on the file
server and then try to process them all at the same time!

Cheers

-- Ian

On Thu, May 6, 2010 at 12:38 PM, Jim Pflugrath jim.pflugr...@rigaku.com wrote:
 d*TREK will process compressed images with the following extensions: .gz
 .bz2 .Z .pck and .cbf


 -Original Message-
 From: CCP4 bulletin board [mailto:ccp...@jiscmail.ac.uk] On Behalf Of Ian
 Tickle
 Sent: Thursday, May 06, 2010 6:25 AM
 To: CCP4BB@JISCMAIL.AC.UK
 Subject: [ccp4bb] Processing compressed diffraction images?

 All -

 No doubt this topic has come up before on the BB: I'd like to ask about the
 current capabilities of the various integration programs (in practice we use
 only MOSFLM  XDS) for reading compressed diffraction images from
 synchrotrons.  A...

 cluster: the disk I/O is the bottleneck.  Now you could argue that we should
 spread the load over more disks or maybe spend more on faster disk
 controllers, but the whole point about disks is they're cheap, we don't need
 the extra I/O bandwidth for anything else, and you shouldn't need to spend a
 fortune, particularly if there are ways of making the software more
 efficient, which after all will benefit everyone.

 Cheers

 -- Ian




Re: [ccp4bb] Processing compressed diffraction images?

2010-05-06 Thread Harry Powell
Hi Ian

I've looked briefly at implementing gunzip in Mosflm in the past, but never 
really pursued it. It could probably be done when I have some free time, but 
who knows when that will be? gzip'ing one of my standard test sets gives around 
a 40-50% reduction in size, bzip2 ~60-70%. The speed of doing the compression 
is important too, and compressing is considerably slower than uncompressing (since 
with uncompressing you know where you are going and have the instructions, whereas 
with compressing you have to work it all out as you proceed).

There are several ways of writing compressed images that (I believe) all the 
major processing packages have implemented - for example, Jan Pieter Abrahams 
has one which has been used for Mar images for a long time, and CBF has more 
than one. There are very good reasons for all detectors to write their images 
using CBFs with some kind of compression (I think that all new MX detectors at 
Diamond, for example, are required to be able to). 

Pilatus images are written using a fast compressor and read (in Mosflm and XDS, 
anyway - I have no idea about d*Trek or HKL, but imagine they would do the job 
every bit as well) using a fast decompressor - so this goes some way towards 
dealing with that particular problem - the image files aren't as big as you'd 
expect from their physical size and 20-bit dynamic range (from the 6M they're 
roughly 6MB, rather than 6MB * 2.5). So that seems about as good as you'd get 
from bzip2 anyway.

I'd be somewhat surprised to see a non-lossy fast algorithm that could give you 
10-fold compression with normal MX type images - the empty space between 
Bragg maxima is full of detail (noise, diffuse scatter). If you had a truly 
flat background you could get much better compression, of course. 

On 6 May 2010, at 11:24, Ian Tickle wrote:

 All -
 
 No doubt this topic has come up before on the BB: I'd like to ask
 about the current capabilities of the various integration programs ...

Harry
--
Dr Harry Powell, MRC Laboratory of Molecular Biology, MRC Centre, Hills Road, 
Cambridge, CB2 0QH


Re: [ccp4bb] Processing compressed diffraction images?

2010-05-06 Thread Ian Tickle
Hi Tim, thanks for that - sorry, yes, I missed that page.  But I'm still
not clear: is it uncompressing to disk or is it doing it in memory?  I
assume the latter: if the former then obviously nothing is gained.
You're right about the compression factor, it's more like a factor of
2 or 3, I should have looked at the image in question as the one I
picked had no spots!

Cheers

-- Ian

On Thu, May 6, 2010 at 12:54 PM, Tim Gruene t...@shelx.uni-ac.gwdg.de wrote:
 Entering xds gzip at www.ixquick.com came up with
 http://www.mpimf-heidelberg.mpg.de/~kabsch/xds/html_doc/xds_parameters.html:
 ...




Re: [ccp4bb] Processing compressed diffraction images?

2010-05-06 Thread Ian Tickle
Hi Harry

Thanks for the info.  Speed of compression is not an issue I think,
since compression & backing up of the images are done asynchronously
with data collection, and currently backing up easily keeps up, so I
think compression straight to the backup disk would too.  As you saw
from my reply to Tim, my compression factor of 10 was a bit optimistic;
for images with spots on them (!) it's more like 2 or 3 with gzip, as
you say.
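
(A minimal sketch of compressing straight to the backup disk as frames appear -
the directory names are hypothetical and real beamline backup scripts will differ:)

    # gzip each new frame directly onto the backup disk, skipping ones already done
    for img in /data/collect/*.img; do
        out="/media/usbdisk/$(basename "$img").gz"
        [ -e "$out" ] || gzip -c "$img" > "$out"
    done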

I found an old e-mail from James Holton where he suggested lossy
compression for diffraction images (as long as it didn't change the
F's significantly!) - I'm not sure whether anything came of that!

Cheers

-- Ian

On Thu, May 6, 2010 at 2:04 PM, Harry Powell ha...@mrc-lmb.cam.ac.uk wrote:
 Hi Ian
 
 I've looked briefly at implementing gunzip in Mosflm in the past, but never
 really pursued it. ...

Re: [ccp4bb] control of nucleation

2010-05-06 Thread Mark Brooks
Dear Zq Deng,
I've had success with the dilution method as described
by Dunlop and Hazes: http://scripts.iucr.org/cgi-bin/paper?en5016

For me it worked for a couple of projects, and it gives you a bit more to
permute than just varying the protein:mother liquor ratios, as Thomas
Edwards helpfully suggested.

Good luck,

Mark

On 6 May 2010 09:03, zq deng dengzq1...@gmail.com wrote:

 hello, everybody. Due to excess nucleation, I often get many tiny crystals
 instead of a few large crystals. I want to optimize the condition - does anyone
 have advice about this?

 Best regards.




-- 
Skype: markabrooks


Re: [ccp4bb] Processing compressed diffraction images?

2010-05-06 Thread Phil Evans
Compression methods such as gzip are unlikely to be optimal for diffraction 
images, and AFAIK the methods in CBF are better (I think Jim Pflugrath did some 
speed comparisons a long time ago, and I guess others have too). There is no reason 
for data acquisition software ever to write uncompressed images (let alone having 
57 different ways of doing it)

Phil

On 6 May 2010, at 13:38, Ian Tickle wrote:

 Hi Harry
 
 Thanks for the info.  Speed of compression is not an issue I think ...

[ccp4bb] Fwd: [ccp4bb] control of nucleation

2010-05-06 Thread Charles W. Carter, Jr
I mistakenly sent this off to Enrico, rather than to the CCP4BB. Apologies to 
Enrico.

Begin forwarded message:

 From: Charles W. Carter, Jr car...@med.unc.edu
 Date: May 6, 2010 6:50:23 AM EDT
 To: est...@cea.fr
 Subject: Re: [ccp4bb] control of nucleation
 
 In fact, there is quite good experimental evidence that the most important 
 parameter affecting the rate of nucleation is the supersaturation ratio, or 
 the [protein]/solubility. Unfortunately, the physical reasons for the 
 unexpected behaviors described by Enrico, which do occur frequently, are that 
 nearly all crystallization experiments are carried out in almost total 
 absence of any knowledge of the solubility curve and how it depends on the 
 concentrations of other reagents in the screen. To my knowledge, the best 
 data on the relationship between the rate of nucleation and the 
 supersaturation ratio have been compiled by Hofrichter, Eaton, and Ross for 
 hemoglobin and by Ataka and Tanaka for lysozyme. The experiment is 
 conceptually quite simple, but almost never performed, because the solubility 
 behavior is unknown. It involves setting up crystallizations and measuring 
 the time it takes to see crystals and then plotting the data on a log-log 
 plot, which linearizes the power law relation, so that the slope is the 
 exponent.
 
 For lysozyme, the nucleation rate is proportional to a variable, but high, 
 power of the supersaturation ratio; this power is about 5, so that lysozyme 
 seeds appear at a rate proportional to ([Lyso]/S)^5 (Ataka, M. and 
 Tanaka, S. (1986) The growth of large, single crystals of lysozyme. 
 Biopolymers 25, 337-350).
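 
 (As a sketch of that relation, with J the nucleation rate, C the protein 
 concentration and S the solubility - these symbols are mine, not the papers':)
 
     J \propto \left( \frac{C}{S} \right)^{n}
     \log J = n \, \log\!\left( \frac{C}{S} \right) + \mathrm{const}
 
 so plotting log(nucleation rate) - in practice log(1/time-to-first-crystals) - 
 against log(C/S) gives a straight line whose slope is the exponent n, about 5 
 for lysozyme in the data above.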
 
 For hemoglobin, the same experiments suggest a much higher power, 35-40 for 
 sickle-cell hemoglobin gelation. This number has been revised downward 
 significantly by further work by Hofrichter and others, because while the 
 experimental data are reliable, the interpretation in terms of homogeneous 
 nucleation is not:  sickling involves heavy-duty secondary nucleation, which 
 gives a very high apparent exponent. 
 
 There is a small literature on efforts to incorporate the power law 
 relationship into empirical screening:  Carter and Ries-Kautt, 2006 Improving 
 Marginal Crystals, in Methods in Molecular Biology, Macromolecular 
 Crystallography Protocols: Volume 1,Preparation and Crystallization of 
 Macromolecules edited by S. Doublié, 363:153-174.
 
 The success of reverse screening arises primarily because it circumvents the 
 requirement for homogeneous nucleation by providing seeds.
 
 
 
 
 On May 6, 2010, at 6:15 AM, Enrico Stura wrote:
 
 Dear Zq & CCP4BB readers,
 
 The precipitant is the main component that affects nucleation. ...

Re: [ccp4bb] Processing compressed diffraction images?

2010-05-06 Thread Fischmann, Thierry
The results from compressing a diffraction image must vary quite a bit on a 
case by case basis.

I looked into it a long time ago using images from a few datasets from 2 
different projects. Compress was quite a bit faster than gzip or bzip2 in these 
tests, but it also delivered the least compression. gzip and bzip2 were about the 
same speed (or lack thereof), but while the difference in speed was marginal, 
bzip2 delivered a 20-30% size improvement over gzip.

The tests were with images of diffracting crystals, the diffraction extending 
to the edge of the detector.

Regards,

Thierry

-Original Message-
From: CCP4 bulletin board [mailto:ccp...@jiscmail.ac.uk] On Behalf Of Ian Tickle
Sent: Thursday, May 06, 2010 08:28 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Processing compressed diffraction images?

Hi Tim thanks for that, sorry yes I missed that page.  But I'm still
not clear: is it uncompressing to disk or is it doing it in memory?  I
assume the latter: if the former then obviously nothing is gained.
You're right about the compression factor, it's more like a factor of
2 or 3, I should have looked at the image in question as the one I
picked had no spots!

Cheers

-- Ian

On Thu, May 6, 2010 at 12:54 PM, Tim Gruene t...@shelx.uni-ac.gwdg.de wrote:
 Entering xds gzip at www.ixquick.com came up with ...

[ccp4bb] wwPDB Announcements: Validation Report PDFs, NMR Restraint Files

2010-05-06 Thread Christine Zardecki

wwPDB To Provide Validation Reports as PDFs

As part of the structure annotation process, wwPDB members provide  
depositors with detailed reports that include the results of  
geometric and experimental data checking. Beginning May 17, 2010,  
these documents will be available from all wwPDB annotation sites as  
PDF files so that they may be easily reviewed and shared by depositors.


For more information, please see:
http://www.wwpdb.org/news.html#30-April-2010_1


Version 2 NMR Restraint Files to be Released in the wwPDB FTP

With the June 30, 2010 update, a new set of NMR restraint data files  
will be added to the wwPDB FTP archive. These restraint files, which  
will be identified as Version 2 files, are represented in NMR-STAR  
3.1 format, contain current PDB atom nomenclature, and provide  
accurate atom-level correspondences to the NMR model coordinate files  
in the current wwPDB archive. Restraint files containing restraint  
data as originally deposited (Version 1 files) will remain on the  
site and will continue to be updated regularly as new NMR entries are  
released.


For more information, please see:
http://www.wwpdb.org/news.html#30-April-2010_2



Questions? Please contact i...@wwpdb.org. 

Re: [ccp4bb] control of nucleation

2010-05-06 Thread syed ibrahim

Hi

I have had success by layering oil (such as paraffin oil, etc.) on top of the
reservoir solution. You have to try several trials to find the optimum ratio.

With regards

Syed

--- On Thu, 5/6/10, zq deng dengzq1...@gmail.com wrote:

From: zq deng dengzq1...@gmail.com
Subject: [ccp4bb] control of nucleation
To: CCP4BB@JISCMAIL.AC.UK
Date: Thursday, May 6, 2010, 1:33 PM

hello, everybody. Due to excess nucleation, I often get many tiny crystals
instead of a few large crystals. I want to optimize the condition - does anyone have
advice about this?
 
Best regards.



  

Re: [ccp4bb] low resolution secondary structural restraints

2010-05-06 Thread Bradley Hintze
Hi Greg,

If you want manual control over user-defined restraints, which I've found
helpful when dealing with low-resolution structures, ResDe might help. You
can define restraints in PyMol and then save the restraint file in the
refmac format. I hope it helps. Here is the link:
http://pymolwiki.org/index.php/ResDe

Bradley

On Wed, May 5, 2010 at 7:33 AM, Gregory Bowman gdbow...@jhu.edu wrote:

  Hi -

 I am refining a low-resolution (3.7 Å) structure with Refmac (5.5.0091), and am having
 trouble maintaining some secondary-structure elements. I would like to
 restrain the H-bonding in clear secondary structural elements, which should
 help prevent carbonyls in helices from flipping out, etc, but have not had
 success doing this so far. As I understand it, it is not correct to put a
 LINK restraint for hydrogen bonds in refmac, and instead the HYDBND keyword
 should be used. How do we specify these HYDBND restraints (putting them in
 the header of XYZIN doesn't seem to work), and is there a (simple) way to
 decrease the tolerance for breaking these bonds?

 Thanks,
 Greg

  --
 Department of Biophysics
 Johns Hopkins University
 302 Jenkins Hall
 3400 N. Charles St.
 Baltimore, MD 21218
 Phone: (410) 516-7850 (office)
 Phone: (410) 516-3476 (lab)
 Fax: (410) 516-4118
 gdbow...@jhu.edu






-- 
Bradley J. Hintze
Graduate Student
Duke University
School of Medicine
801-712-8799


[ccp4bb] Processing compressed diffraction images?

2010-05-06 Thread Lepore, Bryan
yet another compressor to consider : xz

http://tukaani.org/xz/

hth


Re: [ccp4bb] Processing compressed diffraction images?

2010-05-06 Thread James Holton
Something I have been playing with recently that might address your 
problem in a way you like is SquashFS:

http://squashfs.sourceforge.net/

SquashFS is a read-only compressed file system.  It uses gzip --best, 
which is comparable to bzip2 for diffraction images (in my experience).  
Basically, it works a lot like burning to a CD.  You run mksquashfs to 
create the compressed image and then mount -o loop it.  Then voila!  
You can access everything in the archive as if it were an uncompressed 
file.  Disk I/O then consists of compressed data (decompression is done 
by the kernel), and so does network traffic if you play a clever trick: 
share the compressed file over NFS and mount -o loop it locally.  This 
has much bigger advantages than you might realize because most of the 
NFS traffic that brings a file server to its knees is the tiny little 
writes that are done to update access times.  NFS writes (and RAID 
writes) are all really expensive, and you can actually gain a 
considerable performance increase by just mounting your data disks 
read-only (or by putting noatime as a mount option).
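
(A minimal sketch of that workflow - the paths are hypothetical, and mksquashfs
and mount need appropriate privileges:)

    # pack a directory of images into a read-only compressed archive
    mksquashfs /data/trip_2010_05 /backup/trip_2010_05.squashfs
    # loop-mount it; files then read as if uncompressed, with decompression done by the kernel
    mkdir -p /mnt/trip_2010_05
    mount -o loop,ro /backup/trip_2010_05.squashfs /mnt/trip_2010_05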


Anyway, SquashFS is not as slick as the transparent compression you can 
get with HFS or NTFS, but I personally like the fact that it is 
read-only (good for data).  For real-time backup, mksquashfs does 
support appending to an existing archive, so you can probably build 
your squashfs file on the usb disk at the beamline (even if the beamline 
computer kernels can't mount it).  However, if you MUST have your 
processing files mixed amongst your images, you can use unionfs to 
overlay a writable file system with the read-only one.  Depends on how 
cooperative your IT guys are...


-James Holton
MAD Scientist

Ian Tickle wrote:

All -

No doubt this topic has come up before on the BB: I'd like to ask
about the current capabilities of the various integration programs ...


[ccp4bb] Motif searching

2010-05-06 Thread Rex Palmer
Does anyone know of a program designed to both store information on functional 
motifs in proteins, as described in the literature, and to retrieve such motifs 
within a given protein sequence?

Rex Palmer
Birkbeck College


[ccp4bb] OFF TOPIC: Aquifex aeolicus DNA

2010-05-06 Thread Marcelo Carlos Sousa
Sorry for the off topic question but since Aquifex aeolicus is a favorite of 
many structural biologists I wonder if anybody can point me to a reliable 
source for DNA from this critter (or any other Aquifex). ATCC does not seem to 
have it...

Thanks in  advance

Marcelo


Re: [ccp4bb] control of nucleation

2010-05-06 Thread zq deng
Actually, the protein grows in two different conditions, both with excess
nucleation, and in one condition the drop is just protein, not mixed with the
precipitant. There is also another problem: how to cryo-protect.
2010/5/6 syed ibrahim b_syed_ibra...@yahoo.com


 Hi

 I have had success by layering oil (such as paraffin oil, etc.) on top of the
 reservoir solution. You have to try several trials to find the optimum
 ratio.

 With regards

 Syed

 --- On *Thu, 5/6/10, zq deng dengzq1...@gmail.com* wrote:


 From: zq deng dengzq1...@gmail.com

 Subject: [ccp4bb] control of nucleation
 To: CCP4BB@JISCMAIL.AC.UK
 Date: Thursday, May 6, 2010, 1:33 PM


  hello, everybody. Due to excess nucleation, I often get many tiny crystals
  instead of a few large crystals. I want to optimize the condition - does anyone
  have advice about this?

 Best regards.





[ccp4bb] lossy compression of diffraction images

2010-05-06 Thread James Holton

Ian Tickle wrote:

I found an old e-mail from James Holton where he suggested lossy
compression for diffraction images (as long as it didn't change the
F's significantly!) - I'm not sure whether anything came of that!
  


Well, yes, something did come of this ... but I don't think Gerard 
Bricogne is going to like it.


Details are here:
http://bl831.als.lbl.gov/~jamesh/lossy_compression/

Short version is that I found a way to compress a test lysozyme dataset 
by a factor of ~33 with no apparent ill effects on the data.  In fact, 
anomalous differences were completely unaffected, and Rfree dropped from 
0.287 for the original data to 0.275 when refined against Fs from the 
compressed images.  This is no doubt a fluke of the excess noise added 
by compression, but I think it highlights how the errors in 
crystallography are dominated by the inadequacies of the electron 
density models we use, and not the quality of our data.


The page above lists two data sets: A and B, and I am interested to 
know if and how anyone can tell which one of these data sets was 
compressed.  The first image of each data set can be found here:

http://bl831.als.lbl.gov/~jamesh/lossy_compression/firstimage.tar.bz2

-James Holton
MAD Scientist