George Sheldrick <gshe...@shelx.uni-ac.gwdg.de> wrote:

For all those interested in the technical details of this new Fourier work: the full paper is available from the web site, not only the simplified account (look to the right of that badly wrong three-term Fourier synthesis illustration, which I would never show to beginners!).
P. Dumas

From the rather non-technical information available so far, it seems to
me that it amounts to leaving out all but the strongest reflections (or
perhaps the strongest normalized structure factors). This is unlikely
to improve the quality of structure refinement: the importance of using
as much data as possible, and of not leaving out the weakest
reflections, has often been emphasized. In that respect it is quite
different from the data compression of music. However, there is one
case where we are already doing this, namely small-molecule direct
methods, and the use of the same programs to find the heavy atoms in
SAD and MAD experiments. These programs use only the strongest 15-20%
of the normalized structure factors (E-values). This is possible
because the data-to-parameter ratio is still sufficient, and these
reflections contain much of the useful information. However, the
Fourier routines used by these programs (at least the ones I wrote) do
not take full advantage of the 'sparseness' of the data, so if the MIT
people have found a clever way of doing this it might still be useful
for heavy atom location (though FFTW and the Intel MKL FFT will be
difficult to beat); a sketch of the kind of sparse synthesis I mean
follows below.
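For concreteness, here is a minimal sketch of such a direct-summation
synthesis over only the strongest E-values (Python/NumPy, not SHELX
code; the function name, grid size and 15% cutoff are all
illustrative). The point is that the cost scales with the number of
reflections kept times the number of grid points, not with the full
reflection list:

    import numpy as np

    def sparse_e_map(hkl, e_values, phases, grid=(32, 32, 32), keep=0.15):
        """Direct-summation map from the strongest fraction of E-values.

        hkl      : (N, 3) integer Miller indices
        e_values : (N,) normalized structure-factor magnitudes |E|
        phases   : (N,) phases in radians
        """
        # Keep only the strongest `keep` fraction of the reflections.
        n_keep = max(1, int(keep * len(e_values)))
        strongest = np.argsort(e_values)[-n_keep:]

        # Fractional coordinates of every grid point.
        nx, ny, nz = grid
        x, y, z = np.meshgrid(np.arange(nx) / nx,
                              np.arange(ny) / ny,
                              np.arange(nz) / nz, indexing="ij")

        # Direct summation: one cosine term per strong reflection, so
        # the work is O(n_keep * grid points) rather than O(N * grid).
        rho = np.zeros(grid)
        for idx in strongest:
            h, k, l = hkl[idx]
            arg = 2.0 * np.pi * (h * x + k * y + l * z)
            rho += e_values[idx] * np.cos(arg - phases[idx])
        return rho

With only 15-20% of the reflections retained, the inner loop runs over
a few thousand terms at most, which is exactly where a genuinely sparse
transform could in principle beat a full-grid FFT.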

George

On 01/20/2012 06:57 PM, Ethan Merritt wrote:
On Friday, 20 January 2012, Jim Fairman wrote:
A new Fourier transform algorithm supposedly improves the speed of
Fourier transforms, giving up to "a tenfold increase in speed"
depending upon circumstances.  Hopefully this will get incorporated
into our refinement programs.

http://web.mit.edu/newsoffice/2012/faster-fourier-transforms-0118.html
This report is interesting, but it is not immediately obvious to me that
crystallographic transforms are in the class of problems for which
this algorithm is applicable.

From reading the very non-technical article linked above, I conclude that
a better summary would be "New approach to Fourier approximation provides
a very cheap (fast) way of identifying and then discarding components that
contribute very little to the signal".  In other words, it seems to be a
way of increasing the compression ratio for lossy image/audio compression
without increasing the amount of time required for compression.
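As a toy illustration of that reading (plain NumPy, not the MIT
algorithm, whose whole point is to find the large coefficients without
first computing the full transform as done here):

    import numpy as np

    def lossy_compress(signal, k=16):
        """Keep the k largest-magnitude Fourier coefficients of a 1-D signal."""
        coeffs = np.fft.rfft(signal)
        # Zero out everything except the k largest components.
        small = np.argsort(np.abs(coeffs))[:-k]
        coeffs[small] = 0.0
        return coeffs  # sparse spectrum: the "compressed" representation

    def decompress(coeffs, n):
        return np.fft.irfft(coeffs, n)

    # Example: a signal dominated by a few tones survives heavy truncation.
    t = np.linspace(0.0, 1.0, 1024, endpoint=False)
    sig = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
    approx = decompress(lossy_compress(sig, k=4), len(sig))
    print("max error:", np.abs(sig - approx).max())  # tiny for a sparse signal

For a signal that really is dominated by a few components, almost all
coefficients can be discarded with little error; the claimed advance is
finding those few components in less time than a full FFT takes.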

So if you're doing map fitting while listening to streamed mp3 music
files, perhaps your map inversion will get a slightly larger slice of
the CPU time relative to LastFM.

On the other hand, it is possible that somewhere in here lies a clever
approach to faster solvent flattening.
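To see why that is at least plausible: each density-modification cycle
is dominated by a forward and an inverse transform of the whole map, so
any speed-up of the transform feeds straight into the cycle time. A
bare-bones sketch of one such cycle (illustrative names only, not any
particular program's code):

    import numpy as np

    def flatten_cycle(f_obs, phases, solvent_mask):
        """One solvent-flattening cycle on a gridded map.

        f_obs        : observed amplitudes on the reciprocal-space grid
        phases       : current phase estimates in radians, same shape
        solvent_mask : boolean grid, True where the cell is solvent
        """
        # Current map from amplitudes and phases: one full inverse FFT.
        rho = np.fft.ifftn(f_obs * np.exp(1j * phases)).real

        # Density modification: flatten the solvent region to its mean.
        rho[solvent_mask] = rho[solvent_mask].mean()

        # Back-transform the modified map (one full forward FFT) and keep
        # its phases, to be paired again with the observed amplitudes.
        return np.angle(np.fft.fftn(rho))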

        Ethan




Philippe DUMAS, team leader
Research Director at the CNRS
Biophysics & Structural Biology group
'Architecture & Réactivité de l'ARN' unit, UPR9002
Institut de Biologie Moléculaire et Cellulaire
15, rue René Descartes, F-67084 STRASBOURG
+33 (0)388 41 70 02
http://www-ibmc.u-strasbg.fr/arn/Dumas/index_dum_fr.html
