Dear Andrew.

Thank you for your valuable comments!

It has become clear to me that I should sit down with the theory to grasp
why I actually compute:

1) matrix exponential
2) matrix power

in all the numerical models.
Then I would be able to select the best method to apply.

For ns_cpmg_2site_star.py (2x2 matrix) and ns_mmq_2site.py (2x2 matrix),
I should write up the closed form.

The corresponding characteristic polynomial is quadratic, so the
eigenvalues can be expressed in closed form in terms of the matrix
elements.

M = [ [a, b], [b, c] ]   (taken symmetric, with b on both off-diagonals)

Discriminant: D = sqrt(a**2 + 4*b**2 - 2*a*c + c**2) = sqrt((a - c)**2 + 4*b**2)

Eigenvalues lambda1, lambda2:
l1 = 0.5*(a + c - D), l2 = 0.5*(a + c + D)

Eigendecomposition of M:
M = V^-1 * [ [l1, 0], [0, l2] ] * V

where the rows of V are the eigenvectors:

V11 = (a - c - D)/(2*b)
V12 = 1
V21 = (a - c + D)/(2*b)
V22 = 1

V = [ [V11, V12], [V21, V22] ]
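
To check my own understanding, here is a minimal sketch of how this closed
form could be coded and verified against scipy.linalg.expm. The function
name closed_form_expm_2x2 and the test matrix are only mine for
illustration, and the sketch assumes the symmetric 2x2 form above with
b != 0:

import numpy as np
from scipy.linalg import expm

def closed_form_expm_2x2(a, b, c, t=1.0):
    """expm(t*M) for the symmetric matrix M = [[a, b], [b, c]], assuming b != 0."""
    D = np.sqrt(a**2 + 4.0*b**2 - 2.0*a*c + c**2)
    l1 = 0.5*(a + c - D)
    l2 = 0.5*(a + c + D)

    # Rows of V are the eigenvectors, so M = V^-1 . diag(l1, l2) . V.
    V = np.array([[(a - c - D)/(2.0*b), 1.0],
                  [(a - c + D)/(2.0*b), 1.0]])
    V_inv = np.linalg.inv(V)

    # The exponential acts only on the diagonal; for a matrix power,
    # replace exp(t*l) by l**N here instead.
    return V_inv.dot(np.diag([np.exp(t*l1), np.exp(t*l2)])).dot(V)

# Quick check against the generic SciPy routine.
M = np.array([[-2.0, 0.5], [0.5, -3.0]])
print(np.allclose(closed_form_expm_2x2(-2.0, 0.5, -3.0), expm(M)))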

For the larger matrices:
ns_mmq_3site.py = 3x3 matrix, ns_cpmg_2site_3d.py = 7x7 matrix,
ns_r1rho_2site.py = 6x6 matrix, ns_r1rho_3site.py = 9x9 matrix.

I also asked on the SciPy mailing list.
(Their mail archive link is not working, so here is a short summary of the replies.)

### Charles R Harris:
Are you solving a differential equation? If so, an explicit solution
may not be the best way to go.

### Eric Moore:
There are expansions for the matrix exponential in terms of Chebyshev
and Laguerre matrix polynomials.
(The first of which does get used a bit for calculating propagators in
NMR, for instance, spinevolution and simpson both can/do use it.)
However, your matrices seem quite small, even if there are a large
number of them, so it may be very difficult to speed this up by much.

### Stephan Hoyer:
My experience with solving very similar quantum mechanics problems is
that using an ODE integrator like zvode (via scipy.integrate.ode)
can be much faster than solving the propagator exactly.

### Michael Kreim:
I agree with Stephan and Charles that it may be worth trying to solve
the ODE using any suitable ODE solver.
In Matlab my experience was that whether the ODE solver or the matrix
exponential method is faster is very problem dependent.

Also you can have a look at Expokit, which is a very good
implementation of matrix exponential solvers (using Krylov subspace
methods):
http://www.maths.uq.edu.au/expokit/. Unfortunately there is no Python
implementation, but maybe you can call the Fortran or C++ version from
Python.

And you can have a look at operator splitting methods. If it is
possible to split your problem into two (or more) sub-problems, then
these methods often speed up the numerical computations. The problem
usually lies in finding a suitable splitting. The implementation is not
very complicated if you have numerical methods for the sub-problems.

######
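
Regarding the Expokit suggestion: as far as I can see there is no Python
port, but SciPy does ship scipy.sparse.linalg.expm_multiply(), which, like
Expokit, computes the action exp(t*A).dot(v) without ever forming the full
matrix exponential (as far as I understand it uses a scaled, truncated
Taylor scheme rather than Krylov subspaces). A quick sketch, where the 6x6
matrix R and the start magnetisation M0 are only placeholders:

import numpy as np
from scipy.sparse.linalg import expm_multiply

# Placeholder 6x6 relaxation/exchange matrix R and start magnetisation M0.
rng = np.random.RandomState(0)
R = rng.rand(6, 6) - 0.5
M0 = np.zeros(6)
M0[0] = 1.0

relax_time = 0.04
# exp(relax_time * R) . M0, computed without forming expm(relax_time * R).
M_t = expm_multiply(relax_time * R, M0)
print(M_t)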

So from this, it sounds like I should figure out how to solve an
ordinary differential equation for the magnetisation evolution.
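
As a first test of that route, the evolution dM/dt = R.M could be
integrated directly with scipy.integrate.ode, as Stephan suggests (here
with the real-valued 'vode' integrator, since our relaxation matrices are
real; the 2x2 R and M0 below are only a toy example, not one of the relax
models):

import numpy as np
from scipy.integrate import ode
from scipy.linalg import expm

# Toy relaxation/exchange matrix R and starting magnetisation M0.
R = np.array([[-2.0, 0.5], [0.5, -3.0]])
M0 = np.array([1.0, 0.0])
relax_time = 0.04

def dMdt(t, M):
    # The magnetisation evolution dM/dt = R . M.
    return R.dot(M)

solver = ode(dMdt).set_integrator('vode', method='bdf')
solver.set_initial_value(M0, 0.0)
M_t = solver.integrate(relax_time)

# Compare with the matrix exponential propagator.
print(M_t, expm(relax_time * R).dot(M0))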

Would this be the right way to go?

I would be very happy for any comments.

Best
Troels

2014-06-26 13:46 GMT+02:00 Andrew Baldwin <andrew.bald...@chem.ox.ac.uk>:
>>I am looking for a solution I can apply to all numerical models :-)
>
> The 'closed form' solutions I describe are eigenvalue decompositions. So
> that's just a slightly smart way of doing that rather than asking a
> minimiser to do it for you. No general closed form solutions exist for the
> eigenvalues (hence eigenvectors) I think above 3x3.
>
> Incidentally, the theoretical treatments of the R1rho (needing a minimum of
> 6x6) rely on being able to approximate the smallest real eigenvalue of
> the six. So the theoretical R1rho treatments are in essence
> methods for approximating one of the six eigenvalues (which is hard enough).
> Getting the other five is far from straightforward and might not even be
> possible. This is also why a general CEST expression isn't going to happen:
> to get a complete treatment of the CEST experiment you need all six
> eigenvalues, and no-one has (yet) come up with a way of getting them all.
>
> If the matrix is sparse then you can get some speedups. But in my hands at
> least, those algorithms only tend to work well on huge matrices that are
> really sparse (e.g. 100x100 with not much over 10% of elements having entries).
>
>
>
>
>
>
> On 26/06/2014 09:44, Troels Emtekær Linnet wrote:
>
> Hi Andrew.
>
> I am looking for a solution I can apply to all numerical models :-)
>
> The current implementation is the eigenvalue decomposition.
> http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/matrix_exponential.py
>
> At the bottom, I provide the profiling for NS R1rho 2-site.
> You will see that using the eig function takes 50% of the time:
>
> It is a little sad that the reason the numerical solutions are so slow is
> numpy.linalg.eig().
>
> They differ a little in matrix size.
>
> http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_cpmg_2site_3d.py
> 7 X 7 matrix.
>
> http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_cpmg_2site_star.py
> 2x2 matrix
>
> http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_mmq_2site.py
> 2x2 matrix.
>
> http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_mmq_3site.py
> 3x3 matrix,
>
> http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_r1rho_2site.py
> 6x6 matrix
>
> http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_r1rho_3site.py
> 9x9 matrix.
>
> ####### ns_r1rho_2site
> Thu Jun 26 10:42:13 2014
> /var/folders/ww/1jkhkh315x57jglgxnr9g24w0000gp/T/tmp0buvpw
>
>          211077 function calls in 5.073 seconds
>
>    Ordered by: cumulative time
>
>    ncalls  tottime  percall  cumtime  percall filename:lineno(function)
>         1    0.000    0.000    5.073    5.073 <string>:1(<module>)
>         1    0.007    0.007    5.073    5.073 profiling_ns_r1rho_2site.py:553(cluster)
>        10    0.000    0.000    4.592    0.459 profiling_ns_r1rho_2site.py:515(calc)
>        10    0.015    0.002    4.592    0.459 relax_disp.py:1585(func_ns_r1rho_2site)
>        10    0.113    0.011    4.574    0.457 ns_r1rho_2site.py:190(ns_r1rho_2site)
>        10    0.093    0.009    4.138    0.414 matrix_exponential.py:33(matrix_exponential_rank_NE_NS_NM_NO_ND_x_x)
>        10    2.529    0.253    2.581    0.258 linalg.py:982(eig)
>        40    0.890    0.022    0.890    0.022 {numpy.core.multiarray.einsum}
>        10    0.524    0.052    0.552    0.055 linalg.py:455(inv)
>         1    0.000    0.000    0.474    0.474 profiling_ns_r1rho_2site.py:112(__init__)
>        10    0.125    0.013    0.304    0.030 ns_r1rho_2site.py:117(rr1rho_3d_2site_rankN)
>         1    0.248    0.248    0.274    0.274 relax_disp.py:64(__init__)
>        92    0.180    0.002    0.180    0.002 {method 'outer' of 'numpy.ufunc' objects}
>         1    0.090    0.090    0.107    0.107 profiling_ns_r1rho_2site.py:189(return_offset_data)
>         1    0.053    0.053    0.092    0.092 profiling_ns_r1rho_2site.py:289(return_r2eff_arrays)
>        30    0.065    0.002    0.065    0.002 {method 'astype' of 'numpy.ndarray' objects}
>     15109    0.043    0.000    0.043    0.000 {numpy.core.multiarray.array}
>    118863    0.018    0.000    0.018    0.000 {method 'append' of 'list' objects}
>      3004    0.004    0.000    0.018    0.000 numeric.py:136(ones)
>        10    0.001    0.000    0.015    0.002 shape_base.py:761(tile)
>        40    0.015    0.000    0.015    0.000 {method 'repeat' of 'numpy.ndarray' objects}
>        10    0.012    0.001    0.013    0.001 linalg.py:214(_assertFinite)
>
>
>
>
>
>
> 2014-06-26 10:23 GMT+02:00 Andrew Baldwin <andrew.bald...@chem.ox.ac.uk>:
>>
>> Hi Troels,
>>
>> If you're dealing almost entirely with 2x2 matrices, i.e.:
>> R=[(a_11,a_12),
>>       (a_21,a_22)]
>> You can express the eigenvalues and eigenvectors exactly in terms of these
>> coefficients:
>>
>> e.g.: if J is the matrix of eigenvectors and R_D is the diagonal matrix of
>> eigenvalues such that:
>>
>> R=JR_DJ^{-1}
>>
>> Then R^N = J R_D^N J^{-1}
>>
>> Raising R_D to the power of N is the same as raising the diagonals to a
>> power.
>>
>> You can do a similar trick with exponentials. I've never tried this, but
>> in the case of a 2x2 matrix it should be possible to work out an exact
>> expression for the individual elements of the matrix exponential in terms of
>> the four values in R. This would be a lot faster than numerically getting
>> eigenvalues. Will also work for 3x3, but anything much bigger than that, and
>> the expression is going to get nasty.
>>
>> If that's useful and you're not sure how to attack the maths, I can take a
>> look.
>>
>> Best,
>>
>> Andy.
>>
>>
>>
>> On 26/06/2014 09:00, Troels Emtekær Linnet wrote:
>>>
>>> Dear Peixiang, Dear Andrew.
>>>
>>> Just a little heads-up.
>>>
>>> Within a week or two, we should be able to release a new version of
>>> relax.
>>>
>>> This update has focused on speed, recasting the data to higher
>>> dimensional numpy arrays, and
>>> moving the multiple dot operations out of the for loops.
>>>
>>> So we now have much better speed. :-)
>>>
>>> But we still have a bottleneck with numerical models, where doing the
>>> matrix exponential via eigenvalue decomposition is slow!
>>> Do any of you have any experience with a faster method for computing the
>>> matrix exponential?
>>>
>>> These initial results show that if you are going to use the R1rho
>>> 2site model, you can expect:
>>>
>>> -R1rho
>>> 100 single spins analysis:
>>> DPL94:                      23.525+/-0.409 ->   1.138+/-0.035,  20.676x faster.
>>> TP02:                       20.191+/-0.375 ->   1.238+/-0.020,  16.308x faster.
>>> TAP03:                      31.993+/-0.235 ->   1.956+/-0.025,  16.353x faster.
>>> MP05:                       24.892+/-0.354 ->   1.431+/-0.014,  17.395x faster.
>>> NS R1rho 2-site:           245.961+/-2.637 ->  36.308+/-0.458,   6.774x faster.
>>>
>>> Cluster of 100 spins analysis:
>>> DPL94:                      23.872+/-0.505 ->   0.145+/-0.002, 164.180x faster.
>>> TP02:                       20.445+/-0.411 ->   0.172+/-0.004, 118.662x faster.
>>> TAP03:                      32.212+/-0.234 ->   0.294+/-0.002, 109.714x faster.
>>> MP05:                       25.013+/-0.362 ->   0.188+/-0.003, 132.834x faster.
>>> NS R1rho 2-site:           246.024+/-3.724 ->  33.119+/-0.320,   7.428x faster.
>>>
>>> -CPMG
>>> 100 single spins analysis:
>>> CR72:                        2.615+/-0.018 ->   1.614+/-0.024,   1.621x faster.
>>> CR72 full:                   2.678+/-0.020 ->   1.839+/-0.165,   1.456x faster.
>>> IT99:                        1.837+/-0.030 ->   0.881+/-0.010,   2.086x faster.
>>> TSMFK01:                     1.665+/-0.049 ->   0.742+/-0.007,   2.243x faster.
>>> B14:                         5.851+/-0.133 ->   3.978+/-0.049,   1.471x faster.
>>> B14 full:                    5.789+/-0.102 ->   4.065+/-0.059,   1.424x faster.
>>> NS CPMG 2-site expanded:     8.225+/-0.196 ->   4.140+/-0.062,   1.987x faster.
>>> NS CPMG 2-site 3D:         240.027+/-3.182 ->  45.056+/-0.584,   5.327x faster.
>>> NS CPMG 2-site 3D full:    240.910+/-4.882 ->  45.230+/-0.540,   5.326x faster.
>>> NS CPMG 2-site star:       186.480+/-2.299 ->  36.400+/-0.397,   5.123x faster.
>>> NS CPMG 2-site star full:  187.111+/-2.791 ->  36.745+/-0.689,   5.092x faster.
>>>
>>> Cluster of 100 spins analysis:
>>> CR72:                        2.610+/-0.035 ->   0.118+/-0.001,  22.138x faster.
>>> CR72 full:                   2.674+/-0.021 ->   0.122+/-0.001,  21.882x faster.
>>> IT99:                        0.018+/-0.000 ->   0.009+/-0.000,   2.044x faster.
>>>     (IT99: cluster of only 1 spin analysed, since v. 3.2.2 had a bug with clustering analysis.)
>>> TSMFK01:                     1.691+/-0.091 ->   0.039+/-0.000,  43.369x faster.
>>> B14:                         5.891+/-0.127 ->   0.523+/-0.004,  11.264x faster.
>>> B14 full:                    5.818+/-0.127 ->   0.515+/-0.021,  11.295x faster.
>>> NS CPMG 2-site expanded:     8.167+/-0.159 ->   0.702+/-0.008,  11.638x faster.
>>> NS CPMG 2-site 3D:         238.717+/-3.025 ->  41.380+/-0.950,   5.769x faster.
>>> NS CPMG 2-site 3D full:    507.411+/-803.089 ->  41.852+/-1.317,  12.124x faster.
>>> NS CPMG 2-site star:       187.004+/-1.935 ->  30.823+/-0.371,   6.067x faster.
>>> NS CPMG 2-site star full:  187.852+/-2.698 ->  30.882+/-0.671,   6.083x faster.
>>>
>>>
>>> The reason we can't tweak NS R1rho 2-site any further is that we are
>>> computing the matrix exponential.
>>>
>>> ##### Lib function for NS R1rho 2-site.
>>> Before:
>>> http://svn.gna.org/svn/relax/trunk/lib/dispersion/ns_r1rho_2site.py
>>>
>>> Now: all essential computations are moved so they are calculated beforehand.
>>>
>>> http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_r1rho_2site.py
>>>
>>> def ns_r1rho_2site(M0=None, M0_T=None, .....
>>>
>>>      # Once off parameter conversions.
>>>      pB = 1.0 - pA
>>>      k_BA = pA * kex
>>>      k_AB = pB * kex
>>>
>>>      # Extract shape of experiment.
>>>      NE, NS, NM, NO = num_points.shape
>>>
>>>      # The matrix that contains all the contributions to the evolution,
>>>      # i.e. relaxation, exchange and chemical shift evolution.
>>>      R_mat = rr1rho_3d_2site_rankN(R1=r1, r1rho_prime=r1rho_prime,
>>>          dw=dw, omega=omega, offset=offset, w1=spin_lock_fields,
>>>          k_AB=k_AB, k_BA=k_BA, relax_time=relax_time)
>>>
>>>      # This matrix is a propagator that will evolve the magnetization
>>>      # with the matrix R.
>>>      Rexpo_mat = matrix_exponential_rank_NE_NS_NM_NO_ND_x_x(R_mat)
>>>
>>>      # Magnetization evolution.
>>>      Rexpo_M0_mat = einsum('...ij,...jk', Rexpo_mat, M0)
>>>
>>>      # Magnetization evolution, which includes all dimensions.
>>>      MA_mat = einsum('...ij,...jk', M0_T, Rexpo_M0_mat)[:, :, :, :, :, 0, 0]
>>>
>>>      # Do back calculation.
>>>      back_calc[:] = -inv_relax_time * log(MA_mat)
>>>
>>> #######
>>>
>>> The profiling scripts, shows that
>>> matrix_exponential_rank_NE_NS_NM_NO_ND_x_x(R_mat) is stealing the
>>> time.
>>>
>>> #####
>>>
>>> http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/matrix_exponential.py
>>>
>>> def matrix_exponential_rank_NE_NS_NM_NO_ND_x_x(A):
>>>      # The eigenvalue decomposition.
>>>      W, V = eig(A)
>>>
>>>      # Calculate the exponential of all elements in the input array
>>>      # Add one axis, to allow for broadcasting multiplication
>>>      W_exp = exp(W).reshape(NE, NS, NM, NO, ND, Row, 1)
>>>
>>>      # Make an eye matrix, with shape [NE][NS][NM][NO][ND][X][X].
>>>      eye_mat = tile(eye(Row)[newaxis, newaxis, newaxis, newaxis, newaxis, ...],
>>>          (NE, NS, NM, NO, ND, 1, 1) )
>>>
>>>      # Transform it to a diagonal matrix, with elements from the vector
>>>      # down the diagonal.
>>>      W_exp_diag = multiply(W_exp, eye_mat)
>>>
>>>      # Make dot products for higher dimension.
>>>      dot_V_W = einsum('...ij,...jk', V, W_exp_diag)
>>>
>>>      # Compute the (multiplicative) inverse of a matrix.
>>>      inv_V = inv(V)
>>>
>>>      # Calculate the exact exponential.
>>>      eA = einsum('...ij,...jk', dot_V_W, inv_V)
>>>
>>> ###
>>>
>>> The numpy implementation of eig() is taking up 86% of the time.
>>>
>>> If you have any other way to do this efficiently, I would be happy to
>>> hear it!
>>>
>>> I have looked through this; do you have any experience with some of the
>>> methods?
>>>
>>> Moler, C. and Van Loan, C. (2003)  Nineteen Dubious Ways to Compute
>>> the Exponential of a Matrix, Twenty-Five Years Later.  SIAM Review,
>>> 45, 3-49.  (http://dx.doi.org/10.1137/S00361445024180 or
>>> http://www.cs.cornell.edu/cv/researchpdf/19ways+.pdf).
>>>
>>> Best
>>> Troels
>>>
>>> 2014-06-11 10:36 GMT+02:00 Edward d'Auvergne <edw...@nmr-relax.com>:
>>>>
>>>> Hi Peixiang,
>>>>
>>>> Actually, for comparison purposes for applying the 'NS R1rho 2-site'
>>>> model (http://wiki.nmr-relax.com/NS_R1rho_2-site) to variable-time
>>>> R1rho-type data, Art Palmer's MP05 model would be much better
>>>> (http://wiki.nmr-relax.com/MP05) than the DPL94 model
>>>> (http://wiki.nmr-relax.com/DPL94) as it is of much higher quality.
>>>> Andy Baldwin apparently has derived an even better analytic model,
>>>> especially when R20A and R20B are significantly different, see:
>>>>
>>>> http://gna.org/support/?3155#comment0
>>>>
>>>> and the discussions in the thread:
>>>>
>>>> http://thread.gmane.org/gmane.science.nmr.relax.devel/5414/focus=5447
>>>>
>>>> and:
>>>>
>>>> http://thread.gmane.org/gmane.science.nmr.relax.devel/5410/focus=5433
>>>>
>>>> This last thread is about the B14 model (Baldwin 2014,
>>>> http://wiki.nmr-relax.com/B14) implemented in relax by Troels Linnet,
>>>> but there are mentions of Andy's R1rho model.  However the R1rho model
>>>> from Andy is not implemented in relax yet.  Do you have much
>>>> experience with variable-time R1rho numeric models?  Looking at the
>>>> code for where the relax_time variable comes from, it is not very
>>>> clear which relaxation time is being used:
>>>>
>>>>
>>>> http://www.nmr-relax.com/api/3.2/specific_analyses.relax_disp.data-module.html#loop_time
>>>>
>>>> From the code itself:
>>>>
>>>>
>>>> http://www.nmr-relax.com/api/3.2/specific_analyses.relax_disp.data-pysrc.html#loop_time
>>>>
>>>> it looks like this loop_time() function assumes fixed-time data and
>>>> hence only the first encountered time value for the given experiment,
>>>> magnetic field strength, offset, and dispersion point is used.  So
>>>> your expertise will be very useful for resolving this variable-time
>>>> R1rho numeric model problem!
>>>>
>>>> Note that there are a few improvements to the R1rho models that are
>>>> yet to be implemented in relax:
>>>>
>>>> http://thread.gmane.org/gmane.science.nmr.relax.devel/5414/focus=5808
>>>>
>>>> http://www.nmr-relax.com/manual/do_dispersion_features_yet_be_implemented.html
>>>>
>>>> Cheers,
>>>>
>>>> Edward
>>>>
>>>>
>>>>
>>>> On 11 June 2014 10:07, Edward d'Auvergne <edw...@nmr-relax.com> wrote:
>>>>>
>>>>> Hi Peixiang,
>>>>>
>>>>> Please see below:
>>>>>
>>>>>
>>>>>> Congratulations on the new version 3.2.2; I tried it, and it works
>>>>>> well :)
>>>>>
>>>>> Cheers.  If you notice any other problems or strange behaviour, please
>>>>> don't hesitate to submit a bug report
>>>>> (https://gna.org/bugs/?func=additem&group=relax).  Then that problem
>>>>> will likely be fixed for the next relax version.
>>>>>
>>>>>
>>>>>> Still one question about using the different relaxation time periods.
>>>>>>
>>>>>> My R1rho RD experiment has different relaxation time periods, I could
>>>>>> input all the peaks by the loop.
>>>>>>
>>>>>> Then I fit with 'NS 2-site R1 model', they could also do the fitting
>>>>>> and give the results and also a nice fitting of the dispersion curve.
>>>>>> Still I did not figure out which Trelax it is using in the NS model
>>>>>> in the case of different relaxation time periods.
>>>>>> Only the last relaxation time period? Then fit as fixed time
>>>>>> experiment?
>>>>>
>>>>> As this code was directly contributed by Paul Schanda and Dominique
>>>>> Marion, and I'm guessing that their offices are not too far from yours
>>>>> at the IBS, maybe you could ask them directly ;)  Well, it was Paul
>>>>> who organised that the code be contributed to relax.  In reality the
>>>>> original authors were Nikolai Skrynnikov and Martin Tollinger.  The
>>>>> API documentation is also a useful resource for answering such
>>>>> questions (http://www.nmr-relax.com/api/3.2/).  For this, see the
>>>>> relax library documentation for that model:
>>>>>
>>>>>
>>>>> http://www.nmr-relax.com/api/3.2/lib.dispersion.ns_r1rho_2site-module.html
>>>>>
>>>>> This documentation describes the origin and history of the code.  You
>>>>> could even look at the source code for the direct implementation:
>>>>>
>>>>>
>>>>> http://www.nmr-relax.com/api/3.2/lib.dispersion.ns_r1rho_2site-pysrc.html
>>>>>
>>>>> Trelax is the 'relax_time' argument here.  You can find all
>>>>> implementation details in this API documentation.  Which relaxation
>>>>> time would you suggest as being correct?  I'm actually no longer sure
>>>>> which is being used.  And I'm not sure if the original code or even
>>>>> the numeric model itself was designed to handle variable time data.
>>>>>
>>>>>
>>>>>> Maybe I am in the minority using such time-consuming experiments, so I
>>>>>> always have such strange questions ...
>>>>>
>>>>> relax should still handle the situation.  Do you know if there is a
>>>>> special treatment for the numerical models for such data?  Do you know
>>>>> of a good citation?  Maybe the 'NS R1rho 2-site' model
>>>>> (http://wiki.nmr-relax.com/NS_R1rho_2-site) is not suitable for
>>>>> variable time data, and a different - and importantly published -
>>>>> solution is required.  The analytic models do not use the relaxation
>>>>> time value, so those are safe.  Hence, as a check, you should see very
>>>>> similar results from the 'DPL94' model
>>>>> (http://wiki.nmr-relax.com/DPL94) and the 'NS R1rho 2-site' model.  If
>>>>> not, something is wrong.
>>>>>
>>>>> If the 'NS R1rho 2-site' model is really only for fixed-time data,
>>>>> then we should modify relax to raise a RelaxError when this model is
>>>>> chosen for optimisation and the data is variable time.  As not many
>>>>> people optimise numeric models to variable-time data, your input into
>>>>> this question would be very valuable.  Cheers!
>>>>>
>>>>>
>>>>>> Maybe another annoying question for the fixed-time people:
>>>>>> Is it necessary to check how mono-exponential their relaxation curves
>>>>>> are under a certain rf-field? If not, how do they make sure they can
>>>>>> use the mono-exponential assumption to get R2eff from two points?
>>>>>
>>>>> From what I've seen and heard, some people do check, but the majority
>>>>> just assume that the curves will be mono-exponential and publish the
>>>>> fixed-time data and results.  Such a check is probably much more
>>>>> important for those collecting R1rho-type data rather than CPMG-type
>>>>> data.  Anyway, maybe you should ask people in front of their posters
>>>>> at conferences to get a better overview of what the field does.
>>>>>
>>>>> Regards,
>>>>>
>>>>> Edward
>>>>>
>>>>>
>>>>>> Best,
>>>>>>
>>>>>> Peixiang
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 05/19/2014 05:49 PM, Edward d'Auvergne wrote:
>>>>>>
>>>>>> Hi Peixiang,
>>>>>>
>>>>>> Welcome to the relax mailing lists!  The relaxation dispersion
>>>>>> analysis implemented in relax is quite flexible, and the data you have
>>>>>> is supported.  This is well documented in the relax manual which you
>>>>>> should have with your copy of relax (the docs/relax.pdf file).  Have a
>>>>>> look at section 'The R2eff model' in the dispersion chapter of the
>>>>>> manual (http://www.nmr-relax.com/manual/R2eff_model.html),
>>>>>> specifically the 'Variable relaxation period experiments' subsection.
>>>>>>
>>>>>> Unfortunately the sample scripts are all for the fixed time dispersion
>>>>>> experiments.  However you could have a look at one of the scripts used
>>>>>> for the test suite in relax:
>>>>>>
>>>>>> test_suite/system_tests/scripts/relax_disp/exp_fit.py
>>>>>>
>>>>>> This script is run in the test suite to ensure that the data you have
>>>>>> will always be supported.  There are many more scripts in that
>>>>>> directory which you might find interesting.  The 'r1rho_on_res_m61.py'
>>>>>> script also involves an exponential fit with many different relaxation
>>>>>> time periods.
>>>>
>>
>>
>
>

_______________________________________________
relax (http://www.nmr-relax.com)

This is the relax-users mailing list
relax-users@gna.org

To unsubscribe from this list, get a password
reminder, or change your subscription options,
visit the list information page at
https://mail.gna.org/listinfo/relax-users
