Re: R1rho RD analysis
Dear Peixiang,

I have just discovered a bug in relax, where 'relax_times' is not properly loaded from the data store and sent to the target function of the model 'NS R1rho 2-site'. This means that 'NS R1rho 2-site' can currently only be used for fixed time experiments. The bug has no influence on the other R1rho models, as they do not use relax_time in their equations.

Best,
Troels

2014-05-22 16:34 GMT+02:00 pma peixiang...@ibs.fr:

Hi Edward,

Thanks for your reply. I used the script mode in Linux. I tried 'r1rho_on_res_m61.py', but I think there is a mistake in the script. Line 27 has ds.fixed = True, but I think it should be False, because this experiment has different time periods, so the reference spectrum is not needed. I changed it to False, and the M61 model still gives the right result. But I still have some other questions.

1. How do you declare replicates in such variable relaxation period experiments? In the example r1rho_on_res_m61.py, all the data are read into one big matrix and the replicates are not declared, whereas for the fixed time period they are declared via the spectrum ID. Is there a solution for this?

2. I want to test 'NS R1rho 2-site', so I simply added it to MODELS in 'r1rho_on_res_m61.py'. It does not seem to work. First, it gives these warnings when it comes to 'NS R1rho 2-site' (grid_inc = 8):

Searching through 32768 grid nodes.
lib/dispersion/ns_r1rho_2site.py:118: RuntimeWarning: divide by zero encountered in double_scalars
    theta = atan(spin_lock_fields[i]/dA)
lib/dispersion/ns_r1rho_2site.py:132: RuntimeWarning: invalid value encountered in double_scalars
    back_calc[i] = -inv_relax_time * log(MA)

I think the grid search is too heavy for the NS model. Do you have a solution to constrain it? For example, if I want to limit the grid search to only kex and pA, but not the other parameters, how can I declare that? And for the 'RuntimeWarning: divide by zero encountered in double_scalars', do you have suggestions?
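As an aside on the size of that grid search: the 32768 grid nodes reported above follow directly from the increment count, as the grid is the Cartesian product of the increments over every optimised parameter. A quick sketch of the arithmetic (the parameter count of five and the coarser per-parameter increments are assumptions for illustration only, not the relax defaults):

```python
# 'grid_inc = 8' over k optimised parameters gives 8**k grid nodes.
# 32768 nodes is consistent with five grid dimensions (assumed here).
grid_inc = 8
n_params = 5                  # assumed number of optimised parameters
print(grid_inc ** n_params)   # 32768

# Restricting the search (coarser increments on parameters that are
# already well known) shrinks the grid multiplicatively:
coarse = [8, 8, 4, 4, 2]      # hypothetical per-parameter increments
nodes = 1
for inc in coarse:
    nodes *= inc
print(nodes)                  # 2048
```

The point is only that the node count grows exponentially with the number of free parameters, so fixing or coarsening even one or two parameters shrinks the grid search dramatically.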
The final results for 'NS R1rho 2-site' did not fit at all. So can the data for M61 only be fit with the M61 model, and not with the other models?

Thanks a lot!
Peixiang

On 05/19/2014 05:49 PM, Edward d'Auvergne wrote:

Hi Peixiang,

Welcome to the relax mailing lists! The relaxation dispersion analysis implemented in relax is quite flexible, and the data you have is supported. This is well documented in the relax manual which you should have with your copy of relax (the docs/relax.pdf file). Have a look at the section 'The R2eff model' in the dispersion chapter of the manual (http://www.nmr-relax.com/manual/R2eff_model.html), specifically the 'Variable relaxation period experiments' subsection. Unfortunately the sample scripts are all for the fixed time dispersion experiments. However, you could have a look at one of the scripts used for the test suite in relax:

test_suite/system_tests/scripts/relax_disp/exp_fit.py

This script is run in the test suite to ensure that the data you have will always be supported. There are many more scripts in that directory which you might find interesting. The 'r1rho_on_res_m61.py' script also involves an exponential fit with many different relaxation time periods. You could also try using the relax graphical user interface (GUI) by running:

$ relax -g

Here you simply load the peak list data via the 'Add' button in the 'Spectra list' GUI element and, when you reach the relaxation time wizard page, simply specify different times for the different spectra - relax will handle the rest. Have a look at http://www.nmr-relax.com/manual/Dispersion_GUI_mode_loading_data.html (the GUI section of the dispersion chapter). The GUI should be much easier to use than the script UI, though it is a lot less flexible if you wish to perform custom, non-standard analyses.

Regards,
Edward

On 19 May 2014 16:25, pma peixiang...@ibs.fr wrote:

Hi all,

Did anybody manage to fit the R1rho relaxation dispersion from some calculated R2 data? (E.g. I fitted the different relax_delay times to extract R1rho, then calculated R2 = R1rho/sin^2(theta) - R1/tan^2(theta).) I found a tutorial on the wiki, but it stopped after processing the spectra, and I have actually finished that part already. Besides, in the manual for the R1rho calculation (page 173 in the manual), in the R2eff model, it is defined as R1rho(w1) = -1/T * ln(I1(w1)/I0), i.e. it assumes a mono-exponential decay between time 0 and a fixed time point. We did it a bit differently: we recorded a series of time points and got the R1rho for each w1. Now I have all the R2s, and I want to use the implemented R1rho models to fit my data. I think it should be possible for relax to do the fitting with custom calculated R2 values. What kind of format should I use? Can anyone show an example of
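The off-resonance correction quoted above, R2 = R1rho/sin^2(theta) - R1/tan^2(theta) with tilt angle theta = arctan(omega1/Omega), can be sketched as a small helper. The function name and the numbers in the example are hypothetical, only there to exercise the formula:

```python
from math import atan2, sin, tan

def r2_from_r1rho(r1rho, r1, omega1, offset):
    """Correct R1rho to R2 using the rotating-frame tilt angle.

    theta = atan(omega1 / Omega);  R2 = R1rho/sin^2(theta) - R1/tan^2(theta).
    All frequencies must be in the same units (e.g. rad/s).
    """
    theta = atan2(omega1, offset)   # tilt angle of the effective field
    return r1rho / sin(theta)**2 - r1 / tan(theta)**2

# Hypothetical numbers, only to exercise the formula:
print(r2_from_r1rho(r1rho=12.0, r1=1.5, omega1=1500.0, offset=500.0))
```

A useful sanity check: on resonance (offset = 0) the tilt angle is 90 degrees, so R2 reduces to R1rho.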
Re: R1rho RD analysis
Hi,

For a permanent reference, here are the timings of all of the dispersion models in relax, with statistics from 10 repetitions of the calculation time for 100 function calls for 100 spins, i.e. either 10,000 function calls for single spins or 100 function calls for the cluster of 100 spins. These are for Troels' recent linear algebra optimisations in the disp_spin_speed relax branch (http://svn.gna.org/viewcvs/relax/branches/disp_spin_speed/?pathrev=24362).

The absolute timings are not very interesting, but the numbers do allow for comparisons between the models. This shows, for example, that Nikolai's numeric solution (the NS CPMG 2-site expanded model at http://www.nmr-relax.com/api/3.2/lib.dispersion.ns_cpmg_2site_expanded-module.html) is now only 1.08 or 1.39 times slower than Andy's newly published analytic model (B14). It also shows that both of these models are ~10 times faster for single spins, or ~100 times faster for clustered spins, than the currently implemented numeric models which use full evolution matrices, and that the clustered analysis is now significantly faster than the non-clustered analysis, though this advantage would be eliminated when running Gary Thompson's multi-processor framework with OpenMPI on a large cluster.

After Troels' impressive optimisations it can be seen that there are almost no optimisation possibilities left. You can see this in the code of the target_functions/relax_disp.py file (http://svn.gna.org/viewcvs/relax/branches/disp_spin_speed/target_functions/relax_disp.py?view=log&pathrev=24362) and the files in lib/dispersion/ implementing each dispersion model (http://svn.gna.org/viewcvs/relax/branches/disp_spin_speed/lib/dispersion/?pathrev=24362). Apart from the option of expanding the other NS CPMG 2-site * and NS R1rho * models following Nikolai's example of the NS CPMG 2-site expanded model, there is not much more speed to be obtained.
As this is using BLAS/LAPACK+OpenMP via numpy, rewriting the code in C or Fortran will not significantly increase the speed of the models. If you feel your model can be faster, any suggestions for speeding up the code are welcome. Note that the math domain error checking and edge case detection code is essential for providing numerical stability for the dispersion models, and is present for all models.

Here are the timings:

100 single spins analysis:

http://wiki.nmr-relax.com/No_Rex                     0.290+/-0.016 seconds
http://wiki.nmr-relax.com/LM63                       0.792+/-0.015 seconds
http://wiki.nmr-relax.com/LM63_3-site                1.042+/-0.018 seconds
http://wiki.nmr-relax.com/CR72                       1.588+/-0.038 seconds
http://wiki.nmr-relax.com/CR72_full                  1.723+/-0.030 seconds
http://wiki.nmr-relax.com/IT99                       0.894+/-0.020 seconds
http://wiki.nmr-relax.com/TSMFK01                    0.751+/-0.013 seconds
http://wiki.nmr-relax.com/B14                        3.975+/-0.070 seconds
http://wiki.nmr-relax.com/B14_full                   3.973+/-0.073 seconds
http://wiki.nmr-relax.com/NS_CPMG_2-site_expanded    4.234+/-0.089 seconds
http://wiki.nmr-relax.com/NS_CPMG_2-site_3D          45.374+/-1.355 seconds
http://wiki.nmr-relax.com/NS_CPMG_2-site_3D_full     45.954+/-1.165 seconds
http://wiki.nmr-relax.com/NS_CPMG_2-site_star        36.792+/-0.765 seconds
http://wiki.nmr-relax.com/NS_CPMG_2-site_star_full   36.345+/-0.420 seconds
http://wiki.nmr-relax.com/MMQ_CR72                   4.211+/-0.084 seconds
http://wiki.nmr-relax.com/NS_MMQ_2-site              83.476+/-2.326 seconds
http://wiki.nmr-relax.com/NS_MMQ_3-site_linear       91.677+/-1.460 seconds
http://wiki.nmr-relax.com/NS_MMQ_3-site              93.030+/-0.840 seconds
http://wiki.nmr-relax.com/M61                        0.907+/-0.057 seconds
http://wiki.nmr-relax.com/DPL94                      1.128+/-0.022 seconds
http://wiki.nmr-relax.com/TP02                       1.236+/-0.019 seconds
http://wiki.nmr-relax.com/TAP03                      1.952+/-0.028 seconds
http://wiki.nmr-relax.com/MP05                       1.445+/-0.029 seconds
http://wiki.nmr-relax.com/NS_R1rho_2-site            36.547+/-1.757 seconds
http://wiki.nmr-relax.com/NS_R1rho_3-site_linear     72.351+/-1.483 seconds
http://wiki.nmr-relax.com/NS_R1rho_3-site            73.000+/-1.696 seconds

Cluster of 100 spins analysis:

http://wiki.nmr-relax.com/No_Rex                     0.008+/-0.000 seconds
http://wiki.nmr-relax.com/LM63                       0.038+/-0.002 seconds
http://wiki.nmr-relax.com/LM63_3-site                0.069+/-0.001 seconds
http://wiki.nmr-relax.com/CR72                       0.107+/-0.002 seconds
http://wiki.nmr-relax.com/CR72_full                  0.112+/-0.005 seconds
http://wiki.nmr-relax.com/IT99                       0.009+/-0.000 seconds
http://wiki.nmr-relax.com/TSMFK01                    0.039+/-0.000 seconds
http://wiki.nmr-relax.com/B14
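The "mean +/- standard deviation over 10 repetitions of 100 calls" format used above can be reproduced with Python's timeit module. A sketch with a stand-in target function (the function body here is a placeholder, not a relax dispersion model):

```python
import timeit
import numpy as np

def target():
    """Stand-in for one dispersion target-function call (hypothetical)."""
    R = np.random.rand(6, 6)
    np.linalg.eig(R)

# 10 repetitions of 100 calls each, reported as mean +/- standard
# deviation, matching the format of the timings quoted above.
times = timeit.repeat(target, repeat=10, number=100)
print("%.3f+/-%.3f seconds" % (np.mean(times), np.std(times)))
```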
Re: R1rho RD analysis
Dear Andrew,

Thank you for your valuable comments! It has become clear to me that I should sit down with the theory to grasp why I actually do: 1) the matrix exponential and 2) the matrix power in all the numerical models. Then I would be able to select the best method to apply.

For ns_cpmg_2site_star.py (2x2 matrix) and ns_mmq_2site.py (2x2 matrix), I should write up the closed form. The corresponding characteristic polynomial is quadratic, so the eigenvalues can be expressed in closed form in terms of the matrix elements. For the symmetric matrix:

M = [ [a b], [b c] ]

the discriminant is:

D = sqrt(a**2 + 4*b**2 - 2*a*c + c**2)

and the eigenvalues lambda1 and lambda2 are:

l1 = 0.5*(a + c - D)
l2 = 0.5*(a + c + D)

M can then be decomposed as:

M = V^-1 * [ [l1 0], [0 l2] ] * V

where:

V11 = (a - c - D)/(2*b)
V12 = 1
V21 = (a - c + D)/(2*b)
V22 = 1
V = [ [V11 V12], [V21 V22] ]

For ns_mmq_3site.py (3x3 matrix), ns_cpmg_2site_3d.py (7x7 matrix), ns_r1rho_2site.py (6x6 matrix) and ns_r1rho_3site.py (9x9 matrix), I also asked on the Scipy mailing list. (Their mail server link is not working, so here is a short summary.)

### Charles R Harris: Are you solving a differential equation? If so, an explicit solution may not be the best way to go.

### Eric Moore: There are expansions for the matrix exponential in terms of Chebyshev and Laguerre matrix polynomials. (The first of these does get used a bit for calculating propagators in NMR; for instance, SPINEVOLUTION and SIMPSON both can/do use it.) However, your matrices seem quite small, even if there are a large number of them, so it may be very difficult to speed this up by much.

### Stephan Hoyer: My experience with solving very similar quantum mechanics problems is that using an ODE integrator like zvode (via scipy.integrate.ode) can be much faster than solving the propagator exactly.

### Michael Kreim: I agree with Stephan and Charles that it may be worth trying to solve the ODE using any suitable ODE solver.
In Matlab my experience was that it is very problem dependent whether the ODE solver or the matrix exponential method is faster. Also, you can have a look at Expokit, which is a very good implementation of matrix exponential solvers (using Krylov subspace methods): http://www.maths.uq.edu.au/expokit/. Unfortunately there is no Python implementation, but maybe you can call the Fortran or C++ version from Python. And you can have a look at operator splitting methods. If it is possible to split your problem into two (or more) sub-problems, then these methods often speed up the numerical computations. The problem usually lies in finding a suitable splitting. The implementation is not very complicated if you have numerical methods for the sub-problems.

## So from this, it sounds like I should figure out how to solve an ordinary differential equation for the magnetisation evolution. Would this be the right way to go? I am very happy for any comments.

Best,
Troels

2014-06-26 13:46 GMT+02:00 Andrew Baldwin andrew.bald...@chem.ox.ac.uk:

I am looking for a solution I can apply to all numerical models :-)

The 'closed form' solutions I describe are eigenvalue decompositions. So that's just a slightly smart way of doing that, rather than asking a minimiser to do it for you. No general closed form solutions exist for the eigenvalues (and hence eigenvectors), I think, above 3x3. Incidentally, the theoretical treatments of R1rho (needing a minimum of 6x6) rely on being able to approximate the smallest real eigenvalue of the six. So the theoretical R1rho treatments are in essence methods for approximating one of the six eigenvalues (which is hard enough). Getting the other five is far from straightforward and might not even be possible. This is also why a general CEST expression isn't going to happen: to get a complete treatment of the CEST experiment you need all six eigenvalues, and no-one has (yet) come up with a way of getting them all.
If the matrix is sparse then you can get some speedups. But in my hands at least, those algorithms only tend to work well on huge matrices that are really sparse (e.g. 100x100 with not much over 10% of the elements having entries).

On 26/06/2014 09:44, Troels Emtekær Linnet wrote:

Hi Andrew,

I am looking for a solution I can apply to all numerical models :-)

The current implementation is the eigenvalue decomposition:
http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/matrix_exponential.py

At the bottom, I provide the profiling for NS R1rho 2-site. You will see that the eig function takes 50% of the time. It is a little sad that the reason the numerical solutions are so slow is numpy.linalg.eig(). The models differ a little in matrix size:

http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_cpmg_2site_3d.py 7x7 matrix
http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_cpmg_2site_star.py 2x2 matrix
http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_mmq_2site.py 2x2 matrix
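Troels' closed form for the symmetric 2x2 case can be checked numerically against numpy. A small sketch with arbitrary example values for a, b and c (not taken from any relax model):

```python
import numpy as np
from math import sqrt

# Closed-form eigenvalues of the symmetric 2x2 matrix M = [[a, b], [b, c]].
a, b, c = 2.0, 0.7, -1.0
D = sqrt(a**2 + 4*b**2 - 2*a*c + c**2)   # discriminant
l1 = 0.5 * (a + c - D)
l2 = 0.5 * (a + c + D)

M = np.array([[a, b], [b, c]])
print(np.allclose(np.linalg.eigvalsh(M), [l1, l2]))   # True (ascending order)

# The same decomposition gives the matrix exponential in closed form:
# the columns of V below are the eigenvectors [(a-c-D)/(2b), 1] and
# [(a-c+D)/(2b), 1], and expm(M) = V @ diag(exp(l1), exp(l2)) @ V^-1.
V = np.array([[(a - c - D)/(2*b), (a - c + D)/(2*b)],
              [1.0, 1.0]])
expM = (V * np.exp([l1, l2])) @ np.linalg.inv(V)
```

This is exactly the "closed form instead of calling eig" idea: for 2x2 (and, with uglier expressions, 3x3) the decomposition costs a square root and a handful of arithmetic operations per matrix.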
Re: R1rho RD analysis
Hi Troels,

For reference, for the relax users reading this, the abbreviations Troels used for the relaxation dispersion models can be decoded using the relax wiki page http://wiki.nmr-relax.com/Category:Relaxation_dispersion.

Just a little heads-up. Within a week or two, we should be able to release a new version of relax. This update has focused on speed, recasting the data to higher dimensional numpy arrays and moving the multiple dot operations out of the for loops. So we have much better speed now. :-)

Troels, you just gave away the surprise of the relax 3.2.3 or 3.2.4 release! Note that relax was not particularly slow before these changes, but you have taken far greater advantage of the BLAS/LAPACK libraries behind the numpy functions to make the dispersion models in relax super fast, especially when spins are clustered. It's a pity we don't have the means to compare the optimisation target function speed between relax and other software to show how much faster relax now is, as the relax advantage of having a grid search to find the initial non-biased position for optimisation (a very important technique for optimisation that a number of other dispersion software do not provide) will give many relax users the incorrect impression that relax is slower than the other software. Though if relax is used on a cluster with OpenMPI, this is a non-issue.

But we still have a bottleneck with the numerical models, where doing the matrix exponential via eigenvalue decomposition is slow! Do any of you have any experience with a faster method for doing the matrix exponential?

These initial results show what you can expect (old -> new timings, in seconds):

R1rho, 100 single spins analysis:

DPL94:           23.525+/-0.409  -> 1.138+/-0.035, 20.676x faster.
TP02:            20.191+/-0.375  -> 1.238+/-0.020, 16.308x faster.
TAP03:           31.993+/-0.235  -> 1.956+/-0.025, 16.353x faster.
MP05:            24.892+/-0.354  -> 1.431+/-0.014, 17.395x faster.
NS R1rho 2-site: 245.961+/-2.637 -> 36.308+/-0.458, 6.774x faster.

R1rho, cluster of 100 spins analysis:

DPL94:           23.872+/-0.505  -> 0.145+/-0.002, 164.180x faster.
TP02:            20.445+/-0.411  -> 0.172+/-0.004, 118.662x faster.
TAP03:           32.212+/-0.234  -> 0.294+/-0.002, 109.714x faster.
MP05:            25.013+/-0.362  -> 0.188+/-0.003, 132.834x faster.
NS R1rho 2-site: 246.024+/-3.724 -> 33.119+/-0.320, 7.428x faster.

CPMG, 100 single spins analysis:

CR72:                      2.615+/-0.018   -> 1.614+/-0.024, 1.621x faster.
CR72 full:                 2.678+/-0.020   -> 1.839+/-0.165, 1.456x faster.
IT99:                      1.837+/-0.030   -> 0.881+/-0.010, 2.086x faster.
TSMFK01:                   1.665+/-0.049   -> 0.742+/-0.007, 2.243x faster.
B14:                       5.851+/-0.133   -> 3.978+/-0.049, 1.471x faster.
B14 full:                  5.789+/-0.102   -> 4.065+/-0.059, 1.424x faster.
NS CPMG 2-site expanded:   8.225+/-0.196   -> 4.140+/-0.062, 1.987x faster.
NS CPMG 2-site 3D:         240.027+/-3.182 -> 45.056+/-0.584, 5.327x faster.
NS CPMG 2-site 3D full:    240.910+/-4.882 -> 45.230+/-0.540, 5.326x faster.
NS CPMG 2-site star:       186.480+/-2.299 -> 36.400+/-0.397, 5.123x faster.
NS CPMG 2-site star full:  187.111+/-2.791 -> 36.745+/-0.689, 5.092x faster.

CPMG, cluster of 100 spins analysis:

CR72:                      2.610+/-0.035   -> 0.118+/-0.001, 22.138x faster.
CR72 full:                 2.674+/-0.021   -> 0.122+/-0.001, 21.882x faster.
IT99:                      0.018+/-0.000   -> 0.009+/-0.000, 2.044x faster.
  (IT99: cluster of only 1 spin, since v. 3.2.2 had a bug with the clustering analysis.)
TSMFK01:                   1.691+/-0.091   -> 0.039+/-0.000, 43.369x faster.
B14:                       5.891+/-0.127   -> 0.523+/-0.004, 11.264x faster.
B14 full:                  5.818+/-0.127   -> 0.515+/-0.021, 11.295x faster.
NS CPMG 2-site expanded:   8.167+/-0.159   -> 0.702+/-0.008, 11.638x faster.
NS CPMG 2-site 3D:         238.717+/-3.025 -> 41.380+/-0.950, 5.769x faster.
NS CPMG 2-site 3D full:    507.411+/-803.089 -> 41.852+/-1.317, 12.124x faster.
NS CPMG 2-site star:       187.004+/-1.935 -> 30.823+/-0.371, 6.067x faster.
NS CPMG 2-site star full:  187.852+/-2.698 -> 30.882+/-0.671, 6.083x faster.

There are a number of ways of computing the matrix exponential.
Avoiding the eigenvalue decomposition is the essential key. The Padé approximation is probably the best, followed by one of the Taylor series expansion approximations. As I mentioned in a different post to the relax-devel mailing list, this was used earlier in relax via Scipy. But I removed this and wrote my own algorithm using eigenvalue decomposition as the Scipy
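For reference, the "avoid the eigendecomposition" route can be sketched in plain numpy via scaling and squaring of a truncated series. SciPy's scipy.linalg.expm uses the related, more robust Padé scheme; the helper below is only an illustration of the idea, and the matrix values are arbitrary examples:

```python
import numpy as np

def expm_taylor(A, order=12, squarings=8):
    """Matrix exponential via scaling and squaring with a Taylor series.

    Scale A down so the series converges quickly, sum the truncated
    series for exp(A / 2**squarings), then square the result back up.
    """
    B = A / 2.0**squarings
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, order + 1):   # truncated Taylor series of exp(B)
        term = term @ B / k
        E = E + term
    for _ in range(squarings):      # exp(A) = exp(B)**(2**squarings)
        E = E @ E
    return E

# Compare with the eigendecomposition route on a small matrix
# (hypothetical numbers, not from any relax model):
R = np.array([[-10.0, 2.0], [3.0, -20.0]])
vals, V = np.linalg.eig(R)
expm_eig = ((V * np.exp(vals)) @ np.linalg.inv(V)).real
print(np.allclose(expm_taylor(R), expm_eig))   # True
```

No eig call is needed, only matrix products, which is why this family of methods can be attractive when eig dominates the profile.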
Re: R1rho RD analysis
Dear Peixiang, dear Andrew,

Just a little heads-up. Within a week or two, we should be able to release a new version of relax. This update has focused on speed, recasting the data to higher dimensional numpy arrays and moving the multiple dot operations out of the for loops. So we have much better speed now. :-)

But we still have a bottleneck with the numerical models, where doing the matrix exponential via eigenvalue decomposition is slow! Do any of you have any experience with a faster method for doing the matrix exponential?

These initial results show what you can expect (old -> new timings, in seconds):

R1rho, 100 single spins analysis:

DPL94:           23.525+/-0.409  -> 1.138+/-0.035, 20.676x faster.
TP02:            20.191+/-0.375  -> 1.238+/-0.020, 16.308x faster.
TAP03:           31.993+/-0.235  -> 1.956+/-0.025, 16.353x faster.
MP05:            24.892+/-0.354  -> 1.431+/-0.014, 17.395x faster.
NS R1rho 2-site: 245.961+/-2.637 -> 36.308+/-0.458, 6.774x faster.

R1rho, cluster of 100 spins analysis:

DPL94:           23.872+/-0.505  -> 0.145+/-0.002, 164.180x faster.
TP02:            20.445+/-0.411  -> 0.172+/-0.004, 118.662x faster.
TAP03:           32.212+/-0.234  -> 0.294+/-0.002, 109.714x faster.
MP05:            25.013+/-0.362  -> 0.188+/-0.003, 132.834x faster.
NS R1rho 2-site: 246.024+/-3.724 -> 33.119+/-0.320, 7.428x faster.

CPMG, 100 single spins analysis:

CR72:                      2.615+/-0.018   -> 1.614+/-0.024, 1.621x faster.
CR72 full:                 2.678+/-0.020   -> 1.839+/-0.165, 1.456x faster.
IT99:                      1.837+/-0.030   -> 0.881+/-0.010, 2.086x faster.
TSMFK01:                   1.665+/-0.049   -> 0.742+/-0.007, 2.243x faster.
B14:                       5.851+/-0.133   -> 3.978+/-0.049, 1.471x faster.
B14 full:                  5.789+/-0.102   -> 4.065+/-0.059, 1.424x faster.
NS CPMG 2-site expanded:   8.225+/-0.196   -> 4.140+/-0.062, 1.987x faster.
NS CPMG 2-site 3D:         240.027+/-3.182 -> 45.056+/-0.584, 5.327x faster.
NS CPMG 2-site 3D full:    240.910+/-4.882 -> 45.230+/-0.540, 5.326x faster.
NS CPMG 2-site star:       186.480+/-2.299 -> 36.400+/-0.397, 5.123x faster.
NS CPMG 2-site star full:  187.111+/-2.791 -> 36.745+/-0.689, 5.092x faster.

CPMG, cluster of 100 spins analysis:

CR72:                      2.610+/-0.035   -> 0.118+/-0.001, 22.138x faster.
CR72 full:                 2.674+/-0.021   -> 0.122+/-0.001, 21.882x faster.
IT99:                      0.018+/-0.000   -> 0.009+/-0.000, 2.044x faster.
  (IT99: cluster of only 1 spin, since v. 3.2.2 had a bug with the clustering analysis.)
TSMFK01:                   1.691+/-0.091   -> 0.039+/-0.000, 43.369x faster.
B14:                       5.891+/-0.127   -> 0.523+/-0.004, 11.264x faster.
B14 full:                  5.818+/-0.127   -> 0.515+/-0.021, 11.295x faster.
NS CPMG 2-site expanded:   8.167+/-0.159   -> 0.702+/-0.008, 11.638x faster.
NS CPMG 2-site 3D:         238.717+/-3.025 -> 41.380+/-0.950, 5.769x faster.
NS CPMG 2-site 3D full:    507.411+/-803.089 -> 41.852+/-1.317, 12.124x faster.
NS CPMG 2-site star:       187.004+/-1.935 -> 30.823+/-0.371, 6.067x faster.
NS CPMG 2-site star full:  187.852+/-2.698 -> 30.882+/-0.671, 6.083x faster.

The reason we can't tweak NS R1rho 2-site any more is that the time is now spent computing the matrix exponential itself.

# Lib function for NS R1rho 2-site.
Before: http://svn.gna.org/svn/relax/trunk/lib/dispersion/ns_r1rho_2site.py
Now, with all essential computations moved to be calculated beforehand:
http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_r1rho_2site.py

def ns_r1rho_2site(M0=None, M0_T=None, ...):
    # Once off parameter conversions.
    pB = 1.0 - pA
    k_BA = pA * kex
    k_AB = pB * kex

    # Extract shape of experiment.
    NE, NS, NM, NO = num_points.shape

    # The matrix that contains all the contributions to the evolution,
    # i.e. relaxation, exchange and chemical shift evolution.
    R_mat = rr1rho_3d_2site_rankN(R1=r1, r1rho_prime=r1rho_prime, dw=dw, omega=omega, offset=offset, w1=spin_lock_fields, k_AB=k_AB, k_BA=k_BA, relax_time=relax_time)

    # This matrix is a propagator that will evolve the magnetization with the matrix R.
    Rexpo_mat = matrix_exponential_rank_NE_NS_NM_NO_ND_x_x(R_mat)

    # Magnetization evolution.
    Rexpo_M0_mat = einsum('...ij,...jk', Rexpo_mat, M0)

    # Magnetization evolution, which includes all dimensions.
    MA_mat = einsum('...ij,...jk', M0_T, Rexpo_M0_mat)[:, :, :, :, :, 0, 0]

    # Do the back calculation.
    back_calc[:] = -inv_relax_time * log(MA_mat)

### The profiling scripts show that matrix_exponential_rank_NE_NS_NM_NO_ND_x_x(R_mat) is stealing the time:
# http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/matrix_exponential.py

matrix_exponential_rank_NE_NS_NM_NO_ND_x_x(A # The
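A minimal, self-contained sketch of the idea behind the rank-N matrix exponential above: numpy.linalg.eig and numpy.linalg.inv broadcast over the leading dimensions of a stacked array, and einsum recombines the factors, so the whole experiment grid is handled without a Python loop. This is an illustration of the technique, not the relax implementation, and the shapes and values are arbitrary:

```python
import numpy as np

def matrix_exponential_stack(A):
    """exp(A[i]) for a whole stack of square matrices at once.

    A has shape (..., n, n); eig and inv broadcast over the leading
    dimensions, so there is no Python loop over the stack.
    """
    vals, vecs = np.linalg.eig(A)                 # (..., n), (..., n, n)
    # V @ diag(exp(vals)) @ V^-1, batched: scale the columns of V by
    # exp(vals), then contract with V^-1 via einsum.
    Ve = vecs * np.exp(vals)[..., np.newaxis, :]
    return np.einsum('...ij,...jk->...ik', Ve, np.linalg.inv(vecs)).real

# Usage: a (2, 3, 6, 6) stack, i.e. six 6x6 evolution-like matrices.
np.random.seed(42)
A = np.random.rand(2, 3, 6, 6) - 0.5
E = matrix_exponential_stack(A)
print(E.shape)   # (2, 3, 6, 6)
```

The `.real` at the end is needed because eig of a real non-symmetric matrix can return complex eigenpairs even though the matrix exponential itself is real.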
Re: R1rho RD analysis
Hi Andrew,

I am looking for a solution I can apply to all numerical models :-)

The current implementation is the eigenvalue decomposition:
http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/matrix_exponential.py

At the bottom, I provide the profiling for NS R1rho 2-site. You will see that the eig function takes 50% of the time. It is a little sad that the reason the numerical solutions are so slow is numpy.linalg.eig(). The models differ a little in matrix size:

http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_cpmg_2site_3d.py 7x7 matrix
http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_cpmg_2site_star.py 2x2 matrix
http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_mmq_2site.py 2x2 matrix
http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_mmq_3site.py 3x3 matrix
http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_r1rho_2site.py 6x6 matrix
http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_r1rho_3site.py 9x9 matrix
### ns_r1rho_2site profiling
Thu Jun 26 10:42:13 2014    /var/folders/ww/1jkhkh315x57jglgxnr9g24wgp/T/tmp0buvpw

211077 function calls in 5.073 seconds

Ordered by: cumulative time

   ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
        1    0.000    0.000    5.073    5.073  <string>:1(<module>)
        1    0.007    0.007    5.073    5.073  profiling_ns_r1rho_2site.py:553(cluster)
       10    0.000    0.000    4.592    0.459  profiling_ns_r1rho_2site.py:515(calc)
       10    0.015    0.002    4.592    0.459  relax_disp.py:1585(func_ns_r1rho_2site)
       10    0.113    0.011    4.574    0.457  ns_r1rho_2site.py:190(ns_r1rho_2site)
       10    0.093    0.009    4.138    0.414  matrix_exponential.py:33(matrix_exponential_rank_NE_NS_NM_NO_ND_x_x)
       10    2.529    0.253    2.581    0.258  linalg.py:982(eig)
       40    0.890    0.022    0.890    0.022  {numpy.core.multiarray.einsum}
       10    0.524    0.052    0.552    0.055  linalg.py:455(inv)
        1    0.000    0.000    0.474    0.474  profiling_ns_r1rho_2site.py:112(__init__)
       10    0.125    0.013    0.304    0.030  ns_r1rho_2site.py:117(rr1rho_3d_2site_rankN)
        1    0.248    0.248    0.274    0.274  relax_disp.py:64(__init__)
       92    0.180    0.002    0.180    0.002  {method 'outer' of 'numpy.ufunc' objects}
        1    0.090    0.090    0.107    0.107  profiling_ns_r1rho_2site.py:189(return_offset_data)
        1    0.053    0.053    0.092    0.092  profiling_ns_r1rho_2site.py:289(return_r2eff_arrays)
       30    0.065    0.002    0.065    0.002  {method 'astype' of 'numpy.ndarray' objects}
    15109    0.043    0.000    0.043    0.000  {numpy.core.multiarray.array}
   118863    0.018    0.000    0.018    0.000  {method 'append' of 'list' objects}
     3004    0.004    0.000    0.018    0.000  numeric.py:136(ones)
       10    0.001    0.000    0.015    0.002  shape_base.py:761(tile)
       40    0.015    0.000    0.015    0.000  {method 'repeat' of 'numpy.ndarray' objects}
       10    0.012    0.001    0.013    0.001  linalg.py:214(_assertFinite)

2014-06-26 10:23 GMT+02:00 Andrew Baldwin andrew.bald...@chem.ox.ac.uk:

Hi Troels,

If you're dealing almost entirely with 2x2 matrices, i.e.:

R = [ (a_11, a_12), (a_21, a_22) ]

you can express the eigenvalues and eigenvectors exactly in terms of these coefficients. E.g., if J is the matrix of eigenvectors and R_D is the diagonal matrix of eigenvalues such that R = J R_D J^{-1}, then R^N = J R_D^N J^{-1}. Raising R_D to the power of N is the same as raising the diagonals to that power. You can do a similar trick with exponentials. I've never tried this, but in the case of a 2x2 matrix it should be possible to work out an exact expression for the individual elements of the matrix exponential in terms of the four values in R. This would be a lot faster than numerically getting the eigenvalues. It will also work for 3x3, but for anything much bigger than that the expressions are going to get nasty. If that's useful and you're not sure how to attack the maths, I can take a look.

Best,
Andy.

On 26/06/2014 09:00, Troels Emtekær Linnet wrote:

Dear Peixiang, dear Andrew,

Just a little heads-up. Within a week or two, we should be able to release a new version of relax. This update has focused on speed, recasting the data to higher dimensional numpy arrays and moving the multiple dot operations out of the for loops. So we have much better speed now. :-)

But we still have a bottleneck with the numerical models, where doing the matrix exponential via eigenvalue decomposition is slow! Do any of you have any experience with a faster method for doing the matrix exponential?

These initial results show what you can expect (old -> new timings, in seconds):

R1rho, 100 single spins analysis:

DPL94: 23.525+/-0.409 -> 1.138+/-0.035,
Re: R1rho RD analysis
Hi Troels,

If you're dealing almost entirely with 2x2 matrices, i.e.:

R = [ (a_11, a_12), (a_21, a_22) ]

you can express the eigenvalues and eigenvectors exactly in terms of these coefficients. E.g., if J is the matrix of eigenvectors and R_D is the diagonal matrix of eigenvalues such that:

R = J R_D J^{-1}

then:

R^N = J R_D^N J^{-1}

Raising R_D to the power of N is the same as raising the diagonals to that power. You can do a similar trick with exponentials. I've never tried this, but in the case of a 2x2 matrix it should be possible to work out an exact expression for the individual elements of the matrix exponential in terms of the four values in R. This would be a lot faster than numerically getting the eigenvalues. It will also work for 3x3, but for anything much bigger than that the expressions are going to get nasty. If that's useful and you're not sure how to attack the maths, I can take a look.

Best,
Andy.

On 26/06/2014 09:00, Troels Emtekær Linnet wrote:

Dear Peixiang, dear Andrew,

Just a little heads-up. Within a week or two, we should be able to release a new version of relax. This update has focused on speed, recasting the data to higher dimensional numpy arrays and moving the multiple dot operations out of the for loops. So we have much better speed now. :-)

But we still have a bottleneck with the numerical models, where doing the matrix exponential via eigenvalue decomposition is slow! Do any of you have any experience with a faster method for doing the matrix exponential?

These initial results show what you can expect (old -> new timings, in seconds):

R1rho, 100 single spins analysis:

DPL94:           23.525+/-0.409  -> 1.138+/-0.035, 20.676x faster.
TP02:            20.191+/-0.375  -> 1.238+/-0.020, 16.308x faster.
TAP03:           31.993+/-0.235  -> 1.956+/-0.025, 16.353x faster.
MP05:            24.892+/-0.354  -> 1.431+/-0.014, 17.395x faster.
NS R1rho 2-site: 245.961+/-2.637 -> 36.308+/-0.458, 6.774x faster.

R1rho, cluster of 100 spins analysis:

DPL94:           23.872+/-0.505  -> 0.145+/-0.002, 164.180x faster.
TP02:            20.445+/-0.411  -> 0.172+/-0.004, 118.662x faster.
TAP03:           32.212+/-0.234  -> 0.294+/-0.002, 109.714x faster.
MP05:            25.013+/-0.362  -> 0.188+/-0.003, 132.834x faster.
NS R1rho 2-site: 246.024+/-3.724 -> 33.119+/-0.320, 7.428x faster.

CPMG, 100 single spins analysis:

CR72:                      2.615+/-0.018   -> 1.614+/-0.024, 1.621x faster.
CR72 full:                 2.678+/-0.020   -> 1.839+/-0.165, 1.456x faster.
IT99:                      1.837+/-0.030   -> 0.881+/-0.010, 2.086x faster.
TSMFK01:                   1.665+/-0.049   -> 0.742+/-0.007, 2.243x faster.
B14:                       5.851+/-0.133   -> 3.978+/-0.049, 1.471x faster.
B14 full:                  5.789+/-0.102   -> 4.065+/-0.059, 1.424x faster.
NS CPMG 2-site expanded:   8.225+/-0.196   -> 4.140+/-0.062, 1.987x faster.
NS CPMG 2-site 3D:         240.027+/-3.182 -> 45.056+/-0.584, 5.327x faster.
NS CPMG 2-site 3D full:    240.910+/-4.882 -> 45.230+/-0.540, 5.326x faster.
NS CPMG 2-site star:       186.480+/-2.299 -> 36.400+/-0.397, 5.123x faster.
NS CPMG 2-site star full:  187.111+/-2.791 -> 36.745+/-0.689, 5.092x faster.

CPMG, cluster of 100 spins analysis:

CR72:                      2.610+/-0.035   -> 0.118+/-0.001, 22.138x faster.
CR72 full:                 2.674+/-0.021   -> 0.122+/-0.001, 21.882x faster.
IT99:                      0.018+/-0.000   -> 0.009+/-0.000, 2.044x faster.
  (IT99: cluster of only 1 spin, since v. 3.2.2 had a bug with the clustering analysis.)
TSMFK01:                   1.691+/-0.091   -> 0.039+/-0.000, 43.369x faster.
B14:                       5.891+/-0.127   -> 0.523+/-0.004, 11.264x faster.
B14 full:                  5.818+/-0.127   -> 0.515+/-0.021, 11.295x faster.
NS CPMG 2-site expanded:   8.167+/-0.159   -> 0.702+/-0.008, 11.638x faster.
NS CPMG 2-site 3D:         238.717+/-3.025 -> 41.380+/-0.950, 5.769x faster.
NS CPMG 2-site 3D full:    507.411+/-803.089 -> 41.852+/-1.317, 12.124x faster.
NS CPMG 2-site star:       187.004+/-1.935 -> 30.823+/-0.371, 6.067x faster.
NS CPMG 2-site star full:  187.852+/-2.698 -> 30.882+/-0.671, 6.083x faster.

The reason we can't tweak NS R1rho 2-site any more is that the time is now spent computing the matrix exponential itself.

# Lib function for NS R1rho 2-site.
Before: http://svn.gna.org/svn/relax/trunk/lib/dispersion/ns_r1rho_2site.py
Now, with all essential computations moved to be calculated beforehand:
http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_r1rho_2site.py

def ns_r1rho_2site(M0=None, M0_T=None, ...):
    # Once off parameter conversions.
    pB = 1.0 - pA
    k_BA = pA * kex
    k_AB = pB * kex

    # Extract shape of experiment.
    NE, NS, NM, NO = num_points.shape

    # The matrix that contains all
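Andrew's R^N = J R_D^N J^{-1} trick can be verified numerically in a few lines. A small sketch with arbitrary 2x2 values (not from any relax model): once the eigendecomposition is in hand, the matrix power acts only on the diagonal of R_D, and the matrix exponential is obtained the same way by exponentiating the diagonal.

```python
import numpy as np

# With R = J R_D J^{-1}, powers (and exponentials) act only on the
# diagonal of R_D.  Hypothetical 2x2 numbers.
R = np.array([[-5.0, 1.0], [2.0, -8.0]])
vals, J = np.linalg.eig(R)

N = 6
R_power = (J * vals**N) @ np.linalg.inv(J)   # J @ diag(vals**N) @ J^-1
print(np.allclose(R_power, np.linalg.matrix_power(R, N)))   # True
```

The `(J * vals**N)` expression scales the columns of J, which is equivalent to the product J @ diag(vals**N) without forming the diagonal matrix explicitly.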
Re: R1rho RD analysis
I am looking for a solution I can apply to all numerical models :-) The 'close formed' solutions I describe are eigenvalue decompositions. So that's just a slightly smart way of doing that rather than asking a minimiser to do it for you. No general closed form solutions exist for the eigenvalues (hence eigenvectors) I think above 3x3. Incidentally, the theoretical treatments of the R1rho (needing minimum of 6x6) relies on being able to approximate the smallest real eigenvalues of the six. So the theoretical R1rho theoretical treatments are in essence methods for approximating one of the six eigenvalues (which is hard enough). Getting the other five is far from straightforward and might not even be possible. This is also why a general CEST expression isn't going to happen: to get a complete treatment of the CEST experiment you need all six eigenvalues, and no-one has (yet) come up with a way of getting them all. If the matrix is sparse then you can get some speedups. But in my hands at least, those algorithms only tend to work well on huge matrices that are really sparse (eg100x100 with not much over 10% of elements having entries). On 26/06/2014 09:44, Troels Emtekær Linnet wrote: Hi Andrew. I am looking for a solution I can apply to all numerical models :-) The current implementation, is the eigenvalue decomposition. http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/matrix_exponential.py It the bottom, I provide the profiling for NS R1rho 2site. You will see, that using the eig function, takes 50% of the time: That is i little sad, that the reason why Numerical solutions is so slow, is numpy.linalg.eig(). They differ a little in matrix size. http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_cpmg_2site_3d.py 7 X 7 matrix. http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_cpmg_2site_star.py 2x2 matrix http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_mmq_2site.py 2x2 matrix. 
http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_mmq_3site.py 3x3 matrix. http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_cpmg_2site_3d.py 6x6 matrix. http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_r1rho_3site.py 9x9 matrix.

### ns_cpmg_2site_3d
Thu Jun 26 10:42:13 2014    /var/folders/ww/1jkhkh315x57jglgxnr9g24wgp/T/tmp0buvpw

211077 function calls in 5.073 seconds

Ordered by: cumulative time

ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
     1    0.000    0.000    5.073    5.073  <string>:1(<module>)
     1    0.007    0.007    5.073    5.073  profiling_ns_r1rho_2site.py:553(cluster)
    10    0.000    0.000    4.592    0.459  profiling_ns_r1rho_2site.py:515(calc)
    10    0.015    0.002    4.592    0.459  relax_disp.py:1585(func_ns_r1rho_2site)
    10    0.113    0.011    4.574    0.457  ns_r1rho_2site.py:190(ns_r1rho_2site)
    10    0.093    0.009    4.138    0.414  matrix_exponential.py:33(matrix_exponential_rank_NE_NS_NM_NO_ND_x_x)
    10    2.529    0.253    2.581    0.258  linalg.py:982(eig)
    40    0.890    0.022    0.890    0.022  {numpy.core.multiarray.einsum}
    10    0.524    0.052    0.552    0.055  linalg.py:455(inv)
     1    0.000    0.000    0.474    0.474  profiling_ns_r1rho_2site.py:112(__init__)
    10    0.125    0.013    0.304    0.030  ns_r1rho_2site.py:117(rr1rho_3d_2site_rankN)
     1    0.248    0.248    0.274    0.274  relax_disp.py:64(__init__)
    92    0.180    0.002    0.180    0.002  {method 'outer' of 'numpy.ufunc' objects}
     1    0.090    0.090    0.107    0.107  profiling_ns_r1rho_2site.py:189(return_offset_data)
     1    0.053    0.053    0.092    0.092  profiling_ns_r1rho_2site.py:289(return_r2eff_arrays)
    30    0.065    0.002    0.065    0.002  {method 'astype' of 'numpy.ndarray' objects}
 15109    0.043    0.000    0.043    0.000  {numpy.core.multiarray.array}
118863    0.018    0.000    0.018    0.000  {method 'append' of 'list' objects}
  3004    0.004    0.000    0.018    0.000  numeric.py:136(ones)
    10    0.001    0.000    0.015    0.002  shape_base.py:761(tile)
    40    0.015    0.000    0.015    0.000  {method 'repeat' of 'numpy.ndarray' objects}
    10    0.012    0.001    0.013    0.001  linalg.py:214(_assertFinite)

2014-06-26 10:23 GMT+02:00 Andrew Baldwin andrew.bald...@chem.ox.ac.uk: Hi Troels, If you're dealing almost entirely with
2x2 matrices, i.e.: R = [(a_11, a_12), (a_21, a_22)], then you can express the eigenvalues and eigenvectors exactly in terms of these coefficients. E.g. if J is the matrix of eigenvectors and R_D is the diagonal matrix of eigenvalues such that R = J R_D J^{-1}, then R^N = J R_D^N J^{-1}. Raising R_D to the power of N is the same as raising the diagonals to the power of N.
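Andrew's 2x2 shortcut can be sketched in a few lines of numpy. This is only an illustration, not relax code: the names eig2x2 and power_via_eig2x2 are made up for the example, and it assumes the matrix is diagonalizable with a_12 != 0.

```python
import numpy as np

def eig2x2(R):
    # Closed-form eigenvalues of a 2x2 matrix from the characteristic
    # polynomial: lambda = (tr +/- sqrt(tr^2 - 4*det)) / 2.
    (a, b), (c, d) = R
    tr, det = a + d, a * d - b * c
    disc = np.sqrt(complex(tr * tr - 4.0 * det))
    return (tr + disc) / 2.0, (tr - disc) / 2.0

def power_via_eig2x2(R, N):
    # R^N = J R_D^N J^{-1}: raising the diagonal matrix R_D to the power
    # N is just raising its diagonal entries to the power N.
    l1, l2 = eig2x2(R)
    a, b = R[0][0], R[0][1]
    # For eigenvalue l, (b, l - a) is an eigenvector (assumes b != 0).
    J = np.array([[b, b], [l1 - a, l2 - a]], dtype=complex)
    return (J * np.array([l1 ** N, l2 ** N])) @ np.linalg.inv(J)
```

The same trick gives the matrix exponential by replacing l**N with exp(l), which is why closed-form eigenvalues make the 2x2 numeric models so much cheaper than a call to numpy.linalg.eig.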
Re: R1rho RD analysis
http://www.cs.cornell.edu/cv/researchpdf/19ways+.pdf On 26/06/2014 09:00, Troels Emtekær Linnet wrote: Dear Peixiang, dear Andrew. Just a little heads-up. Within a week or two, we should be able to release a new version of relax. This update has focused on speed, recasting the data to higher-dimensional numpy arrays and moving the multiple dot operations out of the for loops. So we have much better speed now. :-) But we still have a bottleneck with the numerical models, where doing the matrix exponential via eigenvalue decomposition is slow! Do any of you have experience with a faster method for doing the matrix exponential? These initial results show what you can expect (old time -> new time, in seconds):

R1rho, 100 single spins analysis:
DPL94: 23.525+/-0.409 -> 1.138+/-0.035, 20.676x faster.
TP02: 20.191+/-0.375 -> 1.238+/-0.020, 16.308x faster.
TAP03: 31.993+/-0.235 -> 1.956+/-0.025, 16.353x faster.
MP05: 24.892+/-0.354 -> 1.431+/-0.014, 17.395x faster.
NS R1rho 2-site: 245.961+/-2.637 -> 36.308+/-0.458, 6.774x faster.

R1rho, cluster of 100 spins analysis:
DPL94: 23.872+/-0.505 -> 0.145+/-0.002, 164.180x faster.
TP02: 20.445+/-0.411 -> 0.172+/-0.004, 118.662x faster.
TAP03: 32.212+/-0.234 -> 0.294+/-0.002, 109.714x faster.
MP05: 25.013+/-0.362 -> 0.188+/-0.003, 132.834x faster.
NS R1rho 2-site: 246.024+/-3.724 -> 33.119+/-0.320, 7.428x faster.

CPMG, 100 single spins analysis:
CR72: 2.615+/-0.018 -> 1.614+/-0.024, 1.621x faster.
CR72 full: 2.678+/-0.020 -> 1.839+/-0.165, 1.456x faster.
IT99: 1.837+/-0.030 -> 0.881+/-0.010, 2.086x faster.
TSMFK01: 1.665+/-0.049 -> 0.742+/-0.007, 2.243x faster.
B14: 5.851+/-0.133 -> 3.978+/-0.049, 1.471x faster.
B14 full: 5.789+/-0.102 -> 4.065+/-0.059, 1.424x faster.
NS CPMG 2-site expanded: 8.225+/-0.196 -> 4.140+/-0.062, 1.987x faster.
NS CPMG 2-site 3D: 240.027+/-3.182 -> 45.056+/-0.584, 5.327x faster.
NS CPMG 2-site 3D full: 240.910+/-4.882 -> 45.230+/-0.540, 5.326x faster.
NS CPMG 2-site star: 186.480+/-2.299 -> 36.400+/-0.397, 5.123x faster.
NS CPMG 2-site star full: 187.111+/-2.791 -> 36.745+/-0.689, 5.092x faster.

CPMG, cluster of 100 spins analysis:
CR72: 2.610+/-0.035 -> 0.118+/-0.001, 22.138x faster.
CR72 full: 2.674+/-0.021 -> 0.122+/-0.001, 21.882x faster.
IT99: 0.018+/-0.000 -> 0.009+/-0.000, 2.044x faster. (IT99: cluster of only 1 spin analysed, since v. 3.2.2 had a bug with clustering analysis.)
TSMFK01: 1.691+/-0.091 -> 0.039+/-0.000, 43.369x faster.
B14: 5.891+/-0.127 -> 0.523+/-0.004, 11.264x faster.
B14 full: 5.818+/-0.127 -> 0.515+/-0.021, 11.295x faster.
NS CPMG 2-site expanded: 8.167+/-0.159 -> 0.702+/-0.008, 11.638x faster.
NS CPMG 2-site 3D: 238.717+/-3.025 -> 41.380+/-0.950, 5.769x faster.
NS CPMG 2-site 3D full: 507.411+/-803.089 -> 41.852+/-1.317, 12.124x faster.
NS CPMG 2-site star: 187.004+/-1.935 -> 30.823+/-0.371, 6.067x faster.
NS CPMG 2-site star full: 187.852+/-2.698 -> 30.882+/-0.671, 6.083x faster.

The reason we can't tweak NS R1rho 2-site any further is that we are computing the matrix exponential.

# Lib function for NS R1rho 2-site.
Before: http://svn.gna.org/svn/relax/trunk/lib/dispersion/ns_r1rho_2site.py
Now, with all essential computations moved out to be calculated beforehand: http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_r1rho_2site.py

def ns_r1rho_2site(M0=None, M0_T=None, ...):
    # Once off parameter conversions.
    pB = 1.0 - pA
    k_BA = pA * kex
    k_AB = pB * kex

    # Extract shape of experiment.
    NE, NS, NM, NO = num_points.shape

    # The matrix that contains all the contributions to the evolution, i.e. relaxation, exchange and chemical shift evolution.
    R_mat = rr1rho_3d_2site_rankN(R1=r1, r1rho_prime=r1rho_prime, dw=dw, omega=omega, offset=offset, w1=spin_lock_fields, k_AB=k_AB, k_BA=k_BA, relax_time=relax_time)

    # This matrix is a propagator that will evolve the magnetization with the matrix R.
    Rexpo_mat = matrix_exponential_rank_NE_NS_NM_NO_ND_x_x(R_mat)

    # Magnetization evolution.
    Rexpo_M0_mat = einsum('...ij,...jk', Rexpo_mat, M0)

    # Magnetization evolution, which includes all dimensions.
    MA_mat = einsum('...ij,...jk', M0_T, Rexpo_M0_mat)[:, :, :, :, :, 0, 0]

    # Do back calculation.
    back_calc[:] = -inv_relax_time * log(MA_mat)

### The profiling script shows that matrix_exponential_rank_NE_NS_NM_NO_ND_x_x(R_mat) is stealing the time.
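The eigendecomposition at the heart of that bottleneck can be sketched in a few lines of numpy. This is a simplified single-matrix illustration, not the rank-N relax code: the names expm_eig and back_calc_r1rho are invented for the example, and it assumes the relaxation matrix is diagonalizable.

```python
import numpy as np

def expm_eig(R):
    # Matrix exponential via eigenvalue decomposition:
    # expm(R) = V . diag(exp(w)) . V^-1, with w, V from numpy.linalg.eig.
    w, V = np.linalg.eig(R)
    return (V * np.exp(w)) @ np.linalg.inv(V)

def back_calc_r1rho(R, M0, relax_time):
    # Evolve the magnetization M0 under R for the relaxation period,
    # project back onto M0, and back-calculate the rate as
    # R1rho = -ln(M_A) / relax_time.
    MA = M0 @ expm_eig(R * relax_time) @ M0
    return -np.log(MA.real) / relax_time
```

In the relax branch, eig() and inv() are applied across the whole NE x NS x NM x NO x ND stack of matrices at once, which is the linalg.py eig call consuming roughly 50% of the time in the profile above.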
Re: R1rho RD analysis
Hi Peixiang, Please see below: Congratulations on the new version 3.2.2, I tried it and it works well :) Cheers. If you notice any other problems or strange behaviour, please don't hesitate to submit a bug report (https://gna.org/bugs/?func=additem&group=relax). Then that problem will likely be fixed for the next relax version. Still one question about using different relaxation time periods. My R1rho RD experiment has different relaxation time periods, and I could input all the peaks via the loop. Then I fit with the 'NS 2-site R1rho' model, which could also do the fitting, give the results and a nice fit of the dispersion curve. Still, I did not figure out which Trelax it is using in the NS model in the case of different relaxation time periods. Only the last relaxation time period? Then fit as a fixed-time experiment? As this code was directly contributed by Paul Schanda and Dominique Marion, and I'm guessing that their offices are not too far from yours at the IBS, maybe you could ask them directly ;) Well, it was Paul who organised that the code be contributed to relax. In reality the original authors were Nikolai Skrynnikov and Martin Tollinger. The API documentation is also a useful resource for answering such questions (http://www.nmr-relax.com/api/3.2/). For this, see the relax library documentation for that model: http://www.nmr-relax.com/api/3.2/lib.dispersion.ns_r1rho_2site-module.html This documentation describes the origin and history of the code. You could even look at the source code for the direct implementation: http://www.nmr-relax.com/api/3.2/lib.dispersion.ns_r1rho_2site-pysrc.html Trelax is the 'relax_time' argument here. You can find all implementation details in this API documentation. Which relaxation time would you suggest as being correct? I'm actually no longer sure which is being used. And I'm not sure if the original code, or even the numeric model itself, was designed to handle variable-time data.
Maybe I am in the minority using such time-consuming experiments, so I always have such strange questions ... relax should still handle the situation. Do you know if there is a special treatment for the numerical models for such data? Do you know of a good citation? Maybe the 'NS R1rho 2-site' model (http://wiki.nmr-relax.com/NS_R1rho_2-site) is not suitable for variable-time data, and a different - and importantly published - solution is required. The analytic models do not use the relaxation time value, so those are safe. Hence, as a check, you should see very similar results from the 'DPL94' model (http://wiki.nmr-relax.com/DPL94) and the 'NS R1rho 2-site' model. If not, something is wrong. If the 'NS R1rho 2-site' model is really only for fixed-time data, then we should modify relax to raise a RelaxError when this model is chosen for optimisation and the data is variable time. As not many people optimise numeric models to variable-time data, your input into this question would be very valuable. Cheers! Maybe another annoying question for the fixed-time people: is it necessary to check how mono-exponential their relaxation curves are under a certain rf-field? If not, how do they make sure they can use the mono-exponential assumption to get R2eff from two points? From what I've seen and heard, some people do check, but the majority just assume that the curves will be mono-exponential and publish the fixed-time data and results. Such a check is probably much more important for those collecting R1rho-type data rather than CPMG-type data. Anyway, maybe you should ask people in front of their posters at conferences to get a better overview of what the field does. Regards, Edward Best, Peixiang On 05/19/2014 05:49 PM, Edward d'Auvergne wrote: Hi Peixiang, Welcome to the relax mailing lists! The relaxation dispersion analysis implemented in relax is quite flexible, and the data you have is supported.
This is well documented in the relax manual which you should have with your copy of relax (the docs/relax.pdf file). Have a look at the section 'The R2eff model' in the dispersion chapter of the manual (http://www.nmr-relax.com/manual/R2eff_model.html), specifically the 'Variable relaxation period experiments' subsection. Unfortunately the sample scripts are all for the fixed-time dispersion experiments. However you could have a look at one of the scripts used for the test suite in relax: test_suite/system_tests/scripts/relax_disp/exp_fit.py This script is run in the test suite to ensure that the data you have will always be supported. There are many more scripts in that directory which you might find interesting. The 'r1rho_on_res_m61.py' script also involves an exponential fit with many different relaxation time periods. ___ relax (http://www.nmr-relax.com) This is the relax-users mailing list relax-users@gna.org To unsubscribe from this
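The two-point, fixed-time R2eff discussed in this thread follows directly from the mono-exponential assumption I(T) = I0 * exp(-R2eff * T). A minimal sketch (the function name r2eff_two_point is made up for illustration; this is the standard textbook relation, not relax's own code):

```python
import math

def r2eff_two_point(i_ref, i_relax, relax_time):
    # Mono-exponential decay: I(T) = I0 * exp(-R2eff * T), where the
    # reference spectrum gives I0, so R2eff = -ln(I(T)/I0) / T.
    return -math.log(i_relax / i_ref) / relax_time
```

Checking whether the decay really is mono-exponential requires at least a third time point, which is exactly what two-point fixed-time data cannot provide - Peixiang's point above.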
Re: R1rho RD analysis
Hi Peixiang, Actually, for comparison purposes for applying the 'NS R1rho 2-site' model (http://wiki.nmr-relax.com/NS_R1rho_2-site) to variable-time R1rho-type data, Art Palmer's MP05 model would be much better (http://wiki.nmr-relax.com/MP05) than the DPL94 model (http://wiki.nmr-relax.com/DPL94) as it is of much higher quality. Andy Baldwin apparently has derived an even better analytic model, especially when R20A and R20B are significantly different, see: http://gna.org/support/?3155#comment0 and the discussions in the thread: http://thread.gmane.org/gmane.science.nmr.relax.devel/5414/focus=5447 and: http://thread.gmane.org/gmane.science.nmr.relax.devel/5410/focus=5433 This last thread is about the B14 model (Baldwin 2014, http://wiki.nmr-relax.com/B14) implemented in relax by Troels Linnet, but there are mentions of Andy's R1rho model. However the R1rho model from Andy is not implemented in relax yet. Do you have much experience with variable-time R1rho numeric models? Looking at the code for where the relax_time variable comes from, it is not very clear which relaxation time is being used: http://www.nmr-relax.com/api/3.2/specific_analyses.relax_disp.data-module.html#loop_time From the code itself: http://www.nmr-relax.com/api/3.2/specific_analyses.relax_disp.data-pysrc.html#loop_time it looks like this loop_time() function assumes fixed-time data and hence only the first encountered time value for the given experiment, magnetic field strength, offset, and dispersion point is used. So your expertise will be very useful for resolving this variable-time R1rho numeric model problem! 
Note that there are a few improvements to the R1rho models that are yet to be implemented in relax: http://thread.gmane.org/gmane.science.nmr.relax.devel/5414/focus=5808 http://www.nmr-relax.com/manual/do_dispersion_features_yet_be_implemented.html Cheers, Edward On 11 June 2014 10:07, Edward d'Auvergne edw...@nmr-relax.com wrote: Hi Peixiang, Please see below: Congratulations about the new version of 3.2.2, I tried, it works well :) Cheers. If you notice any other problems or strange behaviour, please don't hesitate to submit a bug report (https://gna.org/bugs/?func=additemgroup=relax). Then that problem will likely be fixed for the next relax version. Still one question of using the different relaxation time periods. My R1rho RD experiment has different relaxation time periods, I could input all the peaks by the loop. Then I fit with 'NS 2-site R1 model', they could also do the fitting and give the results and also a nice fitting of the dispersion curve. Still I did not figure out, which Trelax is it using in the NS model in the case of different relaxation time periods. Only the last relaxation time period? Then fit as fixed time experiment? As this code was directly contributed by Paul Schanda and Dominique Marion, and I'm guessing that their offices are not too far from yours at the IBS, maybe you could ask them directly ;) Well, it was Paul who organised that the code be contributed to relax. In reality the original authors were Nikolai Skrynnikov and Martin Tollinger. The API documentation is also a useful resource for answering such questions (http://www.nmr-relax.com/api/3.2/). For this, see the relax library documentation for that model: http://www.nmr-relax.com/api/3.2/lib.dispersion.ns_r1rho_2site-module.html This documentation describes the origin and history of the code. 
You could even look at the source code for the direct implementation: http://www.nmr-relax.com/api/3.2/lib.dispersion.ns_r1rho_2site-pysrc.html Trelax is the 'relax_time' argument here. You can find all implementation details in this API documentation. Which relaxation time would you suggest as being correct? I'm actually no longer sure which is being used. And I'm not sure if the original code or even the numeric model itself was designed to handle variable time data. Maybe I am the minority to use such time consuming experiments, so I always have such strange questions ... relax should still handle the situation. Do you know if there is a special treatment for the numerical models for such data? Do you know of a good citation? Maybe the 'NS R1rho 2-site' model (http://wiki.nmr-relax.com/NS_R1rho_2-site) is not suitable for variable time data, and a different - and importantly published - solution is required. The analytic models do not use the relaxation time value, so those are safe. Hence, as a check, you should see very similar results from the 'DPL94' model (http://wiki.nmr-relax.com/DPL94) and the 'NS R1rho 2-site' model. If not, something is wrong. If the 'NS R1rho 2-site' model is really only for fixed-time data, then we should modify relax to raise a RelaxError when this model is chosen for optimisation and the data is variable time. As not many people optimise numeric models to variable-time data, your input into this
Re: R1rho RD analysis
Hi all, Congratulations on the new version 3.2.2, I tried it and it works well :) Still one question about using different relaxation time periods. My R1rho RD experiment has different relaxation time periods, and I could input all the peaks via the loop. Then I fit with the 'NS 2-site R1rho' model, which could also do the fitting, give the results and a nice fit of the dispersion curve. Still, I did not figure out which Trelax it is using in the NS model in the case of different relaxation time periods. Only the last relaxation time period? Then fit as a fixed-time experiment? Maybe I am in the minority using such time-consuming experiments, so I always have such strange questions ... Maybe another annoying question for the fixed-time people: is it necessary to check how mono-exponential their relaxation curves are under a certain rf-field? If not, how do they make sure they can use the mono-exponential assumption to get R2eff from two points? Best, Peixiang On 05/19/2014 05:49 PM, Edward d'Auvergne wrote: Hi Peixiang, Welcome to the relax mailing lists! The relaxation dispersion analysis implemented in relax is quite flexible, and the data you have is supported. This is well documented in the relax manual which you should have with your copy of relax (the docs/relax.pdf file). Have a look at the section 'The R2eff model' in the dispersion chapter of the manual (http://www.nmr-relax.com/manual/R2eff_model.html), specifically the 'Variable relaxation period experiments' subsection. Unfortunately the sample scripts are all for the fixed-time dispersion experiments. However you could have a look at one of the scripts used for the test suite in relax: test_suite/system_tests/scripts/relax_disp/exp_fit.py This script is run in the test suite to ensure that the data you have will always be supported. There are many more scripts in that directory which you might find interesting.
The 'r1rho_on_res_m61.py' script also involves an exponential fit with many different relaxation time periods. ___ relax (http://www.nmr-relax.com) This is the relax-users mailing list relax-users@gna.org To unsubscribe from this list, get a password reminder, or change your subscription options, visit the list information page at https://mail.gna.org/listinfo/relax-users
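For the variable relaxation period experiments discussed here, the rate comes from fitting the full decay rather than from two points. A minimal sketch of such a fit (fit_exp is a made-up name; relax itself uses proper non-linear least squares with error weighting, while this linearised version only illustrates the idea and assumes positive intensities):

```python
import numpy as np

def fit_exp(times, intensities):
    # Linearised mono-exponential fit: ln I = ln I0 - R * t.
    t = np.asarray(times, dtype=float)
    y = np.log(np.asarray(intensities, dtype=float))
    slope, intercept = np.polyfit(t, y, 1)
    return -slope, np.exp(intercept)  # (rate R, initial intensity I0)
```

One natural way to handle the replicates question raised earlier in the thread is visible here: in variable-time data, replicated spectra simply enter the fit as extra (t, I) points at the same time value.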
Re: R1rho RD analysis
Hi Peixiang. For model selection, try this in relax. The help system is the best documentation you can get! Just start relax in a terminal: relax Then, at the relax prompt, type: help(relax_disp) to see all options. For example: help(relax_disp.select_model) You can find more info at the wiki: http://wiki.nmr-relax.com/Category:Relaxation_dispersion Here you can find links to the online HTML manual and the API documentation. You may want to read: help(relax_disp.exp_type) help(spectrum) help(spectrum.replicated) help(spectrum.baseplane_rmsd) help(spectrometer) This is a very fast and useful way to quickly understand the functions. If you are unsure where each user function is located, start the GUI. In the top menu, go through the settings - they are ordered the same way! :-) For example: help(grid_search) Best Troels 2014-05-22 16:34 GMT+02:00 pma peixiang...@ibs.fr: Hi Edward, Thanks for your reply. I used the script mode in Linux. I tried 'r1rho_on_res_m61.py', but I think there is a mistake in the script: in line 27, ds.fixed = True, I think it should be False, because this experiment has different time periods and so does not need the reference. I changed it to False, and the M61 model still gives the right result. But I still have some other questions. 1. How do you declare replicates in such variable relaxation period experiments? I saw in the example r1rho_on_res_m61.py that you read all the data into one big data matrix and did not declare the replicates. But for the fixed time period you declare them by the spectrum ID. Do you have a solution for this? 2. I want to test NS R1rho 2-site, so I simply added it to the MODELS in 'r1rho_on_res_m61.py'. It seems it does not work. First, it gives a warning when it comes to NS R1rho 2-site (grid_inc = 8): Searching through 32768 grid nodes.
lib/dispersion/ns_r1rho_2site.py:118: RuntimeWarning: divide by zero encountered in double_scalars theta = atan(spin_lock_fields[i]/dA)
lib/dispersion/ns_r1rho_2site.py:132: RuntimeWarning: invalid value encountered in double_scalars back_calc[i] = -inv_relax_time * log(MA)
I think the grid search is too heavy for the NS model, do you have a solution to constrain it? Or, for example, if I want to limit the grid search to only kex and pA, but not the others, how can I declare that? And for the 'RuntimeWarning: divide by zero encountered in double_scalars', do you have any suggestions? The final results for 'NS R1rho 2-site' did not fit at all, so can the data for M61 only be fit with the M61 model but not the other models? Thanks a lot! Peixiang On 05/19/2014 05:49 PM, Edward d'Auvergne wrote: Hi Peixiang, Welcome to the relax mailing lists! The relaxation dispersion analysis implemented in relax is quite flexible, and the data you have is supported. This is well documented in the relax manual which you should have with your copy of relax (the docs/relax.pdf file). Have a look at the section 'The R2eff model' in the dispersion chapter of the manual (http://www.nmr-relax.com/manual/R2eff_model.html), specifically the 'Variable relaxation period experiments' subsection. Unfortunately the sample scripts are all for the fixed-time dispersion experiments. However you could have a look at one of the scripts used for the test suite in relax: test_suite/system_tests/scripts/relax_disp/exp_fit.py This script is run in the test suite to ensure that the data you have will always be supported. There are many more scripts in that directory which you might find interesting. The 'r1rho_on_res_m61.py' script also involves an exponential fit with many different relaxation time periods.
You could also try using the relax graphical user interface (GUI) by running: $ relax -g Here you simply load the peak list data via the 'Add' button in the 'Spectra list' GUI element and, when you reach the relaxation time wizard page, simply specify different times for the different spectra - relax will handle the rest. Have a look at http://www.nmr-relax.com/manual/Dispersion_GUI_mode_loading_data.html (the GUI section of the dispersion chapter). The GUI should be much easier to use than the script UI, though a lot less flexible if you wish to perform custom, non-standard analyses. Regards, Edward On 19 May 2014 16:25, pma peixiang...@ibs.fr wrote: Hi all, Did anybody manage to fit the R1rho relaxation dispersion from some calculated R2 data? (E.g. I fitted the different relax_delay times to extract R1rho, then calculated R2 = R1rho/sin^2(theta) - R1/tan^2(theta).) I found a tutorial on the wiki, but it stopped after processing the spectra, and actually I have finished this part already. Besides, in the manual for the R1rho calculation (page 173 in the manual) in the R2eff model, it
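The off-resonance correction Peixiang quotes can be written out explicitly. A sketch (the function name r2_from_r1rho is invented; this is the standard relation R1rho = R1*cos^2(theta) + R2*sin^2(theta), with theta the tilt angle of the effective field, assuming no exchange contribution):

```python
import math

def r2_from_r1rho(r1rho, r1, omega1, delta):
    # theta: angle between the effective field and the z-axis, from the
    # spin-lock field strength omega1 and the resonance offset delta.
    theta = math.atan2(omega1, delta)
    # R1rho = R1*cos^2(theta) + R2*sin^2(theta)
    # =>  R2 = R1rho/sin^2(theta) - R1/tan^2(theta)
    return r1rho / math.sin(theta) ** 2 - r1 / math.tan(theta) ** 2
```

This is also why R1 data must be loaded for the off-resonance R1rho analyses, as the reminder later in the thread (relax_data.read for 'R1') points out.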
Re: R1rho RD analysis
Hi Peixiang. I started the R1rho tutorial at the wiki, which I think you are referring to: http://wiki.nmr-relax.com/Tutorial_for_Relaxation_dispersion_analysis_r1rho_fixed_time_recorded_on_varian_as_sequential_spectra But I did not have the time to finish it. However, if you are a little adventurous, you can find the rest of the script that will finish it. We use something in relax called system tests. These make sure that everything keeps on functioning as we change the code. Actually, that tutorial on the wiki is a translation of one of those system tests. Try looking in the relax software folder: test_suite/shared_data/dispersion/Kjaergaard_et_al_2013 Have a look at: r1rho_1_ini.py r1rho_3_spectra_settings.py r1rho_4_run.py Do not look at the rx_XX.py file - that is scrap. But please double check everything, it is a little while ago :-) Double check with the system test here: test_suite/system_tests/relax_disp.py Look for: def test_r1rho_kjaergaard(self): All the 'self.interpreter.' prefixes should be deleted for the script :-) And remember to read in the R1 data: relax_data.read(ri_id='R1', ri_type='R1', frq=cdp. See http://wiki.nmr-relax.com/DPL94#Essentials If you find this helpful, you could maybe continue this R1rho tutorial at the wiki? I wrote the tutorial because I know the learning curve is steep, and it has helped our students in the lab since. Best Troels 2014-05-22 17:13 GMT+02:00 Troels Emtekær Linnet tlin...@nmr-relax.com: Hi Peixiang. For model selection, try this in relax. The help system is the best documentation you can get! Just start relax in a terminal: relax Then, at the relax prompt, type: help(relax_disp) to see all options. For example: help(relax_disp.select_model) You can find more info at the wiki: http://wiki.nmr-relax.com/Category:Relaxation_dispersion Here you can find links to the online HTML manual and the API documentation.
You may want to read: help(relax_disp.exp_type) help(spectrum) help(spectrum.replicated) help(spectrum.baseplane_rmsd) help(spectrometer) This a very very fast an useful method to quickly understand the functions. If you are unsure how to find where each method function is put, start the GUI. In the top menu, go through the settings. They are ordered the same way. ! :-) For example. help(grid_search) Best Troels 2014-05-22 16:34 GMT+02:00 pma peixiang...@ibs.fr: Hi, Edward, Thanks for you reply. I used the script mode in linux. I tried with 'r1rho_on_res_m61.py', but I think in the script it has a mistake. in line27 ds.fixed = True, I think it should be False, because this experiment has different time period, they do not need the ref. I changed to False, the M61 model still give right result. But I still have some other questions. 1. How did you declare replicates in such Variable relaxation period experiments? I saw in the example r1rho_on_res_m61.py, you read all the data into a big Matrix data and did not declare the replicate. But for the fixed time period you declare by the spectrum ID. Did you have a solution for this? 2. I want to test NS R1rho 2-site, so I simply add it into the MODELS in 'r1rho_on_res_m61.py'. It seems does not work. first, they give the warning when it comes to NS R1rho 2-site, (grid_inc =8 ) -- Searching through 32768 grid nodes. lib/dispersion/ns_r1rho_2site.py:118: RuntimeWarning: divide by zero encountered in double_scalars theta = atan(spin_lock_fields[i]/dA) lib/dispersion/ns_r1rho_2site.py:132: RuntimeWarning: invalid value encountered in double_scalars back_calc[i]= -inv_relax_time * log(MA) - I think the grid search is too heavy for the NS model, do you have a solution to constrain it? Or for example, if I want to limit grid search only for kex and pA, but not for others, how can I declare that? For RuntimeWarning: divide by zero encountered in double_scalars, do you have suggestions for this? 
The final results for 'NS R1rho 2-site' did not fit at all, so the data for M61 can only be fit with M61 model but not other models? Thanks a lot! Peixiang On 05/19/2014 05:49 PM, Edward d'Auvergne wrote: Hi Peixiang, Welcome to the relax mailing lists! The relaxation dispersion analysis implemented in relax is quite flexible, and the data you have is supported. This is well documented in the relax manual which you should have with your copy of relax (the docs/relax.pdf file). Have a look at section 'The R2eff model' in the dispersion chapter of the manual (http://www.nmr-relax.com/manual/R2eff_model.html), specifically the 'Variable relaxation period experiments' subsection. Unfortunately the sample scripts are all for the fixed time dispersion experiments. However you could