Looking at my old data, I can see that writing out the data between each
global fit analysis previously took around 30 minutes.

These writes now take 2-6 minutes.

I almost can't believe that speed-up!

Could we devise a devel script that we could use to simulate the change?
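For example, something along these lines could time the state-dump step in isolation (a sketch only, not using the relax API; the data layout, sizes, and serialisation choices here are invented stand-ins, purely to illustrate the idea of benchmarking the save step on its own):

```python
# Hypothetical devel script: time the cost of serialising a large
# "state" object, with and without bz2 compression, to simulate the
# save-state overhead between global fit iterations.  All names and
# data shapes below are invented and not taken from relax.

import bz2
import pickle
import time

# Fake state: per-residue arrays for a 68-residue clustered fit
# (the sizes are guesses, purely illustrative).
state = {
    "residue_%03d" % i: [float(i + j) * 0.01 for j in range(5000)]
    for i in range(68)
}

def time_dump(compress):
    """Serialise the state and return (seconds taken, size in bytes)."""
    start = time.time()
    data = pickle.dumps(state, protocol=2)
    if compress:
        data = bz2.compress(data)
    return time.time() - start, len(data)

plain_t, plain_size = time_dump(compress=False)
bz2_t, bz2_size = time_dump(compress=True)

print("plain: %.3f s, %d bytes" % (plain_t, plain_size))
print("bz2:   %.3f s, %d bytes" % (bz2_t, bz2_size))
```

Running variants of this with different state sizes and compression settings would let us see how the write time scales, independently of the minimisation itself.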

Best
Troels



2014-06-04 14:24 GMT+02:00 Troels Emtekær Linnet <[email protected]>:

> Hi Edward.
>
> After the changes to the lib/dispersion/model.py files, I see massive
> speed-up of the computations.
>
> During 2 days, I performed over 600 global fittings for a 68 residue
> protein, where all residues were clustered. I did it with just 1 CPU.
>
> This is really really impressive.
>
> I did, though, also alter how the grid search was performed, pre-setting
> some of the values to known values referred to in a paper.
> So I can't really say what has cut the time down.
>
> But looking at the calculations running, the minimisation runs quite fast.
>
> So, how does relax collect the data for global fitting?
>
> Does it collect all the R2eff values for the clustered spins and send
> them to the target function together with the array of parameters to
> vary?
>
> Or does it calculate per spin, and share the common parameters?
>
> My current bottleneck actually seems to be the saving of the state file
> between each iteration of the global analysis.
>
> Best
> Troels
>
_______________________________________________
relax (http://www.nmr-relax.com)

This is the relax-devel mailing list
[email protected]

To unsubscribe from this list, get a password
reminder, or change your subscription options,
visit the list information page at
https://mail.gna.org/listinfo/relax-devel
