Hi,

Such a huge speed-up cannot be from the changes of the 'disp_speed'
branch alone.  From that branch I would expect at most a drop from 30
min to 15 min.  Therefore it must be your grid search changes.  When
changing, simplifying, or eliminating the grid search, you have to be
very careful about the introduced bias.  This bias is unavoidable and
needs to be mentioned in the methods section of any paper.  The key is
to be confident that the bias you have introduced will not negatively
impact your results.  For example, if the grid search replacement is
reasonably close to the true solution, the optimisation will still be
able to reach the global minimum.  You also have to convince the
people reading your paper that the introduced bias is reasonable.
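
To make the bias concrete, here is a minimal, generic sketch in plain
NumPy/SciPy (this is not relax's user function interface, and the
chi-squared surface and all numbers are invented for illustration).  A
coarse grid search and a preset start value close to the true solution
both reach the global minimum, whereas a preset value far from it gets
trapped in a local minimum:

import numpy as np
from scipy.optimize import minimize

def chi2(p):
    """Toy chi-squared: global minimum near x = -1, shallow local
    minimum near x = +1."""
    x = p[0]
    return (x**2 - 1.0)**2 + 0.2*x + 0.3

# Full grid search:  a coarse scan, keeping the lowest point as the start.
grid = np.linspace(-3.0, 3.0, 13)
grid_start = [grid[np.argmin([chi2([x]) for x in grid])]]

# Grid search replacement:  fixed starting values from "known" results.
good_preset = [-0.8]    # close to the true solution
poor_preset = [+2.0]    # far from the true solution

for label, start in [("grid search", grid_start),
                     ("good preset", good_preset),
                     ("poor preset", poor_preset)]:
    res = minimize(chi2, start, method="Nelder-Mead")
    print("%12s: start %+.2f -> x = %+.3f, chi2 = %.3f"
          % (label, start[0], res.x[0], res.fun))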

As for a script to show the speed changes, you could have a look at
the test_suite/shared_data/dispersion/Hansen/relax_results/relax_disp.py
file.  This performs a full analysis with a large range of dispersion
models on the truncated data set from Flemming Hansen.  Or
test_suite/shared_data/dispersion/Hansen/relax_disp.py, which uses all
of Flemming's data.  These could be run before and after the merger of
the 'disp_speed' branch, maybe with different models and the profile
flag turned on.  You could then create a text file in the
test_suite/shared_data/dispersion/Hansen/relax_results/ directory
called something like 'relax_timings' to permanently record the
speed-ups.  This file can be used in the future for documenting any
other speed-ups as well.
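
As a rough sketch of how one of those timings could be collected and
appended to such a file (this assumes the relax executable is on the
PATH and simply wraps one script execution with a timer; it is not an
existing relax feature, and the paths may need adjusting):

import subprocess
import time

script = "test_suite/shared_data/dispersion/Hansen/relax_results/relax_disp.py"
timings = "test_suite/shared_data/dispersion/Hansen/relax_results/relax_timings"

# Time one full run of the dispersion analysis script.
start = time.time()
subprocess.call(["relax", script])
elapsed = time.time() - start

# Append the result so the before/after numbers are permanently recorded.
with open(timings, "a") as file:
    file.write("%s: %.1f s\n" % (script, elapsed))

Running this once on the code before the merger and once after would
give the numbers to record.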

Regards,

Edward




On 4 June 2014 14:37, Troels Emtekær Linnet <[email protected]> wrote:
> Looking at my old data, I can see that the writing out of data between
> each global fit analysis previously took around 30 min.
>
> They now take 2-6 mins.
>
> I almost can't believe that speed up!
>
> Could we devise a devel script that we could use to demonstrate the change?
>
> Best
> Troels
>
>
>
> 2014-06-04 14:24 GMT+02:00 Troels Emtekær Linnet <[email protected]>:
>
>> Hi Edward.
>>
>> After the changes to the lib/dispersion/model.py files, I see a
>> massive speed-up of the computations.
>>
>> During 2 days, I performed over 600 global fittings for a 68 residue
>> protein, where all residues were clustered.  I just did it with 1 CPU.
>>
>> This is really really impressive.
>>
>> I did, though, also alter how the grid search was performed,
>> pre-setting some of the values to known values referred to in a paper.
>> So I can't really say what has cut the time down.
>>
>> But looking at the calculations running, the minimisation runs quite fast.
>>
>> So, how does relax collect the data for global fitting?
>>
>> Does it collect all the R2eff values for the clustered spins and send
>> them to the target function together with the array of parameters to
>> vary?
>>
>> Or does it calculate per spin, and share the common parameters?
>>
>> My current bottleneck actually seems to be the saving of the state
>> file between each iteration of the global analysis.
>>
>> Best
>> Troels
>>

_______________________________________________
relax (http://www.nmr-relax.com)

This is the relax-devel mailing list
[email protected]

To unsubscribe from this list, get a password
reminder, or change your subscription options,
visit the list information page at
https://mail.gna.org/listinfo/relax-devel
