Alan Fung <alanfung@ust.hk> writes:

> 
> 
> I experience the same performance drop in my numerical simulations.  With
> GSL-1.15 the simulation takes 7.7s; with GSL-1.16 it takes 2m12s.  The
> simulation integrates a system of ODEs.  GCC = 4.8.1.  OS = Kubuntu 13.10.
> 
> Some more figures from other machines, using the same simulation source code:
> GSL-1.15 on a Pentium 4 with GCC 3.2: ~30s
> GSL-1.16 on an E7-4850 with GCC 4.4: 5m
> 
> I am quite sure that the problem is not due to the different GCC versions; I
> believe it comes from differences between 1.15 and 1.16.
> 
> 

I have attached a version of my source code (sorry, it is not well written
:)).
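
In case the attachment does not come through, here is a minimal sketch of the
kind of gsl_odeiv2 integration loop that is affected (the right-hand side,
dimension, tolerances and stepper below are placeholders, not the model in my
attachment); the slowdown shows up in the repeated calls to
gsl_odeiv2_evolve_apply:

#include <stdio.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_odeiv2.h>

/* Placeholder right-hand side: a 2-dimensional toy system.  My real code
   integrates a larger group of ODEs. */
static int
rhs (double t, const double y[], double dydt[], void *params)
{
  double k = *(double *) params;
  (void) t;
  dydt[0] = y[1];
  dydt[1] = -k * y[0];
  return GSL_SUCCESS;
}

int
main (void)
{
  double k = 1.0;
  gsl_odeiv2_system sys = { rhs, NULL, 2, &k };

  gsl_odeiv2_step    *s = gsl_odeiv2_step_alloc (gsl_odeiv2_step_rkf45, 2);
  gsl_odeiv2_control *c = gsl_odeiv2_control_y_new (1e-6, 0.0);
  gsl_odeiv2_evolve  *e = gsl_odeiv2_evolve_alloc (2);

  double t = 0.0, t1 = 1000.0, h = 1e-6;
  double y[2] = { 1.0, 0.0 };

  /* gsl_odeiv2_evolve_apply is called many times here; it is the routine
     touched by revision 4771. */
  while (t < t1)
    {
      int status = gsl_odeiv2_evolve_apply (e, c, s, &sys, &t, t1, &h, y);
      if (status != GSL_SUCCESS)
        {
          fprintf (stderr, "evolve failed: %s\n", gsl_strerror (status));
          break;
        }
    }

  printf ("t = %g, y = (%g, %g)\n", t, y[0], y[1]);

  gsl_odeiv2_evolve_free (e);
  gsl_odeiv2_control_free (c);
  gsl_odeiv2_step_free (s);
  return 0;
}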

Actually, I have just found out why there is a regression.

I found that if I undo revision 4771
(http://bzr.savannah.gnu.org/lh/gsl/trunk/revision/4771), the performance of
1.16 is on par with 1.15.

Using the attached source code, I ran the same test against different
versions:
1.15:
Running time: 0m0.759s
Output result: -0.378978 (center of mass of dynamical variables)

1.16:
Running time: 0m10.670s
Output result: -0.378996

1.16 (without revision 4771):
Running time: 0m0.754s
Output result: -0.378978

In revision 4771, the intent was to reuse e->dydt_out to update e->dydt_in
(via memcpy) rather than recalculating e->dydt_in.  From my reading of the
source code, the author assumed that a memcpy would be cheaper than a fresh
evaluation, but in my case it causes a big performance regression.  Also, in
my experiment, the results from 1.15 and from 1.16 without 4771 agree with
each other, so I would like to ask which version gives the reliable result.
I have sent an email to bug-gsl about this reliability concern.
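
To make the difference concrete, here is my paraphrase of the two strategies,
written against the public gsl_odeiv2 structs.  It is not the actual diff of
revision 4771, only an illustration of "recompute dydt_in" versus "memcpy
dydt_out into dydt_in":

#include <string.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_odeiv2.h>

/* Trivial 1-dimensional right-hand side, only for illustration. */
static int
rhs (double t, const double y[], double dydt[], void *params)
{
  (void) t;
  (void) params;
  dydt[0] = -y[0];
  return GSL_SUCCESS;
}

int
main (void)
{
  gsl_odeiv2_system sys = { rhs, NULL, 1, NULL };
  gsl_odeiv2_evolve *e = gsl_odeiv2_evolve_alloc (1);
  double t = 0.0, y[1] = { 1.0 };

  /* Pretend the previous step has just finished and left a derivative
     in e->dydt_out. */
  (*sys.function) (t, y, e->dydt_out, sys.params);

  /* 1.15 behaviour (and 1.16 with revision 4771 undone, as I read it):
     evaluate the derivative at the current state before the next step. */
  (*sys.function) (t, y, e->dydt_in, sys.params);

  /* Revision 4771, as I read it: skip the evaluation and reuse the stored
     values.  The copy itself is cheap; the question is whether the copied
     values always correspond to the state the next step starts from. */
  memcpy (e->dydt_in, e->dydt_out, sys.dimension * sizeof (double));

  gsl_odeiv2_evolve_free (e);
  return 0;
}

Whether the copied values and a fresh evaluation agree depends on whether
dydt_out was really computed at the same (t, y) that the next step starts
from; the different output (-0.378996 instead of -0.378978) suggests that in
my run they do not always agree.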

==============================

The source code:
http://www.filedropper.com/cannstdsubdelay

