On 23/04/2015 23:51, Stéphane Mottelet wrote:
Hello,

I am currently working on a project where Scilab code is automatically generated, and after many code optimizations, the remaining bottleneck is the time Scilab spends executing simple code like the following (the full script, where the vector has 839 lines, is attached with timings):

M1_v=[v(17)
v(104)
v(149)
-(v(18)+v(63)+v(103))
-(v(18)+v(63)+v(103))
v(17)
...
v(104)
v(149)
]

Such large vectors are then used to build a sparse matrix each time the vector v changes, the sparsity pattern staying constant. Actually, the time spent by Scilab in the statement

M1=sparse(M1_ij,M1_v,[n1,n2])

is negligible compared to the time spent building M1_v...
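For what it's worth, if (as in the excerpt above) every entry of M1_v is a fixed linear combination of entries of v, one possible workaround is to precompute a constant sparse matrix S once and obtain the whole vector with a single product, avoiding the re-parsing of the 839-line literal each time. A minimal sketch, where S_ij and S_v are hypothetical names for the pattern and coefficients of that combination:

// Built once, when the sparsity pattern is generated:
// S_ij(k,:) = [row in M1_v, index in v], S_v(k) = +1 or -1
S = sparse(S_ij, S_v, [839, length(v)]);

// Each time v changes (replaces the 839-line literal):
M1_v = S * v;
M1 = sparse(M1_ij, M1_v, [n1, n2]);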

I have also noticed that if you need to define such a matrix with more than one column, the elapsed time is not linear in the number of columns: typically 4 times slower for 2 columns. In fact, the statement

v=[1 1
...
1000 1000]

is even about twice as slow as

v1=[1
...
1000];
v2=[1
...
1000];
v=[v1 v2];
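For reference, here is a small timing sketch that reproduces this comparison by generating the two literals as strings and executing them (the sizes are illustrative, and the exact factor will depend on the machine):

n = 1000;
r = string((1:n)');
// two-column literal, parsed and executed in one go
code2 = "v=[" + strcat(r + " " + r, ";") + "];";
// two one-column literals, then concatenation
code1 = "v1=[" + strcat(r, ";") + "]; v2=[" + strcat(r, ";") + "]; v=[v1 v2];";
tic(); execstr(code2); t2 = toc();
tic(); execstr(code1); t1 = toc();
disp([t2 t1]);  // t2 is reported above to be about twice t1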

So my question to users who have experience with dynamic linking of user code: do you think that using dynamic linking of compiled generated C code could improve the timings?
But a priori I am not doing anything OS-dependent...
As your code is generated, it should indeed be a good idea to generate C code and use incremental linking. Once the code has been compiled and linked, you can expect a speedup of around 100 times, but the compilation itself may be slow. So using dynamic linking is a very good idea if your generated code has to be run many times.
Serge
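To make this suggestion concrete, here is a rough sketch of what the incremental-link route could look like. The function build_m1v and its file name are made up, and the call() argument list follows Scilab's usual dynamic-link interface; it would have to be adapted to the real generated code:

// The generated C file is assumed to fill res[] from v[] with the
// same expressions as the 839-line literal, e.g.
//   void build_m1v(double *v, double *res) {
//       res[0] = v[16];
//       res[1] = v[103];
//       /* ... generated ... */
//   }
ilib_for_link("build_m1v", "build_m1v.c", [], "c"); // compile + create loader.sce
exec("loader.sce");                                 // link the library into Scilab

// Each time v changes: v is argument 1, the 839x1 result is argument 2
M1_v = call("build_m1v", v, 1, "d", "out", [839, 1], 2, "d");
M1 = sparse(M1_ij, M1_v, [n1, n2]);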
Thanks in advance for your help!

S.




_______________________________________________
users mailing list
users@lists.scilab.org
http://lists.scilab.org/mailman/listinfo/users
