Well, there are many more or less interesting conclusions to draw from your
benchmark, Martin. Not surprisingly, matrix multiplication turns out to be
expensive. It is worth considering using a non-naive algorithm for this
multiplication, but I am not convinced there is very much to gain.
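For reference, the "naive" multiplication in question is presumably the schoolbook O(n^3) algorithm; the sketch below (a hypothetical stand-in, not the actual matrix.py code) shows why the cost grows quickly, and why an asymptotically better algorithm such as Strassen's only pays off for fairly large matrices.

```python
def mat_mul(a, b):
    """Schoolbook product of two square matrices given as lists of lists.

    Each of the n*n output entries is an n-term dot product, so the
    total work is n**3 multiplications -- the cost the benchmark sees.
    """
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

print(mat_mul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```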
Mikkel Krøigård [EMAIL PROTECTED] writes:
Well, there are many more or less interesting conclusions to draw
from your benchmark, Martin. Not surprisingly, matrix multiplication
turns out to be expensive.
Hmm... I did see that there were a bunch of calls to __mul__ in
matrix.py, but I thought they came from the initialization of
Martin Geisler [EMAIL PROTECTED] writes:
Strangely the time for preprocessing has not improved... It stayed
at an average time of about *20 ms* for a multiplication triple both
before and after the change -- I don't understand that :-(
I do now! :-)
It turned out that the preprocessing was
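A per-operation average like the 20 ms figure above is easy to reproduce with a wall-clock loop; this is only a sketch of the measurement method, with a dummy `generate_triple` standing in for the real preprocessing step, which is not shown here.

```python
import time

def generate_triple():
    # Hypothetical stand-in for generating one multiplication triple
    # (a, b, a*b); the real preprocessing is far more expensive.
    a, b = 3, 5
    return a, b, a * b

n = 1000
start = time.perf_counter()
for _ in range(n):
    generate_triple()
elapsed = time.perf_counter() - start
print(f"average per triple: {elapsed / n * 1000:.3f} ms")
```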
Mikkel Krøigård [EMAIL PROTECTED] writes:
It only used 0.5 seconds of its own time -- the 21 seconds are the
total time spent in the child-calls made by inc_pc_wrapper. Since
it wraps all important functions, it's clear that the cumulative time
will be big:
ncalls tottime percall
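The tottime/cumtime distinction above is easy to demonstrate in isolation. The sketch below uses a trivial pass-through decorator (a hypothetical stand-in for inc_pc_wrapper, not the real one) and cProfile: the wrapper's own time (tottime) stays tiny, while its cumulative time (cumtime) absorbs everything done by the functions it wraps.

```python
import cProfile
import pstats

def inc_pc_wrapper(func):
    # Hypothetical stand-in for the wrapper under discussion: it does
    # almost no work itself, but every call to a wrapped function
    # passes through it, so its cumulative time covers all callees.
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@inc_pc_wrapper
def busy():
    # A deliberately expensive body standing in for the real work.
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
for _ in range(20):
    busy()
profiler.disable()

# stats.stats maps (file, line, name) -> (cc, nc, tottime, cumtime, callers)
stats = pstats.Stats(profiler)
wrapper_tt = wrapper_ct = None
for key, (cc, nc, tt, ct, callers) in stats.stats.items():
    if key[2] == "wrapper":
        wrapper_tt, wrapper_ct = tt, ct

print(f"wrapper: tottime={wrapper_tt:.4f}s cumtime={wrapper_ct:.4f}s")
```

Running this prints a tottime near zero for the wrapper but a cumtime covering all twenty busy() calls, which is exactly the pattern in the profile excerpt above.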