> > Well, there are many more or less interesting conclusions to draw
> > from your benchmark, Martin. Not surprisingly, matrix multiplication
> > turns out to be expensive.
>
> Hmm... I did see that there were a bunch of calls to __mul__ in
> matrix.py, but I thought they came from the initialization of _hyper
> in ActiveRuntime. But the initialization of _hyper does not use any
> matrix multiplications, so something must be wrong?!
No, the initialization of _hyper should definitely not use matrix multiplications.

> When the mul method uses prss_get_triple, then _hyper should never be
> used or even initialized and so there should be no matrix stuff going
> on... I think I measured the wrong code somehow :-)
Well somehow, matrices were multiplied. And not only that, a lot of time was
spent doing it :)
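One way to track down where those multiplications come from is the caller view in Python's standard pstats module. A minimal sketch with a toy stand-in class (not VIFF's actual matrix.py), assuming you profile the run in-process:

```python
import cProfile
import io
import pstats

class Matrix:
    # Toy stand-in for a matrix class whose __mul__ shows up in the profile.
    def __mul__(self, other):
        return Matrix()

def run():
    # Hypothetical workload that triggers the multiplications.
    m = Matrix()
    for _ in range(100):
        m = m * m

profiler = cProfile.Profile()
profiler.runcall(run)

# print_callers lists, for each function matching the pattern,
# which functions called it and how often -- exactly what is needed
# to see who is invoking __mul__.
out = io.StringIO()
stats = pstats.Stats(profiler, stream=out)
stats.print_callers("__mul__")
print(out.getvalue())
```

The same approach works on a saved profile dump via `pstats.Stats(filename)`.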

> > One thing I really do find interesting about the table is the amount
> > of time spent in inc_pc_wrapper. Perhaps it is possible to improve
> > this somehow?
>
> It only used 0.5 seconds of its own time -- the 21 seconds are the
> total time spent in the child calls made by inc_pc_wrapper. Since it
> wraps all the important functions, it's clear that the cumulative
> time will be big:
>
>        ncalls  tottime  percall  cumtime  percall
>    48003/6003    0.518    0.000   21.195    0.004
>
> But any optimization would be good -- if we can save a tiny bit for
> each of the 48000 calls it might add up :-)
My observation was just that wrapping is somewhat expensive. I do not quite know
what to do about this. We have already discussed the alternatives to using this
method, and they were not particularly promising.
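For context, the per-call cost of wrapping can be measured in isolation with timeit. This is a sketch with an illustrative decorator mimicking the shape of a program-counter wrapper, not VIFF's actual inc_pc_wrapper:

```python
import timeit

def work(x):
    # Stand-in for a wrapped runtime method; trivially cheap on purpose,
    # so the wrapper overhead dominates the comparison.
    return x + 1

def pc_wrapper_style(func):
    # Minimal decorator: one extra function call and argument
    # repacking per invocation, like a wrapping-based approach.
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

wrapped = pc_wrapper_style(work)

# Compare 48000 direct calls against 48000 wrapped calls.
direct = timeit.timeit(lambda: work(1), number=48000)
indirect = timeit.timeit(lambda: wrapped(1), number=48000)
print(f"direct:  {direct:.4f}s")
print(f"wrapped: {indirect:.4f}s")
```

The difference per call is tiny, but multiplied by tens of thousands of calls it becomes visible in the profile, which matches the observation above.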
_______________________________________________
viff-devel mailing list (http://viff.dk/)
viff-devel@viff.dk
http://lists.viff.dk/listinfo.cgi/viff-devel-viff.dk
