Chris Marshall <[email protected]> wrote:
> I believe NumPy can use ATLAS for its linear algebra which
> would explain this difference.

I think it does, but NumPy is not particularly fast at matmult either.
Octave and Scilab really blow PDL and NumPy out of the water when it
comes to matmult. This is interesting because in other benchmarks they
don't particularly stand out (especially Octave).

> Benchmarks of small kernels are useful when those small
> kernels are the dominant part of a computation.

I mostly agree. I think there's probably an exception to that rule, but
I still agree with the basic rule. Anyway, I have two comments:

1) Big-program benchmarks are not any better.

2) If you have many small-kernel benchmarks, the reader can pick and
choose which ones they care about. For example, an astronomer might
completely ignore a benchmark on matrix multiplication but pay
attention to a benchmark on convolution. If you have several
mini-benchmarks, the reader has the freedom to decide what's relevant
to their work and what isn't.

> For scripted languages with interactive development, I
> would expect that expressiveness and ability to get
> work done would be more interesting metrics.

If only we had a way to measure that :-) The problem is that this is
subjective. Just think of Perl vs Python.

> That said, if PDL is relatively slower than another package at
> matmult (e.g.) then that could indicate a direction for
> optimization.

Yes. I would really like to know how Octave and Scilab do matrix
multiplication. For a 200x200 matrix they are 3x faster than PDL, and
for a 2000x2000 matrix they are over an order of magnitude faster. What
this tells me is that they are probably using a different algorithm.

Daniel.

_______________________________________________
Perldl mailing list
[email protected]
http://mailman.jach.hawaii.edu/mailman/listinfo/perldl
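Daniel's inference (a gap that grows with matrix size points to a different
algorithm or library, not just constant-factor overhead) can be illustrated
with a small sketch. This is not from the thread: it uses Python/NumPy purely
for illustration, timing a textbook triple-loop multiply against the
BLAS-backed `@` operator; the sizes are arbitrary assumptions.

```python
# Sketch (not part of the original message): naive triple-loop matmult
# vs. NumPy's BLAS-backed multiply. A tuned BLAS (e.g. ATLAS) uses cache
# blocking and vectorization, so its advantage widens as matrices grow.
import time
import numpy as np

def naive_matmult(a, b):
    """Textbook O(n^3) triple loop; no blocking, no vectorization."""
    n, m = a.shape
    m2, p = b.shape
    assert m == m2, "inner dimensions must match"
    c = np.zeros((n, p))
    for i in range(n):
        for j in range(p):
            s = 0.0
            for k in range(m):
                s += a[i, k] * b[k, j]
            c[i, j] = s
    return c

n = 100  # kept small: the pure-Python loop is very slow
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
c_naive = naive_matmult(a, b)
t_naive = time.perf_counter() - t0

t0 = time.perf_counter()
c_blas = a @ b  # dispatches to whatever BLAS NumPy was built against
t_blas = time.perf_counter() - t0

print(f"naive: {t_naive:.3f}s  BLAS: {t_blas:.6f}s  "
      f"ratio: {t_naive / max(t_blas, 1e-9):.0f}x")

# Both paths compute the same product; only the implementation differs.
assert np.allclose(c_naive, c_blas)
```

The two results agree to floating-point tolerance, so any timing difference
comes entirely from the implementation, which is the distinction the thread
is speculating about for Octave/Scilab versus PDL.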
