Thomas Unterthiner <thomas_unterthi...@web.de> wrote:
> Sorry for going a bit off-topic, but: do you have any links to the
> benchmarks? I googled around, but I haven't found anything. FWIW, on my
> own machines OpenBLAS is on par with MKL (on an i5 laptop and an older
> Xeon server) and actually slightly faster than ACML (on an FX8150) for
> my use cases (I mainly tested DGEMM/SGEMM, and a few LAPACK calls). So
> your claim is very surprising for me.
I was thinking about the benchmarks on Eigen's website, but they might
be a bit old now and possibly biased:

http://eigen.tuxfamily.org/index.php?title=Benchmark

They use a single thread only, but for smaller matrix sizes Eigen tends
to come out on top.

Carl Kleffner alerted me to this benchmark today:

http://gcdart.blogspot.de/2013/06/fast-matrix-multiply-and-ml.html

It shows superb performance and unparalleled scalability for OpenBLAS
on Opteron. MKL might be better on Intel CPUs, though. ATLAS does quite
well too, better than I would expect, and generally better than Eigen.
It is also interesting that ACML performs poorly, except with a
single-threaded BLAS.

Sturla

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
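For anyone who wants to reproduce this kind of comparison locally, here
is a minimal sketch of a DGEMM throughput check against whatever BLAS
NumPy happens to be linked with (OpenBLAS, MKL, ATLAS, ACML, ...). The
matrix size and repeat count are arbitrary choices for illustration,
not taken from either of the benchmarks above:

```python
# Rough DGEMM throughput estimate for the BLAS that NumPy is linked
# against. Matrix size n and repeat count are arbitrary illustrative
# values, not from any published benchmark.
import time
import numpy as np

def dgemm_gflops(n=1000, repeats=3):
    """Return best-of-`repeats` GFLOP/s for an n x n double-precision matmul."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    np.dot(a, b)  # warm-up call so setup cost is not timed
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.dot(a, b)
        best = min(best, time.perf_counter() - t0)
    flops = 2.0 * n ** 3  # multiplies plus adds in a dense matmul
    return flops / best / 1e9

if __name__ == "__main__":
    print("DGEMM: %.1f GFLOP/s" % dgemm_gflops())
```

To compare single- versus multi-threaded behaviour (where ACML
apparently falls over), run the same script with the thread count
pinned, e.g. OMP_NUM_THREADS=1 for OpenBLAS or MKL_NUM_THREADS=1 for
MKL.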