Hi Johan, you wrote:
> Thanks for all your info. I've run the tests with a Boost from CVS (from
> january 31st), compressed_matrix and axpy_prod, and the results give
> roughly the same speed as our implementation, and ca. 30% better memory
> efficiency.

Great! Kudos to the guys on groups.yahoo.com/group/ublas-dev. Without them
sparse matrices would be as bad as in boost_1_29_0.

> The -DNDEBUG flag also seems critical, without it
> performance is terrible (quadratic).

Oh yes. That's my paranoia. Without -DNDEBUG defined, ublas is in debug
mode and even double-checks sparse matrix computations with a dense
control computation. You could customize this using the
BOOST_UBLAS_TYPE_CHECK preprocessor symbol.

> Alexei's proposed optimizations seem interesting. I tried the axpy_prod
> you provided, but it didn't give any significant change. I trust your
> figures however.

Yup. I didn't post the necessary dispatch logic. I'll update Boost CVS
with my current version later.

> I will propose that we start using ublas as soon as the linear complexity
> functions appear in the stable branch.
>
> I provide our benchmark below for reference (with the timing calls, and
> other dependencies stripped out):

Thanks,

Joerg

_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost