On Monday, 4 March 2013 at 12:28:25 UTC, J wrote:
On Monday, 4 March 2013 at 08:02:46 UTC, J wrote:
That's a really good point. I wonder if there is a canonical
matrix type that would be preferred?
I'm not sure whether it's the recommended/best-practice library
for matrix handling in D at the moment (please advise if it is
not), but with a little searching, I found that the SciD
library has nice matrices and MatrixViews (column-major
storage, LAPACK compatible).
Now I like MatrixViews because they let me beat the original
(clearly non-optimal) C matrix multiplication by a couple of
seconds, and the D code with operator overloading in place
makes matrix multiplication look elegant.
One shootout benchmark down, eleven to go. :-)
- J
p.s. Am I right in concluding that there are no multimethods
(multiple dispatch) in D? It seemed a little awkward to have
to wrap the MatrixView in a new struct solely in order to
overload multiplication. Is there a better way that I've missed?
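[For readers unfamiliar with D operator overloading, the struct-wrapping
approach mentioned above looks roughly like this. This is only a sketch
with made-up names; the multiplication body is a naive column-major triple
loop just to keep it self-contained, where in practice it would forward to
a SciD/BLAS routine.]

```d
// Minimal wrapper illustrating why a new struct is needed: opBinary
// must be a member, so the wrapped type is what makes `a * b` work.
struct Mat
{
    double[] data;      // column-major storage, like SciD's MatrixView
    size_t rows, cols;

    double opIndex(size_t i, size_t j) const { return data[i + j*rows]; }

    // `a * b` dispatches here; a real version would call into BLAS.
    Mat opBinary(string op : "*")(const Mat rhs) const
    {
        auto r = Mat(new double[rows * rhs.cols], rows, rhs.cols);
        foreach (j; 0 .. rhs.cols)
            foreach (i; 0 .. rows)
            {
                double s = 0;
                foreach (k; 0 .. cols)
                    s += this[i, k] * rhs[k, j];
                r.data[i + j*r.rows] = s;
            }
        return r;
    }
}
```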
I'm the author of SciD. It's great that you found it useful! :)
When I wrote scid.matrix and scid.linalg, it was because I needed
some linear algebra functionality for a project at work. I knew
I wouldn't have the time to write a full-featured linear algebra
library, so I intentionally gave it a minimalistic design, and
only included the stuff I needed. That's also why I used the
name "MatrixView" rather than "Matrix"; it was only supposed to
be a convenient way to view an array as a matrix, and not a full
matrix type.
Later, Cristi Cobzarenco forked my library for Google Summer of
Code 2011, with David Simcha as his mentor. They removed almost
everything but the linear algebra modules, and redesigned those
from scratch. So basically, there is pretty much nothing left of
my code in their library. ;) I don't think they ever completed
the project, but I believe parts of it are usable. You'll find
it here:
https://github.com/cristicbz/scid
AFAIK, their goal was to use expression templates in order to
transform D expressions like
x*A*B + y*C
where A, B and C are matrices, and x and y are scalars, into as
few optimised BLAS calls as possible -- e.g., a single GEMM call
in the example above. I don't know how far they got on this,
though.
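[To make the idea concrete: with expression templates, the overloaded
operators build cheap "expression" values instead of computing anything,
and evaluation is deferred until the full expression is known. A very
rough sketch of how `x*A*B + y*C` could collapse into one GEMM call --
purely illustrative, not Cristi's actual code, and `gemm` here is a
hypothetical wrapper around BLAS dgemm:]

```d
struct Matrix
{
    double[] data;
    size_t rows, cols;

    // `x * A` yields a lazy scaled matrix; no work is done yet.
    Scaled opBinaryRight(string op : "*")(double s)
    {
        return Scaled(s, this);
    }
}

struct Scaled
{
    double s;
    Matrix m;

    // `(x*A) * B` yields a lazy scaled product; still no work.
    ScaledProduct opBinary(string op : "*")(Matrix rhs)
    {
        return ScaledProduct(s, m, rhs);
    }
}

struct ScaledProduct
{
    double s;
    Matrix a, b;

    // `x*A*B + y*C` finally evaluates everything at once:
    // gemm(alpha, A, B, beta, C) computes alpha*A*B + beta*C
    // in a single BLAS call, which is exactly GEMM's contract.
    Matrix opBinary(string op : "+")(Scaled rhs)
    {
        return gemm(s, a, b, rhs.s, rhs.m);  // hypothetical BLAS binding
    }
}
```

[The design point is that each intermediate struct is just a few words of
state, so no temporary matrices are ever materialised.]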
Lars