I am writing a D library based on some of the routines in SLATEC, and I've come to a point where I need to decide on a way to manipulate vectors and matrices. To that end, I have some ideas and questions I would like comments on from the community.

Ideally, I want to restrict the user as little as possible, so I'm writing heavily templated code in which one can use both library-defined vector/matrix types and built-in arrays (both static and dynamic). My reasons for this are:

a) Different problems may benefit from different types. Sparse matrices, dense matrices, triangular matrices, etc. can all be represented differently based on efficiency and/or memory requirements.

b) I hope that, at some point, my library will be of such a quality that it may be useful to others, and in that event I will release it. Interoperability with other libraries is therefore a goal for me, and a part of this is to let the user choose other vector/matrix types than the ones provided by me.

c) Often, for reasons of both efficiency and simplicity, it is desirable to use arrays directly.
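To make (b) and (c) concrete, here is a minimal sketch of what I have in mind: one templated routine that accepts both a library-defined vector type and a plain array, as long as both support indexing and a length. (The Vector type and dot function here are hypothetical illustrations, not actual library code.)

```d
import std.stdio;

// Hypothetical library vector type exposing opIndex and length.
struct Vector
{
    double[] data;
    double opIndex(size_t i) const { return data[i]; }
    @property size_t length() const { return data.length; }
}

// One templated routine serves both built-in arrays and the library type.
double dot(V, W)(V x, W y)
{
    assert(x.length == y.length);
    double sum = 0;
    foreach (i; 0 .. x.length)
        sum += x[i] * y[i];
    return sum;
}

void main()
{
    double[] a = [1.0, 2.0, 3.0];
    auto v = Vector([4.0, 5.0, 6.0]);
    writeln(dot(a, a)); // array with array
    writeln(dot(a, v)); // array with library type
}
```

For one-dimensional vectors this works today, since both arrays and user types use the same x[i] syntax; the trouble described below only starts in two dimensions.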

My first question goes to those among you who do a lot of linear algebra in D: Do you think supporting both library types and arrays is worth the trouble? Or should I just go with one and be done with it?


A user-defined matrix type would have opIndex(i,j) defined, so to retrieve elements one would write m[i,j]. The syntax for two-dimensional arrays, however, is m[i][j], which means I have to scatter static ifs throughout my code to check the type every time I access a matrix. This leads me to my second question, which is really a suggestion for a language change, so I expect a lot of resistance. :)
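Today the workaround looks roughly like this (a sketch; the trait test shown is just one possible formulation):

```d
import std.traits : isArray;

// Unified element access: m[i][j] for built-in arrays of arrays,
// m[i, j] for user-defined types with opIndex(i, j).
auto at(M)(M m, size_t i, size_t j)
{
    static if (isArray!M)
        return m[i][j]; // built-in two-dimensional array
    else
        return m[i, j]; // user-defined matrix type
}
```

Every generic algorithm either routes all accesses through a helper like this, or repeats the static if inline, which is exactly the clutter I would like to avoid.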

Would it be problematic to define m[i,j,...] to be equivalent to m[i][j][...] for built-in arrays, so that arrays and user-defined types could be used interchangeably?
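Absent a language change, the closest library-level alternative I can see is a thin wrapper that gives a built-in array the m[i,j] syntax (a hypothetical, minimal sketch):

```d
// Wraps a jagged built-in array so it can be indexed as w[i, j].
struct ArrayMatrix(T)
{
    T[][] data;
    ref T opIndex(size_t i, size_t j) { return data[i][j]; }
}

// Usage:
//   auto m = ArrayMatrix!double([[1.0, 2.0], [3.0, 4.0]]);
//   assert(m[1, 0] == 3.0);
```

But this forces the user to wrap their arrays before calling in, which works against goal (c) above; hence the suggestion to make the equivalence built in.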

(And, importantly, is there anyone besides me who thinks they would benefit from this?)


-Lars