On Wednesday, 14 March 2018 at 16:16:55 UTC, Andrei Alexandrescu
wrote:
On 03/14/2018 01:01 AM, 9il wrote:
On Tuesday, 13 March 2018 at 17:10:03 UTC, jmh530 wrote:
"Note that using row-major ordering may require more memory
and time than column-major ordering, because the routine must
transpose the row-major order to the column-major order
required by the underlying LAPACK routine."
Maybe we should use only column major order. --Ilya
Has row-major fallen into disuse?
Generally: it would be great to have a standard collection of
the typical data formats used in linear algebra and scientific
coding. This would allow interoperation without having each
library define its own types with identical layout but
different names. I'm thinking of:
* multidimensional hyperrectangular
Already done: Slice [1]
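For instance, a dense matrix is just a 2D Slice. A minimal sketch (assuming mir-algorithm is available as a dependency):

```d
import mir.ndslice;

void main()
{
    // A dense 3x4 hyperrectangular array:
    // the type is Slice!(Contiguous, [2], double*)
    auto matrix = slice!double(3, 4);
    matrix[] = 0;        // fill with zeros
    matrix[1, 2] = 5;    // element access by multidimensional index
    assert(matrix[1, 2] == 5);
}
```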
* multidimensional jagged
Done as an (N-1)-dimensional Slice composed of jagged rows:
a) Naive:
Slice!(Contiguous, [1], Slice!(Contiguous, [1], double*)*)
or
b) Intermediate:
Slice!(kind, packs, SubSliceIterator!(Iterator, Slicable)) by
[2]
or
c) Proper, with compressed indexes by [3]:
Slice!(Contiguous, [1], SubSliceIteratorInst!I), where
SubSliceIteratorInst =
SubSliceIterator!(SlideIterator!(size_t*, 2, staticArray), I*);
The c) variant is already used by mir.graph to represent graph
indexes.
* multidimensional hypertriangular if some libraries use it
I have seen only 2D packed triangular matrices in LAPACK.
ndslice provides them via the `stairs` topology [4].
The type definitions are
Slice!(Contiguous, [1],
StairsIterator!(T*))
and
Slice!(Contiguous, [1],
RetroIterator!(MapIterator!(StairsIterator!(RetroIterator!(T*)),
retro)))
The last one can be simplified though.
These types are used in mir-lapack [5], a LAPACK wrapper with an
ndslice API created for Lubeck.
* sparse vector (whatever formats are most common, I assume
array of pairs integral/floating point number, with the
integral either before or after the floating point number)
Done.
The multidimensional Dictionary of Keys (DOK) format
is provided by Sparse (an alias for Slice) [6].
The multidimensional Compressed Sparse Rows (CSR) format
is provided by CompressedTensor [7].
Sparse BLAS routines for CompressedTensor already exist [8].
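A rough sketch of the DOK workflow with mir.sparse (the exact `sparse` and `compress` helper names are taken from the mir.sparse docs; treat the details as an assumption):

```d
import mir.sparse;

void main()
{
    // DOK format: a 5x8 sparse matrix backed by an associative array
    auto sl = sparse!double(5, 8);
    sl[2, 3] = 2;   // only nonzero entries are stored
    sl[4, 7] = 1;
    assert(sl[2, 3] == 2);
    assert(sl[0, 0] == 0);  // unset entries read as zero

    // Convert to the CRS/CSR representation for sparse BLAS routines
    auto crs = sl.compress;
}
```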
No need for a heavy interface on top of these. These structures
would be low-maintenance and facilitate a common data language
for libraries.
As you can see, ndslice already provides a common data language.
The next goal is to provide a high-level common data language
with automated memory management and a MATLAB-like API.
BTW, could you please help with the following issue?!
struct S(int b, T)
{
}

alias V(T) = S!(1, T);

auto foo(T)(V!T v)
{
}

void main()
{
    V!double v;
    foo(v);
}
Error: template onlineapp.foo cannot deduce function from
argument types !()(S!(1, double)), candidates are:
onlineapp.d(7): onlineapp.foo(T)(V!T v)
I need this kind of code to compile. Currently I use awkward
workarounds or spell out huge types like:
Slice!(Contiguous, [1],
StairsIterator!(T*))
Slice!(Contiguous, [1],
RetroIterator!(MapIterator!(StairsIterator!(RetroIterator!(T*)),
retro)))
instead of StairsDown!T and StairsUp!T
Of course, it is not a problem to add a workaround to a standalone
project. But for an open-source library it is a huge pain (for
example, see the mir-lapack source with the two types above).
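One workaround that does compile today, at the cost of leaking the underlying struct into the signature, is to pattern-match on the struct itself instead of the alias, since IFTI deduces against the struct's template parameters but cannot see through alias templates (a sketch, not a fix):

```d
struct S(int b, T)
{
}

alias V(T) = S!(1, T);

// Deduction works when the parameter names the struct directly:
// IFTI matches v against S's own template parameters.
auto foo(T)(S!(1, T) v)
{
}

void main()
{
    V!double v;
    foo(v); // T is deduced as double
}
```

This is exactly the pain point: the whole expanded type must appear in every signature, which is what makes the mir-lapack declarations so long.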
ndslice is very powerful when you need to construct a new type.
The problem is that the resulting type definitions are often too
complex. So an engineer must choose either to expose a complex
generic API or to provide a simple non-generic one. A simple generic
API is required for D's success in math.
Please let me know your thoughts on whether the issue can be fixed.
Andrei
Best regards,
Ilya
[1]
http://docs.algorithm.dlang.io/latest/mir_ndslice_slice.html#.Slice
[2]
http://docs.algorithm.dlang.io/latest/mir_ndslice_topology.html#.mapSubSlices
[3]
http://docs.algorithm.dlang.io/latest/mir_ndslice_topology.html#pairwiseMapSubSlices
[4]
http://docs.algorithm.dlang.io/latest/mir_ndslice_topology.html#.stairs
[5] https://github.com/libmir/mir-lapack
[6] http://docs.mir.dlang.io/latest/mir_sparse.html#.Sparse
[7]
http://docs.mir.dlang.io/latest/mir_sparse.html#.CompressedTensor
[8] http://docs.mir.dlang.io/latest/mir_sparse_blas_gemm.html