On Thu, Jul 10, 2008 at 2:25 PM, Anne Archibald <[EMAIL PROTECTED]>
wrote:

> 2008/7/10 Charles R Harris <[EMAIL PROTECTED]>:
>
> > On Thu, Jul 10, 2008 at 9:33 AM, Anne Archibald <[EMAIL PROTECTED]>
> > wrote:
> >>
> >> 2008/7/9 Robert Kern <[EMAIL PROTECTED]>:
> >>
> >> > Because that's just what a buffer= argument *is*. It is not a place
> >> > for presenting the starting pointer to exotically-strided memory. Use
> >> > __array_interface__s to describe the full range of representable
> >> > memory. See below.
> >>
> >> Aha! Is this stuff documented somewhere?
> >>
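For anyone who wants to poke at this, the protocol Robert mentions can be
exercised from pure Python: any object exposing a suitable
__array_interface__ dict can be handed to np.asarray, strides and all. A
minimal sketch (the StridedView class here is made up purely for
illustration):

    import numpy as np

    class StridedView(object):
        """Expose an arbitrary strided view of another array's memory."""
        def __init__(self, base, shape, strides, offset=0):
            self._base = base  # keep the memory owner alive
            addr = base.__array_interface__['data'][0] + offset
            self.__array_interface__ = {
                'shape': shape,
                'strides': strides,       # in bytes; zero and negative are legal
                'typestr': base.dtype.str,
                'data': (addr, False),    # (pointer, read-only flag)
                'version': 3,
            }

    a = np.arange(12.0)
    # reuse the same four elements for every row by giving axis 0 a zero stride
    v = np.asarray(StridedView(a, shape=(3, 4), strides=(0, a.itemsize)))
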
> >> > I was about a week ahead of you. See numpy/lib/stride_tricks.py in the
> >> > trunk.
> >>
> >> Nice! Unfortunately it can't quite do what I want... for the linear
> >> algebra I need something that can broadcast all but certain axes. For
> >> example, take an array of matrices and an array of vectors. The
> >> "array" axes need broadcasting, but you can't broadcast on all axes
> >> without (incorrectly) turning the vector into a matrix. I've written a
> >> (messy) implementation, but the corner cases are giving me headaches.
> >> I'll let you know when I have something that works.
> >
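To make the kind of broadcasting Anne describes concrete: matrices of shape
(..., m, n) against vectors of shape (..., n), broadcasting only the leading
"array" axes and leaving the linear-algebra axes alone. A minimal
pure-Python sketch, assuming a NumPy that has np.broadcast_shapes and
np.broadcast_to (the name matvec_stack is invented):

    import numpy as np

    def matvec_stack(mats, vecs):
        # mats: (..., m, n), vecs: (..., n); broadcast only the batch axes
        m, n = mats.shape[-2:]
        batch = np.broadcast_shapes(mats.shape[:-2], vecs.shape[:-1])
        mats_b = np.broadcast_to(mats, batch + (m, n))
        vecs_b = np.broadcast_to(vecs, batch + (n,))
        out = np.empty(batch + (m,), dtype=np.result_type(mats, vecs))
        for idx in np.ndindex(*batch):   # Python loop over batch axes only
            out[idx] = np.dot(mats_b[idx], vecs_b[idx])
        return out

    mats = np.random.rand(5, 1, 3, 3)    # array of 3x3 matrices
    vecs = np.random.rand(4, 3)          # array of 3-vectors
    assert matvec_stack(mats, vecs).shape == (5, 4, 3)
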
> > I think something like a matrix/vector dtype would be another way to
> > go for stacks of matrices and vectors. It would have to be a user
> > defined type to fit into the current type hierarchy for ufuncs, but I
> > think the base machinery is there with the generic inner loops.
>
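Record dtypes with a sub-array field already give the storage half of that
picture; the missing piece is ufunc inner loops that treat the field as a
matrix. A small illustration (mat33 is just a made-up name):

    import numpy as np

    # a record dtype whose single field holds a 3x3 matrix
    mat33 = np.dtype([('m', np.float64, (3, 3))])
    stack = np.zeros(10, dtype=mat33)   # a 1-d array of ten matrices
    stack['m'][0] = np.eye(3)           # the field is an ordinary (10, 3, 3) view

    # no ufunc knows how to multiply two mat33 "scalars" yet; teaching one
    # to do so is exactly the generic-inner-loop work referred to above
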
> There's definitely room for discussion about how such a linear algebra
> system should ultimately work. In the short term, though, I think it's
> valuable to create a prototype system, even if much of the looping has
> to be in python, just to figure out how it should look to the user.
>
> For example, I'm not sure whether a matrix/vector dtype would make
> things like indexing operations easy - fancy indexing spanning both
> array and matrix axes,


True enough. And I would expect the ufunc approach to require the
sub-matrices to be contiguous. In any case, ufuncs aren't ready for this
yet; in fact, they aren't ready for string types or other numeric types, so
there is a lot of work to be done just at that level.

> for example. dtypes can also be a little
> impenetrable to use, so even if this is how elementwise linear algebra
> is ultimately implemented, we may want to provide a different user
> frontend.
>
> My idea for a first sketch was simply to provide (for example)
> np.elementwise_linalg.dot_mm, that interprets its arguments both as
> arrays of matrices and yields an array of matrices. A second sketch
> might be a subclass MatrixArray, which could serve much as Matrix does
> now, to tell various functions which axes to do linear algebra over
> and which axes to iterate over.
>
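For prototyping that user-facing API before writing any C, np.vectorize
with a core-dimension signature can stand in for the real looping machinery
(dot_mm is only a placeholder name from the sketch above, and the
signature= argument exists only in newer NumPy releases):

    import numpy as np

    # prototype of an elementwise dot_mm: matrix-matrix product over the
    # trailing two axes, broadcasting everything in front of them
    dot_mm = np.vectorize(np.dot, signature='(m,n),(n,p)->(m,p)')

    a = np.random.rand(6, 1, 2, 3)    # array of 2x3 matrices
    b = np.random.rand(4, 3, 5)       # array of 3x5 matrices
    c = dot_mm(a, b)                  # batch axes broadcast to (6, 4)
    assert c.shape == (6, 4, 2, 5)
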

Definitely needs doing. It's hard to see how things work out in practice
without some experimentation.


> There's also the question of how to make these efficient; one could of
> course write a C wrapper for each linear algebra function that simply
> iterates, but your suggestion of using the ufunc machinery with a
> matrix dtype might be better. Or, if cython acquires sensible
> polymorphism over numpy dtypes, a cython wrapper might be the way to
> go. But I think establishing what it should look like to users is a
> good first step, and I think that's best done with sample
> implementations.
>
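One data point on the efficiency question: for products at least,
einsum-style broadcasting (assuming a NumPy that has np.einsum) already
pushes the whole batch loop into compiled code without any per-function C
wrapper:

    import numpy as np

    a = np.random.rand(10, 4, 4)    # stack of 4x4 matrices
    b = np.random.rand(10, 4)       # stack of 4-vectors

    # matrix-vector product over the trailing axes; the loop over the
    # leading axis happens in C
    c = np.einsum('...ij,...j->...i', a, b)
    assert c.shape == (10, 4)
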

Amen.

Chuck