Hi Jaime,

Very belated reply, but only now that the semester is over do I seem to
have regained some time to think.

The behaviour of reduceat has always seemed a bit odd to me: logical for
dividing up an array into irregular but contiguous pieces, but illogical
for more random ones, where one effectively passes in pairs of points, only
to remove the unwanted calculations after the fact by slicing with [::2]
(indeed, the very first example in the documentation does exactly this [1]).
I'm not sure any of your proposals helps all that much for the latter case,
while they risk breaking existing code in unexpected ways.
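For concreteness, here is how the two cases look today; the [::2] trick in
the second is exactly what the documentation example does:

import numpy as np

a = np.arange(8)

# Contiguous pieces: each index starts a slice that runs to the next
# index (the last slice runs to the end of the array).
np.add.reduceat(a, [0, 4, 6])
# -> array([ 6,  9, 13])

# Irregular, overlapping pieces: flatten the (start, end) pairs into
# one index list, then throw away the unwanted in-between reductions.
np.add.reduceat(a, [0,4, 1,5, 2,6, 3,7])[::2]
# -> array([ 6, 10, 14, 18])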

For me, for irregular pieces, it would be much nicer to simply pass in
pairs of points. I think this can be done quite easily in the current API,
by expanding it to recognize multidimensional index arrays (with a last
dimension of 2; maybe 3 to allow a step as well?). These numbers would just
be the equivalent of the start, end (and step?) of `slice`, so I think one
can allow any integer, with negative values having the usual meaning and
clipping at 0 and length. So, specifically, the first example in the
documentation would change from:

np.add.reduceat(np.arange(8), [0,4, 1,5, 2,6, 3,7])[::2]

to

np.add.reduceat(np.arange(8), [(0, 4), (1, 5), (2, 6), (3, 7)])

(Or an equivalent ndarray. Note how horrid the current example is: really,
you'd want (4, 8) as a pair too, but in the current API you'd have to get
that by appending a 4 to the index list.)
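
To make the intended semantics concrete, here is a minimal pure-python
sketch; reduceat_pairs is of course just an illustrative name, and the real
thing would live inside the ufunc machinery:

import numpy as np

def reduceat_pairs(ufunc, a, pairs):
    # Reduce over each (start, end) pair, interpreting the numbers
    # exactly as slice() would: negative values count from the end,
    # and out-of-range values are clipped to [0, len(a)].
    n = len(a)
    out = []
    for start, end in pairs:
        s, e, _ = slice(start, end).indices(n)
        out.append(ufunc.reduce(a[s:e]))
    return np.array(out)

reduceat_pairs(np.add, np.arange(8), [(0, 4), (1, 5), (2, 6), (3, 7), (4, 8)])
# -> array([ 6, 10, 14, 18, 22])

With this, the (4, 8) piece that the current example cannot express cleanly
simply becomes one more pair.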

What do you think? Would this also be easy to implement?

All the best,

Marten

[1] http://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.reduceat.html