You could move some of the cost to index-creation time by converting the
per-row indices into flattened indices:
In [1]: a = np.random.random((5, 6))
In [2]: i = a.argmax(axis=1)
In [3]: a[np.arange(len(a)), i]
Out[3]: array([0.95774465, 0.90940106, 0.98025448, 0.97836906, 0.80483784])
In [4
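(The In [4] cell is cut off above. As an illustration only, not the original cell, the flattened-index conversion could look something like this, reusing the `a` and `i` from In [1] and In [2]:)

flat = np.ravel_multi_index((np.arange(len(a)), i), a.shape)  # equivalently: i + a.shape[1]*np.arange(len(a))
a.ravel()[flat]   # same values as a[np.arange(len(a)), i]

The point being that `flat` is computed once up front, so repeated lookups only pay for the cheap flat indexing.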
On 01-11-2019 09:51, Allan Haldane wrote:
> my thought was to try `take` or `take_along_axis`:
>
> ind = np.argmin(a, axis=1)
> np.take_along_axis(a, ind[:,None], axis=1)
>
> But those functions tend to simply fall back to fancy indexing, and are
> pretty slow. On my system plain fancy indexing is fastest:
On 31-10-2019 01:44, Elliot Hallmark wrote:
> Depends on how big your array is. Numpy C code is 150x+ faster than
> python overhead. Fancy indexing can be expensive in my experience.
> Without trying I'd guess arr[:, argmax(arr, axis=1)] does what you want,
It does not; a quick demonstration is below the quote.
> but even if it is, try
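(The reason: `arr[:, argmax(arr, axis=1)]` picks out whole columns, so it returns an (N, N) array rather than one value per row. A quick demonstration, not part of the original message:)

>>> a = np.random.random((5, 6))
>>> a[:, a.argmax(axis=1)].shape
(5, 5)
>>> a[np.arange(5), a.argmax(axis=1)].shape
(5,)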
On 01-11-2019 09:51, Allan Haldane wrote:
> On my system plain fancy indexing is fastest
Hardly surprising, since take_along_axis is doing that under the hood,
after constructing the index for you :)
https://github.com/numpy/numpy/blob/v1.17.0/numpy/lib/shape_base.py#L58-L172
I deliberately didn't expose the internal function that constructs the index.
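(For the 2-D case, the index construction described above boils down to something like the following. This is a simplified sketch for illustration, not the actual helper in the linked source:)

a = np.random.random((5, 6))
ind = np.argmin(a, axis=1)
rows = np.arange(a.shape[0])[:, None]   # column of row numbers, broadcasts against ind[:, None]
a[rows, ind[:, None]]                   # shape (N, 1); matches np.take_along_axis(a, ind[:, None], axis=1)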
my thought was to try `take` or `take_along_axis`:
ind = np.argmin(a, axis=1)
np.take_along_axis(a, ind[:,None], axis=1)
But those functions tend to simply fall back to fancy indexing, and are
pretty slow. On my system plain fancy indexing is fastest:
>>> %timeit a[np.arange(N),ind]
1.58 µs
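(A self-contained way to compare all three approaches; this snippet is an illustration rather than the original benchmark, and the exact figures will of course vary per machine:)

N = 1000
a = np.random.random((N, 6))
ind = np.argmin(a, axis=1)
flat = ind + a.shape[1] * np.arange(N)               # flattened indices, built once up front

%timeit a[np.arange(N), ind]                         # plain fancy indexing
%timeit np.take_along_axis(a, ind[:, None], axis=1)  # take_along_axis
%timeit a.ravel()[flat]                              # reuse the precomputed flat index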