The following reduction functions accept a `where` argument:
- https://numpy.org/doc/stable/reference/generated/numpy.max.html
- https://numpy.org/doc/stable/reference/generated/numpy.min.html
- https://numpy.org/doc/stable/reference/generated/numpy.sum.html
- https://numpy.org/doc/stable/referenc
This also applies to
- https://numpy.org/doc/stable/reference/generated/numpy.argmax.html
- https://numpy.org/doc/stable/reference/generated/numpy.argmin.html
and their nan* counterparts.
An `initial` argument could be added to handle the empty case, as with np.max
and np.min.
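For reference, here is how `where` and `initial` already interact on `np.max`:

```python
import numpy as np

a = np.array([[1.0, 9.0], [4.0, 2.0]])
mask = np.array([[True, False], [True, True]])

# `where` restricts the reduction to selected elements; `initial` supplies
# the starting value (and the result when nothing is selected).
np.max(a, where=mask, initial=-np.inf)    # 4.0
```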
___
Suggestion: add the indicated default arguments to the following functions:
- https://numpy.org/doc/stable/reference/generated/numpy.zeros.html: shape=()
- https://numpy.org/doc/stable/reference/generated/numpy.ones.html: shape=()
- https://numpy.org/doc/stable/reference/generated/numpy.empty.html
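For context, a 0-d array currently needs an explicit empty shape; with `shape=()` as the default, the no-argument call below (hypothetical) would do the same thing:

```python
import numpy as np

np.zeros(())     # array(0.) -- current spelling for a 0-d array
# np.zeros()     # hypothetical spelling once shape defaults to ()
```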
As discussed
[here](https://github.com/numpy/numpy/issues/5032#issuecomment-1830838701),
[here](https://github.com/numpy/numpy/issues/5032#issuecomment-2307927804), and
[here](https://github.com/google/jax/issues/18661#issuecomment-1829031914), I'm
interested in a uniform interface for accessin
Currently,
[ldexp](https://numpy.org/doc/stable/reference/generated/numpy.ldexp.html)
throws a TypeError on a complex input:
```python
import numpy as np

def naive_ldexp(x, n):
    return x * 2**n

def new_ldexp(x, n):
    # Apply ldexp to the real and imaginary parts separately for complex input.
    if np.iscomplexobj(x):
        y = np.empty_like(x)
        y.real = np.ldexp(x.real, n)
        y.imag = np.ldexp(x.imag, n)
        return y
    return np.ldexp(x, n)
```
Feature request: Add out-of-place (i.e. pure) versions of all existing in-place
operations, such as the following:
- https://numpy.org/doc/stable/reference/generated/numpy.fill_diagonal.html
- https://numpy.org/doc/stable/reference/generated/numpy.put.html
- https://numpy.org/doc/stable/reference/
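As a rough illustration of what a pure counterpart could look like, here is a copy-then-mutate sketch for `np.fill_diagonal`; the name `fill_diagonal_copy` is just a placeholder, not a proposed API:

```python
import numpy as np

def fill_diagonal_copy(a, val):
    # Pure version of np.fill_diagonal: leave `a` untouched, return a new array.
    out = a.copy()
    np.fill_diagonal(out, val)
    return out
```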
The following functions accept a diagonal offset argument:
- https://numpy.org/doc/stable/reference/generated/numpy.diag.html
- https://numpy.org/doc/stable/reference/generated/numpy.diagflat.html
- https://numpy.org/doc/stable/reference/generated/numpy.diagonal.html
- https://numpy.org/doc/stable/
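For anyone unfamiliar with the offset argument, a quick example of what `k` selects:

```python
import numpy as np

a = np.arange(9).reshape(3, 3)
np.diag(a, k=1)     # array([1, 5])  -- first superdiagonal
np.diag(a, k=-1)    # array([3, 7])  -- first subdiagonal
```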
Often, I've wanted to concatenate arrays with different ndims along a
particular axis, broadcasting the other axes as needed. Others have sought this
functionality as well:
- https://stackoverflow.com/questions/56357047
- https://github.com/numpy/numpy/issues/2115
- https://stackoverflow.com/qu
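Here is a minimal sketch of what I have in mind; the name `broadcast_concatenate` and the exact rules (pad to a common ndim, broadcast everything except the concatenation axis) are my assumptions, not an agreed interface:

```python
import numpy as np

def broadcast_concatenate(arrays, axis):
    # Pad every array to a common ndim, broadcast all axes except `axis`
    # to a shared shape, then concatenate along `axis` (non-negative axis assumed).
    arrays = [np.asarray(a) for a in arrays]
    ndim = max(a.ndim for a in arrays)
    arrays = [a.reshape((1,) * (ndim - a.ndim) + a.shape) for a in arrays]
    # Shape that all non-concatenated axes must broadcast to:
    probe = [a.shape[:axis] + (1,) + a.shape[axis + 1:] for a in arrays]
    common = np.broadcast_shapes(*probe)
    parts = []
    for a in arrays:
        target = list(common)
        target[axis] = a.shape[axis]
        parts.append(np.broadcast_to(a, target))
    return np.concatenate(parts, axis=axis)

broadcast_concatenate([np.ones((2, 1, 4)), np.zeros(5)], axis=2).shape   # (2, 1, 9)
```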
Saturating arithmetic (https://en.wikipedia.org/wiki/Saturation_arithmetic) is
important in digital signal processing and other areas.
Feature request: Add saturating arithmetic functions for the following basic
operations:
- addition (C++ counterpart: https://en.cppreference.com/w/cpp/numeric/
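As a starting point, here is a minimal sketch for saturating addition of int16 values (the widen-then-clip approach; a real implementation would presumably handle each dtype natively in C):

```python
import numpy as np

def saturating_add_int16(a, b):
    # Add in a wider dtype so the true sum is representable, then clamp
    # to the int16 range instead of wrapping around.
    info = np.iinfo(np.int16)
    wide = np.add(a, b, dtype=np.int32)   # int32 holds any sum of two int16s
    return np.clip(wide, info.min, info.max).astype(np.int16)

saturating_add_int16(np.int16(30000), np.int16(10000))   # 32767 (saturated)
```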
Add a function called `spectral_radius` that computes the
[spectral radius](https://en.wikipedia.org/wiki/Spectral_radius) of a given
matrix.
A naive way to do this is `np.max(np.abs(np.linalg.eigvals(a)))`, but there are
more efficient methods.
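For the "more efficient" direction, one option (under the assumption of a unique dominant eigenvalue) is plain power iteration, which avoids a full eigendecomposition; a rough sketch:

```python
import numpy as np

def spectral_radius(a, iters=1000, tol=1e-10, seed=0):
    # Power iteration: the norm of a @ x approaches |lambda_max| as the
    # normalized vector x converges to the dominant eigenvector.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(a.shape[0])
    rho = 0.0
    for _ in range(iters):
        y = a @ x
        new_rho = np.linalg.norm(y)
        if new_rho == 0.0:                  # x landed in the null space
            return 0.0
        x = y / new_rho
        if abs(new_rho - rho) < tol * new_rho:
            break
        rho = new_rho
    return rho
```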
___
Hi Stefan,
It should work on unsigned integer types and use exact computation (not
floating-point computations like np.log2).
For an example, see
https://graphics.stanford.edu/~seander/bithacks.html#IntegerLogDeBruijn.
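To make "exact" concrete, here is one shift-based way to get floor(log2(x)) for unsigned integers without ever touching floating point (a binary-search variant of the bit hack above); this is just a sketch, not a proposed implementation:

```python
import numpy as np

def floor_log2(a):
    # Exact floor(log2(a)) for unsigned integers, assuming a > 0 elementwise.
    a = np.asarray(a, dtype=np.uint64).copy()
    result = np.zeros(a.shape, dtype=np.uint8)
    for shift in (32, 16, 8, 4, 2, 1):
        mask = a >= (np.uint64(1) << np.uint64(shift))
        result[mask] += shift
        a[mask] >>= np.uint64(shift)
    return result

floor_log2([1, 2, 3, 1024])    # array([ 0,  1,  1, 10], dtype=uint8)
```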
___
Feature request: Add a `bit_width` function to NumPy's [bit-wise
operations](https://numpy.org/doc/stable/reference/routines.bitwise.html) that
computes the [bit-width](https://en.wikipedia.org/wiki/Bit-width) (also called
bit-length) of an input.
For an example, see C++'s
[`bit_width`](https:
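For reference, the relationship is bit_width(x) = floor(log2(x)) + 1 for x > 0, and 0 for x == 0. A tiny (slow, but exact) reference implementation via Python's arbitrary-precision integers, just to pin down the semantics:

```python
import numpy as np

def bit_width(a):
    # Exact bit-width: number of bits needed to represent each value
    # (0 -> 0, otherwise floor(log2(x)) + 1). Not vectorized; reference only.
    a = np.asarray(a)
    widths = [int(x).bit_length() for x in a.ravel()]
    return np.array(widths, dtype=np.uint8).reshape(a.shape)

bit_width([0, 1, 2, 255, 256])   # array([0, 1, 2, 8, 9], dtype=uint8)
```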
Feature request: Add a non-circular shift function that is similar to
[`numpy.roll`](https://numpy.org/doc/stable/reference/generated/numpy.roll.html)
but, instead of wrapping elements around, replaces unoccupied entries with a
specified fill value. Examples:
```
>>> import numpy as np
>>> a =
```
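Concretely, a 1-D sketch of the behaviour I mean; `shift` is a placeholder name and `fill_value` an assumed keyword:

```python
import numpy as np

def shift(a, n, fill_value=0):
    # Like np.roll along the first axis, but elements shifted past the edge
    # are dropped and vacated positions are set to fill_value.
    out = np.full_like(a, fill_value)
    if n == 0:
        return a.copy()
    if n > 0:
        out[n:] = a[:-n]
    else:
        out[:n] = a[-n:]
    return out

shift(np.arange(1, 6), 2)          # array([0, 0, 1, 2, 3])
shift(np.arange(1, 6), -2, -1)     # array([ 3,  4,  5, -1, -1])
```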
Correction:
[`math.prod`](https://docs.python.org/3/library/math.html#math.prod).
___
Let the `axis` argument of
[`numpy.size`](https://numpy.org/doc/stable/reference/generated/numpy.size.html)
accept a tuple of ints (like other functions that take an `axis` argument) to
measure the size for multiple axes.
This should be straightforward to implement with
[`itertools.product`](h
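A sketch of the semantics I'd expect (using `math.prod`, per the correction above); the tuple handling is my assumption:

```python
import math
import numpy as np

def size(a, axis=None):
    # Like np.size, but `axis` may also be a tuple of ints: multiply the
    # lengths of the selected axes.
    if axis is None:
        return np.size(a)
    if isinstance(axis, int):
        axis = (axis,)
    return math.prod(np.asarray(a).shape[ax] for ax in axis)

size(np.ones((2, 3, 4)), axis=(0, 2))    # 8
```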
It is common to want to pad an array along a *specific* axis. Examples:
- https://stackoverflow.com/questions/72106542/how-to-pad-the-i-j-axes-of-a-3d-np-array-without-padding-its-k-axis
- https://stackoverflow.com/questions/56076094/zero-pad-ndarray-along-axis
- https://stackoverflow.com/questi
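This is easy to express on top of np.pad once you build the per-axis width list yourself; a small helper sketch (the name `pad_axis` is just illustrative):

```python
import numpy as np

def pad_axis(a, pad_width, axis, **kwargs):
    # Pad only `axis`; every other axis gets (0, 0).
    widths = [(0, 0)] * a.ndim
    widths[axis] = pad_width            # e.g. (before, after)
    return np.pad(a, widths, **kwargs)

pad_axis(np.ones((2, 3)), (1, 2), axis=1).shape    # (2, 6)
```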
My particular use case was computing cumulative returns in the context of
reinforcement learning.
A GitHub code search suggests that reverse cumsum is a fairly common operation,
with 3.8k hits:
https://github.com/search?q=%2Fcumsum%5C%28.*%5C%5B%3A%3A-1%5C%5D%5C%29%5C%5B%3A%3A-1%5C%5D%2F&type=c
Add a reverse argument to accumulating functions that, when true, causes the
accumulation to be performed in reverse. Examples:
- ufunc.accumulate: https://numpy.org/doc/stable/reference/generated/numpy.ufunc.accumulate.html
- cumsum: https://numpy.org/doc/stable/reference/generated/numpy.cumsum
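To make the before/after concrete, here is the current idiom (the exact pattern the GitHub search above matches) versus the proposed spelling:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])

# Today's idiom for a reverse (suffix) cumulative sum:
np.cumsum(a[::-1])[::-1]            # array([10.,  9.,  7.,  4.])

# Proposed (hypothetical) spelling:
# np.cumsum(a, reverse=True)
```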
To be more explicit regarding the second approach:
```
def sum(a, ..., where=None, ignore_nan=False):
    if ignore_nan:
        if where is None:
            where = ~isnan(a)
        else:
            where &= ~isnan(a)
    # rest of the current implementation, as-is
```
___
nanmean internally calls mean, sum, and count. Folding the functionality under
mean itself would reduce complexity and the total amount of code, not increase
it.
___
FWIW, [pandas](https://pandas.pydata.org/docs/) also has a `skipna` argument
for
[reductions](https://pandas.pydata.org/docs/reference/api/pandas.api.extensions.ExtensionArray._reduce.html)
and
[accumulations](https://pandas.pydata.org/docs/reference/api/pandas.api.extensions.ExtensionArray._ac
Another, perhaps more radical, possibility is to remove special handling for
nans from NumPy entirely. More precisely:
1. Add a `where` argument to all normal counterparts of nan-ignoring functions
that are currently missing it (see https://github.com/numpy/numpy/issues/26336).
2. Add a short FA
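For the common cases this is already a one-liner once `where` exists; e.g. nansum falls out of the regular sum:

```python
import numpy as np

a = np.array([1.0, np.nan, 3.0])

np.sum(a, where=~np.isnan(a))    # 4.0, same as np.nansum(a)
```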
> whether it is the data that should indicate what needs to be skipped, or
> should the algorithm dictate that?
Can you clarify what exactly you mean by "the data" and "the algorithm" here?
___
> Assuming totally normal numpy arrays, the former will first produce a boolean
> array of size N, then a second boolean array of size N to have the negated
> values. Meanwhile, no such extra arrays are made for the latter case. So, you
> are asking users to move to a potentially less performant
> The costs I worry about are performance and increased maintenance burden for
> the regular, no-nan case. For instance, the "obvious" way to implement a
> nan-omitting sum would be to check inside a loop whether any given element
> was nan, thus slowing down the regular case (e.g., by breaking
Correction: `where &= ~isnan(a)` should be `where = where & ~isnan(a)`.
___
NumPy has the following nan-ignoring functions:
Mathematical functions:
- https://numpy.org/doc/stable/reference/generated/numpy.nanprod.html
- https://numpy.org/doc/stable/reference/generated/numpy.nansum.html
- https://numpy.org/doc/stable/reference/generated/numpy.nancumprod.html
- https://num