On Fri, Jun 7, 2019 at 1:19 AM Ralf Gommers <ralf.gomm...@gmail.com> wrote:

>
>
> On Fri, Jun 7, 2019 at 1:37 AM Nathaniel Smith <n...@pobox.com> wrote:
>
>>
>> My intuition is that what users actually want is for *native Python
>> types* to be treated as having 'underspecified' dtypes, e.g. int is
>> happy to coerce to int8/int32/int64/whatever, float is happy to coerce
>> to float32/float64/whatever, but once you have a fully-specified numpy
>> dtype, it should stay.
>>
>
> Thanks Nathaniel, I think this expresses a possible solution better than
> anything I've seen on this list before. An explicit "underspecified types"
> concept could make casting understandable.
>

I think the current model is that this holds for all scalars, but changing
that so it applies only to values that are not already explicitly typed
makes sense.
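To make that concrete, here is a sketch of what the value-based rules do
today, as I understand them (exact results may depend on the numpy
version, so treat this as an illustration):

    import numpy as np

    # Under the current value-based casting rules, even an
    # explicitly typed numpy scalar is cast by its value rather
    # than by its dtype:
    a = np.zeros(3, dtype=np.int8)
    print((a + 1).dtype)            # int8: the Python int adapts
    print((a + np.int64(1)).dtype)  # also int8, despite the explicit int64
    print((a + 2**40).dtype)        # int64: the value no longer fits in int8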

As a mental picture, one could think in terms of coercion: imagine numpy
having not just a `numpy.array` function but also a `numpy.scalar`
function, which takes some input and tries to make a numpy scalar out of
it. For a Python int, float, complex, etc., it would use the minimal
numpy type.
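A rough sketch of what such a function might look like; `np.scalar`
itself is hypothetical, but `np.min_scalar_type` is an existing function
that does exactly this minimal-type lookup:

    import numpy as np

    def scalar(value, dtype=None):
        # Hypothetical np.scalar: turn `value` into a numpy scalar.
        # Inputs that already carry a dtype keep it; plain Python
        # numbers get the minimal type that can hold their value.
        if dtype is None:
            if hasattr(value, 'dtype'):
                dtype = value.dtype
            else:
                dtype = np.min_scalar_type(value)
        return np.dtype(dtype).type(value)

    print(scalar(10).dtype)            # uint8: minimal type for 10
    print(scalar(10.0).dtype)          # float16: minimal type for 10.0
    print(scalar(np.int64(10)).dtype)  # int64: explicit type is kept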

Of course, this is slightly inconsistent with the `np.array` function,
which converts things to `ndarray` using a default type for int, float,
complex, etc. But my sense is that this is explainable: e.g., imagining
both `np.scalar` and `np.array` to have a `dtype` argument, one could say
that the default for one would be `'minimal'` and for the other `'64bit'`
(well, that doesn't quite work for complex, but anyway).
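To show the contrast (the default integer below is for a typical 64-bit
Linux/macOS build; it can differ on Windows):

    import numpy as np

    # np.array uses fixed default types for Python numbers ...
    print(np.array(10).dtype)        # int64
    print(np.array(10.0).dtype)      # float64
    print(np.array(10j).dtype)       # complex128 (128 bits, hence the caveat)

    # ... while a minimal-default np.scalar would pick the
    # smallest type that can hold each value:
    print(np.min_scalar_type(10))    # uint8
    print(np.min_scalar_type(10.0))  # float16
    print(np.min_scalar_type(10j))   # complex64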

-- Marten
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@python.org
https://mail.python.org/mailman/listinfo/numpy-discussion
