On Tue, Feb 14, 2012 at 12:58 AM, Travis Oliphant <tra...@continuum.io> wrote:

> >
> > The lack of commutativity wasn't in precision; it was in the typecodes,
> > and was there from the beginning. That caused confusion. A current
> > cause of confusion is the many-to-one relation between, say, int32 and
> > long or longlong, which varies from platform to platform. I think that
> > confusion is a more significant problem. Having some types derived from
> > Python types, a correspondence that also varies from platform to
> > platform, is another source of inconsistent behavior that can be
> > confusing. So there are still plenty of issues to deal with.
>
> I didn't think it was in the precision.  I knew what you meant.  However,
> I'm still hoping for an example of what you mean by "lack of commutativity
> in the typecodes".
>
>
I made a table back around 1.3 and the lack of symmetry was readily
apparent.
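
For anyone who wants to rebuild such a table, here is a minimal sketch
using np.promote_types (note it queries the current casting rules, not the
1.3-era ones):

import numpy as np

# Build a small promotion table and flag any asymmetric entries.
types = [np.int8, np.int16, np.int32, np.int64, np.float32, np.float64]
for a in types:
    for b in types:
        ab = np.promote_types(a, b)
        ba = np.promote_types(b, a)
        if ab != ba:
            print("asymmetric:", a, b, ab, ba)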

> The confusion of long and longlong varying from platform to platform
> comes from C.  The whole point of having long and longlong is to ensure
> that you can specify the same types in Python that you would in C.  They
> should not be used if you don't care about that.
>
> Deriving from Python types for some array-scalars is an issue.  I don't
> like that either.  However, Python itself special-cases its scalars in
> ways that made this necessary for some use cases not to fall over.  This
> shows a limitation of Python. I would prefer that all array-scalars were
> recognized appropriately by the Python type system.
>
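
As a concrete illustration of that derivation (a quick check in CPython;
float64 really is a subclass of the built-in float):

import numpy as np

# The float64 array scalar derives from Python's float, so it passes
# isinstance checks that expect a plain float.
print(issubclass(np.float64, float))       # True
print(isinstance(np.float64(1.0), float))  # True
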
> Most of the concerns that you mention here are misunderstandings.  Maybe
> there are solutions that "fix" the problem rather than just educating
> people.  I am open to them.
>
> I do think that it was a mistake to have the intp and uintp dtypes as
> *separate* dtypes.  They should have just mapped to the right one.   I
> think it was also a mistake to have dtypes for all the C-spellings instead
> of just a dtype for each different bit-length with an alias for the
> C-spellings.     We should change that in NumPy 2.0.
>
>
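To make the platform dependence concrete, here is a sketch of how the
C-named dtypes map onto bit-widths (output shown for a typical 64-bit
Linux build of NumPy 1.x; it differs on, e.g., 64-bit Windows, where a C
long is 32 bits):

import numpy as np

# The C-named dtypes are platform-dependent aliases for fixed-width types,
# and the mapping is many-to-one.
for name in ('short', 'intc', 'int_', 'longlong', 'intp'):
    print(name, '->', np.dtype(getattr(np, name)))

# Typical 64-bit Linux output (long and longlong both land on int64,
# and intp matches the pointer size):
#   short -> int16
#   intc -> int32
#   int_ -> int64
#   longlong -> int64
#   intp -> int64
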
About the behavior in question, I would frame this as a specific case with
arguments for and against, like so:

*The Current Behavior*

In [1]: from numpy import array, int8

In [2]: array([127], int8) + 127
Out[2]: array([-2], dtype=int8)

In [3]: array([127], int8) + 128
Out[3]: array([255], dtype=int16)


*Arguments for Old Behavior*

Predictable, explicit output type. This is a good thing, in that no one
wants their 8GB int8 array turning into a 16GB int16 array.

Backward compatibility.

*Arguments for New Behavior*

Fewer overflow problems, though not a complete cure.


Put that way, I think you can make a solid argument for a tweak to restore
the old behavior. Overflow can be a problem, but partial cures are not
going to solve it. I think we do need a way to deal with overflow, maybe
in two ways: 1) saturating operations, i.e., 127 + 128 -> 127, which might
be good for images; 2) raising an error. We could make specific ufuncs for
these behaviors; a sketch of both is below.
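
Here is a minimal sketch of both ideas (sat_add_int8 and checked_add_int8
are hypothetical helpers, not existing NumPy ufuncs):

import numpy as np

def sat_add_int8(a, b):
    # Saturating add: compute in a wider type so the true sum fits,
    # then clip to the int8 range instead of wrapping around.
    wide = a.astype(np.int16) + np.asarray(b, dtype=np.int16)
    return np.clip(wide, -128, 127).astype(np.int8)

def checked_add_int8(a, b):
    # Checked add: raise instead of silently wrapping on overflow.
    wide = a.astype(np.int16) + np.asarray(b, dtype=np.int16)
    if ((wide < -128) | (wide > 127)).any():
        raise OverflowError("int8 addition overflowed")
    return wide.astype(np.int8)

a = np.array([127], np.int8)
print(sat_add_int8(a, 128))  # [127] -- saturates at the top of the range
checked_add_int8(a, 128)     # raises OverflowError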

<snip>

Chuck