On Sat, Jun 4, 2016 at 9:16 PM, Charles R Harris <charlesr.har...@gmail.com>
wrote:

>
>
> On Sat, Jun 4, 2016 at 6:17 PM, <josef.p...@gmail.com> wrote:
>
>>
>>
>> On Sat, Jun 4, 2016 at 8:07 PM, Charles R Harris <
>> charlesr.har...@gmail.com> wrote:
>>
>>>
>>>
>>> On Sat, Jun 4, 2016 at 5:27 PM, <josef.p...@gmail.com> wrote:
>>>
>>>>
>>>>
>>>> On Sat, Jun 4, 2016 at 6:10 PM, Nathaniel Smith <n...@pobox.com> wrote:
>>>>
>>>>> On Sat, Jun 4, 2016 at 2:07 PM, V. Armando Sole <s...@esrf.fr> wrote:
>>>>> > Also in favor of 2. Always return a float for '**'
>>>>>
>>>>> Even if we did want to switch to this, it's such a major
>>>>> backwards-incompatible change that I'm not sure how we could actually
>>>>> make the transition without first making it an error for a while.
>>>>>
>>>>
>>>> AFAIU, only the dtype for int**int would change. So what would be the
>>>> problem with FutureWarnings, as with other dtype changes that were done
>>>> in recent releases?
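
A transition of that kind usually amounts to a FutureWarning raised wherever
the result dtype will change. A minimal, purely illustrative sketch, assuming
a wrapper function (the name, message, and check below are made up and are
not NumPy's actual deprecation machinery):

import warnings

import numpy as np


def int_power(base, exp):
    # Hypothetical wrapper: warn that int ** int will return float later.
    if (np.issubdtype(np.asarray(base).dtype, np.integer)
            and np.issubdtype(np.asarray(exp).dtype, np.integer)):
        warnings.warn(
            "int ** int will return a float in a future release; "
            "cast an operand to float to opt in to the new behavior now",
            FutureWarning,
            stacklevel=2,
        )
    return np.power(base, exp)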
>>>>
>>>>
>>> The main problem I see with that is that numpy integers would behave
>>> differently than Python integers, and the difference would be silent. With
>>> option 1 it is possible to write code that behaves the same up to overflow,
>>> and an error message would warn the user when the exponent should be a
>>> float. One could argue that numpy scalar integer types could be made to
>>> behave like Python integers, but then their behavior would differ from
>>> numpy arrays and numpy scalar arrays.
>>>
>>
>> I'm not sure I understand.
>>
>> Do you mean
>>
>> np.arange(5)**2 would behave differently than np.arange(5)**np.int_(2)
>>
>> or 2**2 would behave differently than np.int_(2)**np.int_(2)
>>
>
> The second case. Python returns ints for non-negative integer powers of
> ints.
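
For concreteness, this is the value-dependent behavior of plain Python ints
that a fixed dtype rule cannot mirror (shown on CPython 3.x; Python 2 also
returns 0.25 here):

>>> 2 ** 2       # non-negative exponent -> int
4
>>> 2 ** -2      # negative exponent -> float
0.25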
>
>
>>
>> ?
>>
>>
>> AFAICS, there are many cases where numpy scalars don't behave like Python
>> scalars. Also, does different behavior mean a different type/dtype or
>> different numbers?  (The first I can live with; the second requires human
>> memory, which is a scarce resource.)
>>
>> >>> 2**(-2)
>> 0.25
>>
>>
> But we can't mix types in numpy arrays, and the result can depend only on
> the type of the exponent array, not on its element values, so 2 ** array([1,
> -1]) must have a single type, and making that type float would surely
> break code.  Scalar arrays, which are arrays, have the same problem. We
> can't do what Python does with ndarrays and numpy scalars, and it would be
> best to be consistent. Division was a simpler problem to deal with, as
> there were two operators, `//` and `/`. If there were two exponentiation
> operators, life would be simpler.
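
To make the constraint concrete: a result array carries exactly one dtype, so
the choice has to come from the operand types alone, never from the exponent
values. A small illustration (int64 shown for a typical 64-bit build; the
float base in the last line is just one way to sidestep the issue):

>>> import numpy as np
>>> e = np.array([1, -1])
>>> e.dtype                 # one dtype for every element
dtype('int64')
>>> 2.0 ** e                # a float base gives one dtype and exact values
array([ 2. ,  0.5])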
>

What bothers me about the entire argument is that you are putting a higher
priority on preserving the dtype than on returning the correct numbers.

Reverse the argument: because we cannot make the return type value
dependent, we **have** to return float in order to get the correct number.
(That's an argument, not what we really have to do.)


Which code really breaks? Code that gets a float instead of an int? With
some advance warning, users who really need to watch their memory can use
np.power.

My argument before was that a simple operator like `**` should work for
90+% of users and match their expectations, and users who need to watch
dtypes can use the function instead.
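
For illustration, this is roughly the split being suggested: `**` for the
common case, and the explicit function for dtype-conscious code. np.power is
a regular ufunc, so an output dtype can be requested explicitly (output shown
for a 64-bit build):

>>> import numpy as np
>>> a = np.arange(5)
>>> np.power(a, 2)                    # explicit function keeps the integer dtype
array([ 0,  1,  4,  9, 16])
>>> np.power(a, 2, dtype=np.float64)  # or pick the output dtype explicitly
array([  0.,   1.,   4.,   9.,  16.])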

(I can also live with the exception from case 1, but I really think this is
like the Python 2 integer division "surprise".)

Josef


>
> Chuck
>
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion
