On Thu, Oct 20, 2016 at 7:58 PM, Charles R Harris
<charlesr.har...@gmail.com> wrote:
> Hi All,
>
> I've put up a preliminary PR for the proposed fpower ufunc. Apart from
> adding more tests and documentation, I'd like to settle a few other things.
> The first is the name; two names have been proposed, and we should settle
> on one:
>
> fpower (short)
> float_power (obvious)

+0.6 for float_power

> The second thing is the minimum precision. In the preliminary version I have
> used float32, but perhaps it makes more sense for the intended use to make
> the minimum precision float64 instead.

Can you elaborate on what you're thinking? I guess this is because
float32 has a more limited range than float64, so it's more likely to
see overflow? float32 still goes up to ~3.4 * 10**38, which is greater
than int64_max**2 (~8.5 * 10**37), FWIW. Or maybe there's some subtlety
with the int->float casting here?
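
Just to make the magnitudes concrete, here's a quick check with plain
NumPy (nothing here depends on the new ufunc itself, only on finfo/iinfo):

import numpy as np

f32_max = np.finfo(np.float32).max   # ~3.4e38, largest finite float32
i64_max = np.iinfo(np.int64).max     # ~9.2e18, largest int64

print(float(i64_max) ** 2 < f32_max)   # True  -- int64_max**2 still fits in float32
print(float(i64_max) ** 3 < f32_max)   # False -- the cube already overflows float32

So squaring the largest int64 is still representable in float32; it's only
higher powers (or accumulated products) where the float32/float64 choice
starts to matter for overflow.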

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
