On Di, 2014-02-25 at 17:52 -0500, Scott Ransom wrote:
> Hi All,
> 
> So I need to use longdouble numpy scalars in an application, and I need
> to be able to reliably set long-double-precision values in them.
> Currently I don't see an easy way to do that.  For example:
> 
> In [19]: numpy.longdouble("1.12345678901234567890")
> Out[19]: 1.1234567890123456912
> 
> Note the loss of those last couple digits.
> 
> In [20]: numpy.float("1.12345678901234567890")
> Out[20]: 1.1234567890123457
> 
> In [21]: numpy.longdouble("1.12345678901234567890") - numpy.float("1.12345678901234567890")
> Out[21]: 0.0
> 
> And so internally they are identical.
> 
> In this case, the string appears to be converted to a C double (i.e. a
> numpy float) before being assigned to the numpy scalar, and therefore
> it loses precision.
> 
> Is there a good way of setting longdouble values?  Is this a numpy bug?
> 

Yes, I think this is a bug (I never checked): we use the Python parsing
functions where possible, but for longdouble a Python float (a C double)
obviously does not have enough precision. A hack would be to split the
value in two:
np.float128(1.1234567890) + np.float128(1234567890e-something)
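A more systematic version of that split (just a sketch, not tested; the
helper name longdouble_from_str is made up, and it assumes the string is
a plain decimal literal that fractions.Fraction can parse):

    import numpy as np
    from fractions import Fraction

    def longdouble_from_str(s):
        # Exact rational value of the decimal string.
        exact = Fraction(s)
        # Nearest double, then the residual that the double rounding lost.
        hi = float(exact)
        lo = float(exact - Fraction(hi))
        # Both doubles embed exactly into longdouble, so only the final
        # addition rounds, and it rounds at long-double precision.
        return np.longdouble(hi) + np.longdouble(lo)

The result should agree with the true value to roughly long-double
precision rather than double precision.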

Though it would be better for the numpy parser to parse at full
precision when given a string.

- Sebastian

> I was considering using a tiny Cython wrapper around strtold() to
> convert a string to a long double, but that is basically what should be
> happening internally in numpy in the above example!
> 
> Thanks,
> 
> Scott
> 
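For what it's worth, the strtold() route can be sketched without Cython
by having libc parse straight into a longdouble buffer via ctypes. (A
rough, untested sketch: it assumes a Unix libc whose long double matches
np.longdouble. Note that declaring a c_longdouble return type would not
help, because ctypes converts that return value to a Python float, i.e.
a double, on the way back.)

    import ctypes
    import ctypes.util
    import numpy as np

    # Assumption: a Unix-style libc; on Windows long double == double,
    # so this buys nothing there.
    libc = ctypes.CDLL(ctypes.util.find_library("c"))

    def str_to_longdouble(s):
        # Let sscanf("%Lf") write the long double directly into a numpy
        # buffer, so the value never passes through a Python float.
        out = np.zeros((), dtype=np.longdouble)
        n = libc.sscanf(s.encode("ascii"), b"%Lf",
                        out.ctypes.data_as(ctypes.c_void_p))
        if n != 1:
            raise ValueError("could not parse %r as a long double" % s)
        return out[()]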

