Hi All,

I need to use longdouble numpy scalars in an application, and I need to be able to reliably set long-double-precision values in them. Currently I don't see an easy way to do that. For example:
In [19]: numpy.longdouble("1.12345678901234567890")
Out[19]: 1.1234567890123456912

Note the loss of those last couple of digits.

In [20]: numpy.float("1.12345678901234567890")
Out[20]: 1.1234567890123457

In [21]: numpy.longdouble("1.12345678901234567890") - numpy.float("1.12345678901234567890")
Out[21]: 0.0

So internally the two values are identical. In this case, the string appears to be converted to a C double (i.e. a numpy float) before being assigned to the numpy scalar, and it therefore loses precision.

Is there a good way of setting longdouble values? Is this a numpy bug?

I was considering using a tiny Cython wrapper of strtold() to do the conversion from string to long double, but it seems like this is basically what should be happening internally in numpy in the above example!

Thanks,

Scott

--
Scott M. Ransom            Address:  NRAO
Phone:  (434) 296-0320               520 Edgemont Rd.
email:  sran...@nrao.edu             Charlottesville, VA 22903 USA
GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
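P.S. For the archive: a sketch of the strtold()-style workaround that avoids Cython entirely, using ctypes to have libc parse the string directly into the memory of a numpy.longdouble array, so the value never passes through a C double or a Python float. This is an untested-on-your-platform sketch: it assumes a Unix-like system where ctypes.CDLL(None) exposes libc, and where numpy.longdouble is the platform's C long double (true with glibc on x86/x86-64); the helper name str_to_longdouble is my own invention, not a numpy API.

```python
import ctypes
import numpy as np

# Assumption: CDLL(None) gives us libc symbols (works on Linux/glibc).
libc = ctypes.CDLL(None)

def str_to_longdouble(s):
    # Hypothetical helper: parse s with sscanf's "%Lf" conversion,
    # writing the long double straight into the array's buffer so no
    # double-precision intermediate is involved.
    a = np.empty(1, dtype=np.longdouble)
    n = libc.sscanf(s.encode("ascii"), b"%Lf",
                    a.ctypes.data_as(ctypes.c_void_p))
    if n != 1:
        raise ValueError("could not parse %r as a long double" % s)
    return a[0]

x = str_to_longdouble("1.12345678901234567890")
# On a platform where longdouble is 80-bit extended, x keeps digits
# beyond what a C double round-trip would preserve.
```

A Cython wrapper of strtold() would be cleaner (and gives proper error reporting via the endptr argument), but the above needs no compilation step.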