On 14/03/2011 at 20:00, Robert Bradshaw wrote:
> In the case of the OP's failures, that processor (or libm, libc, whatever) is giving us less accurate answers than anything else we've tested on. I think it's worth looking into fixing the problem, or adding an "# optional" tag, or marking it as an expected failure on this platform, rather than weakening the examples and tests for all other platforms.
Well, according to the Debian developer I pushed the problem to, "long double" and "double" are the same type on such a system.
And the result is then accurate enough: we see 13 decimal digits after the point and 3 before, which makes 16 good decimal digits.
According to http://en.wikipedia.org/wiki/IEEE_754-2008, double precision gives about 15.95 decimal digits...
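For reference, the 15.95-digit figure falls straight out of the 53-bit significand of an IEEE 754 binary64 double. A quick sanity check (a sketch using CPython's ctypes; not part of the original discussion) can also show whether a given platform's "long double" is any wider than "double":

```python
import ctypes
import math

# IEEE 754 binary64 (double) has a 53-bit significand, which
# corresponds to 53 * log10(2) ≈ 15.95 decimal digits of precision.
print(f"decimal digits in a double: {53 * math.log10(2):.2f}")  # → 15.95

# On some platforms (e.g. ARM EABI) "long double" is just an alias
# for "double", so both types have the same size; on x86 Linux the
# long double is the 80-bit extended type (padded to 12 or 16 bytes).
print("sizeof(double):      ", ctypes.sizeof(ctypes.c_double))
print("sizeof(long double): ", ctypes.sizeof(ctypes.c_longdouble))
```

If the two sizes printed are equal, any library routine promised in "long double" precision silently delivers only double precision, which matches the Debian developer's explanation above.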
> It reminds me a bit of all the corrections we had to do because the literal floating-point value for e is not compiled correctly on Solaris. It's one thing to have "numerical noise" in a lengthy computation involving lots of floating-point arithmetic; it's another to give non-integral values for gamma(n) for small, integral n.
Well, as far as I know, the result of computing $\Gamma(n)$ for a small integral $n$ is mathematically an integer, but the computed function still arrives at that value through a lengthy computation involving lots of floating-point arithmetic... so the issue isn't that clear-cut!
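To illustrate the point (a sketch using Python's math module rather than Sage's own gamma, so this is only an analogy): even though $\Gamma(n) = (n-1)!$ is mathematically an integer, a floating-point gamma routine reaches that value through an approximation, and whether it lands exactly on the integer depends on the implementation and platform:

```python
import math

# Gamma(n) = (n-1)! for positive integers n. A floating-point gamma
# computes this via an approximation (e.g. a Lanczos-style formula),
# so exact agreement with the integer factorial is a property of the
# implementation, not a mathematical guarantee.
for n in range(1, 8):
    g = math.gamma(n)
    exact = math.factorial(n - 1)
    print(f"gamma({n}) = {g!r}  matches (n-1)!? {g == exact}")
```

On a platform where the underlying routine is only double-accurate (or where "long double" degrades to "double", as above), the comparison can fail in the last ulp even for tiny integral arguments.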
Snark on #sage-devel