Here is an illustration of the same phenomenon on x86_64, where, of course,
8-byte floats are doubles. The code to demonstrate
the problem is as follows:
#include <stdio.h>
#include <math.h>

int main(void)
{
  double x = 6.0;
  /* sizeof yields a size_t, so print it with %zu rather than %d */
  printf("sizeof(double)=%zu\n", sizeof(double));
  /* Gamma(6) = 5! = 120 exactly, which makes any error easy to spot */
  printf("lgamma (%.20f)=%.20f\n", x, lgamma(x));
  printf("tgamma (%.20f)=%.20f\n", x, tgamma(x));
  printf("exp(lgamma) (%.20f)=%.20f\n", x, exp(lgamma(x)));
  return 0;
}
On x86_64 Linux, it outputs
sizeof(double)=8
lgamma (6.00000000000000000000)=4.78749174278204581157
tgamma (6.00000000000000000000)=119.99999999999997157829
exp(lgamma) (6.00000000000000000000)=119.99999999999997157829

On Mac OS X, libc has an honest tgamma(), which does the job properly:
sizeof(double)=8
lgamma (6.00000000000000000000)=4.78749174278204492339
tgamma (6.00000000000000000000)=120.00000000000000000000
exp(lgamma) (6.00000000000000000000)=119.99999999999987210231

So eglibc does a bad job computing tgamma() on double (i.e. on 8-byte
floats)... (Note that the exp(lgamma(x)) route cannot be expected to be exact
even on Mac OS X: a one-ulp error in lgamma(6) ≈ 4.787 already shows up as
roughly 1e-13 of absolute error at 120, which is exactly why a dedicated
tgamma() is worth having.)
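For what it's worth, here is a minimal sketch (not from the original report)
that quantifies the error in ulps by stepping from the computed value towards
the exact one with nextafter(); Gamma(6) = 120 is exactly representable as a
double, so the comparison is fair:

#include <stdio.h>
#include <math.h>

int main(void)
{
  const double exact = 120.0;  /* Gamma(6) = 5! = 120, exactly representable */
  double got = tgamma(6.0);
  /* Count how many nextafter() steps separate got from exact. */
  int ulps = 0;
  for (double t = got; t != exact && ulps < 1000; ulps++)
    t = nextafter(t, exact);
  printf("tgamma(6.0) = %.20f, off by %d ulp(s)\n", got, ulps);
  return 0;
}

Compiled with the usual gcc file.c -lm; on the eglibc output shown above the
difference works out to about 2 ulps at 120, whereas the Mac OS X result is
exact.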

Report upstream?

Dima

