Does anybody know why SET-PRECISION has the different effects
shown below in GForth 0.5.0?

On MacOS X it behaves as I expected.  On Redhat Linux it produces
only the 17 significant digits that are enough to uniquely identify
an IEEE 754 64-bit double.  I'm guessing it's a library difference
between the two gccs?  The MacOS X gcc is 2.95.2, and the Redhat
gcc is 2.91.66.

MacOS X
-------
GForth 0.5.0, Copyright (C) 1995-2000 Free Software Foundation, Inc.
GForth comes with ABSOLUTELY NO WARRANTY; for details type `license'
Type `bye' to exit
2e fsqrt f. 1.4142135623731  ok
40 set-precision  ok
2e fsqrt f. 1.414213562373095101065700873732566833496  ok

Redhat Linux
------------
GForth 0.5.0, Copyright (C) 1995-2000 Free Software Foundation, Inc.
GForth comes with ABSOLUTELY NO WARRANTY; for details type `license'
Type `bye' to exit
2e fsqrt f. 1.4142135623731  ok
40 set-precision   ok
2e fsqrt f. 1.4142135623730951  ok

There's a reason to prefer the first behaviour.  Sometimes you
want an accurate decimal representation of a radix-2, 64-bit
floating-point number, treating the stored value as exact.  Since
that value is a binary fraction, its decimal expansion terminates,
so in principle it can be printed out in full.
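
Just to spell out what those 17 digits do and don't give you, here
is a quick check using the 17-digit string from the Linux output
above (assuming standard >FLOAT and F0=; the word name is just for
the sketch).  Seventeen digits are enough to get the identical
binary64 back, but they are a rounded rendering, not the stored
value written out in full:

: round-trips? ( F: r -- )
  s" 1.4142135623730951e0" >float drop   \ assumes the conversion succeeds
  f- f0= if ." same binary64" else ." different value" then cr ;

2e fsqrt round-trips?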

-- David

