On 2011-Mar-15 19:55:33 +0000, David Kirkby <david.kir...@onetel.net> wrote:
>The x86 CPU has an 80-bit extended-precision format, which is used for
>long double on some systems.

This is part of the original x87 FPU.  The i386 ABI uses this by
default (but the x86_64 ABI uses SSE2 by default, so long doubles need
special handling).  The x87 suffers from the additional issue that, by
default, FP registers have 80-bit precision even when performing double
operations.  This means the results can depend on if/when temporaries
were spilled from the x87 register stack into memory.

> But not all systems have this (SPARC does not),

The SPARC architecture defines a 128-bit quad-precision long double
(113-bit significand).  I don't believe any implementations include
hardware support for this.

>I may be wrong, but I think that gcc does not handle it any different
>to double - which is permitted by the C standard. Although the "long
>double" is defined, there's no need for "long double" to use any more
>bits than "double".

This depends on the gcc configuration - I would expect that gcc would
provide a long double which is the maximum precision available on the
architecture.  Note that it's up to libc and libm to provide relevant
functions on long doubles.

>> but the computer function does
>> get to the value by a lengthy computation involving lots of floating point
>> arithmetic... so the issue isn't that clear!
>
>True. I don't think there's a lot we can do

Once you get beyond trivial operations, you are at the mercy of the
libm implementation - and writing functions that have "small" (ideally
< 1 ULP) errors is non-trivial and often neglected.  long double tends
to be more problematic than double in this case because there's no
higher precision in which to perform intermediate operations.

Overall, I believe the abs(actual-expected)<tiny_number approach is
the only practical way to handle doctests.  The expected numeric
result is still available, just not on a line by itself.

-- 
Peter Jeremy
