On 2005-05-29 13:22:55 +0300, Michael Veksler wrote:
> Two examples come to mind:
> 1. Non-conformance of x86 to the FP standard due to
>    its extra precision.

Wrong. The IEEE-754 standard allows extended precision.

>    This includes different results between -O2 and -O0 even with
>    -ffloat-store.

Getting different results is not in itself the problem (and indeed,
some of these bug reports are invalid, but only some of them).

>    Several PRs about this issue were marked invalid in
>    the past.
>    This is a bug in two places:
>     i.  the x86 FPU, which implements the wrong precision.

As I said above, this is not a wrong precision; extended precision
is allowed by the IEEE-754 standard. So this is definitely not a
hardware bug. Perhaps just a bad design.

>     ii. glibc, whose headers claim to set the default
>        precision to 64 bits, when in practice it
>        sets it to 80 bits.

Where? Has there been a bug report about that?

If you are thinking of FLT_EVAL_METHOD, it is set to 2. That may be
wrong (as said later in this thread), but it certainly does not mean
that the default precision is IEEE-754 double precision.

-- 
Vincent Lefèvre <[EMAIL PROTECTED]> - Web: <http://www.vinc17.org/>
100% accessible validated (X)HTML - Blog: <http://www.vinc17.org/blog/>
Work: CR INRIA - computer arithmetic / SPACES project at LORIA
