Hi Xavier,

In this bright age of IEEE-754 compatible CPUs,
it is certainly possible to achieve reproducible FP.
I worked for a company whose software produced bit-identical
results on various CPUs (x86, SPARC, Itanium) and OSes (Linux, Solaris,
Windows).

The trick is to closely RTFM for your CPU and compiler, in particular all
those nice appendices related to "FPU control words" and "FP consistency
models".

For example, if the author of that article had done so, he might have
learned about the "precision control" field of the x87 control register,
which you can set so that all intermediate operations are always rounded
to 64-bit doubles. That eliminates the double roundings you otherwise get
from double-extended precision.

(Incidentally, the x87-internal double-extended precision is another fine
example where being "more precise on occasion" usually does not help.)
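What double rounding actually does can be sketched in pure Python. Since
Python floats are binary64 and we can't easily poke the x87 from here, this
scales the situation down one level: it rounds binary64 results to binary32
via struct, the analogue of rounding extended-precision results to double.
(The helper to_f32 is my own illustration, not anything from the article.)

```python
import struct

def to_f32(x):
    """Round a binary64 float to binary32 and back: one extra rounding."""
    return struct.unpack('f', struct.pack('f', x))[0]

a = 1.0
b = 2**-24 + 2**-53   # exactly representable in binary64

# The exact sum a + b lies just above the binary32 tie point 1 + 2**-24,
# so a single correct rounding to binary32 would give 1 + 2**-23.
# Rounding to binary64 first drops the 2**-53 "sticky" bit (exact tie,
# round-to-even), and the second rounding then lands on the binary32 tie
# and rounds down to 1.0 instead.
print(to_f32(a + b))        # 1.0, not 1 + 2**-23
print(to_f32(a + b) == 1.0) # True: double rounding gave the wrong answer
```

The same mechanism, one level up (double-extended intermediate, double
result), is exactly what the precision-control field switches off.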

Frankly, I'm not very impressed with that article.
I could go into detail, but that's off-topic, and I will try to fight
the "somebody is *wrong* on the Internet" urge.
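On the FMA point from the quoted message below: the reason FMA changes
results is that it performs a*b + c with a single final rounding, where
separate multiply-then-add rounds twice. Python 3.13 exposes this as
math.fma; on older versions you can emulate the single rounding with exact
rationals, as in this sketch (fma_emulated is my own illustrative helper):

```python
from fractions import Fraction

def fma_emulated(a, b, c):
    """a*b + c with one correct final rounding, like an FMA instruction
    (emulated here via exact rational arithmetic)."""
    return float(Fraction(a) * Fraction(b) + Fraction(c))

a = 1.0 + 2**-30
# Two roundings: a*a is rounded first, so the subtraction cancels to zero.
print(a * a - a * a)              # 0.0
# One rounding: the fused form recovers the rounding error of a*a.
print(fma_emulated(-a, a, a * a)) # -2**-60, not 0.0
```

That nonzero remainder is precisely why code compiled with and without FMA
contraction can produce different bits, hence the reproducibility concern.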

Stephan

2017-01-17 16:04 GMT+01:00 Xavier Combelle <xavier.combe...@gmail.com>:

>
> Generally speaking, there are two reasons why people may *not* want an FMA
> operation.
> 1. They need their results to be reproducible across compilers/platforms.
> (the most common reason)
>
> The reproducibility of floating point calculation is very hard to achieve.
> A good survey of the problem is https://randomascii.wordpress.com/2013/07/16/floating-point-determinism/
> It mentions the FMA problem, but that is only part of a bigger picture.
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas@python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
