On 16.05.2016 06:26, Walter Bright wrote:

>> Incidentally, I made the mistake of mentioning this thread (due to my
>> astonishment that CTFE ignores float types)

> Float types are not selected because they are less accurate,

(AFAIK, accuracy is a property of a value given some additional context. Types have precision.)

> they are selected because they are smaller and faster.

Right. Hence, the 80-bit CTFE results have to be converted to the final precision at some point in order to commence the runtime computation. This means that additional rounding happens, which was not present in the original program. The additional total roundoff error this introduces can exceed the roundoff error you would have suffered by using the lower precision in the first place, sometimes completely defeating precision-enhancing improvements to an algorithm.

This might be counter-intuitive, but this is floating point. The computation should simply be carried out at the specified precision throughout (even if part of it is evaluated at compile time).

The claim here is /not/ that lower precision throughout delivers more accurate results. The additional rounding is the problem.
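
To make the effect concrete, here is a small C sketch (not D, and not CTFE as such; it assumes an x86 target where long double is the 80-bit extended format, standing in for the wider intermediate precision, and the constant is chosen by me for illustration). Rounding a value once, directly to float, can give a different result than rounding it to a wider format first and to float afterwards:

#include <stdio.h>

int main(void)
{
    /* r = 1 + 2^-24 + 2^-53: representable in 80-bit extended
       precision, but in neither IEEE double nor float. */
    long double r = 0x1.00000100000008p0L;

    float once  = (float)r;          /* one rounding:  extended -> float           */
    float twice = (float)(double)r;  /* two roundings: extended -> double -> float */

    /* Direct rounding: r lies just above the halfway point between
       1 and 1 + 2^-23, so it rounds up to 1 + 2^-23.
       Via double: r is exactly halfway between two doubles, so
       round-to-even gives 1 + 2^-24, which then rounds back down
       to 1.0 as a float. */
    printf("rounded once : %a\n", once);   /* expected: 0x1.000002p+0 */
    printf("rounded twice: %a\n", twice);  /* expected: 0x1p+0        */
    return 0;
}

On targets where long double is just double the two results coincide; the point is only that every additional intermediate rounding step is an opportunity for the final answer to change.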


There are other reasons why I think that this kind of implementation-defined behaviour is a terribly bad idea, e.g.:

- it breaks common assumptions about code, especially how it behaves under seemingly innocuous refactorings or with a different set of compiler flags (see the sketch after this list).

- it breaks reproducibility, which is sometimes more important than being close to the infinite precision result (which you cannot guarantee with any finite floating point type anyway). (E.g. in a game, it is enough if the result seems plausible, but it should be the same for everyone. For some scientific experiments, the ideal case is to have 100% reproducibility of the computation, even if it is horribly wrong, such that other scientists can easily uncover and diagnose the problem.)
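
To illustrate the first point with a hedged C sketch (the function names and inputs are made up for the example): whether the two functions below agree depends on how much intermediate precision the implementation keeps, i.e. on the target and the flags (FLT_EVAL_METHOD, x87 vs. SSE code generation, -ffloat-store, FMA contraction):

#include <stdio.h>
#include <float.h>

/* Written as one expression: if the implementation evaluates float
   arithmetic in a wider format (FLT_EVAL_METHOD == 2, as with x87
   code generation), the product a*b may be kept at extended
   precision until the addition. */
float fused(float a, float b, float c)
{
    return a * b + c;
}

/* "Innocuous" refactoring: naming the intermediate forces it to be
   rounded to float before the addition. */
float refactored(float a, float b, float c)
{
    float t = a * b;
    return t + c;
}

int main(void)
{
    float a = 1e8f, b = 1.0f + 0x1p-23f, c = -1e8f;

    printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);
    printf("fused      : %a\n", fused(a, b, c));
    printf("refactored : %a\n", refactored(a, b, c));
    return 0;
}

Under strict single-precision evaluation both calls return 0x1p+3; with extended intermediates or a contracted FMA, fused can instead return roughly 11.92. Merely naming a temporary, or switching flags, can change the observable result.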
