On 5/15/2016 2:36 PM, Ola Fosheim Grøstad wrote:
On Sunday, 15 May 2016 at 21:01:14 UTC, Walter Bright wrote:
Err... these kinds of problems apply only to D.

Nope. They occur with every floating point implementation in every programming
language. FP math does not adhere to associative identities.

No. ONLY D gives different results for the same pure function call because you
bind the result to a "const float" rather than a "float".

It's a fact that with floating point arithmetic, (x+y)+z is not necessarily equal to x+(y+z), in every programming language, whether or not the result is bound to a const float or a float.


Yes. Algorithms can break because of it.

So far, nobody has posted a legitimate one (i.e. not contrived).


Ironically, the identity is more likely to hold with D's extended precision
for intermediate values than with other languages.
No, it is not more likely to hold with D's hazard game. I don't know of a single
language that doesn't heed a request to truncate/round the mantissa if it
provides the means to do it.

Standard C provides no such hooks for constant folding at compile time. Neither does Standard C++. In fact I know of no language that does. Perhaps you can provide a link?


I care about algorithms working the way I designed them to work and what I have
tested them for. If I request rounding to a 24 bit mantissa then I _expect_ the
rounding to take place.

Example, please, of how you 'request' rounding/truncation.


And yes, it can break algorithms if you don't.

Example, please.

----

Something you should know from the C++ Standard:

"The values of the floating operands and the results of floating expressions may be represented in greater precision and range than that required by the type; the types are not changed thereby" -- 5.0.11

D is clearly making FP arithmetic *more* predictable, not less. Since you care about FP results, I seriously suggest studying http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html and what the C++ Standard actually says.
