On Monday, 9 May 2016 at 09:10:19 UTC, Walter Bright wrote:
Don Clugston pointed out in his DConf 2016 talk that:

    float f = 1.30;
    assert(f == 1.30);

will always be false since 1.30 is not exactly representable as a float. However,

    float f = 1.30;
    assert(f == cast(float)1.30);

will be true.

So, should the compiler emit a warning for the former case?

What is the actual reason for the mismatch?

Does f lose precision as a float, while the 1.30 literal stays at the more precise double (or real) type? If so, comparing a float against a double might be worth a warning in its own right.

Or does the compiler encode the two 1.30 literals differently? If so, why?
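
To partly answer my own question: I'd guess both 1.30 literals are the same double constant, and that the mismatch comes from f being rounded to float on assignment and then promoted back to double for the comparison, so two different roundings of 1.30 get compared. A quick sketch to check this (assuming IEEE 754 single/double precision; the variable name and printed digit counts are my own):

    import std.stdio;

    void main()
    {
        float f = 1.30;  // the double literal is rounded to float here

        // In f == 1.30, f is promoted back to double, so two different
        // roundings of the same literal end up being compared.
        writefln("%.17f", 1.30);           // 1.30 rounded to double
        writefln("%.17f", cast(double)f);  // 1.30 rounded to float, then widened

        assert(f != 1.30);             // the two roundings differ
        assert(f == cast(float)1.30);  // same float rounding on both sides
    }

On a typical IEEE 754 target this should print something like 1.30000000000000004 and 1.29999995231628418, which would explain why the first assert in the quoted code can never pass.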
