On 29-mar-10, at 10:29, Don wrote:

Walter Bright wrote:
Don wrote:
(1) Converting a floating point literal into a double literal is usually not lossless. 0.5f, 0.5, and 0.5L are all exactly the same number, since they are exactly representable.
But 0.1 is not the same as 0.1L.
It depends. The D compiler internally stores all floating point constants, regardless of type, in full 80 bit precision. Constant folding and CTFE are both carried out in 80 bit precision, regardless of type. The only time a constant is actually truncated to one of the shorter formats is when it is written out to the object file.

The central idea is that more precision == better. If your floating point algorithm breaks when precision increases, then the algorithm is a bad one. The only time I've seen code that relied on roundoff error is in test suites that specifically tested for it.

There are some oddities.

  //enum A = 1.0e400; //  Error: number is not representable
  enum B = 1.0e200 * 1e200; // yes it is!
  enum C = 1.0e400L;
  static assert(B==C);

So there's a bit of schizophrenia about whether the 'L' suffix changes which values are representable, or whether it is just a type marker.

I think we should tighten things up a little bit, but I don't think it's a big deal.

Good to have the numerics expert look into this.

Yes, I think the situation is really quite ok. For example (now having some doubts) I just checked, to be sure the correct thing would happen as I thought:

T mult1(T)(T x){
    return 1.000000000000000001*x;
}

void main(){
    assert(mult1(1.0f)==1.0f,"float");
    assert(mult1(1.0)==1.0,"double");
    assert(mult1(1.0L)==1.0L*1.000000000000000001L,"real");
}


:)
