On Sunday, 29 June 2014 at 19:22:16 UTC, Walter Bright wrote:
> On 6/29/2014 11:21 AM, Russel Winder via Digitalmars-d wrote:
>> Because when reading the code you haven't got a f####### clue how accurate the floating point number is until you ask and answer the question "and which processor are you running this code on".
>
> That is not true with D. D specifies that float and double are IEEE 754 types which have specified size and behavior. D's real type is the largest the underlying hardware will support.
>
> D also specifies 'int' is 32 bits, 'long' is 64, and 'byte' is 8, 'short' is 16.

I'm afraid that it is exactly true if you use `real`.
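To make the hardware dependence concrete, here is a minimal sketch (just the standard floating-point properties, nothing from the thread) that reports what `real` actually means on the machine it was compiled for:

```d
import std.stdio : writeln;

void main()
{
    // What these print depends entirely on the build target:
    // real.mant_dig is 64 on x86 with x87 extended precision,
    // 53 on targets where real is just an IEEE double, and 113
    // on targets that map real to IEEE quadruple precision.
    writeln("real.mant_dig = ", real.mant_dig);
    writeln("real.epsilon  = ", real.epsilon);
    writeln("real.sizeof   = ", real.sizeof);
}
```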

What important use-case is there for using `real` that shouldn't also be accompanied by a `static assert(real.sizeof >= 10);` or similar, for correctness reasons?
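For concreteness, such a guard might look like this (the assert messages and the `mant_dig` variant are my own additions):

```d
// Refuse to compile on targets where real is not at least an
// x87-style 80-bit extended type, as suggested above.
static assert(real.sizeof >= 10,
    "this code assumes at least 80-bit extended precision for real");

// A variant keyed on precision rather than storage size:
// real.mant_dig is 64 for x87 extended and 53 when real == double.
static assert(real.mant_dig >= 64,
    "this code assumes at least a 64-bit significand for real");
```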

Assuming there isn't one, then what is the point of having a type with hardware-dependent precision? Isn't it just a useless abstraction over the hardware that obscures useful intent?

mixin(`alias real` ~ (real.sizeof*8).stringof ~ ` = real;`);

is more useful to me.
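For what it's worth, a self-contained sketch of that idea (the alias names, `std.conv.to`, and the `mant_dig` variant are my own choices). Two caveats: `.stringof` on an expression may produce a suffixed literal such as `128LU` rather than a bare number, so an explicit string conversion is more predictable; and `real.sizeof` counts storage bytes, which can include alignment padding (16 bytes on x86-64 Linux even though only 80 bits are significant), so a precision-based name may be closer to the intent.

```d
import std.conv : to;

// Name records the storage width of real on this target,
// e.g. real128 on x86-64 Linux (padded) or real64 where
// real is just a double.
mixin(`alias real` ~ (real.sizeof * 8).to!string ~ ` = real;`);

// Name records the significand width instead: real_p64 for
// x87 extended precision, real_p53 when real == double.
mixin(`alias real_p` ~ real.mant_dig.to!string ~ ` = real;`);

void main()
{
    import std.stdio : writeln;
    writeln("storage bits:  ", real.sizeof * 8);
    writeln("mantissa bits: ", real.mant_dig);
}
```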
