Jerome BENOIT wrote:
> Why would a compiler set `real' to 0.0 rather than 1.0, Pi, ...?

Because 0.0 is the "lowest" (smallest, starting point, etc.) numerical value. Pi is a corner case and obviously has to be set explicitly.

If you want to take this further, chars could even be initialized to spaces, newlines, or something similar. Pointers/references need to default to null because they absolutely must hold an explicit value before use. Value types don't share this limitation.
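
For reference, a minimal sketch of the defaults D actually assigns today (the values in the comments assume current DMD behaviour):

    import std.stdio;

    void main()
    {
        int   i;    // 0
        char  c;    // char.init == 0xFF, a deliberately invalid code unit
        real  r;    // real.nan
        int*  p;    // null
        writefln("%s %s %s %s", i, cast(int) c, r, p);
    }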

> The most convenient default certainly depends on the underlying mathematics,
> and a compiler cannot (yet) understand the encoded mathematics.
> NaN is certainly the right choice: whatever mathematics is involved, it will blow up sooner or later. And, from a practical point of view, blowing up is easy to trace.
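
For concreteness, here is the blow-up described above as a minimal sketch, assuming D's current NaN default for floating-point types:

    import std.stdio;

    void main()
    {
        real factor;                      // never assigned; defaults to real.nan
        real result = 2.0 * factor + 1.0; // NaN propagates through every operation
        writeln(result);                  // prints "nan", so the missing init shows up at the output
    }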

Zero is just as easy for the runtime/compiler to default to, and bugs can be introduced anywhere in the code, not just at definition. We already have good ways of catching these bugs in D with unittests.
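
A minimal sketch of the kind of unittest meant here (the scale function and its factor variable are made up for illustration); the assertion fails whether the default is 0.0 or NaN, so the missing initialization is caught either way:

    real scale(real x)
    {
        real factor;        // bug: never assigned before use
        return x * factor;
    }

    unittest
    {
        // Fails if factor defaults to 0.0 (returns 0) and also if it
        // defaults to NaN (returns nan), catching the bug in both cases.
        assert(scale(2.0) == 4.0);
    }

Compile with -unittest (e.g. dmd -unittest -main) to run the block.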
