On 14/04/12 09:45, F i L wrote:
Jonathan M Davis wrote:
No. You always have a bug if you don't initialize a variable to the value that
it's supposed to be. It doesn't matter whether it's 0, NaN, 527.1209823, or
whatever. All having a default value that you're more likely to use means is
that you're less likely to have to explicitly initialize the variable. It has
to be initialized to the correct value regardless.

Yes, I'm in favor of default values. That's not my argument here. I'm saying it 
makes more sense to have the default values be _usable_ (for convenience) 
rather than designed to catch (**cause**) bugs.


Why would a compiler set `real' to 0.0 rather than 1.0, Pi, ...?
The more convenient default certainly depends on the underlying mathematics,
and a compiler cannot (yet) understand the mathematics being encoded.
NaN is clearly the right choice: whatever the mathematics involved,
it will blow up sooner or later. And, from a practical point of view, blowing up
is easy to trace.
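
A minimal sketch of that point, assuming D's default initialization of
floating-point variables to NaN (names here are just illustrative):

import std.stdio;
import std.math : isNaN;

void main()
{
    double scale;              // forgotten initialization -> double.nan
    double total = scale * 42; // NaN propagates through the arithmetic

    writeln(total);            // prints "nan" instead of a plausible-looking 0
    assert(isNaN(total));      // easy to detect and trace back to `scale'
}

With a 0.0 default, `total' would silently come out as 0 and the missing
initialization could go unnoticed for a long time.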

