On Wednesday, 3 October 2012 at 16:33:15 UTC, Simen Kjaeraas
wrote:
> On 2012-10-03, 18:12, wrote:
> They make sure you never pass null to a function that doesn't
> expect null - I'd say that's a nice advantage.
No, it is meaningless. If you have a class which is supposed to
hold a prime number and you pass it to a function, are you going
to check each time that the value is indeed prime? That would be
guaranteed to kill your program's efficiency. So you would be
happy to know that the reference is non-null, yet take it for
granted that the value is indeed prime? Does that make any sense?
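The prime analogy can be sketched in code. Java stands in for D
here, since the point is language-independent, and the Prime and
twice names are made up for illustration: a non-null guarantee on
the parameter says nothing about the deeper invariant the
consumer actually relies on.

```java
// Hypothetical sketch: Prime is trusted to hold a prime number.
final class Prime {
    final int value;
    // The producer is responsible for the invariant; nothing
    // here (or in the type system) verifies primality.
    Prime(int value) { this.value = value; }
}

class Consumer {
    // A non-null parameter type would guarantee p != null, but
    // nothing forces p.value to actually be prime. Re-validating
    // primality on every call is exactly the cost argued against
    // above, so consumers simply trust the producer.
    static int twice(Prime p) {
        return 2 * p.value;
    }
}
```

Nothing stops a caller from writing `new Prime(15)`: the non-null
guarantee is real, but shallow next to the primality invariant.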
I maintain that this non-null "advantage" does not warrant
making the language more complicated even by a tiny bit. It is
dwarfed by ordinary considerations of program correctness.
With default null references, there are two cases:
A) Either null is an expected non-value for the type (as in the
chess example), in which case checking for it is part of normal
processing;
B) or null is not a valid value, in which case there is no need
to check for it. If you get a null reference, it is a bug - like
getting a 15 for your prime number. You do not put checks like
that in your code; you test your prime-generation routine, not
its consumers. If your function gets a null reference when it
should not, some other part of your program is buggy. You do not
process bugs in your code - you remove them from it.
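The two cases can be sketched side by side; again Java stands in
for D, and the Board and Piece classes are invented purely for
illustration:

```java
class Board {
    // All squares start out null, i.e. empty.
    private final Piece[][] squares = new Piece[8][8];

    // Case A: null is a legitimate value meaning "empty square",
    // so checking for it is part of normal processing.
    boolean isEmpty(int rank, int file) {
        return squares[rank][file] == null;
    }
}

class Piece {
    final char symbol;
    Piece(char symbol) { this.symbol = symbol; }

    // Case B: callers must never pass null. A null here is a bug
    // in the caller - something to remove, not to "handle" with
    // recovery logic. An assert documents the contract without
    // paying the check in release builds.
    static char symbolOf(Piece p) {
        assert p != null : "bug in caller: piece must not be null";
        return p.symbol;
    }
}
```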
However, with D, dereferencing an uninitialized reference is
well defined - null is not random data: you get a well-defined
exception and you know you are dealing with uninitialized data.
This is easy to fix: you just go up the stack and check where
the reference comes from - probably much easier than finding out
why your prime numbers turn out to be divisible by 3. How about
introducing some syntax that will rule this out?
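A small sketch of that "well defined" behaviour, in Java, whose
semantics here parallel what is claimed above for D (the class
and method names are made up): dereferencing null does not
produce garbage, it deterministically raises a specific
exception whose stack trace points at the faulty dereference.

```java
class NullDeref {
    static String demo() {
        Object uninitialized = null;  // stand-in for a reference
                                      // that was never assigned
        try {
            // Not undefined behaviour: this deterministically
            // throws NullPointerException.
            return uninitialized.toString();
        } catch (NullPointerException e) {
            // e's stack trace names this exact line, so "going
            // up the stack" to find where the null came from
            // starts from a precise location - unlike a silently
            // wrong prime, which corrupts results with no trace.
            return "caught: " + e.getClass().getSimpleName();
        }
    }
}
```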
To quote (loosely) Mr. Walter Bright from another discussion: how
many current bugs in dmd are related to default null references?