On 2012-10-03, 18:12, wrote:
As my comments indicated: the presence of a value does not guarantee a valid value by itself. The C++ declaration int n; introduces a value; good luck using it.
Which is why non-nullable references must not allow the programmer to declare them without also assigning a valid value (hence no default value [note that this is completely different from a 'random default value', which is what you describe above]). This is easily checkable in a constructor.
In short, having null references is useful (a value outside of the type cannot be introduced easily unless the language lends a hand; see eof() in C++ character_traits),
Good gripes, I thought we'd been through this. If you need null, use it already! Nobody is trying to take it away; we're suggesting that most uses of pointers/references should never be null, and that such a constraint can and should be modeled in the type system. It's also worth pointing out that others have invented (non-null) sentinel values even for nullable types.
while forcing non-null references hardly offers any significant advantage.
They make sure you never pass null to a function that doesn't expect null - I'd say that's a nice advantage. As you may well be aware, the reals are a superset of the longs, just as nullable references are a superset of non-nullable references. If the argument were that (performance aside) you should simply use real wherever a long is needed, would you consider that a good idea? After all, it's just a matter of making sure you never store a NaN or other non-integer in it. The example is admittedly more extreme, but the general idea is the same. -- Simen