On Saturday, 6 October 2012 at 04:10:28 UTC, Chad J wrote:
On 10/05/2012 08:31 AM, Regan Heath wrote:
On Fri, 05 Oct 2012 05:19:13 +0100, Alex Burton <alexibureplacewithz...@gmail.com> wrote:
On Saturday, 15 September 2012 at 17:51:39 UTC, Jonathan M Davis wrote:
On Saturday, September 15, 2012 19:35:44 Alex Rønne Petersen wrote:
Out of curiosity: Why? How often does your code actually accept null as a valid state of a class reference?
I have no idea. I know that it's a non-negligible amount of the time, though it's certainly true that they normally have values. But null is how you indicate that a reference has no value. The same goes for arrays and pointers.
Sometimes it's useful to have null and sometimes it's useful to know that a value can't be null. I confess though that I find it very surprising how much some people push for non-nullable references, since I've never really found null to be a problem. Sure, once in a while, you get a null pointer/reference and something blows up, but that's very rare in my experience, so I can't help but think that people who hit issues with null pointers on a regular basis are doing something wrong.
- Jonathan M Davis
In my experience this sort of attitude is not workable in projects with more than one developer. Almost all my work is on projects with multiple developers in C/C++, making extensive use of null. It all works OK if everyone knows the 'rules' about when to check for null and when not to.
As every good C/C++ developer does. The rule is simple: always check for nulls on input passed to "public" functions/methods. What you do with internal protected and private functions and methods is up to you (I use assert).
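
For illustration only, that split might look something like the following in D (Widget, render and drawImpl are made-up names, not from any real codebase):

import std.exception : enforce;

class Widget {}

// Public entry point: validate input coming from outside code.
void render(Widget w)
{
    enforce(w !is null, "render: widget must not be null");
    drawImpl(w);
}

// Internal helper: callers are trusted, so an assert documents
// (and in debug builds checks) the assumption.
private void drawImpl(Widget w)
{
    assert(w !is null);
    // ... actual work ...
}

The enforce stays in release builds and rejects bad input from outside callers, while the assert in the internal helper is a debug-build check, which matches the public/internal split described above.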
Telling team members who find bugs caused by your null references that they are doing it wrong and should have checked for null is a poor substitute for having the language define the rules.
Having language-defined rules is a nice added /bonus/; it doesn't let you off the hook when it comes to being "null safe" in your code.
A defensive attitude of checking for null everywhere, like I have seen in many C++ projects, makes the code ugly.
That's a matter of opinion. I like to see null checks at the top of a function or method; it makes it far more likely to be safe, and it means I can ignore the possibility of null from then on, making the code much cleaner.
R
I find this to be very suboptimal at the least.
This prevents null values from traveling "up" the stack, but still allows them to move "down" (as return values) and allows them to survive multiple unrelated function calls.

It catches null values once they've already ended up in a place they shouldn't be. Too late.
Nulls can also be placed into variables within structs or classes that then get passed around. Checking for those can require some complex traversal: impractical for casual one-off checks at the start of a function in some cases.
void main()
{
    void* x = a(b());
    c();
    while (goobledegook)
    {
        x = p();
        d(x);
    }
    e(x); /+ Crash! x is null. +/
}
Where did x's null value come from? Not a(). Not p(); the while loop happened never to execute. To say "b" would be closer, but still imprecise. Actually it was created in the q() function that was called by u() that was called by b(), which then created a class that held the null value and was passed to a(), which then dereferenced the class and returned the value stored in the class that happened to be null. Nulls create very non-local bugs, and that's why they frustrate me to no end sometimes.
What I really want to know is where errant null values come FROM. I also want to know this /at compile time/, because debugging run-time errors is time consuming and debugging compile-time errors is not.
The above example could yield the unchecked null assignment at compile time if all of the types involved were typed as non-nullable, except for the very bare minimum that needs to be nullable. If something is explicitly nullable, then its enclosing function/class is responsible for handling null conditions before passing it into non-nullable space. If a function/class with nullable state tries to pass a null value into non-nullable space, then it is a bug. This contains the non-locality of null values as much as is reasonably possible.
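
A library-level approximation of that idea, purely as a sketch (NonNull, Data, consume and producer are invented names, and the checks here are still run-time asserts rather than the compile-time errors a real language feature would give):

struct NonNull(T) if (is(T == class))
{
    private T _value;

    @disable this();               // no default (null) construction

    this(T value)
    {
        assert(value !is null, "NonNull initialized with null");
        _value = value;
    }

    @property inout(T) get() inout { return _value; }
    alias get this;                // implicit conversion back to T
}

class Data {}

void consume(NonNull!Data d)       // cannot be handed a raw, unchecked null
{
    // no null check needed in here
}

void producer(Data maybeNull)
{
    if (maybeNull !is null)
        consume(NonNull!Data(maybeNull));   // the check is localized here
    // else: handle the null case where it originates
}

The point is that the null test is forced to happen exactly once, at the boundary where nullable meets non-nullable, instead of being scattered through every callee.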
Additionally, it might be nice to have a runtime nullable type that uses its object file's debugging information to remember which file/function/line its null value originated from (if it happens to be null at all). This would make for some even better diagnostics when code that HAS to deal with null values eventually breaks and needs to dump a stack trace on some poor unsuspecting sap (that isn't me), or ideally sends an email to the responsible staff (which is me).
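
Pulling that information out of the object file's debug data would need real tooling support, but a rough approximation is already possible with default __FILE__/__LINE__ arguments, which D evaluates at the call site. A sketch (TrackedRef is an invented name):

import std.string : format;

struct TrackedRef(T) if (is(T == class))
{
    private T _value;
    private string _nullOrigin = "never assigned";

    // Record where a null was assigned, using the caller's file/line.
    void opAssign(T value, string file = __FILE__, size_t line = __LINE__)
    {
        _value = value;
        if (value is null)
            _nullOrigin = format("%s:%s", file, line);
    }

    // Report both the dereference site and the origin of the null.
    T get(string file = __FILE__, size_t line = __LINE__)
    {
        assert(_value !is null,
            format("null dereference at %s:%s; the null came from %s",
                file, line, _nullOrigin));
        return _value;
    }
}

When get() finally trips on a null, the assert message names both the dereference site and the line where the null was originally assigned.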
returned the value stored in the class that happened to be null.
Happened? "I was driving carefully and then it happened that I drove into the tree, officer." Every function should define its interface, its contract with the outside world. If a function a() returns a pointer, it is part of the contract whether that pointer can be null. Two possibilities:
A) The contract says it can be null. Then it is your duty to check for null. Period. Learn to read the signs before you start driving. You assigned the value without checking; it is your fault, not a()'s, not the language's.
B) The description of a() says the return value cannot be null. Then a() should check its return value before returning, or otherwise make sure it is not null. If it returns null, it is a bug. One of the infinite number of possible bugs that can happen.
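
For what it's worth, case B can already be written down in today's D with an out contract, no new machinery required; a bare-bones sketch (Foo and a() are placeholders):

class Foo {}

// a() promises a non-null result; a broken promise fails inside a(),
// at the source of the bug, rather than at some later dereference.
Foo a()
out (result)
{
    assert(result !is null, "a() broke its contract: returned null");
}
body
{
    return new Foo();   // any path that returned null would trip the contract
}

The contract is compiled out in release builds, so it documents the specification and, in debug builds, makes a() fail at the source of the bug rather than somewhere far away.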
Again, it is not a problem of the language. The divergence of specification and code is a human problem that cannot be solved formally. Insistence on formal tools is a misunderstanding that leads to design bloat and eventually failure (Ada).
D competes directly with C++, as Ada did before it. Ada drowned under the weight of its "safety", and so will D if it goes the same route. The only things needed now are mature compilers and good systems API integration. If anything, I would rather consider removing features from the language than adding them.