On 2/1/14, 7:29 PM, Jonathan M Davis wrote:
> On Saturday, February 01, 2014 12:09:10 Andrei Alexandrescu wrote:
>> On 2/1/14, 2:14 AM, Jonathan M Davis wrote:
>>> On Saturday, February 01, 2014 04:01:50 deadalnix wrote:
>>>> Dereferencing it is unsafe unless you put in a runtime check.

>>> How is it unsafe? It will segfault and kill your program, not corrupt
>>> memory. It can't even read any memory. It's a bug to dereference a null
>>> pointer or reference, but it's not unsafe, because it can't access _any_
>>> memory, let alone memory that it's not supposed to be accessing, which is
>>> precisely what @safe is all about.

>> This has been discussed to death a number of times. A field access
>> obj.field will use addressing with a constant offset. If that offset is
>> larger than the lowest address allowed to the application, unsafety may
>> occur.
>>
>> The amount of low-address memory protected is OS-dependent. 4KB can
>> virtually always be counted on. For fields placed beyond that limit, a
>> runtime test must be inserted. There are few enough objects larger than
>> 4KB out there to make this practically a non-issue. But the checks must
>> be there.

> Hmmm. I forgot about that. So, in essence, dereferencing null pointers is
> almost always perfectly safe but in rare, corner cases can be unsafe. At
> that point, we could either always insert runtime checks for pointers to
> such large types, or we could mark all pointers to such types @system
> (that's not even vaguely acceptable in the general case, but it might be
> acceptable in a rare corner case like this). Or we could just disallow
> such types entirely, though it wouldn't surprise me if someone screamed
> over that. Runtime checks are probably the best solution, though with any
> of those solutions, I'd be a bit worried about there being bugs in the
> implementation, since we then end up with a rare, special case which is
> not well tested in real environments.

>>>> Which is stupid for something that can be verified at compile time.

>>> In the general case, you can only catch it at compile time if you
>>> disallow it completely, which is unnecessarily restrictive. Sure, some
>>> basic cases can be caught, but unless the code where the
>>> pointer/reference is defined is right next to the code where it's
>>> dereferenced, there's no way for the compiler to have any clue whether
>>> it's null or not. And yes, there's certainly code where it would make
>>> sense to use non-nullable references or pointers, because there's no
>>> need for them to be nullable, and having them be non-nullable avoids
>>> any risk of forgetting to initialize them, but that doesn't mean that
>>> nullable pointers and references aren't useful or that you can catch
>>> all instances of a null pointer or reference being dereferenced at
>>> compile time.
>> The Java community has a good experience with @Nullable:
>> http://stackoverflow.com/questions/14076296/nullable-annotation-usage

> Sure, and there are other things that the compiler can do to catch null
> dereferences (e.g. look at the first dereferencing of the pointer in the
> function that it's declared in and make sure that it was initialized or
> assigned a non-null value first), but the only way to catch all null
> dereferences at compile time would be to always know at compile time
> whether the pointer was null at the point that it's dereferenced, and
> that can't be done.

What are you talking about? That has been done.

> AFAIK, the only solution that guarantees that it catches all dereferences
> of null at compile time is a solution that disallows a pointer/reference
> from ever being null in the first place.

Have you read through that link?


Andrei
