On Thursday, July 27, 2017 11:03:02 Steven Schveighoffer via Digitalmars-d wrote:
> A possibility:
>
> "@safe D does not support platforms or processes where dereferencing a
> null pointer does not crash the program. In such situations,
> dereferencing null is not defined, and @safe code will not prevent this
> from happening."
>
> In terms of not marking C/C++ code safe, I am not convinced we need to
> go that far, but it's not as horrible a prospect as having to unmark D
> @safe code that might dereference null.

I see no problem whatsoever with requiring that the platform segfault when
you dereference null. Anything even vaguely modern will do that. Adding
extra null checks is therefore redundant and complicates the compiler for
no gain whatsoever.

However, one issue that has been brought up from time to time and AFAIK has
never really been addressed is that apparently, if an object is large
enough, accessing one of its members through a null pointer won't
necessarily segfault (IIRC, the threshold was the object being larger than
a page). The reason is that the access actually reads from null plus the
member's offset, so once that offset exceeds the unmapped region around
address zero, the read can land in valid memory. So, as I understand it,
ludicrously large objects _could_ result in @safety problems with null
pointers. That wouldn't happen in normal code, but it can happen. And if we
want @safe to make the guarantees that it claims, we really should either
disallow such objects or insert null checks for them. For smaller objects
though, what's the point? The checks buy us nothing if the hardware is
already doing the work, and the only hardware that wouldn't do it should be
too old to matter at this point.
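
To make that concrete, here's a minimal sketch in D (the type, the field
names, and the assumed 4 KiB page size are mine, just for illustration):

// Hypothetical example: the field layout and the 4 KiB page size are
// assumptions for illustration, not anything from the spec.
class Huge
{
    ubyte[8192] padding; // pushes `flag` well past the first page
    int flag;
}

void main() @safe
{
    Huge h; // class references default to null
    // This load reads from address 0 plus the offset of `flag`,
    // roughly 8 KiB. If the OS only keeps the first page unmapped,
    // the read can hit mapped memory and silently succeed instead
    // of faulting.
    int x = h.flag;
}

Whether this actually faults depends on how much low memory the OS leaves
unmapped (Linux's vm.mmap_min_addr typically protects the first 64 KiB, so
this particular offset would still fault there); the point is that @safe's
guarantee rests entirely on the protected region covering every possible
member offset.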

So, I say that we need to deal with the problem of ludicrously large
objects, but beyond that, we should just change the spec, because inserting
the checks buys us nothing.

- Jonathan M Davis
