> This sounds like a genuine bug in gcc, then. As far as I can see,
> Andrew is right -- if the ARM hardware requires a legitimate object
> to be placed at address zero, then a standard C compiler has to use
> some other value for the null pointer.

I think changing that would cause more trouble than gain. The
processors where 0 is a legitimate address for a pointer dereference
are mostly the embedded cores without an MMU (e.g. ARM7TDMI based
controllers, m68k family controllers, AVR, 68HC1x and the like). Some of
these actually utilise their entire address space, such as the
68HC11 or the AVR, so there is no address whatsoever that is not a valid
one. Therefore, you can not define a standard-compliant NULL pointer
unless you make pointers wider than 16 bits and make long the smallest
integer type that can store a pointer. In that case, it would be easier
to simply drop those processor families as targets, as the generated
code would be unusable in practice.
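To make that concrete, here is a minimal sketch of the kind of code
that is perfectly normal on such parts (the register name and the fact
that it sits at address 0 are hypothetical):

    #include <stdint.h>

    /* Hypothetical hardware register that happens to live at address 0. */
    #define HW_REG0 (*(volatile uint8_t *)0x0000)

    uint8_t read_first_location(void)
    {
        return HW_REG0;   /* a perfectly meaningful access on such hardware */
    }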

In fact, on a naked CPU core in an unknown hardware configuration,
without an MMU you can not define a NULL pointer that is guaranteed
never to point to a valid datum or function, simply because as far
as the processor is concerned, every address in its entire address
space is valid. Since the compiler can not possibly know which addresses
are used by the surrounding hardware and which are not, it can not
guarantee what the standard demands.

That, I think, is a problem with the standard and not with the
compiler. On such targets 0 for NULL is just as good a choice as any
other. Actually, it is better, because it is the choice that makes
the conversion between a pointer and an integer a no-op, saving both
code space and execution time.
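As a rough illustration (not tied to any particular target), with an
all-zero-bits NULL both of the functions below compile to a plain
compare or register move; with a nonzero NULL representation the
compiler would have to insert an adjustment on every such conversion:

    #include <stdint.h>

    int is_null(const void *p)
    {
        return p == 0;        /* a single compare when NULL is all zero bits */
    }

    uintptr_t as_integer(const void *p)
    {
        return (uintptr_t)p;  /* a plain move, no adjustment needed */
    }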

In such a target environment the user has to learn (as I had to) to ask
the compiler not to strictly conform to the standard and not to infer
from a dereference operation that the pointer is not NULL.
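With gcc that typically means building with something like
-fno-delete-null-pointer-checks. A sketch of the kind of code that
needs it:

    /* Build with e.g.:  gcc -O2 -fno-delete-null-pointer-checks ...
       so the dereference below is not taken as proof that p is non-NULL
       and the later test is not optimised away. */
    #include <stdint.h>

    uint8_t probe(volatile uint8_t *p)
    {
        uint8_t v = *p;   /* may legitimately read address 0 */
        if (p == 0)       /* kept only if null-pointer-check deletion is off */
            return 0;
        return v;
    }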

Zoltan
