There are a couple of places in gcc where weird-sized pointers are an
issue. While you can use a partial-integer mode for pointers, the
pointer *math* is still done in standard C types, which usually don't
match the pointers' modes, often resulting in suboptimal code.
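For example, on a target with 20-bit pointers (MSP430's large memory
model, say), a plain pointer subtraction already shows the mismatch.
This fragment is purely illustrative:

    #include <stddef.h>

    /* The subtraction below is performed in ptrdiff_t.  If ptrdiff_t
       is a standard 16- or 32-bit type, the 20-bit pointer values
       must be truncated or widened just to do the math.  */
    ptrdiff_t span (char *hi, char *lo)
    {
      return hi - lo;
    }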
My proposal is to allow the target to define its own types for
pointers, size_t, and ptrdiff_t to use, so that gcc can adapt to
weird pointer sizes instead of the target having to fall back to
power-of-two pointer math.
This means the target would somehow have to register its new types
(int20_t and uint20_t in the MSP430 case, for example), as well as
make those types known to the rest of gcc. There are some problems
with the naive approach, though, and the changes are somewhat
pervasive.
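To make the registration step concrete, here's a rough sketch of what
an MSP430 hook might do. build_nonstandard_integer_type() and
lang_hooks.types.register_builtin_type() exist in gcc today; the hook
itself, and then using those nodes for SIZE_TYPE/PTRDIFF_TYPE, are
the new parts:

    /* Hypothetical target hook, not existing gcc code.  */
    static void
    msp430_register_pointer_types (void)
    {
      tree int20  = build_nonstandard_integer_type (20, /*unsignedp=*/0);
      tree uint20 = build_nonstandard_integer_type (20, /*unsignedp=*/1);

      lang_hooks.types.register_builtin_type (int20,  "int20_t");
      lang_hooks.types.register_builtin_type (uint20, "uint20_t");
    }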
So my question is: would this approach be acceptable? Most of the
places where new code would be added have gcc_unreachable() at the
moment anyway.
Specific issues follow...
SIZE_TYPE is used...
tree.c: build_common_tree_nodes()
compares SIZE_TYPE against fixed strings; if nothing
matches, it gcc_unreachable()'s - instead, it should
look the type up in the language core (see the sketch
after this list).
c-family/c-common.c
just makes a string macro out of it; it's up to the
target to provide a legitimate value for it.
lto/lto-lang.c
compares against fixed strings to define THREE related
types, including intmax_type_node and
uintmax_type_node. IMHO it should not be using
pointer sizes to determine integer sizes.
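For reference, the strcmp chain in build_common_tree_nodes() looks
roughly like this (paraphrased, not exact gcc source), which is why a
target-provided "uint20_t" currently ICEs:

    if (strcmp (SIZE_TYPE, "unsigned int") == 0)
      size_type_node = unsigned_type_node;
    else if (strcmp (SIZE_TYPE, "long unsigned int") == 0)
      size_type_node = long_unsigned_type_node;
    else if (strcmp (SIZE_TYPE, "long long unsigned int") == 0)
      size_type_node = long_long_unsigned_type_node;
    else
      gcc_unreachable ();  /* "uint20_t" lands here */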
PTRDIFF_TYPE is used...
c-family/c-common.c
fortran/iso-c-binding.def
fortran/trans-types.c
These all use lookups; however, fortran's
get_typenode_from_name() only supports "standard" type
names (see the sketch after this list).
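A sketch of the lookup direction I have in mind - the fallback hook
here is invented purely for illustration:

    static tree
    lookup_index_type (const char *name)
    {
      /* Existing behavior: match the standard C type names.  */
      if (strcmp (name, "int") == 0)
        return integer_type_node;
      if (strcmp (name, "long int") == 0)
        return long_integer_type_node;
      /* ...remaining standard names elided...  */

      /* New: fall back to types the target registered (e.g.
         "int20_t") instead of gcc_unreachable().
         target_lookup_named_type is a hypothetical hook, not
         existing gcc code.  */
      tree t = target_lookup_named_type (name);
      gcc_assert (t != NULL_TREE);
      return t;
    }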
POINTER_SIZE is used...
I have found in the past that gcc has issues if POINTER_SIZE
is not a power of two. IIRC BLKmode was used to copy pointer
values. Other examples:
assemble_align (POINTER_SIZE);
can't align to non-power-of-two bits
assemble_integer (XEXP (DECL_RTL (src), 0),
POINTER_SIZE / BITS_PER_UNIT, POINTER_SIZE, 1);
need to round up, not truncate
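Both call sites could be made safe for pointers whose size is neither
a power of two nor a multiple of BITS_PER_UNIT along these lines (a
sketch of the fix, untested):

    /* Align to the next power of two at or above POINTER_SIZE,
       since we can only align to power-of-two bit boundaries.  */
    assemble_align (1 << ceil_log2 (POINTER_SIZE));

    /* Emit enough whole bytes to hold POINTER_SIZE bits, rounding
       up instead of truncating (20 bits -> 3 bytes, not 2).  */
    assemble_integer (XEXP (DECL_RTL (src), 0),
                      (POINTER_SIZE + BITS_PER_UNIT - 1) / BITS_PER_UNIT,
                      POINTER_SIZE, 1);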