https://gcc.gnu.org/bugzilla/show_bug.cgi?id=72776
            Bug ID: 72776
           Summary: Too large array size not diagnosed properly when
                    inferred from an initializer
           Product: gcc
           Version: 7.0
            Status: UNCONFIRMED
          Keywords: diagnostic
          Severity: normal
          Priority: P3
         Component: middle-end
          Assignee: unassigned at gcc dot gnu.org
          Reporter: amonakov at gcc dot gnu.org
  Target Milestone: ---

For the following testcase:

  static char c[] = {[~0ul] = 1};

the array c would cover the whole address space. It has been noted that
objects spanning more than half of the address space are not supportable
from the middle-end's point of view. However, there is no early check for
arrays whose size is inferred from their initializer, like the one above;
they are diagnosed only when an attempt is made to emit the initializer to
assembly, so the diagnostic is omitted at -O1 and above (if the object is
unused) or with -fsyntax-only.

Moreover, for the example above GCC gets confused about the size of c and
emits it as a zero-sized object. This is due to code in stor-layout.c:

  /* ??? We have no way to distinguish a null-sized array from an
     array spanning the whole sizetype range, so we arbitrarily
     decide that [0, -1] is the only valid representation.  */
  if (integer_zerop (length)
      && TREE_OVERFLOW (length)
      && integer_zerop (lb))
    length = size_zero_node;

(I don't really understand the comment.)

This is from https://gcc.gnu.org/ml/gcc-patches/2012-11/msg02230.html
which gives as an example:

  type Arr is array(Long_Integer) of Boolean;

As I understand it, this would declare a similar array consuming the whole
address space. From the testcases in the patch, it looks like the
intention is to allow such huge array types while still rejecting objects
of those types (?).
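For reference, size inference from a designated initializer picks the
highest designated index plus one; a minimal, self-contained example with a
small index (the name "small" is just for illustration, compile with
-std=c11 for _Static_assert):

  /* Array size is inferred as highest designated index + 1.  */
  static char small[] = {[7] = 1};
  _Static_assert(sizeof small == 8,
                 "size inferred from the highest designator plus one");

With [~0ul] = 1 the same rule asks for ULONG_MAX + 1 elements, which is
the whole address space.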
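The zero-size confusion can be reproduced with plain unsigned arithmetic:
the element count ub - lb + 1 wraps to zero when ub is the maximum value.
A minimal sketch, assuming an LP64 target where unsigned long has the same
width as sizetype:

  #include <stdio.h>

  int main(void)
  {
      unsigned long ub = ~0ul;            /* highest designated index */
      unsigned long lb = 0;               /* lower bound */
      unsigned long length = ub - lb + 1; /* unsigned wraparound: 0 */
      printf("%lu\n", length);            /* prints 0 */
      return 0;
  }

This is exactly the [0, -1] ambiguity the stor-layout.c comment refers to:
a count of zero is indistinguishable from a count of 2^64 once the
computation has wrapped, and the quoted code arbitrarily picks the
zero-sized interpretation.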