Consider a snippet like the following:

-----------Begin Snippet--------------
#define N 2048

int main(int argc, char *argv[])
{
  int  mat[N][N];
...
-----------End Snippet--------------

or the slightly snazzier C99 style

-----------Begin Snippet--------------
int main(int argc, char *argv[])
{
...
   char *arg_end;
   long N = strtol(argv[1], &arg_end, 0);
   int  mat[N][N];
...
-----------End Snippet--------------

One would expect both of these to compile and run just fine. In fact, both
compile without any problems, and for small values of N they work as expected.
However, once the array reaches about 8MB, the programs segfault at run time. I
get identical behaviour with both styles of declaring mat. This is not a
problem with the rest of the code: if I instead declare mat as an (int **) or
an (int *[N]) and fill it in with malloc, the program runs fine for any value
of N (well, at least for values of N that finish in some reasonable period of
time). There is nothing sacred about it being a multi-dimensional array; I've
now reproduced the problem with 1-D arrays as well, it's just that my actual
code uses 2-D arrays. The limit also appears to be the total size in bytes
rather than the number of elements: if I use char instead of int, I can make
the number of elements proportionally larger before hitting 8MB.
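
For reference, here is a minimal sketch of the malloc-based version that works
for me (the row-pointer layout and the names here are illustrative, not lifted
from my real code):

-----------Begin Snippet--------------
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
   if (argc < 2)
      return 1;

   long N = strtol(argv[1], NULL, 0);

   /* Heap allocation instead of a stack array:
      one block of row pointers, then one block per row. */
   int **mat = malloc(N * sizeof *mat);
   if (mat == NULL)
      return 1;
   for (long i = 0; i < N; i++) {
      mat[i] = malloc(N * sizeof *mat[i]);
      if (mat[i] == NULL)
         return 1;
   }

   mat[N - 1][N - 1] = 42;   /* fine even when N*N*sizeof(int) >> 8MB */
   printf("%d\n", mat[N - 1][N - 1]);
   return 0;
}
-----------End Snippet--------------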

I've had identical results both on my x86 (Coppermine) Debian box using Debian
builds of gcc 3.3, 4.1 and 4.2, and on my ppc32 Mac OS X machine with Apple
builds of gcc 3.3 and 4.0.x. Thus I think this is an issue in the gcc core.


-- 
           Summary: C/C++ programs segfault at runtime if arrays larger than
                    8MB are declared.
           Product: gcc
           Version: unknown
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: c
        AssignedTo: unassigned at gcc dot gnu dot org
        ReportedBy: is+gcc at cs dot hmc dot edu


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=32520
