On Apr 11, 2005 2:01 PM, Andrew Haley <[EMAIL PROTECTED]> wrote:
> Nathan Sidwell writes:
>  > Andrew Haley wrote:
>  > > Nathan Sidwell writes:
>  > >  > Andrew Haley wrote:
>  > >  >
>  > >  > > Might it still be possible for a front end to force all pending code
>  > >  > > to be generated, even with -fno-unit-at-a-time gone?
>  > >  >
>  > >  > I think this is a bad idea.  You're essentially asking for the backend
>  > >  > to retain all the functionality of -fno-unit-at-a-time.
>  > >
>  > > OK.  So, what else?
>  > As Steven asked, I'd like to understand why this is not a problem
>  > for the C++ community.  There are several alternatives
>  >
>  > 1) The C++ programs are smaller than the Java programs
> 
> That's my guess.  Usually, C++ users compile one source file at a
> time, whereas Java users find it convenient to compile a whole
> archive.

That's not really true.  For C++ programs relying heavily on template
libraries like POOMA, the whole program usually ends up in one translation
unit.  GCC's memory usage in this respect is reasonable for me (it used to
be much worse!): compiling a big app fits easily in a gigabyte of RAM.
Contrast this with the Intel compiler, for instance, which cannot even
compile a moderately large POOMA program within the 2GB virtual(!) memory
limit.

This raises one important point, though: if the Java compilation exceeds
the usual _virtual_ memory limits on a 32-bit machine, the simple answer
can no longer be "buy more RAM".  Of course "buy a 64-bit machine" would
be the canonical response then.

It would be interesting to know which part of GCC requires so much
memory: does switching to -O0 help, or is the internal representation
itself taking that much?
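
(A quick way to check is to measure the compiler's peak memory at
different optimization levels.  Below is a minimal sketch, assuming a
Linux/POSIX system where the kernel fills in ru_maxrss; the wrapper
itself is hypothetical and not part of GCC.  Note it reports peak
resident memory rather than virtual address space, but that is enough
to see whether -O0 shrinks the footprint.)

/* peakmem.c - hypothetical wrapper: run a command as a child process
   and report its peak resident memory via wait4(). */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>
#include <sys/resource.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 1;
    }

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* Child: exec the compiler with the given arguments. */
        execvp(argv[1], &argv[1]);
        perror("execvp");
        _exit(127);
    }

    int status;
    struct rusage ru;
    if (wait4(pid, &status, 0, &ru) < 0) {
        perror("wait4");
        return 1;
    }
    /* On Linux, ru_maxrss is the peak resident set size in kilobytes. */
    printf("peak RSS: %ld kB\n", ru.ru_maxrss);
    return WEXITSTATUS(status);
}

Running it as, say, "./peakmem g++ -O0 -c big.ii" and again with -O2
would show how much of the footprint the optimizers account for.
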

Richard.
