On Fri, Apr 29, 2005 at 12:49:37PM +0200, Lars Segerlund wrote:
>  If we do a reasonable comparison of compile times against the intel compiler 
> or
>  the portland group or something similar we consistenly find that gcc is 
> slower
>  by a couple of times 1x - 3x, ( this is only my impression, not backed up by
>  hard data but should be in the ballpark ).

Please don't add additional speculation to this already messy subject. 
Feel free to come back with data.

>  The real killer seems to be large memory usage, and I have a hard time 
> believing that
>  if you compile fx. 1 meg of source the compiler 'have' to use some 800 megs 
> or 
>  something as working memory. ( When speaking of the real killer here I mean 
> for
>  old systems ). With all the discussions on cache hit rate and similar 
> criterions
>  lately we can't forget that less data higher means hit rate.

Same here.  You've shown pretty clearly that you haven't looked at what
GCC does with its memory usage.  Yes, a lot of it is wasted, but much
of that waste comes from constant factors (e.g. structures that are
wastefully large), not from anything that would cause a non-linear
blowup.  Do you think it adds any value to GCC development to shout
"please think about this problem" without offering any concrete
suggestions?

-- 
Daniel Jacobowitz
CodeSourcery, LLC
