On Nov 5, 2006, at 08:46, Kenneth Zadeck wrote:
The thing is that even as memories get larger, something has to give.
There are and will always be programs that are too large for the most
aggressive techniques and my proposal is simply a way to gracefully shed
the most expensive techniques as the programs get very large.

The alternative is to just shelve these bugs and tell the
submitter not to use optimization on them. I do not claim to know what
the right approach is.

For compiling very large programs, lower optimization levels
(-O1, -Os) already tend to be competitive with -O2. More often
than not, -O3 does not improve performance, and may even produce
slower code. Performance becomes dominated by how much of the
code fits in the L2 cache, the TLB, or RAM.

Ideally, we would always compile code that is executed infrequently
using -Os to minimize memory footprint, and always compile code
in loops with many iterations using high optimization levels.
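
As a purely illustrative sketch (not part of any existing proposal),
this intent can be expressed by hand with GCC function attributes
such as 'cold' and 'hot', assuming a compiler version that supports
them:

  #include <stdio.h>

  /* Illustrative sketch only.  The 'cold' and 'hot' attributes are
     assumptions here; they are not available in every GCC release.  */

  /* Rarely executed: favor small code, as -Os would.  */
  __attribute__((cold))
  void report_error (const char *msg)
  {
    fprintf (stderr, "error: %s\n", msg);
  }

  /* Hot inner loop: favor aggressive optimization, as -O2/-O3 would.  */
  __attribute__((hot))
  double dot_product (const double *a, const double *b, int n)
  {
    double sum = 0.0;
    for (int i = 0; i < n; i++)
      sum += a[i] * b[i];
    return sum;
  }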

Kenneth's proposal to trigger different optimization strategies
for each function based on certain statistics seems an excellent
step toward more balanced compilation. This not only helps reduce
compile time (mostly through reduced memory usage), but may also
improve the generated code.
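
To make the idea concrete, here is a toy sketch of what such a
per-function decision might look like. This is my illustration, not
Kenneth's actual proposal, and the statistics and thresholds are
made up:

  /* Toy sketch with invented thresholds; not the actual proposal.  */
  enum opt_strategy { OPT_SIZE, OPT_SPEED, OPT_AGGRESSIVE };

  static enum opt_strategy
  choose_strategy (int n_basic_blocks, int n_insns)
  {
    if (n_insns > 100000 || n_basic_blocks > 20000)
      return OPT_SIZE;        /* Huge function: shed the expensive passes.  */
    if (n_insns > 10000)
      return OPT_SPEED;       /* Large function: keep the moderate passes.  */
    return OPT_AGGRESSIVE;    /* Normal size: enable everything.  */
  }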

For the rare program with huge performance-critical functions,
we can add user-adjustable parameters or new optimization levels,
or use profile feedback to identify such programs.
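
For reference, the usual profile-feedback workflow already makes
this kind of information available (file and input names below are
placeholders):

  gcc -O2 -fprofile-generate foo.c -o foo   # instrumented build
  ./foo < training-input                    # run to collect profile data
  gcc -O2 -fprofile-use foo.c -o foo        # rebuild using the profile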

For most programs, though, more balanced optimization allows
the compiler to aggressively optimize the code that matters,
while minimizing code size and compile time for the large swaths
of code that are uninteresting.

  -Geert
