On 11/14/23 04:03, Richard Biener wrote:
> I suggest you farm bugzilla for the compile-time-hog / memory-hog testcases.
I do have a set of "large" testcases. Scanning the results points at
PRs 36262, 37448, 39326, and 69609, all having RA in the 20% area at
-O0 -g.
> It's also a good idea to take say cc1files (set of preprocessed sources
> that produce GCCs cc1) and look at the overall impact of compile-time
> and memory-usage of a change on those which are representative
> for "normal" TUs as opposed to the PRs above which often are
> large machine-generated TUs (an important area where GCC usually
> shines, at least at -O1).
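That measurement is easy to script. A minimal sketch, where the *.i names are placeholders for the preprocessed cc1 sources; whole-second wall-clock timing is coarse, but enough for comparing a compiler before and after a change:

```shell
# Compile each preprocessed source and report per-file wall time.
for f in *.i; do
  [ -e "$f" ] || continue          # skip if no .i files are present
  start=$(date +%s)                # wall-clock start, in seconds
  gcc -O2 -c "$f" -o /dev/null     # compile only; discard the object
  echo "$f: $(( $(date +%s) - start ))s"
done
```

Running the same loop with the patched and unpatched compiler and diffing the output gives the overall compile-time impact on "normal" TUs.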
RA is an expensive optimization pass in any compiler, even when the
fastest algorithms are used.
The most illustrative PR for this is 108500, where RA at -O0 took 90%
(200s) of compilation time. But that is nothing in comparison with LLVM's
"fast" RA algorithm: LLVM-14 spent almost 100% of compilation time, or
41500s (200 times more than GCC), at -O0.
LLVM's greedy RA is even worse: I stopped LLVM after 120 hours at -O1,
while GCC spent 30 minutes at -O1. In contrast to LLVM, GCC's RA also
solves the code-selection task.
IMHO GCC is the better-scaling compiler and the better compiler for big
TUs and functions. When I worked on CRuby, I saw interesting results
comparing GCC and LLVM: Clang-15 with -O3 produced basic Ruby interpreter
code that was 70% slower (on a simple Ruby test) than GCC-12 with -O3.
Clang also spends 20 times more time compiling the major Ruby interpreter
file vm.c, which contains the huge main interpreter function (315s for
Clang vs 15s for GCC).