Suppose you had a month in which to reorganise gcc so that it builds
its 3-stage bootstrap and runtime libraries in some massively parallel
fashion, without hardware or resource constraints(*).  How might you
approach this?

I'm looking for ideas on improving the build time of gcc itself.  So
far I have identified two problem areas:

- Structural.  In a 3-stage bootstrap, each stage depends on the
output of the one before.  Regardless of what performs the actual
compilation (make, jam, and so on), this serialisation constrains
the options.
- Mechanical.  Configure scripts cause bottlenecks in the build
process.  Even if compilation is offloaded onto something like
distcc, the configures run locally and at unpredictable points
throughout the build, rather than (say) all at once up front.
Source compilation blocks until the relevant configure has
completed.  (Both problems are sketched below.)
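
To make the shape of the problem concrete, here is a deliberately
simplified make-style sketch of the dependencies (the target names
are invented for illustration; they are not gcc's real ones):

    # Each stage gates everything after it, and each configure
    # gates the compilation in its directory.
    stage1:       configure-stage1
    stage2:       stage1 configure-stage2
    stage3:       stage2 configure-stage3
    compare:      stage2 stage3
    runtime-libs: stage3 configure-runtime-libs
    all:          compare runtime-libs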

One way to improve the first might be to build the runtime libraries
with stage2 and interleave that with the stage3 build, on the
assumption that the stage2/stage3 comparison will pass.  That
probably won't produce more than a modest speedup, though.
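
To illustrate, the restructured dependencies might look something
like this (again with invented target names, not gcc's real ones):

    # Runtime libraries now build against the as-yet-unverified
    # stage2 compiler, in parallel with stage3 and the comparison.
    stage3:       stage2
    runtime-libs: stage2            # previously: stage3
    compare:      stage2 stage3
    all:          compare runtime-libs
    # If the comparison fails, anything built with stage2 is suspect
    # and has to be thrown away along with the rest of the bootstrap.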

Improving the second seems trickier still.  Pre-canned configure
responses might help, but would be incredibly brittle, if feasible
at all.  Rewriting gcc's build to use something other than autotools
is unlikely to win many friends, at least in the short term.
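
For the record, autoconf already provides a caching mechanism along
these lines, which is roughly what I mean by pre-canned responses
(the paths below are just examples):

    ./configure -C                            # read/write ./config.cache
    ./configure --cache-file=/tmp/gcc.cache   # explicit shared cache
    # Or site-wide defaults sourced from a script:
    CONFIG_SITE=/usr/local/share/config.site ./configure

The brittleness comes from gcc's tree running many configures with
differing host/build/target settings; a single shared cache can
easily feed a subdirectory the wrong answers.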

Have there been attempts at this before?  Any papers or presentations
I'm not aware of that address the issue?  Or maybe you've thought
about this in the past and would be willing to share your ideas?  Even
if only as a warning to stay away...

Thanks.


(* Of course, in reality there are always resource constraints.)
