Yuval Kogman wrote:
- optimizers stack on top of each other
- the output of each one is executable
- optimizers work in a coroutine, and are preemptable
- optimizers are small
- optimizers operate with a certain section of code in mind
> ...
Optimizers get time slices to operate on code as it is needed. They
get small portions - on the first run only simple optimizations are
expected to actually finish.
> ...
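The time-sliced scheme described above could be sketched roughly like this (a minimal illustration, not Yuval's actual design: each pass is a generator that yields at preemption points, and the scheduler hands out fixed step budgets; the instruction tuples are a hypothetical IR):

```python
def constant_fold(code):
    """Cheap pass: expected to finish within its first few slices."""
    for i, instr in enumerate(code):
        # fold (op, const, const) instructions in the hypothetical IR
        if instr[0] == "add" and all(isinstance(a, int) for a in instr[1:]):
            code[i] = ("const", instr[1] + instr[2])
        yield  # preemption point: the scheduler may switch away here

def run_slices(passes, budget):
    """Give each optimizer `budget` steps per turn, round-robin."""
    active = list(passes)
    while active:
        for p in list(active):
            for _ in range(budget):
                try:
                    next(p)
                except StopIteration:
                    active.remove(p)
                    break

code = [("add", 1, 2), ("load", "x"), ("add", 3, 4)]
run_slices([constant_fold(code)], budget=2)
```

Because every yield leaves the code in a runnable state, the scheduler can stop after any slice and still execute the partially optimized program.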
A couple of thoughts spring to mind. In these coming times of ubiquitous
multi-core computing with software transactional memory support, perhaps
it would be realistic to place optimisation on a low-priority thread. So
much code is single-threaded that anything we can do to make use of the
spare core on a dual-core machine is likely to improve overall system
efficiency.
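The background-thread idea might look something like the following sketch: the program keeps running the naive code while a worker thread swaps in an optimized version once it is ready. The `optimize` function and the `functions` table are hypothetical names, and Python threads have no real OS-level priority, so a production design would need platform support for that part:

```python
import threading

# naive version runs immediately; the optimizer improves it in the background
functions = {"hot_loop": lambda xs: sum(x * 2 for x in xs)}
lock = threading.Lock()

def optimize(name):
    # stand-in for an expensive analysis producing a better version:
    # 2*sum(xs) is algebraically equal to sum(2*x for x in xs)
    better = lambda xs: 2 * sum(xs)
    with lock:
        functions[name] = better  # atomic swap: callers see old or new, never garbage

worker = threading.Thread(target=optimize, args=("hot_loop",), daemon=True)
worker.start()
worker.join()  # only for this demo; a real program would keep running
```

The swap-under-lock is the key point: the output of the optimizer replaces the running definition without ever leaving callers with a half-built function.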
The other thing I thought of was the question of errors detected during
optimisation. It is possible that an optimiser will perform more
in-depth type inference (or dataflow analysis, etc.) and find errors in
the code (e.g. gcc -O2 adds warnings for uninitialised variables). This
would be a compile-time error that occurs while the program is already
running. If a program has been running for several hours when the
problem is found, what do you do with the error? Would you even want to
send a warning to stderr?
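One possible answer, sketched below, is to route such late warnings through a pluggable reporting hook rather than hard-wiring stderr, so a long-running program can log, aggregate, or suppress them by policy. The `report` function and its wording are purely illustrative:

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("optimizer")

def report(message, location):
    # a daemon could send this to a log file or a monitoring socket,
    # or drop it entirely -- a policy decision, not raw stderr output
    text = "%s (found at %s)" % (message, location)
    log.warning(text)
    return text

report("use of uninitialised variable $x", "Foo.pm line 42")
```

That still leaves the design question open: whether a warning discovered hours into a run should be surfaced at all, or only recorded for the next compile.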