If CTFE gets used a lot, for example to parse a piece of D code at compile-time 
(as I am trying to do now), it can slow down the whole compilation. To solve 
this, some kind of Just In Time compilation can be used to compile the CTFE 
code and speed it up a lot.
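To show the kind of CTFE usage I mean, here is a small made-up example (the 
real code I'm working on is a parser, this is just a stand-in):

// Any non-trivial function run through CTFE is interpreted by the compiler.
size_t countIdentifiers(string code) {
    size_t n;
    bool inIdent;
    foreach (c; code) {
        immutable isIdentChar = (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
                                || (c >= '0' && c <= '9') || c == '_';
        if (isIdentChar && !inIdent) { n++; inIdent = true; }
        else if (!isIdentChar) inIdent = false;
    }
    return n;
}

// The enum forces the call to be evaluated at compile-time (CTFE); on big
// inputs this interpreted evaluation can slow down the whole compilation.
enum nIds = countIdentifiers("int x = foo(y) + bar(z);");
static assert(nIds == 6);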

There are several ways to do this. For example LLVM is designed to allow the 
creation of JIT compilers too, so LDC can use LLVM to JIT the CTFE code. But 
simpler solutions can be found that can be used with DMD too.

DMD can keep an extra size_t state for each function run at compile-time. This 
word keeps the total running time in microseconds of the function, summed 
across all its calls (this word is never present in the final binary). When 
this value for a function becomes bigger than (for example) 0.3 seconds, DMD 
can use itself to compile the function, so successive calls to it are run by 
the compiled version of the function.
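A rough sketch of the bookkeeping I have in mind (the record and the names 
below are just placeholders, not actual DMD internals):

// Hypothetical per-function record kept by the compiler during CTFE.
struct CtfeFunc {
    string name;
    size_t totalMicros; // summed CTFE running time, never emitted in the binary
    bool compiled;      // true once a native version has been built
}

enum size_t jitThresholdMicros = 300_000; // about 0.3 seconds

// Called after each CTFE evaluation of the function.
void recordCtfeRun(ref CtfeFunc f, size_t elapsedMicros) {
    f.totalMicros += elapsedMicros;
    if (!f.compiled && f.totalMicros > jitThresholdMicros) {
        // Here DMD would invoke itself to compile the function natively, so
        // later CTFE calls of it run the compiled version instead.
        f.compiled = true;
    }
}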

The disadvantage of such a simple solution is that a function that takes a long 
time the first time it is called gets no benefit for that first run, because 
the compilation is only triggered afterwards. But I think this can be 
acceptable in many situations.

I think DMD can generally compile a CTFE function in less than 0.5 seconds. A 
possible optimization: I think compiling a few functions doesn't take much more 
time than compiling just one, so when one of such "nested compilations" gets 
triggered, the CTFE functions that already have more than 0.2 seconds of run 
time can be compiled too, in the same compilation.
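Extending the sketch above, the batching could look roughly like this (again, 
the names and thresholds are only placeholders):

enum size_t piggybackThresholdMicros = 200_000; // about 0.2 seconds

// When one function crosses the main threshold, also pick up the other
// "warm" CTFE functions and compile them in the same nested compilation.
CtfeFunc[] collectBatch(CtfeFunc[] allCtfeFuncs) {
    CtfeFunc[] batch;
    foreach (f; allCtfeFuncs)
        if (!f.compiled && f.totalMicros > piggybackThresholdMicros)
            batch ~= f;
    return batch;
}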

Bye,
bearophile
