On 29.02.2012, 02:30, Piotr Szturmaj <bncr...@jadamspam.pl> wrote:

> CTFE code can be much slower than native code, and I would like to see
> some kind of compiler cache for such code.

I second this. As a fan of clean optimizations, this is one of the ideas I have 
tossed around in my head for a while. It could use a cache file, or the compiler 
could be started as a daemon process keeping an in-memory cache. All code that is 
called through CTFE would go into the cache, indexed by the internal 
representation of the function body and its parameters.
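To make the idea concrete, here is a minimal sketch of such a cache in Python. Everything here is hypothetical: the key is derived from a placeholder string standing in for the function body's internal representation plus the compile-time arguments, and `get_or_evaluate` stands in for the point where the compiler would otherwise invoke its CTFE interpreter.

```python
import hashlib
import pickle

class CTFECache:
    """Hypothetical CTFE result cache, keyed by a hash of the
    function body's IR and the compile-time arguments."""

    def __init__(self):
        self._store = {}

    def _key(self, body_ir: str, args: tuple) -> str:
        # Hash the serialized body representation together with the arguments.
        payload = pickle.dumps((body_ir, args))
        return hashlib.sha256(payload).hexdigest()

    def get_or_evaluate(self, body_ir, args, evaluate):
        key = self._key(body_ir, args)
        if key not in self._store:
            # Cache miss: run the (expensive) compile-time interpreter once.
            self._store[key] = evaluate(*args)
        return self._store[key]

cache = CTFECache()
calls = []  # track how often the "interpreter" actually runs

def slow_fib(n):
    calls.append(n)
    return n if n < 2 else slow_fib(n - 1) + slow_fib(n - 2)

# Same body IR and same argument: evaluated only once.
r1 = cache.get_or_evaluate("fib-body-ir", (10,), slow_fib)
first_count = len(calls)
r2 = cache.get_or_evaluate("fib-body-ir", (10,), slow_fib)
```

The second lookup returns the memoized result without re-running the evaluator, which is the whole point: identical CTFE invocations across builds would pay the interpretation cost only once.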
But there are a few open questions, such as how seamlessly this could be 
integrated. Is it easy to get a hash for function bodies and parameters? How 
should the cache be limited? N functions, n bytes, or maybe one cache per module 
(together with the .o files)? The latter would mean that if a.d uses CTFE that 
executes code from b.d, the CTFE results would all be cached together with a.o, 
because a.d was the entry point. And if there were a module c.d doing the same 
as a.d, it would duplicate the b.d part in its own cache. The benefit is that 
such a cache works with both single-module and whole-program compilation and 
doesn't need any limits. Instead, the cache for each module is simply refreshed 
with whatever was last used in its compilation.
In addition to the last compilation, the caches could be aware of versioned 
compilation. I usually want to compile debug builds and Linux/Windows release 
builds at least, so I wouldn't want those to invalidate each other's caches. For 
32-bit vs. 64-bit, I assume it is best to just cache them separately, since it 
could prove difficult to distinguish two versions of code that use (void*).sizeof 
or something else that isn't wrapped in a version statement the way size_t is.
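One way to sketch that separation is to fold the build configuration into the cache key itself, so results from different targets can never collide. The parameters below (pointer size, version identifiers, build mode) are illustrative assumptions, not any actual compiler interface:

```python
import hashlib

def cache_key(body_ir: str, args: tuple, pointer_size: int,
              versions: frozenset, build: str) -> str:
    """Hypothetical version-aware cache key: the target's pointer size,
    active version identifiers, and build mode are hashed alongside the
    function body and arguments."""
    payload = repr((body_ir, args, pointer_size, sorted(versions), build))
    return hashlib.sha256(payload.encode()).hexdigest()

# Identical code evaluated for 32-bit and 64-bit targets gets distinct keys,
# so code depending on (void*).sizeof cannot pick up a stale result.
key32 = cache_key("f-body-ir", (1,), 4, frozenset({"linux"}), "release")
key64 = cache_key("f-body-ir", (1,), 8, frozenset({"linux"}), "release")
key32_again = cache_key("f-body-ir", (1,), 4, frozenset({"linux"}), "release")
```

With the configuration baked into the key, debug, release, and per-target caches coexist naturally instead of overwriting one another.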
