Fawzi Mohamed wrote:

On 21-Jul-10, at 11:37, Don wrote:

Walter Bright wrote:
Don wrote:
While running semantic analysis on each function body, the compiler could fairly easily check whether the function is CTFEable. (The main complication is that it has to guess how many iterations loops will perform.) Then, when a CTFEable function is called with compile-time constants as arguments, it could run CTFE on it, even where CTFE is not mandatory.
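
A rough sketch of the kind of call this would cover, in D. sumTo is a hypothetical function, and the folding behaviour is assumed rather than anything the compiler currently promises:

// Hypothetical example: sumTo is trivially CTFEable, and the call below
// passes a compile-time constant, so a compiler could choose to fold it
// during semantic analysis even though nothing here forces CTFE.
int sumTo(int n)
{
    int total = 0;
    foreach (i; 1 .. n + 1)
        total += i;
    return total;
}

void main()
{
    int x = sumTo(100); // a runtime call today; could be folded to 5050
}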
I think this is the halting problem, and is insoluble.

In general, yes. But some quite useful instances are trivial.
This is why the language does CTFE only in well-defined circumstances, and the CTFE must succeed or it is a compile-time error.
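
A minimal example of such a well-defined circumstance, using a hypothetical factorial function: an enum initialiser requires a compile-time constant, so CTFE is forced there and must succeed:

int factorial(int n)
{
    return n <= 1 ? 1 : n * factorial(n - 1);
}

enum f5 = factorial(5);   // CTFE is mandatory here; failure is a compile error
static assert(f5 == 120);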

I'm not seeing CTFE as a credible optimization tool, either, as none of my programs would benefit at all from it. For example, what's a text editor going to precompute?

All it would be is moving part of the optimisation step from the back-end into the front-end. In theory, it wouldn't gain you anything. In practice, there may be some optimisations which are easier to do in the front-end than in the back-end.
An existing example of this is with:

if(false) { xxx; }

where the entire xxx clause is dropped immediately, and not sent to the backend at all.

I don't think this has any practical consequences; it's merely an extra bit of freedom for compiler writers to implement optimisations.

Yes, but normally I dislike too much *unpredictable* magic.
Yes, you can try to evaluate some functions at compile time, but which ones?
Do you try for, say, one second, and if the calculation does not complete, postpone it to runtime? This then becomes Haskell-like, in the sense that small changes to the source cause large changes to the runtime behaviour, in a way that is not clearly visible from the source code.

There is absolutely no difference between this and the optimiser.


I am with Walter on this: a given computation should be either compile time or runtime, and a programmer should be able to tell which is which.
