On Monday, 1 October 2012 at 17:46:00 UTC, Tommi wrote:
On Monday, 1 October 2012 at 08:04:49 UTC, Jonathan M Davis wrote:
And you _can't_ determine ahead of time which functions can be safely executed at compile time either, because that's an
instance of the halting problem.
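(To make the halting-problem point concrete, here is a sketch using C++'s `constexpr` as a stand-in; the function and values are illustrative. Whether this loop terminates for every positive input is the open Collatz conjecture, so no compiler can decide ahead of time whether evaluating an arbitrary call at compile time would ever finish.)

```cpp
// Sketch: termination of this loop for every positive n is the (open)
// Collatz conjecture, so a compiler cannot decide in general whether
// compile-time evaluation of a call would ever finish.
constexpr int collatzSteps(long long n) {
    int steps = 0;
    while (n != 1) {
        n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
        ++steps;
    }
    return steps;
}

// For particular constants the evaluation happens to terminate:
static_assert(collatzSteps(6) == 8, "6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1");
```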

I don't understand (I just read what the "halting problem" means, but I still don't understand). If there were no __ctfe variable, and thus a guarantee that all functions do the same thing at compile time and run time, couldn't the compiler just aggressively try to execute all function calls at compile time? Obviously it wouldn't bother trying CTFE for function calls whose arguments weren't evaluable at compile time. Nor would it bother with functions that it knows perform memory allocations or run into other limitations of CTFE.

If not a compiler flag, then it could be a function attribute. Just like the C++ constexpr specifier, which guarantees that the function executes at compile time, provided you give it compile-time-evaluable arguments (and there are limitations on what the function can do). Why wouldn't such an attribute be possible in D?
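For reference, a minimal sketch of the constexpr behaviour described above (the function and values are illustrative, not from the thread):

```cpp
// A constexpr function may run at compile time or at run time.
constexpr int square(int x) { return x * x; }

// With compile-time-known arguments the compiler is allowed (and, in
// constant-expression contexts, required) to evaluate the call itself:
static_assert(square(4) == 16, "evaluated during compilation");

// The same function still works with run-time arguments:
int squareAtRuntime(int n) { return square(n); }
```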

__ctfe is a horrible yet very useful hack to address the underlying issue: the execution model for CTFE, which I personally do not agree with. Adding a compiler flag for the existing model makes no sense whatsoever. Functions are essentially a run-time abstraction, and the compiler, generally speaking, has no business trying to execute them at compile time. The compiler is, after all, *not* an interpreter.

Besides, what would be the use case for such a flag anyway? If all the arguments are known at compile time, you can already "tell" the compiler to execute the function by assigning its result to a static/enum variable.
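In C++ terms (a sketch of the D idiom `enum result = f(args);`, using a constexpr variable as the analog), the initializer of such a variable must be a constant expression, so the compiler is forced to execute the call itself:

```cpp
constexpr int factorial(int n) {
    return n <= 1 ? 1 : n * factorial(n - 1);
}

// Initializing a constexpr variable (in D: an enum or static variable)
// *requires* compile-time evaluation; a non-constant initializer
// would fail to compile.
constexpr int f5 = factorial(5);
static_assert(f5 == 120, "computed by the compiler");
```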

IMO, this mixing of code for different stages of execution is bad design, but it cannot be changed in a backwards-compatible way.
