On Saturday, August 26, 2017 23:53:36 Cecil Ward via Digitalmars-d-learn 
wrote:
> On Saturday, 26 August 2017 at 23:49:30 UTC, Cecil Ward wrote:
> > On Saturday, 26 August 2017 at 18:16:07 UTC, ag0aep6g wrote:
> >> On Saturday, 26 August 2017 at 16:52:36 UTC, Cecil Ward wrote:
> >>> Any ideas as to why GDC might just refuse to do CTFE on
> >>> compile-time-known inputs in a truly pure situation?
> >>
> >> That's not how CTFE works. CTFE only kicks in when the
> >> *result* is required at compile time. For example, when you
> >> assign it to an enum. The inputs must be known at compile
> >> time, and the interpreter will refuse to go on when you try
> >> something impure. But those things don't trigger CTFE.
> >>
> >> The compiler may choose to precompute any constant expression,
> >> but that's an optimization (constant folding), not CTFE.
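The distinction described above can be sketched in a few lines of D (hypothetical names; `square` is just a stand-in for any pure, CTFE-able function):

```d
// square is pure and trivially CTFE-able.
int square(int x) pure { return x * x; }

// CTFE happens here: an enum initializer *requires* a
// compile-time value, so the frontend interprets square(4).
enum e = square(4);

// Also CTFE: a static/module-level initializer must be known
// at compile time.
static immutable s = square(5);

void main()
{
    // Ordinary runtime call, even though the argument is a
    // literal. The backend *may* constant-fold it as an
    // optimization, but that is not CTFE and is not guaranteed.
    int r = square(6);
    assert(r == 36);
}
```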
> >
> > I think I understand, but I'm not sure. I should have explained
> > properly. I suspect what I should have said was that I was
> > expecting an _optimisation_ and I didn't see it. I thought that
> > a specific instance of a call to my pure function that has all
> > compile-time-known arguments would just produce generated code
> > that returned an explicit constant that is worked out by CTFE
> > calculation, replacing the actual code for the general function
> > entirely. So for example
> >
> >     auto foo() { return bar( 2, 3 ); }
> >
> > (where bar is strongly pure and completely CTFE-able) should
> > have been replaced by generated x64 code looking exactly
> > literally like
> >
> >     auto foo() { return 5; }
> >
> > except that the returned result would be a fixed-length literal
> > array of 32-bit numbers in my case (no dynamic arrays anywhere;
> > those, I believe, potentially involve runtime-library calls and
> > the allocator internally).
>
> I was expecting this optimisation to 'return literal constant
> only' because I have seen it before in other cases with GDC.
> Obviously generating a call that involves running the algorithm
> at runtime is a performance disaster when it certainly could have
> all been thrown away in the particular case in point and been
> replaced by a return of a precomputed value with zero runtime
> cost. So this may actually be an issue with a specific compiler,
> but I was wondering whether I have missed any general D rules
> that make CTFE evaluation practically impossible here?
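For the fixed-length-array case described above, CTFE works fine as long as the result is forced to compile time; a minimal sketch (the function body is a made-up placeholder for whatever the real algorithm computes):

```d
// A pure function returning a fixed-length array of 32-bit
// values. No dynamic arrays, so no GC or runtime-library
// allocation is involved.
uint[4] makeTable() pure
{
    uint[4] t;
    foreach (i; 0 .. t.length)
        t[i] = cast(uint)(i * i); // placeholder computation
    return t;
}

// Forces CTFE: the initializer of a static immutable must be
// known at compile time, so the table is baked into the binary.
static immutable uint[4] table = makeTable();
```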

I don't know what you've seen before, but CTFE _only_ happens when the
result must be known at compile time - e.g. it's used to directly initialize
an enum or static variable. You will _never_ see CTFE done simply because
you called the function with literals. It's quite possible that GDC's
optimizer could inline the function and do constant folding and
significantly reduce the code that you actually end up with (maybe even
optimize it out entirely in some cases), but it would not be CTFE. It would
simply be the compiler backend optimizing the code. CTFE is done by the
frontend, and it's the same across dmd, ldc, and gdc so long as they have
the same version of the frontend (though the current version of gdc is quite
old, so if anything, it's behind on what it can do).

So, if you want CTFE to occur, then you _must_ assign the result to
something that must have its value known at compile time, and that will be
the same across the various compilers so long as the frontend version is the
same. Any optimizations which might optimize out function calls would be
highly dependent on the compiler backend and could easily differ across
compiler versions.

My guess is that you previously saw your code optimized down such that you
thought that the compiler used CTFE when it didn't and that you're not
seeing such an optimization now, because your function is too large. If you
want to guarantee that the call is made at compile time and not worry about
whether the optimizer will do what you want, just assign the result to an
enum and then use the enum rather than hoping that the optimizer will
optimize the call out for you.
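In code, the advice above looks like this (assuming, for illustration, that `bar` is simple addition so that `bar(2, 3)` is 5, matching the earlier example):

```d
// bar stands in for any strongly pure, CTFE-able function.
int bar(int a, int b) pure { return a + b; }

auto foo()
{
    // The enum initializer forces CTFE: bar(2, 3) is evaluated
    // by the frontend, and 'result' is a compile-time constant.
    enum result = bar(2, 3);

    // The generated code simply returns the constant - no call
    // to bar remains, regardless of optimizer settings.
    return result;
}
```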

- Jonathan M Davis
