On Friday, 26 August 2016 at 08:44:54 UTC, kinke wrote:
On Friday, 26 August 2016 at 05:50:52 UTC, Basile B. wrote:
On Thursday, 25 August 2016 at 22:37:13 UTC, kinke wrote:
On Thursday, 25 August 2016 at 18:15:47 UTC, Basile B. wrote:
From my perspective, the problem with this example isn't missed optimization potential. It's the code itself. Why waste implementation effort on such an optimization, if it would only reward people writing such ugly code with performance equal to the saner `2 * foo.foo()`? The latter is a) shorter, b) also faster with optimizations turned off, and c) IMO simply clearer.

You're too focused on the example itself (we could find a non-trivial example, but then the generated asm would be longer). The point you miss is that it just *illustrates* what should happen when many calls to a pure const function occur in a single subprogram.

I know that it's just an illustration. But I surely don't like any function with repeated calls to this pure function. Why not have the developer code in a sensible style (manually cache that result once for the whole 'subprogram') if performance is a concern? A compiler penalizing such bad coding style is absolutely fine by me.
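To make the two styles under discussion concrete, here is a minimal D sketch (the `Foo` type and its `foo` method are hypothetical stand-ins, not code from the thread): the first expression repeats the pure const call and relies on the compiler to elide the second call, the second caches the result in a local as advocated above.

```d
import std.stdio;

struct Foo
{
    int state = 21;

    // pure const member function: the result depends only on `this`
    int foo() const pure { return state; }
}

void main()
{
    Foo f;

    // repeated-call style: the optimization would have to prove
    // the second call returns the same value as the first
    int a = f.foo() + f.foo();

    // manually cached style: one call, reuse the local
    const cached = f.foo();
    int b = cached + cached;

    writeln(a, " ", b);
}
```

Both expressions compute the same value; the cached form simply doesn't depend on the optimizer to avoid the second call.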

To be honest, I would have expected a different criticism of this optimization idea: the aggregate can expose shared methods that could, this time, modify the state, invalidating the automatic cache produced by the optimization for one thread, even if this scenario is not plausible without hacks (e.g. casting away the "shared" part of the type of a variable that's used in the const pure function).
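The hack mentioned above could look roughly like this in D (a hypothetical `Counter` type for illustration): once `shared` is cast away, another thread may mutate the state between two calls, so a compiler-generated cache of the pure const result would hold a stale value.

```d
struct Counter
{
    int n;
    // pure const: a candidate for the automatic caching discussed above
    int get() const pure { return n; }
}

void hazard(ref shared Counter c)
{
    // casting away `shared` sidesteps the type system's guarantee
    // that no other thread mutates `c` behind our back
    auto unshared = cast(Counter*) &c;

    int first = unshared.get();
    // ... another thread could write c.n here ...
    int second = unshared.get(); // an automatically cached result
                                 // would now disagree with reality
}
```

This is exactly why the scenario needs a cast: with `shared` intact, the const pure call couldn't be invoked on the shared variable in the first place.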
