On Tuesday, 5 June 2018 at 17:47:15 UTC, Steven Schveighoffer
wrote:
Another observation: under the "infinite loops are important
observable behavior" world-view, pure functions cannot be
lazily evaluated either:
pure int foo() { /* infinite loop */ }

void main(string[] args)
{
    import std.stdio : writeln;

    auto a = foo;
    writeln("hi");
    if (args[1] == "printa")
        writeln(a);
}
With some optimizers, this can be rewritten as:

writeln("hi");
if (args[1] == "printa")
    writeln(foo);
If foo is a *normally returning* function rather than one that
loops forever, this rewrite can save cycles in certain cases.
But under your world-view, the optimization is invalid, because
foo might contain an infinite loop, and then the observable
behavior changes: instead of printing nothing and looping
forever, the program prints "hi" and then loops forever.
That's correct: this optimization is invalid. The only
optimization that the purity of foo permits here is removing
*subsequent* calls to foo.
-Steve
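A minimal sketch of the one optimization described above: reusing the result of an earlier call to a pure function for subsequent identical calls. The function name and body below are hypothetical stand-ins for foo:

```d
import std.stdio : writeln;

// Hypothetical pure function standing in for `foo`.
pure int expensive()
{
    int sum = 0;
    foreach (i; 0 .. 1_000)
        sum += i;
    return sum;
}

void main()
{
    // The first call must still happen (if `expensive` looped forever,
    // so would the program), but a compiler may legally rewrite the
    // second call as `auto b = a;`, because a pure function returns
    // the same value for the same arguments.
    auto a = expensive();
    auto b = expensive();
    writeln(a == b); // always true
}
```

The point is that eliminating the *second* call preserves observable behavior either way: if the function loops forever, the program already hangs on the first call.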
On Tuesday, 5 June 2018 at 17:47:15 UTC, Steven Schveighoffer
wrote:
I'll repeat what I said in the PR where you made a similar
comment: I don't think it is important to ensure that a pure
function that never returns is always called. Can you explain
the benefit of such a thing?
We've all heard of optimizers that reduce "code that does
nothing" down to just a return statement, foiling people who
expect their benchmarks to run properly. Why is this any
different?
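A sketch of that benchmark-foiling pattern, with a hypothetical work function and timing loop. The loop's result is never used afterward, so an optimizer is free to delete the accumulation, or the entire loop:

```d
import std.datetime.stopwatch : StopWatch;
import std.stdio : writeln;

// Hypothetical side-effect-free "work" function being benchmarked.
pure int work(int x) { return x * x; }

void main()
{
    StopWatch sw;
    sw.start();

    int sink = 0;
    foreach (i; 0 .. 10_000_000)
        sink += work(i);    // `sink` is never read afterward, so an
                            // optimizer may remove the whole loop
    sw.stop();
    writeln(sw.peek());     // may report ~0 under optimization
}
```

Under this common view, removing unobservable pure work is acceptable; the naive benchmark, not the optimizer, is considered at fault.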
Frankly speaking, we should not implement optimizations merely
because we cannot immediately think of a case where they fail.
For instance, a practical function that loops forever is a
find() call on an infinite range, such as a range returned by
.repeat or .generate.
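A minimal sketch of such a call, using std.range.repeat and std.algorithm's find. The first search terminates; the commented-out second one is exactly the practical infinite loop described above:

```d
import std.range : repeat;
import std.algorithm.searching : find;

void main()
{
    auto ones = 1.repeat;   // infinite range: 1, 1, 1, ...

    // Terminates immediately: the needle is the very first element.
    auto hit = ones.find(1);
    assert(hit.front == 1);

    // ones.find(2) would search forever. An optimizer that deleted
    // that call because its result is unused would change observable
    // behavior: the program would terminate instead of hanging.
}
```

Whether the call is reached thus genuinely determines program behavior, so the "never returns" case is not a mere theoretical curiosity.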