On 6/5/18 5:03 PM, FeepingCreature wrote:
On Tuesday, 5 June 2018 at 17:47:15 UTC, Steven Schveighoffer wrote:
Another observation: under the "infinite loops are important observable behavior" world-view, pure functions cannot be lazily evaluated either:

import std.stdio;

pure int foo() { while (true) {} /* infinite loop */ }

void main(string[] args)
{
   auto a = foo;
   writeln("hi");
   if(args[1] == "printa")
      writeln(a);
}

With some optimizers, this can be rewritten:

writeln("hi");
if(args[1] == "printa")
   writeln(foo);

If foo is a *normally returning* function rather than an infinitely looping one, this can save cycles in certain cases.

But under your world-view, the optimization is invalid, because foo might contain an infinite loop, in which case the observable behavior changes (instead of printing nothing and looping forever, the program prints "hi" and then loops forever).


That's correct: this optimization is invalid. The only optimization that can arise from foo being pure is that *subsequent* calls to foo can be removed.

I think Haskell would disagree with you: https://wiki.haskell.org/Lazy_evaluation
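
In D terms, the same call-by-need behavior can be sketched with a `lazy` parameter. A rough sketch (the names spin and demo are mine, not from any library):

import std.stdio;

pure int spin() { while (true) {} } // never returns

void demo(bool wanted, lazy int a) // `a` is evaluated only if it is used
{
   writeln("hi");
   if (wanted)
      writeln(a); // only here does spin() actually run
}

void main(string[] args)
{
   demo(args.length > 1 && args[1] == "printa", spin());
}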

On Tuesday, 5 June 2018 at 17:47:15 UTC, Steven Schveighoffer wrote:
I'll repeat what I said in the PR where you made a similar comment: I don't think it is important to ensure that a pure function that never returns is always called. Can you explain the benefit of such a thing?

We've all heard of optimizers that reduce "code that does nothing" down to just a return statement, foiling people who expect their benchmarks to run properly. Why is this any different?


Frankly speaking, we should not implement optimizations merely on the basis that we cannot immediately think of a case where they fail. For instance, a practical function call that loops forever is a find() call on an infinite range, such as a range returned by .repeat or .generate.
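
For concreteness, I take it you mean something like this (a rough sketch on my part):

import std.algorithm.searching : find;
import std.range : generate;

void main()
{
   // An infinite range: 1, 1, 1, ...
   auto ones = generate!(() => 1);

   // This find() terminates only because a match exists:
   auto hit = ones.find(1);

   // This one would never return, since 2 never appears:
   // auto miss = ones.find(2);
}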

But a call to find doesn't "do nothing". It takes a range and returns a range.

We are specifically talking about strong-pure functions that return void, or even strong-pure functions whose return value is ignored.

And yes, based on the rules of pure functions, we can actually prove that such calls do nothing, which is why the optimization is easy to show correct. It's one of the reasons pure optimizations are much easier to reason about.
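
For example (a sketch; square is a made-up strong-pure function):

pure int square(int x) { return x * x; }

void main()
{
   auto unused = square(5);        // strong-pure, result never used:
                                   // the call provably does nothing
   auto y = square(5) + square(5); // identical strong-pure calls can be
                                   // folded into a single call
   assert(y == 50);
}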

However, if we add the wrinkle of "we have to make sure infinite loops actually execute", then many pure optimizations get thrown out the window.

-Steve
