I guess I fall more to the "reason about code" side of the scale
rather than the "test the code" side. Testing seems to induce false
hope of finding all defects, even to the point where the tester is
blamed for not finding a bug rather than the developer for introducing
it.

[Bardur]
> It's definitely a valid point, but isn't that an argument *for* testing
> for performance regressions rather than *against* compiler optimizations?

We could test for regressions and pass, then upgrade to a new version
of the compiler and the test would no longer pass. And vice versa.
Maybe that's your point too. :)
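To make the brittleness concrete, here is a minimal sketch of such a
regression test. Everything in it is made up for illustration: the
`slowSum` function, the 50 ms budget, and the input size are all
arbitrary. The point is only that the wall the test measures is set by
whichever optimizations the current compiler happens to apply, so an
upgrade (or downgrade) can flip the result with no source change at all.

```haskell
import System.CPUTime (getCPUTime)
import Control.Monad (when)

-- Hypothetical function under test. Whether GHC fuses the list away or
-- actually allocates it depends on the compiler version and flags.
slowSum :: Int -> Int
slowSum n = sum [1 .. n]

main :: IO ()
main = do
  start <- getCPUTime
  print (slowSum 1000000)
  end <- getCPUTime
  -- getCPUTime reports picoseconds; convert to milliseconds.
  let millis = fromIntegral (end - start) / (10 ^ (9 :: Int)) :: Double
  -- The 50 ms threshold is arbitrary. A new compiler that no longer
  -- triggers (or newly triggers) an optimization can push the timing
  -- across it in either direction.
  when (millis > 50) $
    putStrLn ("possible performance regression: " ++ show millis ++ " ms")
```

A real setup would use a benchmarking library and statistics rather than
a single timing, but the same caveat applies: the baseline encodes one
compiler's behaviour, not a property of the source.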

[Iustin]
> Surely there will be a canary
> period, parallel running of the old and new system, etc.?

Is that common? I have not seen it, and I do think my workplace is a
rather typical one.

Also, would we really want to preserve the old "bad" code just because
it happened to trigger some optimization?

Don't get me wrong, I am all for compiler optimizations and the
benefits they bring as well as testing. Just highlighting some
potential downsides.

Regards
/Johan

_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
