On Monday, 4 August 2014 at 02:18:12 UTC, David Bregman wrote:

His post basically says that his real-life experience leads him to believe that a static analyzer which uses information from asserts will very likely generate a ton of warnings/errors, because real-life code is imperfect.

In other words, if you use that information to optimize instead, you are going to get a ton of bugs, because the asserts are inconsistent with the code.
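
To make that failure mode concrete, here is a minimal sketch in D (the function is hypothetical, not from the thread). If the compiler treats the assert as an optimizer guarantee rather than a runtime check, the defensive branch is provably dead and can be removed, so a caller that does pass an empty array gets an out-of-bounds access in a release build instead of the fallback the author wrote:

    int first(int[] a)
    {
        assert(a.length > 0);  // the author's claim: never called empty
        if (a.length == 0)     // defensive fallback, "dead" if the assert is trusted
            return 0;
        return a[0];           // under assume semantics, a wrong assert turns
    }                          // the safe fallback into undefined behaviour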

No.

My experience says deeper optimization comes from deeper understanding of the dataflow, and with deeper understanding of the dataflow come stricter warnings about defective usage.

i.e. Good compiler writers, as Walter and the gcc guys clearly are, don't just slap in an optimization pass out of nowhere.

They are all too painfully aware that if their optimization pass breaks anything, they will be fending off thousands of complaints that "Optimization X broke....".

Compiler users always blame the optimizer long before they blame their crappy code.

Having watched the gcc mailing list over the years, I can say those guys bend over backwards to prevent that from happening.

But since an optimization has to be based on additional hard information, they have, with every new version of gcc, used that information both for warnings and for optimization.
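
A hedged illustration of that dual use (the names here are made up): the same value-range facts a compiler derives from an outer branch can power both a "condition is always false" style warning and dead-code elimination.

    void expensiveWork() { /* placeholder */ }

    void f(int x)
    {
        if (x < 5)
        {
            // Range analysis: x is now known to be <= 4.
            if (x > 10)          // provably false: grounds for a warning...
                expensiveWork(); // ...and dead code the optimizer can drop
        }
    }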
