On 8/3/14, 6:59 PM, David Bregman wrote:
> w.r.t. the one question about performance justification: I'm not
> necessarily asking for research papers and measurements, but based on
> these threads I'm not aware that there is any justification at all. For
> all I know this is all based on a wild guess that it will help
> performance "a lot", like someone who optimizes without profiling first.
> That certainly isn't enough to justify code breakage and massive UB
> injection, is it? I hope we can agree on that much at least!

I think at this point (without more data) a measure of trust in one's experience is needed. I've worked on performance on and off for years, and so has Walter. We have plenty of war stories that inform our expertise in the matter, including weird ones like "swap these two enum values and you'll get a massive performance regression, although the code is correct either way".

I draw on numerous concrete cases showing that the right (or wrong) optimization in the right (or wrong) place can be the difference between winning and losing. Consider the recent PHP engine that gets within 20% of HHVM; heck, I know where to go to make HHVM 20% slower with 50 lines of code (in a codebase of 2M+ lines). Conversely, gaining those 20% took months of work from Facebook's best engineers.

Efficiency is hard to come by and easy to waste. I consider Walter's treatment of "assert" a modern, refreshing take on an old pattern that nicely preserves its spirit, and a good opportunity and differential advantage for D. If anything, these long threads have strengthened that belief. They have also clarified to me that:

(a) We must make sure we don't transform @safe code into unsafe code; to a first approximation, that may simply mean assert() has no special meaning in release mode. Also, bounds checks should probably not be elided on the strength of assert conditions (see the sketch after this list). I consider these problems challenging, but in good, gainful ways.

(b) Deployment of optimizations must be carefully staggered and documented.
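To make (a) concrete, here is a minimal sketch in D of the hazard under discussion. The function and the exact -release semantics are assumptions for illustration, not actual compiler behavior; the point is only how an assert treated as an optimizer assumption could elide a bounds check inside @safe code:

    // Hypothetical sketch, not actual compiler behavior: suppose that in
    // -release mode the optimizer may take an asserted condition as true
    // without checking it at runtime.
    @safe int pick(int[] a, size_t i)
    {
        assert(i < a.length); // debug build: checked; release: assumed true
        // If that assumption feeds the optimizer, the bounds check on a[i]
        // can be elided. A caller that violates the contract then gets an
        // out-of-bounds read from @safe code -- exactly the @safe-to-unsafe
        // transformation that (a) says we must rule out.
        return a[i];
    }

    void main()
    {
        int[] a = [1, 2, 3];
        // Contract violation: with checks on, this aborts cleanly; with
        // assert-as-assumption plus an elided bounds check, it is UB.
        // auto x = pick(a, 5);
    }

This is why keeping assert() free of special meaning in release mode, or at least never letting it drive the elision of bounds checks, is the conservative first step.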


Andrei
