On 7/4/2011 5:38 PM, Mehrdad wrote:
> Actually, there is **NO** performance issue -- at least not in C#. In fact, if
> you run this program (with or without optimizations), you will see that they're
> literally the same almost all the time:

It's a bit extraordinary that there is no cost in executing extra instructions.

I've seen many programmers (including experts) led down the primrose path on optimizations because they thought A was happening when actually B was. It pays to check the assembler output and see what's actually happening.

In your benchmark case, it is possible that the optimizer discovers that i is in the range 0..COUNT, proves that i*i can therefore never overflow, and eliminates the overflow check entirely.
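For concreteness, here is a minimal sketch of the kind of benchmark under discussion. The actual program wasn't quoted, so globalVar, COUNT, and the i*i expression are inferred from this thread, and COUNT is chosen small enough that the range argument applies:

```csharp
using System;
using System.Diagnostics;

class OverflowBench
{
    const int COUNT = 10000;   // i < 10000 implies i*i < 10^8 < int.MaxValue
    static int globalVar;      // written in the loop, never read back

    static void Main()
    {
        const int Reps = 10000;

        var sw = Stopwatch.StartNew();
        for (int rep = 0; rep < Reps; rep++)
            for (int i = 0; i < COUNT; i++)
                globalVar = unchecked(i * i);   // no overflow check
        Console.WriteLine("unchecked: " + sw.Elapsed);

        sw.Restart();
        for (int rep = 0; rep < Reps; rep++)
            for (int i = 0; i < COUNT; i++)
                globalVar = checked(i * i);     // overflow check requested
        Console.WriteLine("checked:   " + sw.Elapsed);
    }
}
```

Since the loop bounds guarantee i*i never exceeds int.MaxValue, a JIT that tracks value ranges is free to compile both loops to identical code.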

Or the compiler may realize that globalVar, despite being global, is never read, so the assignments to it are dead stores and i*i is never computed at all.
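An easy way to test that hypothesis is to make the result observable, so the stores can't be dead. A hedged variant of the inner loop, within the same assumed benchmark as above:

```csharp
// If dead-store elimination was removing i*i, this version blocks it:
// the products feed a running sum that is ultimately printed, so the
// multiplies (and their overflow checks) must actually execute.
long sum = 0;
for (int i = 0; i < COUNT; i++)
    sum += checked(i * i);
Console.WriteLine(sum);   // observing sum keeps the whole loop live
```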

Or the loop may be unrolled 10 times so that the overflow check runs only once per 10 iterations, burying its cost.
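Sketched by hand, that transformation could look like the following. Because i is non-negative, i*i is monotonically increasing, so if the largest product in a block of ten doesn't overflow, none of them do. This is a guess at the transformation, not something the JIT is known to do, and the tail iterations for a COUNT not divisible by 10 are omitted for brevity:

```csharp
// One checked multiply guards a block of ten: for non-negative i,
// if (i + 9) * (i + 9) doesn't overflow, neither does any smaller i*i.
// The stores land in a different order than the plain loop, which is
// harmless here because globalVar is never read.
for (int i = 0; i + 9 < COUNT; i += 10)
{
    globalVar = checked((i + 9) * (i + 9));       // single guard per block
    for (int k = 0; k < 9; k++)
        globalVar = unchecked((i + k) * (i + k)); // provably safe now
}
```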

In other words, perhaps a special-case optimization is kicking in, and the result is not at all indicative of the actual cost of overflow checks in the usual case, where they cannot be optimized away.
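For contrast, a sketch of that usual case: when the operand comes from outside the method, nothing bounds its range, so the check has to stay.

```csharp
// No range information is available for x, so the compiler must emit a
// real overflow check here; it cannot be optimized away.
static int CheckedSquare(int x)
{
    return checked(x * x);
}
```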
