On Tue, 22 Sep 2009 17:56:46 -0400, Jarrett Billingsley <jarrett.billings...@gmail.com> wrote:

> On Tue, Sep 22, 2009 at 4:13 PM, Steven Schveighoffer
> <schvei...@yahoo.com> wrote:
>
> Dynamic downcasts do usually indicate a weakness in the design. But an
> even more fundamental matter of style is to not repeat yourself. You
> don't use "x + y" if you need the sum of x and y in ten places; you do
> "sum = x + y" and then use "sum." The same applies here. You're not
> just working around a deficiency in the compiler, you're saving
> yourself work later if you need to change all those values, and you're
> giving it some kind of semantic attachment by putting it in a named
> location.
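
In code, the suggestion amounts to something like this (a minimal D sketch; the class and function names are invented for illustration):

class Widget { }
class Button : Widget
{
    void click() { }
}

void handle(Widget w)
{
    // do the downcast once and store it in a named location,
    // instead of repeating cast(Button) w at every use site
    auto b = cast(Button) w;
    if (b !is null)
    {
        b.click();
        b.click();  // later uses reuse the cached result
    }
}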

What if x and y are possibly changing in between calls to x + y?

I'm thinking of a situation where o *might* be changing, but also might not; the compiler could optimize away the dynamic-cast calls in the cases where it doesn't change. Those optimizations would be hard to code by hand.
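
A minimal sketch of the kind of loop I mean (names invented; maybeSwap stands in for whatever logic might reassign o):

class Widget { }
class Button : Widget
{
    void click() { }
}

bool maybeSwap(size_t i) { return (i & 1) == 0; }  // stand-in condition

void process(Widget o, Widget[] others)
{
    foreach (i, w; others)
    {
        if (maybeSwap(i))
            o = w;                    // o *might* change here, or might not
        auto b = cast(Button) o;      // so the downcast has to be repeated
        if (b !is null)
            b.click();
    }
}

Hoisting the cast out by hand means tracking, on every path, whether o was reassigned; if the cast were known to be pure in o, the compiler could reuse the previous result on the iterations where o is untouched.
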
My understanding of pure function benefits (and it's not that great) is
that you can express yourself the way you want, and the compiler optimizes
your code because it knows it can avoid redundant calls.

> Realistically, how often is this going to come up? Why the hell are we
> looking at what amounts to CSEE on a rarely-used construct when there
> are far more important performance issues? I understand wanting to
> solve this problem for pedagogical reasons, but practically, I don't
> see the benefit.

Why have pure functions at all? Seriously, every pure-function reordering and reuse could be done by hand instead. If we aren't going to look for places where pure functions can help the optimizer, why add them to the language? They seem like more trouble than they're worth.

If all it takes to optimize dynamic casts is to put pure on the function signature, have we wasted that much time?
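
Roughly, the sketch I have in mind is this (invented names; whether the compiler actually accepts the pure annotation here and performs the folding is exactly the open question):

class Widget { }
class Button : Widget
{
    void click() { }
}

// hypothetical: expose the downcast through a pure function so the
// compiler is allowed to merge repeated calls with the same argument
pure Button asButton(Widget w)
{
    return cast(Button) w;
}

void use(Widget o)
{
    // if purity is honored, the second call could be folded into the
    // first, as long as o is not modified in between
    if (asButton(o) !is null)
        asButton(o).click();
}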

-Steve
