Hello Rainer,

BCS wrote:

Ok try another case:

y = a*b + c*a;

for scalars, the compiler can (and should) convert those 3 ops to 2:

y = a*(b+c);

for a matrix you can't. And I'm sure there are matrix cases where you can get improvements by doing algebraic manipulations before you inline that would be darn near impossible to get the compiler to figure out after you inline.
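The matrix case can be checked directly: matrix multiplication does not commute, so c*a generally differs from a*c and the scalar factoring a*(b+c) changes the result. A minimal sketch in Python (plain nested lists, no libraries assumed):

```python
def matmul(A, B):
    # 2x2 matrix product using plain nested lists
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(A, B):
    # elementwise 2x2 matrix sum
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

a = [[1, 2], [3, 4]]
b = [[0, 1], [1, 0]]
c = [[2, 0], [0, 3]]

lhs = matadd(matmul(a, b), matmul(c, a))   # a*b + c*a
rhs = matmul(a, matadd(b, c))              # a*(b + c)
print(lhs == rhs)  # False: c*a != a*c, so the factoring is invalid
```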

There are two possible cases:
- The algebraically manipulated expression is functionally equivalent to the original expression.  In this case, the optimizer should produce the exact same code for both expressions, so the high-level algebraic manipulation is unnecessary.  (Yes, I'm asking a lot from the optimizer.)


Now that's a bit of an understatement :)

- The algebraically manipulated expression is /not/ functionally equivalent to the original expression.  In this case the algebraic manipulation is invalid, and should not be performed.


Duh :)
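And the non-equivalent case shows up even for plain scalars: floating-point addition is not associative, so a reassociating "algebraic" rewrite can change the computed value. A minimal sketch:

```python
a = 1e16
b = -1e16
c = 1.0

# The two groupings are algebraically equal but numerically different:
print((a + b) + c)  # 1.0
print(a + (b + c))  # 0.0 -- c is absorbed by the large magnitude of b
```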

Optimizing complex expressions is not algorithmically more difficult
than optimizing simple expressions, it just takes more memory and CPU
time.

My argument comes from my understanding of what Andrei Alexandrescu was proposing. I think he is suggesting that operator overloading be done via AST macros. The end effect could be something like my backmath library, in that it could manipulate expressions under user control before the optimizer gets its crack at things.
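The idea of user-controlled manipulation before the optimizer runs can be sketched as a rewrite rule over a tiny expression tree. This is a hypothetical Python illustration (all names invented, not the AST-macro mechanism itself): the user supplies a rule that factors a*b + a*c into a*(b+c), which is only valid when they know '*' distributes that way for their type.

```python
from dataclasses import dataclass

# Tiny expression AST: variables, products, sums.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Mul:
    l: object
    r: object

@dataclass(frozen=True)
class Add:
    l: object
    r: object

def factor_common_left(e):
    """User-supplied rule: a*b + a*c -> a*(b+c).
    Applied before the optimizer sees the expression; the user asserts
    this is valid for their type (e.g. scalars, but not matrices)."""
    if (isinstance(e, Add) and isinstance(e.l, Mul)
            and isinstance(e.r, Mul) and e.l.l == e.r.l):
        return Mul(e.l.l, Add(e.l.r, e.r.r))
    return e

a, b, c = Var("a"), Var("b"), Var("c")
expr = Add(Mul(a, b), Mul(a, c))    # a*b + a*c: 3 ops
print(factor_common_left(expr))     # Mul(a, Add(b, c)): a*(b+c), 2 ops
```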
