2014-03-03 12:33 GMT+01:00 Richard Biener <rguent...@suse.de>:
> On Fri, 28 Feb 2014, Kai Tietz wrote:
>
>> Hmm, this all reminds me of the approach Andrew Pinski and I came up
>> with two years ago.
>
> You are talking about the gimple folding interface?  Yes, but it's
> more similar to what I proposed before that.

Well, this interface was for RTL, GIMPLE, and trees, AFAIR.


>> So I doubt that we want to keep fold-const patterns similar to gimple
>> (forward-prop) ones.
>> Wouldn't it make more sense to move fold-const patterns completely
>> into gimple, and to have a facility in the FE to ask gimple to
>> *pre*-fold a given tree to see if a constant expression can be
>> achieved?
>
> That was proposed by somebody, yes.  The FE would hand off an
> expression to 1) the gimplifier to gimplify it, then 2) to the
> gimple folder to simplify it.  Not sure if that's a good design
> but yes, it mimics the awkward thing we do now (genericize for
> folding in fold_stmt), just the other way around - and it makes
> it very costly.

Right, if we redid steps one and two each time we visit the same
statement again, then of course we would produce a pretty high load.
By hashing the *pre*-computed gimple expression, I think the cost of
such an approach would drop considerably.  Of course it is true that
we move gimplification costs into the FE.  Nevertheless the average
cost should in general be the same as what we have now.
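
To illustrate the caching idea (just a toy sketch; none of these names
are an existing GCC interface, and a real key would of course hash the
tree structure rather than a string):

/* Toy sketch: memoize the result of an expensive "pre-fold" step
   behind a hash lookup, so asking again for the same expression is
   cheap.  All names here are made up for illustration.  */

#include <iostream>
#include <string>
#include <unordered_map>

/* Stand-in for the FE expression; a real key would hash the tree.  */
using expr_key = std::string;

/* Stand-in for the gimplify + gimple-fold pipeline the FE would ask.  */
static std::string expensive_prefold (const expr_key &expr)
{
  return expr == "1 + 2" ? "3" : expr;
}

static const std::string &cached_prefold (const expr_key &expr)
{
  static std::unordered_map<expr_key, std::string> cache;
  auto it = cache.find (expr);
  if (it == cache.end ())
    it = cache.emplace (expr, expensive_prefold (expr)).first;
  return it->second;
}

int main ()
{
  std::cout << cached_prefold ("1 + 2") << "\n";  /* folded once */
  std::cout << cached_prefold ("1 + 2") << "\n";  /* cache hit */
}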


> Having a single meta-description of simplifications makes it
> possible to have the best of both worlds (no need to GENERICIZE
> back from GIMPLE and no need to GIMPLIFY from GENERIC) with
> a single point of maintenance.

True.  I fully agree about the positive effects of a single
meta-description for this area.  It is certainly worth avoiding
redoing the same folding for GENERIC and GIMPLE again and again.
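
To make that concrete, here is a purely illustrative sketch (invented
names, not an existing GCC interface): the rules in the comment play
the role of the meta-description, and simplify () is what a generator
could expand them into, a single dispatch over the outer code that a
GENERIC folder and a GIMPLE folder could both share.

/* The "meta-description", three made-up example rules:

     x + 0  -> x
     x * 1  -> x
     x & 0  -> 0  */

#include <cstdio>

enum op_code { OP_PLUS, OP_MULT, OP_BIT_AND };

/* Simplify CODE applied to a symbolic operand and the constant CST;
   return the textual result, or null if no rule matches.  */
static const char *simplify (op_code code, long cst)
{
  switch (code)
    {
    case OP_PLUS:
      if (cst == 0) return "x";
      break;
    case OP_MULT:
      if (cst == 1) return "x";
      break;
    case OP_BIT_AND:
      if (cst == 0) return "0";
      break;
    default:
      break;
    }
  return nullptr;
}

int main ()
{
  if (const char *res = simplify (OP_PLUS, 0))
    std::printf ("x + 0 -> %s\n", res);
}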

> [the possibility to use offline verification tools for the
> transforms comes to my mind as well]

This is actually a pretty interesting idea, as it would allow us to
test this area without side effects from high-level passes, target
properties, etc.

> If you think the .md style pattern definitions are too limiting,
> can you think of sth more powerful without removing the possibility
> of implementing the matching with a state machine to make it O(1)?

Well, I am not opposed to the use of .md-style pattern definitions at
all.  I just see some weaknesses in the current tree-folding
mechanism.  The AST folder tries to fold statements by recursing into
them, which causes a pretty high load in several areas: stack growth,
unnecessary repetition of operations on inner statements, and complex
patterns to express the associative/distributive/commutative rules
for the current operation code.

I am thinking about a model where we use the .md-style pattern
definitions just for the base fold operations.  On top of that we put
a layer implementing the associative/commutative/distributive
properties of statements in an optimized form.  This lets us do two
things with lower load: on the one hand we can do "virtual" folding
and avoid rebuilding trees without need, and on the other hand we can
use the same pass to normalize the tree structure of expression
chains (see the sketch below).

Additionally, we could then get rid of the pretty useless
reassociation pass, which IMHO is only necessary because of the
gimple folder's inability to do that.
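
Here is what I mean by the normalization layer, again only a sketch
with invented types and not GCC code: flatten a chain of a
commutative/associative operation, sort the symbolic operands, and
merge the constants, so the base patterns underneath only ever see
one canonical shape.

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct operand
{
  bool is_const;
  long value;        /* used when is_const is true   */
  std::string name;  /* used for symbolic operands   */
};

/* Canonicalize a flattened PLUS chain: symbolic operands first,
   sorted by name, with all constants folded into one trailing
   constant.  */
static std::vector<operand> normalize_plus_chain (std::vector<operand> ops)
{
  long sum = 0;
  std::vector<operand> out;
  for (const operand &op : ops)
    if (op.is_const)
      sum += op.value;
    else
      out.push_back (op);
  std::sort (out.begin (), out.end (),
             [] (const operand &a, const operand &b) { return a.name < b.name; });
  if (sum != 0)
    out.push_back (operand {true, sum, ""});
  return out;
}

int main ()
{
  /* (b + 1) + (a + 2) flattens and normalizes to a + b + 3.  */
  std::vector<operand> chain = { {false, 0, "b"}, {true, 1, ""},
                                 {false, 0, "a"}, {true, 2, ""} };
  for (const operand &op : normalize_plus_chain (chain))
    std::cout << (op.is_const ? std::to_string (op.value) : op.name)
              << " ";
  std::cout << "\n";
}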

> Thanks,
> Richard.

Additionally, I think we need to make some related conceptual
decisions here too.  I see a missing concept for abstracting
backend/target-specific limitations in the middle end, for example
branch-cost optimization, the target's pattern preferences, etc.  We
fail to make a consistent distinction between target-agnostic and
target-specific passes in the middle end.  Diagnostics have also been
introduced in late passes; see "possible integral overflow on
additions in loops" as one example of the latter.  Such things hinder
us from doing a better job on pattern folding in many places.  Right
now we tend to introduce backend representation too early (e.g.
branch cost, avoiding packing of conditional patterns due to some
limitation on some targets, too-early jump threading, etc.).  So I
think we should have at least some passes doing
target/backend-agnostic folding/normalization, and do
target/backend-specific pattern transformation explicitly in later
passes.

Regards,
Kai
