Joern Rennecke wrote:
> In http://gcc.gnu.org/ml/gcc/2006-07/msg00390.html, you write:
>   
>> depending on what you are doing, you can update the solution in place.
>> The point of the dataflow talk was not to say that you cannot do
>> anything incremental, it was to say that there are no good GENERAL
>> techniques.  Many times it is possible to update the solution precisely
>> if you have a very good understanding of the local conditions under
>> which the transformation is done.
>>     
>
> Right.  So the struct-equiv code does not have to be converted to
> use def-use chains to be better integrated with the dataflow branch;
> it can happily go on using regsets of live registers.
> What it needs is a solid understanding of which invariants are
> required and how we can maintain them while we are changing the rtl,
> in order to keep the cost of updating the information under control.
> Which, ironically, is what I proposed to use in the first place outside
> of the dataflow branch to address the compile time problems, but Berndt
> considered that approach too fragile.
>
>
>   
It is fragile, but not in the normal sense.  It cannot be generalized
to work with any arbitrary transformation.  You will end up with
something that only works for the kinds of transformations that you are
doing in the pass you are writing.  This is fine.

I should point out that you will also not be able to cure cancer with
this.  You just have to be satisfied to do a good job solving the
problem at hand. 
>> The other trick is to do what I do in the new fast dce or the if-cvt on
>> the dataflow branch:
>> order the basic blocks so that it does not matter if you mess up the
>> solution.  Generally a postorder or a reverse postorder traversal of
>> the basic blocks will have the property that you can just mess things up
>> in a wave that moves from the beginning to the end, or from the end to
>> the beginning, of the program without ever seeing the mess you make.
>>     
>
> cross-jumping two entire basic blocks creates opportunities for further
> cross-jumping and jump simplification involving the predecessor blocks.
> This appears to fit a reverse postorder traversal.
>
> However, if-conversion creates further if-conversion opportunities involving
> the newly merged block.  Thus, while a postorder / reverse postorder
> traversal makes sense, we also need valid information for the newly merged
> block, its successors and predecessors.
> I suppose reg-live-at-start / reg-live-at-end information is actually easier
> to maintain during if-conversion than def-use chains.
>
>   
This is true, certainly in theory; a lot less so in practice.
The way that you order things is this:

while (something changes and we have not done too many passes)
    do the analysis
    do one properly ordered pass over the cfg, changing what you can
      change without using information that has been corrupted during
      this pass.
 
You should limit the number of passes to some small number times the
optimization level.
You only iterate if you made changes during the previous pass.
You try to keep the dataflow up to date.  If keeping it up to date is
too hard for some transformation, you mark the block as not
transformable until the next pass.

In practice, there are few second-order effects, so you end up
iterating twice.
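
To make the shape of that driver concrete, here is a minimal sketch in
C.  Everything in it is hypothetical: cfg_stub, basic_block_stub and
all of the helper functions are made-up names standing in for whatever
your pass actually uses, not the dataflow branch's real API.

#include <stdbool.h>
#include <stdlib.h>

/* Illustrative stand-ins, not real dataflow-branch structures.  */
struct basic_block_stub;
struct cfg_stub
{
  int n_blocks;
  struct basic_block_stub **blocks;
};

extern void recompute_dataflow (struct cfg_stub *);
extern int *compute_reverse_postorder (struct cfg_stub *);
extern bool info_is_valid (struct basic_block_stub *);
extern bool try_transform_block (struct cfg_stub *,
                                 struct basic_block_stub *);

void
run_bounded_passes (struct cfg_stub *cfg, int optimize_level)
{
  /* Limit the number of passes to a small multiple of the
     optimization level.  */
  int max_passes = 2 * optimize_level;
  bool changed = true;

  for (int pass = 0; changed && pass < max_passes; pass++)
    {
      changed = false;

      /* Do the analysis once per pass ...  */
      recompute_dataflow (cfg);

      /* ... then make one properly ordered sweep over the cfg, so
         that the wave of changes stays behind the blocks still to
         be visited.  */
      int *rpo = compute_reverse_postorder (cfg);
      for (int i = 0; i < cfg->n_blocks; i++)
        {
          struct basic_block_stub *bb = cfg->blocks[rpo[i]];

          /* A block whose information was corrupted earlier in this
             pass is skipped; it becomes a candidate again on the
             next pass.  */
          if (!info_is_valid (bb))
            continue;

          if (try_transform_block (cfg, bb))
            changed = true;
        }
      free (rpo);
    }
}

Note that a transformation that cannot keep the information valid
simply marks the block and bails out; the driver does not need to know
why.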

>>>> I think that what you want is something like the reaching uses problem
>>>> but you want a forwards version of this rather than a backwards version
>>>> as is defined in df-problems.c.
>>>>         
> ...
>   
>> the gen set of the reaching uses problem is the set of uses.  The kill
>> set is the defs and the clobbers.  This is why the problem is called
>> "reaching uses".
>>     
>
> In my problem, the gen set is the set of uses, and the kill set is
> the defs, clobbers and uses.  Hence, it's still a backwards problem.
>
>   
You could define it either way.  However, the backwards problem is
already there in both the mainline and the dataflow branch.  It is
called reaching uses or DF_RU.
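
For concreteness, the transfer step of a backwards problem like this
can be sketched over plain bitmasks.  regmask, ru_transfer and the
register numbering below are invented for illustration; the real code
uses the branch's bitmap types.

#include <stdio.h>

/* Toy transfer step for a backwards problem such as reaching uses:
   in(BB) = gen(BB) | (out(BB) & ~kill(BB)), where gen is the uses in
   BB, kill is the defs and clobbers (plus the uses themselves in
   Joern's variant), and out(BB) is the union of in(S) over the
   successors S of BB.  */

typedef unsigned long regmask;

static regmask
ru_transfer (regmask gen, regmask kill, regmask out)
{
  return gen | (out & ~kill);
}

int
main (void)
{
  regmask gen  = 0x05;  /* uses of regs 0 and 2 */
  regmask kill = 0x06;  /* defs/clobbers of regs 1 and 2 */
  regmask out  = 0x0e;  /* uses reaching the end of the block */

  /* Regs 0 and 2 (from gen) plus reg 3 (which survives the kill
     set) reach the start of the block, i.e. 0xd.  */
  printf ("in = %#lx\n", ru_transfer (gen, kill, out));
  return 0;
}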

> But it is not really useful to compute this just for one or two code
> transformations - benefits could only be obtained if this information
> can be kept usable between different transformations to reduce the number
> of global updates.
>
>   
You have to let it go.  This is why we defined it the way we did.  Just
compute it, use it, and throw it away when you are finished.  It is way
too hard to figure out how to keep this up to date for all of the passes.
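
In pass form, the intended lifecycle looks like this (again a sketch;
df_ru_compute, use_solution_for_this_pass and df_ru_free are invented
names, and cfg_stub is the same illustrative type as above):

struct ru_solution;
extern struct ru_solution *df_ru_compute (struct cfg_stub *);
extern void use_solution_for_this_pass (struct cfg_stub *,
                                        struct ru_solution *);
extern void df_ru_free (struct ru_solution *);

void
example_pass (struct cfg_stub *cfg)
{
  struct ru_solution *ru = df_ru_compute (cfg);  /* compute it */
  use_solution_for_this_pass (cfg, ru);          /* use it */
  df_ru_free (ru);                               /* throw it away */
}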

> I suppose I should instead continue to operate on regsets, but keep them
> usable without frequent global updates, if that's all right with Berndt.
>   

I think you should look closely at DF_RU.
