Vladimir N. Makarov wrote:

>> However, my understanding (as someone who's not an expert on the DF code
>> base) is that, as you say, the new stuff is much tidier.  I understood
>> the objective to be not so much that DF itself would directly improve
>> the generated code, but rather that it would provide much-needed
>> infrastructure that would allow future improvements.  This is a lot like
>> TREE-SSA which, by itself, wasn't so much about optimization as it was
>> about providing a platform for optimization.

> Imho, it is not the right analogy.  The right analogy would be a new RTL
> (low-level IR).  I know the insn scheduler and RA well, and I would not
> exaggerate the importance of DF that way, at least for these tasks.

I was not trying to suggest that DF is necessarily as sweeping a change
as TREE-SSA.  Certainly, it's not a complete change to the
representation.

The sense in which I meant to suggest similarity is that DF is
infrastructure.  It's not that DF is a new optimization; it's that it's
a facilitator of optimizations.  The point is to get more accurate
information to the various RTL passes, in a more consistent way, so that
they can do their jobs better.  And, to be able to adjust those passes
to use newer, better algorithms, which depend on easy access to dataflow
information.  And, perhaps, to permit the creation of new optimization
passes that are only practical because more accurate information is
available.
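
To make the "facilitator" point concrete, here is a deliberately toy
sketch -- plain C, not GCC's df API, with made-up names like
compute_liveness, live_in, and live_out -- of one shared liveness solver
whose results any number of client passes can simply read, instead of
each pass re-deriving the information on its own:

  /* Illustrative only; a self-contained toy, not GCC internals.
     Per-block sets are bit masks over pseudo registers 0..31.  */
  #include <stdio.h>

  #define NBLOCKS 3
  #define NO_SUCC -1

  static unsigned use[NBLOCKS] = { 1u << 0, 1u << 1, (1u << 0) | (1u << 1) };
  static unsigned def[NBLOCKS] = { 1u << 1, 1u << 2, 0 };
  static int succ[NBLOCKS][2]  = { { 1, 2 }, { 2, NO_SUCC },
                                   { NO_SUCC, NO_SUCC } };

  static unsigned live_in[NBLOCKS], live_out[NBLOCKS];

  /* Backward iterative dataflow, run once to a fixed point; afterwards
     any "pass" can consult live_in/live_out without redoing the work.  */
  static void
  compute_liveness (void)
  {
    int changed = 1;
    while (changed)
      {
        changed = 0;
        for (int b = NBLOCKS - 1; b >= 0; b--)
          {
            unsigned out = 0;
            for (int i = 0; i < 2; i++)
              if (succ[b][i] != NO_SUCC)
                out |= live_in[succ[b][i]];
            unsigned in = use[b] | (out & ~def[b]);
            if (out != live_out[b] || in != live_in[b])
              {
                live_out[b] = out;
                live_in[b] = in;
                changed = 1;
              }
          }
      }
  }

  int
  main (void)
  {
    compute_liveness ();
    /* A hypothetical client pass just reads the shared results.  */
    for (int b = 0; b < NBLOCKS; b++)
      printf ("block %d: live-in %#x, live-out %#x\n",
              b, live_in[b], live_out[b]);
    return 0;
  }

The value is in the second half: once the solver is shared and kept up to
date in one place, every client sees the same, consistent answers.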

My point is that I don't think that the right criterion for DF is whether
it makes the generated code faster.  So long as it doesn't make the
generated code slower, and so long as the APIs seem well designed, and
so long as it doesn't make the compiler itself too much slower, I think
it's a win.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713
