David Edelsohn wrote:
Vladimir Makarov writes:
Vlad> I am just trying to convince you that the proposed df infrastructure is
Vlad> not ready and might create serious problems for this release and future
Vlad> development, because it is slow. Danny is saying that the beauty of the
Vlad> infrastructure is that improvements happen in just one place. I partially
Vlad> agree with this. I am only afraid that a solution for a faster
Vlad> infrastructure (e.g. another, slimmer data representation) might change
Vlad> the interface considerably. I am not sure that I can convince you of
Vlad> this. But I am more worried about the 4.3 release, and I really believe
Vlad> that inclusion of the dataflow infrastructure should be the first step of
Vlad> Stage 1, to give people more time to solve at least some of the problems.
DF has been successfully tested on many more targets than
originally requested by the GCC SC. The original requirement for targets
was the same as for the Tree-SSA merge. Tree-SSA continued to be cleaned
up, fixed, and improved after it was merged. Tree-SSA performance
improved by the time of the release and was not required to be perfect on
day one.
DF will be good when merged and will continue to improve on
mainline in Stage 2. GCC has not previously had a requirement that a
patch be committed at the beginning of Stage 1.
We understand your concerns, but unsubstantiated assertions like
"might create serious problems..." are not very helpful or convincing
arguments. You are selectively quoting other developers and pulling their
comments out of context to support your objections.
There were too many such comments to quote them all. I hope you are not
saying there is no reason at all for my concerns.
Why, specifically, is the df infrastructure not ready? Have you
investigated the current status? Have you looked at the design documents,
implementation, and comments? Have you followed the mailing list
discussions and patches?
I did investigate the current status of the infrastructure on the future
mainstream processor Core2 (a more than 11% slower compiler, worse generated
code, and bigger code size). That is the reason why I started this
discussion. I have known this infrastructure well for a long time, and I
have always had concerns about its fat representation.
Why is it unacceptable for it to mature further on mainline like
Tree-SSA?
To avoid two problematic releases, one after another. And there has not
been a single real experiment of rewriting an RTL optimization to figure
out how the def-use chains will work in practice.
Why is it better to delay merging an announced, planned and
approved project that the developers believe is ready, which will impose
the complexity and work of maintaining a branch with invasive changes for
a full release cycle? It took a long time to fix all of the current users
of dataflow and recent mainline patches continue to introduce new bugs.
Why are the discussions about the current performance, known
performance problems, and specific plans for performance improvement
throughout the rest of the release cycle insufficient to address your
concerns?
Ok, at least I tried. I hope the 5% limit on compile-time degradation is
still in effect.