On Sun, Jun 7, 2009 at 16:04, Alexandre Oliva <aol...@redhat.com> wrote:
> So the question is, what should I measure?  Memory use for any specific
> set of testcases, summarized over a bootstrap with memory use tracking
> enabled, something else?  Likewise for compile time?  What else?

Some quick measurements I'd be interested in:

- Size of the IL over some standard code bodies
  (http://gcc.gnu.org/wiki/PerformanceTesting).
- Memory consumption in cc1/cc1plus at -Ox -g over that set of apps.
- Compile time in cc1/cc1plus at -Ox -g.
- Performance differences over SPEC2006 and the other benchmarks we
  keep track of.

Do all these comparisons against mainline as of the last merge point.

The other set of measurements that would be interesting is probably
harder to specify.  I would like to have a set of criteria or
guidelines for what a pass writer should keep in mind to make sure
that their transformations do not butcher debug information.  From
what I understand, there are two situations that need handling:

- When doing analysis, passes should explicitly ignore certain
  artifacts that carry debugging info.
- When applying transformations, passes should generate/move/modify
  those artifacts.

Documentation should describe exactly what those artifacts are and how
they should be handled (a rough sketch of these two patterns is
appended below).

I'd like to have a metric of intrusiveness that can be tied to the
quality of the debugging information:

- What percentage of code in a pass is dedicated exclusively to
  handling debug info?
- What is the point of diminishing returns?  If I write 30% more code
  to keep track of debug info, will the debug info get 30% better?
- What does it mean for debug info to be 30% better?  How do we
  measure 'debug info goodness'?
- Does keeping debug info up to date introduce algorithmic changes to
  the pass?

Clearly, if one needs to dedicate a large portion of the pass just to
handle debug information, that is going to be a very hard sell.
Keeping perfect debug information at any cost is not sustainable in
the long term.


Diego.
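
To make the two handling patterns above concrete, here is a minimal,
hypothetical sketch in C.  The statement representation and helper
names are invented for illustration only and are not GCC's GIMPLE
interfaces; the point is just to show an analysis that skips debug-only
artifacts and a transformation that updates them.

    /* Hypothetical sketch: (1) analyses skip debug-only artifacts,
       (2) transformations update them.  All types and names here are
       invented for illustration.  */

    #include <stdbool.h>

    struct stmt
    {
      bool is_debug_bind;  /* Debug-only artifact: "user var V has value X".  */
      int lhs;             /* Variable defined; user variable for a debug bind.  */
      int rhs;             /* Value used; the X above for a debug bind.  */
      struct stmt *next;
    };

    /* Pattern 1: analysis.  Count the real uses of VALUE, explicitly
       ignoring debug binds so that -g does not change the result and
       therefore does not change code generation.  */
    static int
    count_real_uses (const struct stmt *seq, int value)
    {
      int n = 0;
      for (const struct stmt *s = seq; s; s = s->next)
        if (!s->is_debug_bind && s->rhs == value)
          n++;
      return n;
    }

    /* Pattern 2: transformation.  Delete the real statement defining
       VALUE and rewrite any debug binds that refer to it, so the debug
       info stays consistent (here by substituting REPLACEMENT).  */
    static void
    delete_def_and_update_debug (struct stmt **seq, int value, int replacement)
    {
      for (struct stmt **p = seq; *p; )
        {
          struct stmt *s = *p;
          if (!s->is_debug_bind && s->lhs == value)
            *p = s->next;              /* Remove the real definition.  */
          else
            {
              if (s->is_debug_bind && s->rhs == value)
                s->rhs = replacement;  /* Keep the bind pointing at a live value.  */
              p = &s->next;
            }
        }
    }

The intrusiveness question above is then roughly: how much of a pass
ends up looking like the second function rather than the first.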