On Wed, 2007-12-12 at 15:06 -0500, Diego Novillo wrote:
> Over the last few weeks we (Google) have been discussing ideas on how to
> leverage the LTO work to implement a whole program optimizer that is
> both fast and scalable.
> 
> While we do not have everything thought out in detail, we think we have
> enough to start doing some implementation work.  I tried attaching the 
> document, but the mailing list rejected it.  I've uploaded it to
> http://airs.com/dnovillo/pub/whopr.pdf

A few questions:

Do you have any thoughts on how this approach would be able to use
profiling information, which is a very powerful source of
information for producing good optimisations?

Would there be much duplication of code between this and normal GCC
processing or would it be possible to share a common code base?

A few years back there were various suggestions about having files
containing intermediate representations, and this was criticised because
it could make it possible for people to subvert the GPL by connecting
to the optimisation phases via such an intermediate file. Arguably the
language front end is then a different program and not covered by the
GPL. It might be worth thinking about this aspect.

This also triggers the thought that if you have this intermediate
representation, and it is somewhat robust across GCC patchlevels, you do
not actually need the source code of proprietary libraries to optimize
into them. You only need the intermediate files, which may be easier to
obtain than source code.

Tim Josling
