Re: Fwd: LLVM collaboration?

2014-02-13 Thread Richard Biener
On Wed, Feb 12, 2014 at 5:22 PM, Jan Hubicka  wrote:
>> On Wed, 12 Feb 2014, Richard Biener wrote:
>>
>> > What about instead of our current odd way of identifying LTO objects
>> > simply add a special ELF note telling the linker the plugin to use?
>> >
>> > .note._linker_plugin '/./libltoplugin.so'
>> >
>> > that way the linker should try 1) loading that plugin, 2) register the
>> > specific object with that plugin.
>>
>> Unless this is only allowed for a whitelist of known-good plugins in
>> known-good directories, it's a clear security hole for the linker to
>> execute code in arbitrary files named by linker input.  The linker should
>> be safe to run on untrusted input files.
>
> Also I believe the files should be independent of the particular setup (that
> is, not contain a path) and probably of the host OS (that is, not have a .so
> extension), at least.
> We need some versioning scheme for different versions of compilers.
> Finally we need a solution for non-ELF LTO objects (like LLVM)
>
> But yes, having a compiler-independent way of declaring that a plugin is
> needed, and which plugin should be used, seems possible.

Yeah, naming the plugin (and searching for it only in an ld-specific trusted,
configurable path) would work as well, of course.

That also means that we should try to make the GCC side lto-plugin work for
older GCC versions as well (we pick the lto-wrapper to call from the environment
which would have to change if we'd try to support using multiple GCC versions
at the same time).

Richard.

> Honza
>>
>> --
>> Joseph S. Myers
>> jos...@codesourcery.com


Re: Fwd: LLVM collaboration?

2014-02-12 Thread Jan Hubicka
> On Wed, 12 Feb 2014, Richard Biener wrote:
> 
> > What about instead of our current odd way of identifying LTO objects
> > simply add a special ELF note telling the linker the plugin to use?
> > 
> > .note._linker_plugin '/./libltoplugin.so'
> > 
> > that way the linker should try 1) loading that plugin, 2) register the
> > specific object with that plugin.
> 
> Unless this is only allowed for a whitelist of known-good plugins in 
> known-good directories, it's a clear security hole for the linker to 
> execute code in arbitrary files named by linker input.  The linker should 
> be safe to run on untrusted input files.

Also I believe the files should be independent of the particular setup (that
is, not contain a path) and probably of the host OS (that is, not have a .so
extension), at least.
We need some versioning scheme for different versions of compilers.
Finally we need a solution for non-ELF LTO objects (like LLVM)

But yes, having a compiler-independent way of declaring that a plugin is
needed, and which plugin should be used, seems possible.

Honza
> 
> -- 
> Joseph S. Myers
> jos...@codesourcery.com


Re: Fwd: LLVM collaboration?

2014-02-12 Thread Joseph S. Myers
On Wed, 12 Feb 2014, Richard Biener wrote:

> What about instead of our current odd way of identifying LTO objects
> simply add a special ELF note telling the linker the plugin to use?
> 
> .note._linker_plugin '/./libltoplugin.so'
> 
> that way the linker should try 1) loading that plugin, 2) register the
> specific object with that plugin.

Unless this is only allowed for a whitelist of known-good plugins in 
known-good directories, it's a clear security hole for the linker to 
execute code in arbitrary files named by linker input.  The linker should 
be safe to run on untrusted input files.

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: Fwd: LLVM collaboration?

2014-02-12 Thread Rafael Espíndola
> What about instead of our current odd way of identifying LTO objects
> simply add a special ELF note telling the linker the plugin to use?
>
> .note._linker_plugin '/./libltoplugin.so'
>
> that way the linker should try 1) loading that plugin, 2) register the
> specific object with that plugin.
>
> If a full path is undesired (depends on install setup) then specifying
> the plugin SONAME might also work (we'd of course need to bump
> our plugin's SONAME for each release to allow parallel install
> of multiple versions or make the plugin contain all the
> dispatch-to-different-GCC-version-lto-wrapper code).

Might be an interesting addition to what we have, but keep in mind
that LLVM uses thin non-ELF files. It is also able to load IR from
previous versions, so for LLVM at least, using the newest plugin is
probably the best default.

> Richard.

Cheers,
Rafael


Re: Fwd: LLVM collaboration?

2014-02-12 Thread Richard Biener
On Tue, Feb 11, 2014 at 10:20 PM, Jan Hubicka  wrote:
>> >> Since both toolchains do the magic, binutils has no incentive to
>> >> create any automatic detection of objects.
>>
>> It is mostly a historical decision. At the time the design was for the
>> plugin to be matched to the compiler, and so the compiler could pass
>> that information down to the linker.
>>
>> > The trouble however is that one needs to pass explicit --plugin argument
>> > specifying the particular plugin to load and so GCC ships with its own 
>> > wrappers
>> > (gcc-nm/gcc-ld/gcc-ar and the gcc driver itself) while LLVM does similar 
>> > thing.
>>
>> These wrappers should not be necessary. While the linker currently
>> requires a command line option, bfd has support for searching for a
>> plugin. It will search /lib/bfd-plugin. See for example the
>> instructions at http://llvm.org/docs/GoldPlugin.html.
>
> My reading of bfd/plugin.c is that it basically walks the directory and looks
> for the first plugin that returns OK from onload (which is always the case for
> GCC/LLVM plugins).  So if I install both the GCC and LLVM plugins there, it
> will depend on which one ends up first, and only that plugin will be used.
>
> We need multiple plugin support as suggested by the directory name ;)
>
> Also it seems that currently the plugin is not used by ar/nm/ranlib if the
> file is ELF (as mentioned by Markus), and GNU ld also seems to choke on LLVM
> object files even if it has the plugin.
>
> This probably needs to be sanitized.
>
>>
>> This was done because ar and nm are not normally bound to any
>> compiler. Had we realized this issue earlier we would probably have
>> supported searching for plugins in the linker too.
>>
>> So it seems that what you want could be done by
>>
>> * having bfd-ld and gold search bfd-plugins (maybe rename the directory?)
>> * support loading multiple plugins, and asking each to see if it
>> supports a given file. That ways we could LTO when having a part GCC
>> and part LLVM build.
>
> Yes, that is what I have in mind.
>
> Plus perhaps an additional configuration file to avoid loading everything.
> Say a user installs 3 versions of LLVM, open64 and ICC. If all of them load
> as shared libraries, like LLVM does, it will probably slow down the tools
> measurably.

What about instead of our current odd way of identifying LTO objects
simply add a special ELF note telling the linker the plugin to use?

.note._linker_plugin '/./libltoplugin.so'

that way the linker should try 1) loading that plugin, 2) register the
specific object with that plugin.
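As a rough illustration of what such a note could carry: ELF notes all share the
standard header layout (namesz, descsz, type, then name and descriptor padded to
4-byte alignment). The sketch below packs and parses one such record in that
generic format. The note name "LINKER_PLUGIN", the type value, and the SONAME
payload are all made up for illustration; nothing like this note is defined
today.

```python
import struct

def build_elf_note(name: bytes, desc: bytes, note_type: int) -> bytes:
    """Pack one ELF note record: a 12-byte header, then the name and
    descriptor, each NUL-terminated and padded to 4-byte alignment."""
    name += b"\0"
    desc += b"\0"
    def pad4(b: bytes) -> bytes:
        return b + b"\0" * (-len(b) % 4)
    header = struct.pack("<III", len(name), len(desc), note_type)
    return header + pad4(name) + pad4(desc)

def parse_elf_note(blob: bytes):
    """Inverse of build_elf_note; returns (name, desc, type)."""
    namesz, descsz, ntype = struct.unpack_from("<III", blob, 0)
    off = 12
    name = blob[off:off + namesz - 1]      # drop the trailing NUL
    off += (namesz + 3) & ~3               # skip padding to 4-byte boundary
    desc = blob[off:off + descsz - 1]
    return name, desc, ntype

# Hypothetical note carrying a plugin SONAME rather than a full path.
note = build_elf_note(b"LINKER_PLUGIN", b"liblto_plugin.so.0", 1)
print(parse_elf_note(note))
```

Carrying an SONAME rather than a path, as discussed below, would keep the note
independent of any particular install layout.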

If a full path is undesired (depends on install setup) then specifying
the plugin SONAME might also work (we'd of course need to bump
our plugin's SONAME for each release to allow parallel install
of multiple versions or make the plugin contain all the
dispatch-to-different-GCC-version-lto-wrapper code).

Richard.

>> * maybe be smart about version and load new ones first? (libLLVM-3.4
>> before libLLVM-3.3 for example). Probably the first one should always
>> be the one given in the command line.
>
> Yes, I think we may want to prioritize the list, so a user can favor
> his own version of GCC over the system one, for example.
>>
>> For OS X the situation is a bit different. There instead of a plugin
>> the linker loads a library: libLTO.dylib. When doing LTO with a newer
>> llvm, one needs to set DYLD_LIBRARY_PATH. I think I proposed setting
>> that from clang some time ago, but I don't remember the outcome.
>>
>> In theory GCC could implement a libLTO.dylib and set
>> DYLD_LIBRARY_PATH. The gold/bfd plugin that LLVM uses is basically an
>> API mapping the other way, so the job would be inverting it. The LTO
>> model ld64 is a bit more strict about knowing all symbol definitions
>> and uses (including inline asm), so there would be work to be done to
>> cover that, but the simple cases shouldn't be too hard.
>
> I would not care that much about symbols in asm definitions to start with.
> Even if we will force users to non-LTO those object files, it would be an
> improvement over what we have now.
>
> One problem is that we need a volunteer to implement the reverse glue
> (libLTO->plugin API), since I do not have an OS X box (well, have an old G5,
> but even that is quite far from me right now)
>
> Why are complete symbol tables required? Can't ld64 be changed to ignore
> unresolved symbols in the first stage just like gold/gnu-ld does?
>
> Honza
>>
>> Cheers,
>> Rafael


Re: Fwd: LLVM collaboration?

2014-02-11 Thread Rafael Espíndola
> My reading of bfd/plugin.c is that it basically walks the directory and looks
> for the first plugin that returns OK from onload (which is always the case for
> GCC/LLVM plugins).  So if I install both the GCC and LLVM plugins there, it
> will depend on which one ends up first, and only that plugin will be used.
>
> We need multiple plugin support as suggested by the directory name ;)
>
> Also it seems that currently the plugin is not used by ar/nm/ranlib if the
> file is ELF (as mentioned by Markus), and GNU ld also seems to choke on LLVM
> object files even if it has the plugin.
>
> This probably needs to be sanitized.

CCing Hal Finkel. He got this to work some time ago. Not sure if he
ever ported the patches to bfd trunk.

>> For OS X the situation is a bit different. There instead of a plugin
>> the linker loads a library: libLTO.dylib. When doing LTO with a newer
>> llvm, one needs to set DYLD_LIBRARY_PATH. I think I proposed setting
>> that from clang some time ago, but I don't remember the outcome.
>>
>> In theory GCC could implement a libLTO.dylib and set
>> DYLD_LIBRARY_PATH. The gold/bfd plugin that LLVM uses is basically an
>> API mapping the other way, so the job would be inverting it. The LTO
>> model ld64 is a bit more strict about knowing all symbol definitions
>> and uses (including inline asm), so there would be work to be done to
>> cover that, but the simple cases shouldn't be too hard.
>
> I would not care that much about symbols in asm definitions to start with.
> Even if we will force users to non-LTO those object files, it would be an
> improvement over what we have now.
>
> One problem is that we need a volunteer to implement the reverse glue
> (libLTO->plugin API), since I do not have an OS X box (well, have an old G5,
> but even that is quite far from me right now)
>
> Why are complete symbol tables required? Can't ld64 be changed to ignore
> unresolved symbols in the first stage just like gold/gnu-ld does?

I am not sure about this. My *guess* is that it does dead stripping
computation before asking libLTO for the object file. I noticed the
issue while trying to LTO firefox some time ago.

Cheers,
Rafael


Re: Fwd: LLVM collaboration?

2014-02-11 Thread Jan Hubicka
> >> Since both toolchains do the magic, binutils has no incentive to
> >> create any automatic detection of objects.
> 
> It is mostly a historical decision. At the time the design was for the
> plugin to be matched to the compiler, and so the compiler could pass
> that information down to the linker.
> 
> > The trouble however is that one needs to pass explicit --plugin argument
> > specifying the particular plugin to load and so GCC ships with its own 
> > wrappers
> > (gcc-nm/gcc-ld/gcc-ar and the gcc driver itself) while LLVM does similar 
> > thing.
> 
> These wrappers should not be necessary. While the linker currently
> requires a command line option, bfd has support for searching for a
> plugin. It will search /lib/bfd-plugin. See for example the
> instructions at http://llvm.org/docs/GoldPlugin.html.

My reading of bfd/plugin.c is that it basically walks the directory and looks
for the first plugin that returns OK from onload (which is always the case for
GCC/LLVM plugins).  So if I install both the GCC and LLVM plugins there, it
will depend on which one ends up first, and only that plugin will be used.

We need multiple plugin support as suggested by the directory name ;)

Also it seems that currently the plugin is not used by ar/nm/ranlib if the
file is ELF (as mentioned by Markus), and GNU ld also seems to choke on LLVM
object files even if it has the plugin.

This probably needs to be sanitized.

> 
> This was done because ar and nm are not normally bound to any
> compiler. Had we realized this issue earlier we would probably have
> supported searching for plugins in the linker too.
> 
> So it seems that what you want could be done by
> 
> * having bfd-ld and gold search bfd-plugins (maybe rename the directory?)
> * support loading multiple plugins, and asking each to see if it
> supports a given file. That ways we could LTO when having a part GCC
> and part LLVM build.

Yes, that is what I have in mind.

Plus perhaps an additional configuration file to avoid loading everything.
Say a user installs 3 versions of LLVM, open64 and ICC. If all of them load
as shared libraries, like LLVM does, it will probably slow down the tools
measurably.
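No such configuration file exists today; a priority-ordered registry along
these lines is one possible shape. Everything below, including the path, the
field names, and the "claims" hints, is made up for illustration:

```
# Hypothetical ${sysconfdir}/ld/lto-plugins.conf -- illustrative only.
# Lower priority is tried first; each plugin would only be dlopen'ed
# when an input matching its "claims" hint is actually seen.
priority=10  plugin=liblto_plugin.so.0  claims=elf-section:.gnu.lto_
priority=20  plugin=LLVMgold.so         claims=magic:BC C0 DE
```

A lazy-loading hint like this would address the concern above about every
installed compiler's plugin slowing down the tools.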

> * maybe be smart about version and load new ones first? (libLLVM-3.4
> before libLLVM-3.3 for example). Probably the first one should always
> be the one given in the command line.

Yes, I think we may want to prioritize the list, so a user can favor
his own version of GCC over the system one, for example.
> 
> For OS X the situation is a bit different. There instead of a plugin
> the linker loads a library: libLTO.dylib. When doing LTO with a newer
> llvm, one needs to set DYLD_LIBRARY_PATH. I think I proposed setting
> that from clang some time ago, but I don't remember the outcome.
> 
> In theory GCC could implement a libLTO.dylib and set
> DYLD_LIBRARY_PATH. The gold/bfd plugin that LLVM uses is basically an
> API mapping the other way, so the job would be inverting it. The LTO
> model ld64 is a bit more strict about knowing all symbol definitions
> and uses (including inline asm), so there would be work to be done to
> cover that, but the simple cases shouldn't be too hard.

I would not care that much about symbols in asm definitions to start with.
Even if we will force users to non-LTO those object files, it would be an
improvement over what we have now.

One problem is that we need a volunteer to implement the reverse glue
(libLTO->plugin API), since I do not have an OS X box (well, have an old G5,
but even that is quite far from me right now)

Why are complete symbol tables required? Can't ld64 be changed to ignore
unresolved symbols in the first stage just like gold/gnu-ld does?

Honza
> 
> Cheers,
> Rafael


Re: Fwd: LLVM collaboration?

2014-02-11 Thread Jan Hubicka
> On 2014.02.11 at 13:02 -0500, Rafael Espíndola wrote:
> > On 11 February 2014 12:28, Renato Golin  wrote:
> > > Now copying Rafael, who can give us some more insight on the LLVM LTO
> > > side.
> > 
> > Thanks.
> > 
> > > On 11 February 2014 09:55, Renato Golin  wrote:
> > >> Hi Jan,
> > >>
> > >> I think this is a very good example where we could all collaborate
> > >> (including binutils).
> > 
> > It is. Both LTO models (LLVM and GCC) were considered from the start
> > of the API design and I think we got a better plugin model as a
> > result.
> > 
> > >> If I got it right, LTO today:
> > >>
> > >> - needs the drivers to explicitly declare the plugin
> > >> - needs the library available somewhere
> > 
> > True.
> > 
> > >> - may have to change the library loading semantics (via LD_PRELOAD)
> > 
> > That depends on the library being loaded. RPATH works just fine too.
> > 
> > >> Since both toolchains do the magic, binutils has no incentive to
> > >> create any automatic detection of objects.
> > 
> > It is mostly a historical decision. At the time the design was for the
> > plugin to be matched to the compiler, and so the compiler could pass
> > that information down to the linker.
> > 
> > > The trouble however is that one needs to pass explicit --plugin argument
> > > specifying the particular plugin to load and so GCC ships with its own 
> > > wrappers
> > > (gcc-nm/gcc-ld/gcc-ar and the gcc driver itself) while LLVM does similar 
> > > thing.
> > 
> > These wrappers should not be necessary. While the linker currently
> > requires a command line option, bfd has support for searching for a
> > plugin. It will search /lib/bfd-plugin. See for example the
> > instructions at http://llvm.org/docs/GoldPlugin.html.
> 
> Please note that this automatic loading of the plugin only happens for
> non-ELF files. So the LLVM GoldPlugin gets loaded fine, but automatic
> loading of gcc's liblto_plugin.so doesn't work at the moment.

Hmm, something that ought to be fixed.  Binutils can probably use the GCC
LTO symbols as a distinguisher.  Is there a PR about this?
> 
> A basic implementation to support both plugins seamlessly should be
> pretty straightforward, because LLVM's bitstream file format (non-ELF)
> is easily distinguishable from gcc's output (standard ELF with special
> sections).

I think it is easy even with two plugins for the same file format - all ld
needs to do is load the plugins and then do the file claiming for each of them.
The GCC plugin then should not claim files from LLVM or from an incompatible
GCC version, and vice versa.
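The claiming loop described here can be modeled roughly as follows. This is a
toy model, not binutils code: real linker plugins expose a claim-file callback
through the plugin API, but the class, the predicates, and the substring check
for GCC's `.gnu.lto_` sections are all crude illustrative stand-ins.

```python
class TinyPlugin:
    """Toy stand-in for a loaded linker plugin with a claim-file hook."""
    def __init__(self, name, wants):
        self.name = name
        self.wants = wants            # predicate over the file's bytes

    def claim_file(self, data):
        return self.wants(data)

def link_inputs(plugins, inputs):
    """Offer every input to every plugin in turn; first claim wins.
    Returns {filename: claiming-plugin-name-or-None}."""
    result = {}
    for fname, data in inputs.items():
        claimed = None
        for p in plugins:
            if p.claim_file(data):
                claimed = p.name
                break
        result[fname] = claimed       # None => ordinary object (or a clear error)
    return result

# Illustrative claim predicates: LLVM bitcode by magic bytes, GCC LTO by
# a naive scan for its .gnu.lto_ section names in the raw ELF bytes.
gcc = TinyPlugin("gcc-lto", lambda d: d.startswith(b"\x7fELF") and b".gnu.lto_" in d)
llvm = TinyPlugin("llvm-lto", lambda d: d.startswith(b"BC\xc0\xde"))

inputs = {
    "a.o": b"BC\xc0\xde" + b"bitcode...",
    "b.o": b"\x7fELF" + b"....gnu.lto_foo",
    "c.o": b"\x7fELF" + b"plain object",
}
print(link_inputs([gcc, llvm], inputs))
```

With per-plugin claiming like this, an unclaimed LTO file is also an easy place
to emit the friendlier diagnostic discussed elsewhere in the thread.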

Honza
> 
> -- 
> Markus


Re: Fwd: LLVM collaboration?

2014-02-11 Thread Markus Trippelsdorf
On 2014.02.11 at 13:02 -0500, Rafael Espíndola wrote:
> On 11 February 2014 12:28, Renato Golin  wrote:
> > Now copying Rafael, who can give us some more insight on the LLVM LTO
> > side.
> 
> Thanks.
> 
> > On 11 February 2014 09:55, Renato Golin  wrote:
> >> Hi Jan,
> >>
> >> I think this is a very good example where we could all collaborate
> >> (including binutils).
> 
> It is. Both LTO models (LLVM and GCC) were considered from the start
> of the API design and I think we got a better plugin model as a
> result.
> 
> >> If I got it right, LTO today:
> >>
> >> - needs the drivers to explicitly declare the plugin
> >> - needs the library available somewhere
> 
> True.
> 
> >> - may have to change the library loading semantics (via LD_PRELOAD)
> 
> That depends on the library being loaded. RPATH works just fine too.
> 
> >> Since both toolchains do the magic, binutils has no incentive to
> >> create any automatic detection of objects.
> 
> It is mostly a historical decision. At the time the design was for the
> plugin to be matched to the compiler, and so the compiler could pass
> that information down to the linker.
> 
> > The trouble however is that one needs to pass explicit --plugin argument
> > specifying the particular plugin to load and so GCC ships with its own 
> > wrappers
> > (gcc-nm/gcc-ld/gcc-ar and the gcc driver itself) while LLVM does similar 
> > thing.
> 
> These wrappers should not be necessary. While the linker currently
> requires a command line option, bfd has support for searching for a
> plugin. It will search /lib/bfd-plugin. See for example the
> instructions at http://llvm.org/docs/GoldPlugin.html.

Please note that this automatic loading of the plugin only happens for
non-ELF files. So the LLVM GoldPlugin gets loaded fine, but automatic
loading of gcc's liblto_plugin.so doesn't work at the moment.

A basic implementation to support both plugins seamlessly should be
pretty straightforward, because LLVM's bitstream file format (non-ELF)
is easily distinguishable from gcc's output (standard ELF with special
sections).
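The distinction Markus describes rests on magic bytes: LLVM bitcode files begin
with the 'BC' 0xC0 0xDE magic, while GCC LTO objects are ordinary ELF files
(0x7F 'E' 'L' 'F') carrying extra sections. A minimal classifier sketch, where
the substring check for `.gnu.lto_` is a crude stand-in for actually walking
the ELF section table:

```python
LLVM_BC_MAGIC = b"BC\xc0\xde"   # LLVM bitcode magic number
ELF_MAGIC = b"\x7fELF"          # standard ELF identification bytes

def classify(data: bytes) -> str:
    """Classify a linker input by its leading bytes."""
    if data.startswith(LLVM_BC_MAGIC):
        return "llvm-bitcode"
    if data.startswith(ELF_MAGIC):
        # Naive: a real tool would parse section headers, not grep bytes.
        return "gcc-lto" if b".gnu.lto_" in data else "elf"
    return "unknown"

print(classify(LLVM_BC_MAGIC + b"\x00"))
print(classify(ELF_MAGIC + b"..gnu.lto_x"))
```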

-- 
Markus


Re: Fwd: LLVM collaboration?

2014-02-11 Thread Rafael Espíndola
On 11 February 2014 12:28, Renato Golin  wrote:
> Now copying Rafael, who can give us some more insight on the LLVM LTO side.

Thanks.

> On 11 February 2014 09:55, Renato Golin  wrote:
>> Hi Jan,
>>
>> I think this is a very good example where we could all collaborate
>> (including binutils).

It is. Both LTO models (LLVM and GCC) were considered from the start
of the API design and I think we got a better plugin model as a
result.

>> If I got it right, LTO today:
>>
>> - needs the drivers to explicitly declare the plugin
>> - needs the library available somewhere

True.

>> - may have to change the library loading semantics (via LD_PRELOAD)

That depends on the library being loaded. RPATH works just fine too.

>> Since both toolchains do the magic, binutils has no incentive to
>> create any automatic detection of objects.

It is mostly a historical decision. At the time the design was for the
plugin to be matched to the compiler, and so the compiler could pass
that information down to the linker.

> The trouble however is that one needs to pass explicit --plugin argument
> specifying the particular plugin to load and so GCC ships with its own 
> wrappers
> (gcc-nm/gcc-ld/gcc-ar and the gcc driver itself) while LLVM does similar 
> thing.

These wrappers should not be necessary. While the linker currently
requires a command line option, bfd has support for searching for a
plugin. It will search /lib/bfd-plugin. See for example the
instructions at http://llvm.org/docs/GoldPlugin.html.

This was done because ar and nm are not normally bound to any
compiler. Had we realized this issue earlier we would probably have
supported searching for plugins in the linker too.

So it seems that what you want could be done by

* having bfd-ld and gold search bfd-plugins (maybe rename the directory?)
* support loading multiple plugins, and asking each to see if it
supports a given file. That ways we could LTO when having a part GCC
and part LLVM build.
* maybe be smart about version and load new ones first? (libLLVM-3.4
before libLLVM-3.3 for example). Probably the first one should always
be the one given in the command line.
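The version-ordering idea in the last bullet can be sketched as below. The
version parsing and plugin names are illustrative only; a real implementation
would presumably key off SONAMEs found in the bfd-plugins directory.

```python
import re

def version_key(soname: str):
    """Extract a sortable version tuple from names like 'libLLVM-3.4';
    names without any digits sort last."""
    nums = re.findall(r"\d+", soname)
    return tuple(int(n) for n in nums) if nums else (-1,)

def order_plugins(available, from_command_line=None):
    """Newest version first, but a plugin named on the command line
    always comes before any auto-discovered one."""
    ranked = sorted(available, key=version_key, reverse=True)
    if from_command_line:
        ranked = [from_command_line] + [p for p in ranked if p != from_command_line]
    return ranked

found = ["libLLVM-3.3", "libLLVM-3.4"]
print(order_plugins(found))                   # newest first
print(order_plugins(found, "libLLVM-3.3"))    # command-line plugin first
```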

For OS X the situation is a bit different. There instead of a plugin
the linker loads a library: libLTO.dylib. When doing LTO with a newer
llvm, one needs to set DYLD_LIBRARY_PATH. I think I proposed setting
that from clang some time ago, but I don't remember the outcome.

In theory GCC could implement a libLTO.dylib and set
DYLD_LIBRARY_PATH. The gold/bfd plugin that LLVM uses is basically an
API mapping the other way, so the job would be inverting it. The LTO
model ld64 is a bit more strict about knowing all symbol definitions
and uses (including inline asm), so there would be work to be done to
cover that, but the simple cases shouldn't be too hard.

Cheers,
Rafael


Re: Fwd: LLVM collaboration?

2014-02-11 Thread Renato Golin
Now copying Rafael, who can give us some more insight on the LLVM LTO side.

cheers,
--renato

On 11 February 2014 09:55, Renato Golin  wrote:
> Hi Jan,
>
> I think this is a very good example where we could all collaborate
> (including binutils).
>
> I'll leave your reply intact, so that Chandler (CC'd) can get a bit
> more context. I'm copying him because he (and I believe Diego) had
> more contact with LTO than I had.
>
> If I got it right, LTO today:
>
> - needs the drivers to explicitly declare the plugin
> - needs the library available somewhere
> - may have to change the library loading semantics (via LD_PRELOAD)
>
> Since both toolchains do the magic, binutils has no incentive to
> create any automatic detection of objects.
>
> The part that I didn't get is when you say about backward
> compatibility. Would LTO work on a newer binutils with the liblto but
> on an older compiler that knew nothing about LTO?
>
> Your proposal is, then, to get binutils:
>
> - recognizing LTO logic in the objects
> - automatically loading liblto if recognized
> - warning if not
>
> I'm assuming the extra symbols would be discarded if no library is
> found, together with the warning, right? Maybe an error if -Wall or
> whatever.
>
> Can we get someone from the binutils community to opine on that?
>
> cheers,
> --renato
>
> On 11 February 2014 02:29, Jan Hubicka  wrote:
>> One practical experience I have with LLVM developers is sharing experiences
>> about getting Firefox to work with LTO with Rafael Espindola, and I think it
>> was useful for both of us. I am definitely open to more discussion.
>>
>> Let's try a specific topic that has been on my TODO list for some time.
>>
>> I would like to make it possible for multiple compilers to be used to LTO a
>> single binary. As we are all making LTO more useful, I think it is a matter
>> of time until people start shipping LTO object files by default and users
>> end up feeding them into different compilers or incompatible versions of
>> the same compiler. We probably want to make this work, even though
>> cross-module optimization will not happen in this case.
>>
>> The plugin interface in binutils seems to do its job well both for GCC and 
>> LLVM
>> and I hope that open64 and ICC will eventually join, too.
>>
>> The trouble however is that one needs to pass explicit --plugin argument
>> specifying the particular plugin to load and so GCC ships with its own 
>> wrappers
>> (gcc-nm/gcc-ld/gcc-ar and the gcc driver itself) while LLVM does similar 
>> thing.
>>
>> It may be smoother if binutils was able to load multiple plugins at once and
>> grab plugins from system and user installed compilers without explicit 
>> --plugin
>> argument.
>>
>> Binutils probably should also have a way to detect LTO object files and
>> produce a more useful diagnostic than they do now, when there is no plugin
>> claiming them.
>>
>> There are some PRs filed on the topic
>> http://cygwin.com/frysk/bugzilla/show_bug.cgi?id=15300
>> http://cygwin.com/frysk/bugzilla/show_bug.cgi?id=13227
>> but not much progress on them.
>>
>> I wonder if we can get this designed and implemented.
>>
>> On the other hand, GCC currently maintains a non-plugin path for LTO that is
>> now only used by the darwin port due to the lack of a plugin-enabled LD
>> there.  It seems that the liblto used by darwin is loosely compatible with
>> the plugin API, but it makes it harder to have different compilers share it
>> (one has to LD_PRELOAD a different liblto prior to executing the linker?)
>>
>> I wonder, is there a chance to implement linker-plugin-API-to-libLTO glue,
>> or to add plugin support to the native Darwin tools?
>>
>> Honza


Re: Fwd: LLVM collaboration?

2014-02-11 Thread Renato Golin
On 11 February 2014 16:00, Jan Hubicka  wrote:
> I basically think that binutils should have a way for an installed compiler
> to register a plugin, and load all plugins by default (or, perhaps for
> performance, upon detecting a compatible LTO object file in some way, perhaps
> also by information given in the config file), and let them claim the LTO
> objects they understand.

Right, so this would not necessarily be related to LTO, but to the
binutils plugin system. In my very limited experience with LTO and
binutils, I can't see how that would be different from just adding a
--plugin option on the compiler, unless it's something that the linker
would detect automatically without the interference of any compiler.


> With the backward compatibility I mean that if we release a new version of a
> compiler that can no longer read the LTO objects of an older compiler, one
> can just install both versions and have their plugins claim only the LTO
> objects they understand. Just as if they were two different compilers.

Yes, this makes total sense.


> Finally, I think we can make binutils recognize GCC/LLVM LTO objects
> as a special case and produce a friendly message when users try to handle
> them without a plugin, as opposed to today's strange errors about file
> formats or missing symbols.

Yes, that as well seems pretty obvious, and mostly orthogonal to the
other two proposals.

cheers,
--renato

PS: Removing Chandler, as he was not the right person to look at this.
I'll ask others on the LLVM list to chime in on this thread.


Re: Fwd: LLVM collaboration?

2014-02-11 Thread Uday Khedker

On Tuesday 11 February 2014 09:30 PM, Jan Hubicka wrote:

On Tuesday 11 February 2014 03:25 PM, Renato Golin wrote:

Hi Jan,

I think this is a very good example where we could all collaborate
(including binutils).

I'll leave your reply intact, so that Chandler (CC'd) can get a bit
more context. I'm copying him because he (and I believe Diego) had
more contact with LTO than I had.

If I got it right, LTO today:

- needs the drivers to explicitly declare the plugin


Yes.

- needs the library available somewhere
- may have to change the library loading semantics (via LD_PRELOAD)


Not in the binutils implementation (I believe it is the case for darwin's
libLTO).  With binutils you only need to pass an explicit --plugin argument
to all the tools that care (ld/ar/nm/ranlib).


There is another need that I have felt in LTO for quite some time.
Currently, it has a non-partitioned mode or a partitioned mode but
this decision is taken before the compilation begins. It would be
nice to have a mode that allows dynamic loading of function bodies
so that a flow and context sensitive IPA could load functions bodies
on demand, and unload them when they are not needed.


I implemented on-demand loading of function bodies in GCC 4.8, if I recall
correctly. Currently I think only Martin Liska's code unification pass uses
it, to verify that two function bodies it thinks are equivalent actually are.
Hopefully it will be merged into 4.10.


Great. We will experiment with it.

Uday.



Uday.



Since both toolchains do the magic, binutils has no incentive to
create any automatic detection of objects.

The part that I didn't get is when you say about backward
compatibility. Would LTO work on a newer binutils with the liblto but
on an older compiler that knew nothing about LTO?

Your proposal is, then, to get binutils:

- recognizing LTO logic in the objects
- automatically loading liblto if recognized
- warning if not


I basically think that binutils should have a way for an installed compiler
to register a plugin, and load all plugins by default (or, perhaps for
performance, upon detecting a compatible LTO object file in some way, perhaps
also by information given in the config file), and let them claim the LTO
objects they understand.

With the backward compatibility I mean that if we release a new version of a
compiler that can no longer read the LTO objects of an older compiler, one
can just install both versions and have their plugins claim only the LTO
objects they understand. Just as if they were two different compilers.

Finally, I think we can make binutils recognize GCC/LLVM LTO objects
as a special case and produce a friendly message when users try to handle
them without a plugin, as opposed to today's strange errors about file
formats or missing symbols.

Honza


I'm assuming the extra symbols would be discarded if no library is
found, together with the warning, right? Maybe an error if -Wall or
whatever.

Can we get someone from the binutils community to opine on that?

cheers,
--renato

On 11 February 2014 02:29, Jan Hubicka  wrote:

One practical experience I have with LLVM developers is sharing experiences
about getting Firefox to work with LTO with Rafael Espindola, and I think it
was useful for both of us. I am definitely open to more discussion.

Let's try a specific topic that has been on my TODO list for some time.

I would like to make it possible for multiple compilers to be used to LTO a
single binary. As we are all making LTO more useful, I think it is a matter of
time until people start shipping LTO object files by default and users end up
feeding them into different compilers or incompatible versions of the same
compiler. We probably want to make this work, even though cross-module
optimization will not happen in this case.

The plugin interface in binutils seems to do its job well both for GCC and LLVM
and I hope that open64 and ICC will eventually join, too.

The trouble however is that one needs to pass an explicit --plugin argument
specifying the particular plugin to load, and so GCC ships with its own wrappers
(gcc-nm/gcc-ld/gcc-ar and the gcc driver itself) while LLVM does a similar thing.

It may be smoother if binutils were able to load multiple plugins at once and
grab plugins from system- and user-installed compilers without an explicit
--plugin argument.

Binutils probably should also have a way to detect LTO object files and produce
more useful diagnostics than they do now when there is no plugin claiming them.

There are some PRs filed on the topic
http://cygwin.com/frysk/bugzilla/show_bug.cgi?id=15300
http://cygwin.com/frysk/bugzilla/show_bug.cgi?id=13227
but not much progress on them.

I wonder if we can get this designed and implemented.

On the other hand, GCC currently maintains a non-plugin path for LTO that is now
only used by the darwin port due to the lack of a plugin-enabled LD there.  It seems
that liblto used by darwin is loosely compatible with the plugin API, but it makes
it harder to have different compilers share it (one has to LD_PRELOAD liblto
into a different one prior to executing the linker?)

Re: Fwd: LLVM collaboration?

2014-02-11 Thread Jan Hubicka
> 
> 
> 
> 
> On Tuesday 11 February 2014 03:25 PM, Renato Golin wrote:
> >Hi Jan,
> >
> >I think this is a very good example where we could all collaborate
> >(including binutils).
> >
> >I'll leave your reply intact, so that Chandler (CC'd) can get a bit
> >more context. I'm copying him because he (and I believe Diego) had
> >more contact with LTO than I had.
> >
> >If I got it right, LTO today:
> >
> >- needs the drivers to explicitly declare the plugin

Yes.
> >- needs the library available somewhere
> >- may have to change the library loading semantics (via LD_PRELOAD)

Not in the binutils implementation (I believe it is the case for darwin's libLTO).
With binutils you only need to pass an explicit --plugin argument into all the
tools that care (ld/ar/nm/ranlib)
> 
> There is another need that I have felt in LTO for quite some time.
> Currently, it has a non-partitioned mode or a partitioned mode but
> this decision is taken before the compilation begins. It would be
> nice to have a mode that allows dynamic loading of function bodies
> so that a flow- and context-sensitive IPA could load function bodies
> on demand, and unload them when they are not needed.

I implemented on-demand loading of function bodies into GCC 4.8, if I recall
correctly. Currently I think only Martin Liska's code unification pass uses it,
to verify that two function bodies it thinks are equivalent are actually
equivalent. Hopefully it will be merged into 4.10.
> 
> Uday.
> 
> >
> >Since both toolchains do the magic, binutils has no incentive to
> >create any automatic detection of objects.
> >
> >The part that I didn't get is when you say about backward
> >compatibility. Would LTO work on a newer binutils with the liblto but
> >on an older compiler that knew nothing about LTO?
> >
> >Your proposal is, then, to get binutils:
> >
> >- recognizing LTO logic in the objects
> >- automatically loading liblto if recognized
> >- warning if not

I basically think that binutils should have a way for an installed compiler to
register a plugin and load all plugins by default (or, for performance, perhaps
only upon detecting a compatible LTO object file in some way, perhaps also by
information given in the config file) and let them claim the LTO objects they
understand.

With the backward compatibility I mean that if we release a new version of the
compiler that can no longer read the LTO objects of an older compiler, one can
just install both versions and have their plugins claim only the LTO objects
they understand. Just as if they were two different compilers.

Finally, I think we can make binutils recognize GCC/LLVM LTO objects
as a special case and produce a friendly message when users try to handle
them without a plugin, as opposed to today's strange errors about file formats
or missing symbols.

Honza
> >
> >I'm assuming the extra symbols would be discarded if no library is
> >found, together with the warning, right? Maybe an error if -Wall or
> >whatever.
> >
> >Can we get someone from the binutils community to opine on that?
> >
> >cheers,
> >--renato
> >
> >On 11 February 2014 02:29, Jan Hubicka  wrote:
> >>One practical experience I have with LLVM developers is sharing experiences
> >>about getting Firefox to work with LTO with Rafael Espindola and I think it 
> >>was
> >>useful for both of us. I am definitely open to more discussion.
> >>
> >>Let's try a specific topic that has been on my TODO list for some time.
> >>
> >>I would like to make it possible for multiple compilers to be used to LTO a
> >>single binary. As we are all making LTO more useful, I think it is a matter of
> >>time until people start shipping LTO object files by default and users
> >>end up feeding them into different compilers or incompatible versions of
> >>the same compiler. We probably want to make this work, even though the
> >>cross-module optimization will not happen in this case.
> >>
> >>The plugin interface in binutils seems to do its job well both for GCC and 
> >>LLVM
> >>and I hope that open64 and ICC will eventually join, too.
> >>
> >>The trouble however is that one needs to pass an explicit --plugin argument
> >>specifying the particular plugin to load, and so GCC ships with its own
> >>wrappers (gcc-nm/gcc-ld/gcc-ar and the gcc driver itself) while LLVM does
> >>a similar thing.
> >>
> >>It may be smoother if binutils were able to load multiple plugins at once and
> >>grab plugins from system- and user-installed compilers without an explicit
> >>--plugin argument.
> >>
> >>Binutils probably should also have a way to detect LTO object files and
> >>produce more useful diagnostics than they do now when there is no plugin
> >>claiming them.
> >>
> >>There are some PRs filed on the topic
> >>http://cygwin.com/frysk/bugzilla/show_bug.cgi?id=15300
> >>http://cygwin.com/frysk/bugzilla/show_bug.cgi?id=13227
> >>but not much progress on them.
> >>
> >>I wonder if we can get this designed and implemented.
> >>
> >>On the other hand, GCC currently maintains a non-plugin path for LTO that is
> >>now only used by the darwin port due to the lack of a plugin-enabled LD there.

Re: Fwd: LLVM collaboration?

2014-02-11 Thread Uday Khedker





On Tuesday 11 February 2014 03:25 PM, Renato Golin wrote:

Hi Jan,

I think this is a very good example where we could all collaborate
(including binutils).

I'll leave your reply intact, so that Chandler (CC'd) can get a bit
more context. I'm copying him because he (and I believe Diego) had
more contact with LTO than I had.

If I got it right, LTO today:

- needs the drivers to explicitly declare the plugin
- needs the library available somewhere
- may have to change the library loading semantics (via LD_PRELOAD)


There is another need that I have felt in LTO for quite some time. 
Currently, it has a non-partitioned mode or a partitioned mode but this 
decision is taken before the compilation begins. It would be nice to 
have a mode that allows dynamic loading of function bodies so that a
flow- and context-sensitive IPA could load function bodies on demand,
and unload them when they are not needed.


Uday.



Since both toolchains do the magic, binutils has no incentive to
create any automatic detection of objects.

The part that I didn't get is when you say about backward
compatibility. Would LTO work on a newer binutils with the liblto but
on an older compiler that knew nothing about LTO?

Your proposal is, then, to get binutils:

- recognizing LTO logic in the objects
- automatically loading liblto if recognized
- warning if not

I'm assuming the extra symbols would be discarded if no library is
found, together with the warning, right? Maybe an error if -Wall or
whatever.

Can we get someone from the binutils community to opine on that?

cheers,
--renato

On 11 February 2014 02:29, Jan Hubicka  wrote:

One practical experience I have with LLVM developers is sharing experiences
about getting Firefox to work with LTO with Rafael Espindola and I think it was
useful for both of us. I am definitely open to more discussion.

Let's try a specific topic that has been on my TODO list for some time.

I would like to make it possible for multiple compilers to be used to LTO a
single binary. As we are all making LTO more useful, I think it is a matter of
time until people start shipping LTO object files by default and users
end up feeding them into different compilers or incompatible versions of
the same compiler. We probably want to make this work, even though the
cross-module optimization will not happen in this case.

The plugin interface in binutils seems to do its job well both for GCC and LLVM
and I hope that open64 and ICC will eventually join, too.

The trouble however is that one needs to pass an explicit --plugin argument
specifying the particular plugin to load, and so GCC ships with its own wrappers
(gcc-nm/gcc-ld/gcc-ar and the gcc driver itself) while LLVM does a similar thing.

It may be smoother if binutils were able to load multiple plugins at once and
grab plugins from system- and user-installed compilers without an explicit
--plugin argument.

Binutils probably should also have a way to detect LTO object files and produce
more useful diagnostics than they do now when there is no plugin claiming them.

There are some PRs filed on the topic
http://cygwin.com/frysk/bugzilla/show_bug.cgi?id=15300
http://cygwin.com/frysk/bugzilla/show_bug.cgi?id=13227
but not much progress on them.

I wonder if we can get this designed and implemented.

On the other hand, GCC currently maintains a non-plugin path for LTO that is now
only used by the darwin port due to the lack of a plugin-enabled LD there.  It seems
that liblto used by darwin is loosely compatible with the plugin API, but it makes
it harder to have different compilers share it (one has to LD_PRELOAD liblto
into a different one prior to executing the linker?)

I wonder, is there a chance to implement linker plugin API to libLTO glue, or add
plugin support to the native Darwin tools?

Honza


Re: Fwd: LLVM collaboration?

2014-02-11 Thread Renato Golin
Hi Jan,

I think this is a very good example where we could all collaborate
(including binutils).

I'll leave your reply intact, so that Chandler (CC'd) can get a bit
more context. I'm copying him because he (and I believe Diego) had
more contact with LTO than I had.

If I got it right, LTO today:

- needs the drivers to explicitly declare the plugin
- needs the library available somewhere
- may have to change the library loading semantics (via LD_PRELOAD)

Since both toolchains do the magic, binutils has no incentive to
create any automatic detection of objects.

The part that I didn't get is when you say about backward
compatibility. Would LTO work on a newer binutils with the liblto but
on an older compiler that knew nothing about LTO?

Your proposal is, then, to get binutils:

- recognizing LTO logic in the objects
- automatically loading liblto if recognized
- warning if not

I'm assuming the extra symbols would be discarded if no library is
found, together with the warning, right? Maybe an error if -Wall or
whatever.

Can we get someone from the binutils community to opine on that?

cheers,
--renato

On 11 February 2014 02:29, Jan Hubicka  wrote:
> One practical experience I have with LLVM developers is sharing experiences
> about getting Firefox to work with LTO with Rafael Espindola and I think it 
> was
> useful for both of us. I am definitely open to more discussion.
>
> Let's try a specific topic that has been on my TODO list for some time.
>
> I would like to make it possible for multiple compilers to be used to LTO a
> single binary. As we are all making LTO more useful, I think it is a matter of
> time until people start shipping LTO object files by default and users
> end up feeding them into different compilers or incompatible versions of
> the same compiler. We probably want to make this work, even though the
> cross-module optimization will not happen in this case.
>
> The plugin interface in binutils seems to do its job well both for GCC and 
> LLVM
> and I hope that open64 and ICC will eventually join, too.
>
> The trouble however is that one needs to pass an explicit --plugin argument
> specifying the particular plugin to load, and so GCC ships with its own
> wrappers (gcc-nm/gcc-ld/gcc-ar and the gcc driver itself) while LLVM does
> a similar thing.
>
> It may be smoother if binutils were able to load multiple plugins at once and
> grab plugins from system- and user-installed compilers without an explicit
> --plugin argument.
>
> Binutils probably should also have a way to detect LTO object files and
> produce more useful diagnostics than they do now when there is no plugin
> claiming them.
>
> There are some PRs filed on the topic
> http://cygwin.com/frysk/bugzilla/show_bug.cgi?id=15300
> http://cygwin.com/frysk/bugzilla/show_bug.cgi?id=13227
> but not much progress on them.
>
> I wonder if we can get this designed and implemented.
>
> On the other hand, GCC currently maintains a non-plugin path for LTO that is now
> only used by the darwin port due to the lack of a plugin-enabled LD there.  It seems
> that liblto used by darwin is loosely compatible with the plugin API, but it makes
> it harder to have different compilers share it (one has to LD_PRELOAD liblto
> into a different one prior to executing the linker?)
>
> I wonder, is there a chance to implement linker plugin API to libLTO glue, or add
> plugin support to the native Darwin tools?
>
> Honza


Re: Fwd: LLVM collaboration?

2014-02-10 Thread Jan Hubicka
> 1. There IS an unnecessary fence between GCC and LLVM.
> 
> License arguments are one reason why we can't share code as easily as
> we would like, but there is no argument against sharing ideas,
> cross-reporting bugs, helping each other implement a better
> compiler/linker/assembler/libraries just because of an artificial
> wall. We need to break this wall.
> 
> I rarely see GCC folks reporting bugs on our side, or people saying
> "we should check with the GCC folks" actually doing it. We're not
> contagious folks, you know. Talking to GCC engineers won't make me a
> lesser LLVM engineer, and vice-versa.

One practical experience I have with LLVM developers is sharing experiences
about getting Firefox to work with LTO with Rafael Espindola and I think it was
useful for both of us. I am definitely open to more discussion.

Let's try a specific topic that has been on my TODO list for some time.

I would like to make it possible for multiple compilers to be used to LTO a
single binary. As we are all making LTO more useful, I think it is a matter of
time until people start shipping LTO object files by default and users
end up feeding them into different compilers or incompatible versions of
the same compiler. We probably want to make this work, even though the
cross-module optimization will not happen in this case.

The plugin interface in binutils seems to do its job well both for GCC and LLVM
and I hope that open64 and ICC will eventually join, too.

The trouble however is that one needs to pass an explicit --plugin argument
specifying the particular plugin to load, and so GCC ships with its own wrappers
(gcc-nm/gcc-ld/gcc-ar and the gcc driver itself) while LLVM does a similar thing.

It may be smoother if binutils were able to load multiple plugins at once and
grab plugins from system- and user-installed compilers without an explicit
--plugin argument.

Binutils probably should also have a way to detect LTO object files and produce
more useful diagnostics than they do now when there is no plugin claiming them.

There are some PRs filed on the topic
http://cygwin.com/frysk/bugzilla/show_bug.cgi?id=15300
http://cygwin.com/frysk/bugzilla/show_bug.cgi?id=13227
but not much progress on them.

I wonder if we can get this designed and implemented.

On the other hand, GCC currently maintains a non-plugin path for LTO that is now
only used by the darwin port due to the lack of a plugin-enabled LD there.  It seems
that liblto used by darwin is loosely compatible with the plugin API, but it makes
it harder to have different compilers share it (one has to LD_PRELOAD liblto
into a different one prior to executing the linker?)

I wonder, is there a chance to implement linker plugin API to libLTO glue, or add
plugin support to the native Darwin tools?

Honza


Re: Fwd: LLVM collaboration?

2014-02-07 Thread Renato Golin
On 7 February 2014 23:30, Joseph S. Myers  wrote:
> I think there are other closely related issues, as GCC people try to work
> around issues with glibc, or vice versa, rather than coordinating what
> might be the best solution involving changes to both components,

Hi Joseph,

Thanks for the huge email, all of it (IMHO) was spot on. I agree with
your arguments, and one of the reasons why I finally sent the email,
is that I'm starting to see all this on LLVM, too.

Because of licenses, we have to replicate libgcc, libstdc++, the
linker, etc. And in many ways, features get added to random places
because it's the easiest route, or because it's the right place to be,
even though there isn't anything controlling or monitoring the feature
in the grand scheme of things. This will, in the end, invariably take
us through the route that GNU crossed a few years back, when people
had to use radioactive suits to work on some parts of GCC.

So, I guess my email was more of a cry for help, than a request to
play nice (as some would infer). I don't think we should repeat the
same mistakes you guys did, but I also think that we have a lot to
offer, as you mention, in looking at extensions and proposing to
standards, or keeping kernel requests sane, and having a unison
argument on specific changes, and so on.

The perfect world would be if any compiler could use any assembler,
linker and libraries, interchangeably. While that may never happen, as
a long term goal, this would at least draw us a nice asymptote to
follow. As every one here and there, I don't have enough time to work
through every detail and follow all lists, but if we encourage the
cross over, or even cross posting between the two lists, we might
solve common problems without incurring in additional time wasted.

--renato


Re: Fwd: LLVM collaboration?

2014-02-07 Thread Joseph S. Myers
On Fri, 7 Feb 2014, Renato Golin wrote:

> For a long time already I've been hearing on the LLVM list people
> saying: "oh, ld should not accept this deprecated instruction, but we
> can't change that", "that would be a good idea, but we need to talk to
> the GCC guys first", and to be honest, nobody ever does.

I think there are other closely related issues, as GCC people try to work 
around issues with glibc, or vice versa, rather than coordinating what 
might be the best solution involving changes to both components, as people 
in the glibc context complain about some Linux kernel decision but have 
difficulty getting any conclusion in conjunction with Linux kernel people 
about the right way forward (or, historically, have difficulty getting 
agreement there is a problem at all - the Linux kernel community has 
tended to have less interest in supporting the details of standards than 
the people now trying to do things in GCC and glibc), as Linux kernel 
people complain about any compiler that optimizes C as a high-level 
language in ways conflicting with its use as something more like a 
portable assembler for kernel code, and as people from the various 
communities complain about issues with underlying standards such as ISO C 
and POSIX but rather less reliably engage with the standards process to 
solve those issues.

Maybe the compiler context is sufficiently separate from the others 
mentioned that there should be multiple collaboration routes for 
(compilers), (libraries / kernel), ... - but people need to be aware that 
just because something is being discussed in a compiler context doesn't 
mean that a C language extension is the right solution; it's possible 
something involving both language and library elements is right, it's 
possible collaboration with the standards process is right at an early 
stage.

(The libraries / kernel collaboration venue exists - the linux-api list, 
which was meant to be for anything about the kernel/userspace interface.  
Unfortunately, it's rarely used - most relevant kernel discussion doesn't 
go there - and I don't have time to follow linux-kernel.  We have recently 
seen several feature requests from the Linux kernel side reported to GCC 
Bugzilla, which is good - at least if there are people on the GCC side 
working on getting such things of use to the kernel implemented in a 
suitably clean way that works for what the kernel wants.)

> 2. There are decisions that NEED to be shared.
> 
> In the past, GCC implemented a lot of extensions because the standards
> weren't good enough. This has changed, but the fact that there will
> always be things that don't belong on any other standard, and are very
> specific to the toolchain inner workings, hasn't.

There are also lots of things where either (a) it would make sense to get 
something in a standard - it can be defined sensibly at the level ISO C / 
C++ deals with, or (b) the standard exists, but what gets implemented 
ignores the standard.  Some of this may be because economic incentives 
seem to get things done one way rather than another way that would 
ultimately be better for users of the languages.

To expand on (a): for a recent GCC patch there was a use for having 
popcount on the host, and I noted 
 how that's one 
of many integer manipulation operations lacking any form of standard C 
bindings.  Sometimes for these things we do at least have 
target-independent GCC extensions - but sometimes just target-specific 
ones, with multiple targets having built-in functions for similar things, 
nominally mapping to particular instructions, when it would be better to 
have a standard C binding for a given operation.

To expand on (b): consider the recent AVX512 GCC patches.  As is typical 
for patches enabling support for new instruction set features, they added 
a large pile of intrinsics (intrinsic headers, mapping to built-in 
functions).  The intrinsics implement a standard of sorts - shared with 
Intel's compiler, at least.  But are they really the right approach in all 
cases?  The features of AVX512 include, in particular, floating-point 
operations with rounding modes embedded in the instruction (in support of 
IEEE 754-2008 saying language standards should support a way to apply 
rounding modes to particular blocks, not just dynamic rounding modes).

There's a proposed specification for C bindings to such a feature - draft 
TS 18661-1 (WG14 N1778; JTC1 ballot ends 2014-03-05, so may well be 
published later this year).  There was some discussion of this starting 
with  (discussion 
continued into Jan 2013), which I presume was motivated by the AVX512 
feature, but in the end the traditional intrinsics were the approach taken 
for supporting this feature, not anything that would allow 
architecture-independent source code to be written.  (The AVX512 feature 
combines constant rounding modes with di

Fwd: LLVM collaboration?

2014-02-07 Thread Renato Golin
Folks,

I'm about to do something I've been advised against, but since I
normally don't have good judgement, I'll risk it, because I think it's
worth it. I know some people here share my views and this is the
reason I'm writing this.


The problem

For a long time already I've been hearing on the LLVM list people
saying: "oh, ld should not accept this deprecated instruction, but we
can't change that", "that would be a good idea, but we need to talk to
the GCC guys first", and to be honest, nobody ever does.

Worse still, with Clang and LLVM getting more traction recently, and
with a lot of very interesting academic work being done, a lot of new
things are getting into LLVM first (like the sanitizers, or some
specialized pragmas) and we're dangerously close to start having
clang-extensions, which in my humble opinion, would be a nightmare.

We, on the other side of the fence, know very well how hard it is to
keep up with legacy undocumented gcc-extensions, and the ARM side is
particularly filled with magical things, so I know very well how you
guys would feel if you, one day, had to start implementing clang stuff
without even participating in the original design just because someone
relies on it.

So, as far as I can see (please, correct me if I'm wrong), there are
two critical problems that we're facing right now:

1. There IS an unnecessary fence between GCC and LLVM.

License arguments are one reason why we can't share code as easily as
we would like, but there is no argument against sharing ideas,
cross-reporting bugs, helping each other implement a better
compiler/linker/assembler/libraries just because of an artificial
wall. We need to break this wall.

I rarely see GCC folks reporting bugs on our side, or people saying
"we should check with the GCC folks" actually doing it. We're not
contagious folks, you know. Talking to GCC engineers won't make me a
lesser LLVM engineer, and vice-versa.

I happen to have a very deep respect for GCC *and* for my preferred
personal license (GPLv3), but I also happen to work with LLVM, and I
like it a lot. There is no contradiction on those statements, and I
wish more people could share my opinion.

2. There are decisions that NEED to be shared.

In the past, GCC implemented a lot of extensions because the standards
weren't good enough. This has changed, but the fact that there will
always be things that don't belong on any other standard, and are very
specific to the toolchain inner workings, hasn't.

It would be beneficial to both toolchains to have a shared forum where
we could not only discuss how to solve problems better, but also keep
track of the results, so we can use it as guidelines when implementing
those features.

Further still, other compilers would certainly benefit from such
guidelines, if they want to interact with our toolchains. So, this
wouldn't be just for our sake, but also for future technologies. We
had a hard time figuring out why GCC would do this or that, and in the
end, there was always a reason (mostly good, sometimes, not so much),
but we wasted a lot of time following problems lost in translation.


The Open Source Compiler Initiative

My view is that we're unnecessarily duplicating a lot of the work to
create a powerful toolchain. The license problems won't go away, so I
don't think LLVM will ever disappear. But we're engineers, not
lawyers, so we should solve the bigger technical problem in a way that
we know how: by making things work.

For the last year or two, Clang and GCC are approaching an asymptote
as to what people believe a toolchain should be, but we won't converge
to the same solution unless we talk. If we keep our ideas enclosed
inside our own communities (who has the time to follow both gcc and
llvm lists?), we'll forever fly around the expected target and never
reach it.

To solve the technical problem of duplicated work we just need to
start talking to each other. This mailing list (or LLVM's) is not a
good place, since the traffic is huge and not every one is interested,
so I think we should have something else (another list? a web page? a
bugzilla?) where we'd record all common problems and proposals for new
features (not present in any standards), so that at least we know what
the problems are.

Getting a problem fixed or a proposal accepted would go a long way
toward having them treated as kosher on both compilers, and that could be
considered the standard compiler implementation, so other
compilers, even the closed source ones, should follow suit.

I'll be at the GNU Cauldron this year, feel free to come and discuss
this and other ideas. I hope to participate more in the GCC side of
things, and I wish some of you guys would do the same on our side. And
hopefully, in a few years, we'll all be on the same side.

I'll stop here, TL;DR-wise. Please reply copying me, as I'm not
(yet) subscribing to this list.

Best Regards,
--renato