Re: check_cxa_atexit_available

2010-10-14 Thread Mark Mitchell
On 9/29/2010 3:53 PM, Richard Henderson wrote:

> The test program in target-supports.exp is broken, since
> it doesn't preclude the use of cleanups instead.  Indeed,
> the init/cleanup3.C seems to be essentially identical to
> the target-supports test.

Why isn't the test program in target-supports.exp just a link-time test
that __cxa_atexit exists?  In other words:

  void main () {
    __cxa_atexit (...);
  }

Is the idea that we want to be able to run the tests with
-fno-use-cxa-atexit in the command-line options?  I guess we have to
worry about that.  In that case, yes, I guess an assembly-scan test in
target-supports.exp is the best that we can do.

-- 
Mark Mitchell
CodeSourcery
m...@codesourcery.com
(650) 331-3385 x713


RE: show size of stack needed by functions

2010-10-14 Thread Weddington, Eric
 

> -----Original Message-----
> From: Eric Botcazou [mailto:ebotca...@adacore.com] 
> Sent: Wednesday, October 13, 2010 4:43 PM
> To: sebastianspublicaddr...@googlemail.com
> Cc: gcc@gcc.gnu.org; Joe Buck
> Subject: Re: show size of stack needed by functions
> 
> We have had something along these lines in our compiler at AdaCore
> for a few years; it's called -fcallgraph-info, it generates a very
> low-level callgraph in VCG format for each compilation unit, and the
> nodes can be decorated with information (typically stack usage indeed).
> Then you run a script to "link" all the VCG files for a program and
> you can (try and) compute the worst case stack usage for example.
> This works reasonably well for embedded stuff.
> 
> I'll try and submit something before stage 1 ends if people 
> are interested.

Yes, I'm very interested.

Eric Weddington


gcc-4.5-20101014 is now available

2010-10-14 Thread gccadmin
Snapshot gcc-4.5-20101014 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.5-20101014/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.5 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/branches/gcc-4_5-branch 
revision 165484

You'll find:

 gcc-4.5-20101014.tar.bz2 Complete GCC (includes all of below)

  MD5=917fc632f528188a860595eef2b5457a
  SHA1=90ffcb6ec91697caa31fd6586f1ab3322ac0d733

 gcc-core-4.5-20101014.tar.bz2C front end and core compiler

  MD5=ed36fbd3cdf70c921df9fff21402c6ee
  SHA1=60b4419840e93cf6935e994a69cb9e5a65f15f73

 gcc-ada-4.5-20101014.tar.bz2 Ada front end and runtime

  MD5=3d087ce08dea89e4513ff0f2918c08ab
  SHA1=a88427dd88d75712af853befe78f6577be6777c0

 gcc-fortran-4.5-20101014.tar.bz2 Fortran front end and runtime

  MD5=3573be4cc71e5379e43286e31ec93e02
  SHA1=2ddc94250c3805b16ff51266daf314364a3f204f

 gcc-g++-4.5-20101014.tar.bz2 C++ front end and runtime

  MD5=2a01cbd49fe0cba7c4d92f19fc45f2e7
  SHA1=164540a0cd9e498c37870e03e9540eb212c013f9

 gcc-java-4.5-20101014.tar.bz2Java front end and runtime

  MD5=1251c4f78a7a6361ebf9fd31eac052d9
  SHA1=34080e954b6c5dd2822b0b7661cc9ea019596251

 gcc-objc-4.5-20101014.tar.bz2Objective-C front end and runtime

  MD5=ea4a5719e454ec152cf2ef42076c71eb
  SHA1=6cc87d487337ac1634ef9773dc4ab3a390f8d365

 gcc-testsuite-4.5-20101014.tar.bz2   The GCC testsuite

  MD5=534abb72da9c929c5deb01a1b553b3d3
  SHA1=7a792b0976aad25579c898ed5b43332b1a0f9ac6

Diffs from 4.5-20101007 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.5
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


Re: "Ada.Exceptions.Exception_Propagation" is not a predefined library unit

2010-10-14 Thread Robert Dewar

On 10/14/2010 3:31 AM, Duncan Sands wrote:

> Hi Luke,
>
>> a-exexpr.adb:39:06: "Ada.Exceptions.Exception_Propagation" is not a
>> predefined library unit
>
> it looks like you get this error when the compiler can't find a file that it
> thinks forms part of the Ada library (this is determined by the name, e.g. a
> package Ada.XYZ is expected to be part of the Ada library).  For example,
> if the compiler looks for the spec of Ada.Exceptions.Exception_Propagation
> (which should be called a-exexpr.ads) but can't find it then you will get
> this message.  At least, that's my understanding from a few minutes of
> rummaging around in the source code.


You are not allowed to add new children or grandchildren to the Ada
hierarchy. Only the implementor can do this, and it must be done
following all the implementation rules (impunit entry, use -gnatg
to compile etc).


Re: Trouble doing bootstrap

2010-10-14 Thread Joseph S. Myers
On Thu, 14 Oct 2010, Ian Lance Taylor wrote:

> Ralf Wildenhues  writes:
> 
> >> 2) If we did use libtool to build gcc, then, yes, I would be concerned
> >>    about the relinking issue.
> >
> > Why?  Because of 'make install' run as root?  Any other reasons?
> 
> Any install process which is more complex than cp is a matter for
> concern.  It should only be undertaken for a really good reason.

And we should be aiming to make the GCC build, test and install process 
simpler: install to a staging directory during build, use the built-in 
relocatability in building target libraries and running testsuites rather 
than needing to pass long sequences of -B etc. options at those stages, 
and just copy the staging directory for final installation.  (It's nasty 
that not just the build systems of the various toolchain components but 
upstream DejaGnu itself has hardcoded information about how bits of the 
tools can locate each other in their build directories.  Nothing should 
ever need to know that; the staging install code should just put things so 
that they can find each other automatically.)

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: Trouble doing bootstrap

2010-10-14 Thread Joe Buck
On Thu, Oct 14, 2010 at 12:47:34PM -0700, Ian Lance Taylor wrote:
> > It is not so unlikely that multiple instances of cc1, cc1plus, and f951
> > are running simultaneously.  Granted, I haven't done any measurements.
> 
> Most projects are written in only one language.  Sure, there may be
> cases where cc1 and cc1plus are running simultaneously.  But I suspect
> those cases are relatively unlikely.  In particular, I suspect that the
> gain when that happens, which is really quite small, is less than the
> cost of using a shared library.  Needless to say, I also have not done
> any measurements.

Projects that use C in some places and C++ in others are common, so a
simultaneous cc1 and cc1plus run will often occur with parallel builds.
However, the mp math libraries are relatively small compared to the size
of cc1 or cc1plus so the memory savings from having one copy instead of
two are minimal.



Re: Trouble doing bootstrap

2010-10-14 Thread Ian Lance Taylor
Ralf Wildenhues  writes:

>> 2) If we did use libtool to build gcc, then, yes, I would be concerned
>>    about the relinking issue.
>
> Why?  Because of 'make install' run as root?  Any other reasons?

Any install process which is more complex than cp is a matter for
concern.  It should only be undertaken for a really good reason.


>> 3) Shared libraries are less efficient.  You get some efficiency back if
>>    the libraries are in fact shared by multiple different executables
>>    running simultaneously.  But in this case it is relatively unlikely
>>    that gmp, mpfr, and mpc will actually be used by any executable other
>>    than gcc.
>
> It is not so unlikely that multiple instances of cc1, cc1plus, and f951
> are running simultaneously.  Granted, I haven't done any measurements.

Most projects are written in only one language.  Sure, there may be
cases where cc1 and cc1plus are running simultaneously.  But I suspect
those cases are relatively unlikely.  In particular, I suspect that the
gain when that happens, which is really quite small, is less than the
cost of using a shared library.  Needless to say, I also have not done
any measurements.

Ian


Re: Trouble doing bootstrap

2010-10-14 Thread Ralf Wildenhues
* Ian Lance Taylor wrote on Thu, Oct 14, 2010 at 08:43:51PM CEST:
> Ralf Wildenhues writes:
> > OK.  I won't argue my point further, but I am interested to learn why
> > shared libraries in nonstandard locations are seemingly frowned upon
> > here.  Is that due to fragility of the libtool approach of relinking,
> > or for relocatability issues of the installed program?
> 
> 1) We don't use libtool to build gcc.

I know.  That's easily "fixed" though.[1]

Whether one uses the tool or reinvents or reuses part of the
functionality is a different question than whether some functionality
itself is desirable or not, however.

> 2) If we did use libtool to build gcc, then, yes, I would be concerned
>    about the relinking issue.

Why?  Because of 'make install' run as root?  Any other reasons?

> 3) Shared libraries are less efficient.  You get some efficiency back if
>    the libraries are in fact shared by multiple different executables
>    running simultaneously.  But in this case it is relatively unlikely
>    that gmp, mpfr, and mpc will actually be used by any executable other
>    than gcc.

It is not so unlikely that multiple instances of cc1, cc1plus, and f951
are running simultaneously.  Granted, I haven't done any measurements.

>    So why make them shared libraries?  We lose something, as
>    can be seen by the number of times this has come up in gcc-help.  We
>    gain nothing.

Well, we *could* make hardcoding the default in order to cope with
gcc-help.[2]

> 4) People sometimes suggest setting the DT_RPATH/DT_RUNPATH of the
>    executable to point to the shared library locations.  Past experience
>    has shown that this is a bad idea for some organizations.  Some
>    places mount /usr/local/lib or other directories over NFS.  Putting
>    that in DT_RPATH/DT_RUNPATH causes the dynamic linker to search those
>    directories at program startup.  That causes startup to be much
>    slower, and can even cause it to hang until the NFS server returns.
>    That is not desirable for the compiler.

Good points.

Thanks,
Ralf

[1] I hope you'll excuse me playing devil's advocate here.  ;-)
[2] Oops.  Twice in one message.


Re: Trouble doing bootstrap

2010-10-14 Thread Ian Lance Taylor
Ralf Wildenhues  writes:

> * Ian Lance Taylor wrote on Thu, Oct 14, 2010 at 06:56:27PM CEST:
>> Ralf Wildenhues writes:
>> > Provide a configure switch --with-hardcoded-gccdeps that adds run path
>> > entries for pre-installed support libraries?
>> 
>> I'm fine with that, but it just introduces another configure option for
>> people to learn about.  If we have to teach people something, I'd rather
>> teach them to use --disable-shared when they build the libraries
>> themselves.
>
> OK.  I won't argue my point further, but I am interested to learn why
> shared libraries in nonstandard locations are seemingly frowned upon
> here.  Is that due to fragility of the libtool approach of relinking,
> or for relocatability issues of the installed program?

1) We don't use libtool to build gcc.

2) If we did use libtool to build gcc, then, yes, I would be concerned
   about the relinking issue.

3) Shared libraries are less efficient.  You get some efficiency back if
   the libraries are in fact shared by multiple different executables
   running simultaneously.  But in this case it is relatively unlikely
   that gmp, mpfr, and mpc will actually be used by any executable other
   than gcc.  So why make them shared libraries?  We lose something, as
   can be seen by the number of times this has come up in gcc-help.  We
   gain nothing.

4) People sometimes suggest setting the DT_RPATH/DT_RUNPATH of the
   executable to point to the shared library locations.  Past experience
   has shown that this is a bad idea for some organizations.  Some
   places mount /usr/local/lib or other directories over NFS.  Putting
   that in DT_RPATH/DT_RUNPATH causes the dynamic linker to search those
   directories at program startup.  That causes startup to be much
   slower, and can even cause it to hang until the NFS server returns.
   That is not desirable for the compiler.

Ian


Re: Trouble doing bootstrap

2010-10-14 Thread Ralf Wildenhues
* Ian Lance Taylor wrote on Thu, Oct 14, 2010 at 06:56:27PM CEST:
> Ralf Wildenhues writes:
> > Provide a configure switch --with-hardcoded-gccdeps that adds run path
> > entries for pre-installed support libraries?
> 
> I'm fine with that, but it just introduces another configure option for
> people to learn about.  If we have to teach people something, I'd rather
> teach them to use --disable-shared when they build the libraries
> themselves.

OK.  I won't argue my point further, but I am interested to learn why
shared libraries in nonstandard locations are seemingly frowned upon
here.  Is that due to fragility of the libtool approach of relinking,
or for relocatability issues of the installed program?

Thanks,
Ralf


Re: LTO symtab sections vs. missing symbols (libcalls maybe?) and lto-plugin vs. COFF

2010-10-14 Thread Ian Lance Taylor
Dave Korn  writes:

>   The consequence of this is that either there are going to be undefined
> symbols in the final executable, or the linker has to perform another round of
> library scanning.  It occurred to me that the semantics of this might even not
> have been decided yet, since ELF platforms are perfectly happy with undefined
> symbols at final link time.

Only when linking dynamically, though.  This suggests that your test
case should fail on ELF when linking with -static.  If not, why not?

Ian


Re: Trouble doing bootstrap

2010-10-14 Thread Ian Lance Taylor
Ralf Wildenhues  writes:

> * Ian Lance Taylor wrote on Thu, Oct 14, 2010 at 03:07:46AM CEST:
>> Paul Koning writes:
>> > My build system doesn't have LD_LIBRARY_PATH defined so whatever is
>> > the Linux default would apply.  Perhaps I should change that.  But it
>> > seems strange that configure finds the prerequisites and then ends up
>> > generating makefiles that produce a compiler that can't find those
>> > things, even when it's built into the same /usr/local as the libraries
>> > it depends on.
>> 
>> Yes, it's a mess.  But we don't know of any really clean way to fix it.
>
> Provide a configure switch --with-hardcoded-gccdeps that adds run path
> entries for pre-installed support libraries?

I'm fine with that, but it just introduces another configure option for
people to learn about.  If we have to teach people something, I'd rather
teach them to use --disable-shared when they build the libraries
themselves.


> Of course the same could be done for when support libraries are built as
> part of the GCC source tree, but then we'd have to know about whether
> run path entries override LD_LIBRARY_PATH settings, and/or might need to
> relink upon installation in order to not use old preinstalled stuff
> while inside the build tree ...

There is no problem when the support libraries are built as part of the
gcc source tree, because in that case the build system automatically
configures them with --disable-shared.

Ian


Re: Trouble doing bootstrap

2010-10-14 Thread Paul Koning

On Oct 13, 2010, at 9:07 PM, Ian Lance Taylor wrote:

> Paul Koning  writes:
> 
>> My build system doesn't have LD_LIBRARY_PATH defined so whatever is
>> the Linux default would apply.  Perhaps I should change that.  But it
>> seems strange that configure finds the prerequisites and then ends up
>> generating makefiles that produce a compiler that can't find those
>> things, even when it's built into the same /usr/local as the libraries
>> it depends on.
> 
> Yes, it's a mess.  But we don't know of any really clean way to fix it.
> 
> I very strongly recommend that if you want to build gcc's supporting
> libraries yourself, rather than getting them from your distro, that you
> configure them with --disable-shared when you build them.
> 
> Or, if you must, add /usr/local/lib to your /etc/ld.so.conf file or to
> one of your /etc/ld.so.conf.d files, and run ldconfig.

I tried the static-only approach.  That worked for quite a while and then blew 
up with some java bits complaining they couldn't find libgmp.so.  So for now 
I'm doing the ldconfig thing.

paul



Re: LTO symtab sections vs. missing symbols (libcalls maybe?) and lto-plugin vs. COFF

2010-10-14 Thread Dave Korn
On 14/10/2010 16:24, Richard Guenther wrote:
> On Thu, Oct 14, 2010 at 5:28 PM, Dave Korn  wrote:
>> On 14/10/2010 15:44, Richard Guenther wrote:

>>> I have no idea about the linker-plugin side, but we could of course
>>> avoid generating any calls that were not there before (by for example
>>> streaming builtin decls and only if they are used).  But that's as much
>>> a workaround as fixing things up in the linker afterwards ...
>>  Sorry, I don't quite understand that suggestion!  Do you mean we'd emit a
>> symbol for printf and that would result in an explicit printf which wouldn't
>> have the chance of being optimised to a puts at link-time?
> 
> Yes.

  I'd rather leave that as a real last resort!

>> If so I see how
>> it'd work, but it would be a shame to lose optimisation in LTO.  Or to
>> include unnecessary library members.  I *think* that re-adding the stdlibs
>> after all the new input files in the plugin might work, but haven't tried
>> it yet.
>>
>>  I have the same problem with '__main', BTW.  Is that supposed to count as a
>> builtin, or do we need to do something in expand_main_function() to make LTO
>> aware when it calls __main?
> 
> Hm, I don't know - I suppose that's from the crt*.o stuff?

  Typically it's from libgcc:

" If no init section is available, when GCC compiles any function called
`main' (or more accurately, any function designated as a program entry
point by the language front end calling `expand_main_function'), it
inserts a procedure call to `__main' as the first executable code after
the function prologue.  The `__main' function is defined in `libgcc2.c'
and runs the global constructors."

  On cygwin, it's supplied by libc.  On other systems I don't know, maybe it
can be in the crt.o files, but in that case there wouldn't be any problem with
it getting pulled into the link, it's only a problem when it's a library
archive member.

>  The main function itself should already appear in the symbols.

  It does, but there's no reference to __main.  I was wondering if that was
supposed to happen, and looking at expand_main_function I guess so, because
it's calling "emit_library_call (init_one_libfunc (...))", but this is one
libfunc that we know can't be optimised away at linktime, so it would probably
be OK to stream it.  (But there's a lot I don't know about LTO, so I could
always be wrong there.)

cheers,
  DaveK



RE: Bootstrap errors on i386-pc-solaris2.10 bisected

2010-10-14 Thread Arthur Haas

>> On Tue, Oct 12, 2010 at 2:46 PM, Art Haas  wrote:
>> Hi.
>>
>> The bootstrap problems I've been having on the x86 Solaris machine,
>> plus the reply from maintainer Rainer Orth that his builds have
>> been succeeding were the impetus to investigate how 'git bisect'
>> works. After a bit of fumbling around, including a rebuild of
>> an apparently miscompiled 'git' binary, I was able to bisect
>> the build problems to this commit:
>> { ... snip ... }

> It could be:

> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=45865

I tried the one-line patch at the end of the bug report and it did not help.

Art Haas


RE: Bootstrap failures on sparc/x86 solaris2.10 machines

2010-10-14 Thread Arthur Haas
> Hi Art,

>> No luck with this morning's builds on both x86 and sparc.
>>
>> My last successful i386-pc-solaris2.10 build was several weeks ago; all
>> the build attempts fail at this assertion in the function/file below:
>> { ... snip ... }

> I'm building mainline on Solaris 8 to 11 with both Sun as and gas all
> the time without problems, though I'm very rarely using GNU ld
> (install.texi warns against doing so for a reason).  What version of gld
> are you using above?  I recently found a couple of bugs with CVS
> binutils and am in the process of fixing them.  In the rare attempts of
> building with gld, I've never seen a problem as you report, so please
> file a bug for this issue.  Also, please try to use Sun ld and see if
> this helps.

A build using Sun 'ld' also failed in a similar manner.

I've added the bug to GCC bugzilla:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46018


>> { ... snip sparc bug error ... }
> Known bug, already fixed:
>
> http://gcc.gnu.org/ml/gcc-patches/2010-10/msg00942.html

My sparc builds are working fine now. Thanks!

Art Haas


Re: LTO symtab sections vs. missing symbols (libcalls maybe?) and lto-plugin vs. COFF

2010-10-14 Thread Richard Guenther
On Thu, Oct 14, 2010 at 5:28 PM, Dave Korn  wrote:
> On 14/10/2010 15:44, Richard Guenther wrote:
>> On Thu, Oct 14, 2010 at 4:59 PM, Dave Korn  wrote:
>
>>>  Nor indeed is there any sign of puts, which is what the generated ltrans0.s
>>> file ends up optimising it to (as indeed does the native code in the original
>>> .o file).  I'm assuming that this is by design and is for some reason along
>>> the lines that we don't even know whether or which function calls are
>>> actually going to be emitted until link time?
>>
>> I think this is because builtin functions are handled specially (their decls
>> are not streamed, likewise I think we don't stream their cgraph nodes).
>
>  Ah, yes; -fno-builtin avoids it, for example.
>
>> As you noted we may eventually fold printf to puts (we may also generate
>> a memcpy call out of an aggregate assignment), so it might not make
>> your life perfect if we emit symbols for those calls (as indeed we don't know
>> which ones we will emit at link time).
>
>  Yes, I can see that we'd quite possibly end up with unused library code
> pulled into the link.
>
>>>  It makes life complicated in the linker though, because it means there are
>>> symbols present in the object files that the plugin adds via the
>>> add_input_files callback that weren't in the original symbols the linker
>>> presented via add_symbols when it initially claimed the object file
>>> containing the IR.
>>>
>>>  The consequence of this is that either there are going to be undefined
>>> symbols in the final executable, or the linker has to perform another
>>> round of library scanning.  It occurred to me that the semantics of this
>>> might even not have been decided yet, since ELF platforms are perfectly
>>> happy with undefined symbols at final link time.  The only documentation
>>> I know of that specifies the linker plugin API is the GCC wiki
>>> whopr/driver page(*), and it doesn't say anything explicit about this;
>>> maybe I've failed to infer something that I should have.
>>
>> Yeah, I think you have to deal with undefined references to "standard"
>> functions (mostly from libc, libm but maybe also from libpthread or so).
>
>  Well, the thing I'm trying to figure out is how to deal with them.  COFF
> doesn't allow undefined references in executables.
>
>>>  So, is there a defined way in which this is supposed to work?  And if the
>>> linker is supposed to rescan the libs after the plugin adds files, is it
>>> supposed to offer any archive members it finds to the plugin for claiming?
>>> Should there be multiple iterations of claiming files and calling
>>> all_symbols_read?  And if not, what about if the archive members we find on
>>> the second pass contain LTO IR?
>>>
>>>  It occurs to me that maybe this is what the add_input_library hook is for:
>>> perhaps a simple fix would be for collect2 to pass a list of all the stdlibs
>>> to the plugin as a plugin option, and it could add_input_library them after
>>> it's finished adding object files.  Would that be a reasonable approach?
>>>
>>>  (Right now I have a "working" COFF lto-plugin, but the link fails with
>>> unresolved symbols unless I manually add (e.g.) "-Wl,-u,_puts (... etc.)" to
>>> the command-line to make sure all the required libc archive members get
>>> pulled in during the first pass over libs!)
>>
>> I have no idea about the linker-plugin side, but we could of course
>> avoid generating any calls that were not there before (by for example
>> streaming builtin decls and only if they are used).  But that's as much
>> a workaround as fixing things up in the linker afterwards ...
>
>  Sorry, I don't quite understand that suggestion!  Do you mean we'd emit a
> symbol for printf and that would result in an explicit printf which wouldn't
> have the chance of being optimised to a puts at link-time?

Yes.

> If so I see how
> it'd work, but it would be a shame to lose optimisation in LTO.  Or to include
> unnecessary library members.  I *think* that re-adding the stdlibs after all
> the new input files in the plugin might work, but haven't tried it yet.
>
>  I have the same problem with '__main', BTW.  Is that supposed to count as a
> builtin, or do we need to do something in expand_main_function() to make LTO
> aware when it calls __main?

Hm, I don't know - I suppose that's from the crt*.o stuff?  The main
function itself should already appear in the symbols.

Richard.

>
>    cheers,
>      DaveK
>
>
>


Re: LTO symtab sections vs. missing symbols (libcalls maybe?) and lto-plugin vs. COFF

2010-10-14 Thread Dave Korn
On 14/10/2010 15:44, Richard Guenther wrote:
> On Thu, Oct 14, 2010 at 4:59 PM, Dave Korn  wrote:

>>  Nor indeed is there any sign of puts, which is what the generated ltrans0.s
>> file ends up optimising it to (as indeed does the native code in the original
>> .o file).  I'm assuming that this is by design and is for some reason along
>> the lines that we don't even know whether or which function calls are
>> actually going to be emitted until link time?
> 
> I think this is because builtin functions are handled specially (their decls
> are not streamed, likewise I think we don't stream their cgraph nodes).

  Ah, yes; -fno-builtin avoids it, for example.

> As you noted we may eventually fold printf to puts (we may also generate
> a memcpy call out of an aggregate assignment), so it might not make
> your life perfect if we emit symbols for those calls (as indeed we don't know
> which ones we will emit at link time).

  Yes, I can see that we'd quite possibly end up with unused library code
pulled into the link.

>>  It makes life complicated in the linker though, because it means there are
>> symbols present in the object files that the plugin adds via the
>> add_input_files callback that weren't in the original symbols the linker
>> presented via add_symbols when it initially claimed the object file
>> containing the IR.
>>
>>  The consequence of this is that either there are going to be undefined
>> symbols in the final executable, or the linker has to perform another
>> round of library scanning.  It occurred to me that the semantics of this
>> might even not have been decided yet, since ELF platforms are perfectly
>> happy with undefined symbols at final link time.  The only documentation
>> I know of that specifies the linker plugin API is the GCC wiki
>> whopr/driver page(*), and it doesn't say anything explicit about this;
>> maybe I've failed to infer something that I should have.
> 
> Yeah, I think you have to deal with undefined references to "standard"
> functions (mostly from libc, libm but maybe also from libpthread or so).

  Well, the thing I'm trying to figure out is how to deal with them.  COFF
doesn't allow undefined references in executables.

>>  So, is there a defined way in which this is supposed to work?  And if the
>> linker is supposed to rescan the libs after the plugin adds files, is it
>> supposed to offer any archive members it finds to the plugin for claiming?
>> Should there be multiple iterations of claiming files and calling
>> all_symbols_read?  And if not, what about if the archive members we find on
>> the second pass contain LTO IR?
>>
>>  It occurs to me that maybe this is what the add_input_library hook is for:
>> perhaps a simple fix would be for collect2 to pass a list of all the stdlibs
>> to the plugin as a plugin option, and it could add_input_library them after
>> it's finished adding object files.  Would that be a reasonable approach?
>>
>>  (Right now I have a "working" COFF lto-plugin, but the link fails with
>> unresolved symbols unless I manually add (e.g.) "-Wl,-u,_puts (... etc.)" to
>> the command-line to make sure all the required libc archive members get
>> pulled in during the first pass over libs!)
> 
> I have no idea about the linker-plugin side, but we could of course
> avoid generating any calls that were not there before (by for example
> streaming builtin decls and only if they are used).  But that's as much
> a workaround as fixing things up in the linker afterwards ...

  Sorry, I don't quite understand that suggestion!  Do you mean we'd emit a
symbol for printf and that would result in an explicit printf which wouldn't
have the chance of being optimised to a puts at link-time?  If so I see how
it'd work, but it would be a shame to lose optimisation in LTO.  Or to include
unnecessary library members.  I *think* that re-adding the stdlibs after all
the new input files in the plugin might work, but haven't tried it yet.

  I have the same problem with '__main', BTW.  Is that supposed to count as a
builtin, or do we need to do something in expand_main_function() to make LTO
aware when it calls __main?

cheers,
  DaveK




Re: "Ada.Exceptions.Exception_Propagation" is not a predefined library unit

2010-10-14 Thread Olivier Hainque
Hello Luke,

Luke A. Guest wrote:
> Can anyone give me a pointer here? I'm totally new to this :/

> a-exexpr.adb:39:06: "Ada.Exceptions.Exception_Propagation" is not a
> predefined library unit
> a-exexpr.adb:39:06: "Ada.Exceptions (body)" depends on
> "Ada.Exceptions.Exception_Propagation (body)"
> a-exexpr.adb:39:06: "Ada.Exceptions.Exception_Propagation (body)"
> depends on "Ada.Exceptions.Exception_Propagation (spec)"

 We discussed this internally a bit.

 The compiler is looking for the spec of Ada.Exceptions.Exception_Propagation
 in a separate file (which would be a-exexpr.ads) because you are trying to
 add a child of it.

 This won't work, as there is indeed no such file today because this
 unit is provided as a subunit of ada.exceptions (package bla is ... end; 
 package body bla is separate;)

 What you probably could do instead is to define a System unit
 (e.g. System.GCC_Exceptions or System.Unwind_Control or ...) 
 to hold the low level unwinder type definitions.  That would allow
 reuse from other units, which might become of interest in the not so
 distant future.

 In case you don't already know about it, gnatmake -a is a very convenient
 device to experiment with alternate/extra Ada runtime units (accounts for
 variants in the current directory, for example).

 Olivier






Re: LTO symtab sections vs. missing symbols (libcalls maybe?) and lto-plugin vs. COFF

2010-10-14 Thread Richard Guenther
On Thu, Oct 14, 2010 at 4:59 PM, Dave Korn  wrote:
>
>    Hello list,
>
>  When I compile this source with -flto:
>
>> extern int retval;
>> int func (void)
>> {
>>   return retval;
>> }
>
> ... the LTO symbol table contains both symbols:
>
>> /gnu/binutils/git.repo/obj/ld/test/func.o:     file format pe-i386
>>
>> Contents of section .gnu.lto_.symtab.227b80e3:
>>   66756e63     func
>>  0010 4b00 72657476 616c 0200  K...retval..
>>  0020  5100                ..Q...
>
>  But when I compile this:
>
>> extern int printf (const char *fmt, ...);
>>
>> extern const char *text;
>> extern int func (void);
>>
>> int retval = 0;
>>
>> int main (int argc, const char **argv)
>> {
>>   printf ("%s\n", text);
>>   return func ();
>> }
>
> ... there is no sign of printf:
>
>> /gnu/binutils/git.repo/obj/ld/test/main.o:     file format pe-i386
>>
>> Contents of section .gnu.lto_.symtab.6e8eaf64:
>>   6d61696e     main
>>  0010 4b00 66756e63 0200   K...func
>>  0020  5b00 74657874 0200  [...text
>>  0030   5f00 72657476  _...retv
>>  0040 616c   6100  ala.
>>  0050                                  ..
>
>  Nor indeed is there any sign of puts, which is what the generated ltrans0.s
> file ends up optimising it to (as indeed does the native code in the original
> .o file).  I'm assuming this is by design, the reasoning being that we don't
> even know whether or which function calls will actually be emitted until
> link time?

I think this is because builtin functions are handled specially (their decls
are not streamed, likewise I think we don't stream their cgraph nodes).
As you noted, we may eventually fold printf to puts (we may also generate
a memcpy call out of an aggregate assignment), so emitting symbols for
those calls would not fully solve your problem (as indeed we don't know
which ones we will emit at link time).

>  It makes life complicated in the linker though, because it means there are
> symbols present in the object files that the plugin adds via the
> add_input_files callback that weren't in the original symbols the linker
> presented via add_symbols when it initially claimed the object file containing
> the IR.
>
>  The consequence of this is that either there are going to be undefined
> symbols in the final executable, or the linker has to perform another round of
> library scanning.  It occurred to me that the semantics of this might even not
> have been decided yet, since ELF platforms are perfectly happy with undefined
> symbols at final link time.  The only documentation I know of that specifies
> the linker plugin API is the GCC wiki whopr/driver page(*), and it doesn't say
> anything explicit about this; maybe I've failed to infer something that I
> should have.

Yeah, I think you have to deal with undefined references to "standard"
functions (mostly from libc and libm, but maybe also from libpthread or so).

>  So, is there a defined way in which this is supposed to work?  And if the
> linker is supposed to rescan the libs after the plugin adds files, is it
> supposed to offer any archive members it finds to the plugin for claiming?
> Should there be multiple iterations of claiming files and calling
> all_symbols_read?  And if not, what about if the archive members we find on
> the second pass contain LTO IR?
>
>  It occurs to me that maybe this is what the add_input_library hook is for:
> perhaps a simple fix would be for collect2 to pass a list of all the stdlibs
> to the plugin as a plugin option, and it could add_input_library them after
> it's finished adding object files.  Would that be a reasonable approach?
>
>  (Right now I have a "working" COFF lto-plugin, but the link fails with
> unresolved symbols unless I manually add (e.g.) "-Wl,-u,_puts (... etc.)" to
> the command-line to make sure all the required libc archive members get pulled
> in during the first pass over libs!)

I have no idea about the linker-plugin side, but we could of course
avoid generating any calls that were not there before (by, for example,
streaming builtin decls, but only when they are used).  But that's as much
a workaround as fixing things up in the linker afterwards ...

Richard.

>    cheers,
>      DaveK
> --
> (*) - http://gcc.gnu.org/wiki/whopr/driver
>
>


LTO symtab sections vs. missing symbols (libcalls maybe?) and lto-plugin vs. COFF

2010-10-14 Thread Dave Korn

Hello list,

  When I compile this source with -flto:

> extern int retval;
> int func (void)
> {
>   return retval;
> }

... the LTO symbol table contains both symbols:

> /gnu/binutils/git.repo/obj/ld/test/func.o: file format pe-i386
>
> Contents of section .gnu.lto_.symtab.227b80e3:
>   66756e63     func
>  0010 4b00 72657476 616c 0200  K...retval..
>  0020  5100    ..Q...

  But when I compile this:

> extern int printf (const char *fmt, ...);
>
> extern const char *text;
> extern int func (void);
>
> int retval = 0;
>
> int main (int argc, const char **argv)
> {
>   printf ("%s\n", text);
>   return func ();
> }

... there is no sign of printf:

> /gnu/binutils/git.repo/obj/ld/test/main.o: file format pe-i386
>
> Contents of section .gnu.lto_.symtab.6e8eaf64:
>   6d61696e     main
>  0010 4b00 66756e63 0200   K...func
>  0020  5b00 74657874 0200  [...text
>  0030   5f00 72657476  _...retv
>  0040 616c   6100  ala.
>  0050  ..

  Nor indeed is there any sign of puts, which is what the generated ltrans0.s
file ends up optimising it to (as indeed does the native code in the original
.o file).  I'm assuming this is by design, the reasoning being that we don't
even know whether or which function calls will actually be emitted until
link time?

  It makes life complicated in the linker though, because it means there are
symbols present in the object files that the plugin adds via the
add_input_files callback that weren't in the original symbols the linker
presented via add_symbols when it initially claimed the object file containing
the IR.

  The consequence of this is that either there are going to be undefined
symbols in the final executable, or the linker has to perform another round of
library scanning.  It occurred to me that the semantics of this might even not
have been decided yet, since ELF platforms are perfectly happy with undefined
symbols at final link time.  The only documentation I know of that specifies
the linker plugin API is the GCC wiki whopr/driver page(*), and it doesn't say
anything explicit about this; maybe I've failed to infer something that I
should have.

  So, is there a defined way in which this is supposed to work?  And if the
linker is supposed to rescan the libs after the plugin adds files, is it
supposed to offer any archive members it finds to the plugin for claiming?
Should there be multiple iterations of claiming files and calling
all_symbols_read?  And if not, what about if the archive members we find on
the second pass contain LTO IR?

  It occurs to me that maybe this is what the add_input_library hook is for:
perhaps a simple fix would be for collect2 to pass a list of all the stdlibs
to the plugin as a plugin option, and it could add_input_library them after
it's finished adding object files.  Would that be a reasonable approach?

  (Right now I have a "working" COFF lto-plugin, but the link fails with
unresolved symbols unless I manually add (e.g.) "-Wl,-u,_puts (... etc.)" to
the command-line to make sure all the required libc archive members get pulled
in during the first pass over libs!)

cheers,
  DaveK
-- 
(*) - http://gcc.gnu.org/wiki/whopr/driver



Options for dumping dependence checking results

2010-10-14 Thread Hongtao
Hi All,

What's the option for dumping the results of loop dependence checking,
such as dependence relations, direction vectors, etc.?

Thanks,
Hongtao


Re: %pc relative addressing of string literals/const data

2010-10-14 Thread Joakim Tjernlund
Joakim Tjernlund/Transmode wrote on 2010/10/12 11:00:36:
> 
> Alan Modra  wrote on 2010/10/11 14:58:45:
> > 
> > On Sun, Oct 10, 2010 at 11:20:06AM +0200, Joakim Tjernlund wrote:
> > > Now I have had a closer look at this and it looks much like -fpic
> > > on ppc32, you still use the GOT/TOC to load the address where the data is.
> > 
> > No, with ppc64 -mcmodel=medium you use the GOT/TOC pointer plus an
> > offset to address local data.
> > 
> > > I was looking for true %pc relative addressing of data. I guess this is
> > > really hard on PowerPC?
> > 
> > Yes, PowerPC lacks pc-relative instructions.
> > 
> > > I am not sure this is all it takes to make -fpic work with -mrelocatable,
> > > any ideas?
> > 
> > You might be lucky.  With -mrelocatable, .got2 only contains
> > addresses.  No other constants.  So a simple run-time loader can
> > relocate the entire .got2 section, plus those locations specified in
> > .fixup.  You'll have to make sure gcc does the same for .got, and your
> > run-time loader will need to be modified to handle .got (watch out for
> > the .got header!).

> 
> Got it working now.  It was just the u-boot reloc routine that I first
> failed to extend properly to reloc *got too.
> 
> I think this is safe, as one can mix -fpic with -fPIC, and
> -mrelocatable is the same as -fPIC plus fixups.
> 
> Will you accept this patch into gcc?

Ping?

From d8ff0b3f0b44480542eab04d1659f4368b6b09cf Mon Sep 17 00:00:00 2001
From: Joakim Tjernlund 
Date: Sun, 10 Oct 2010 10:34:50 +0200
Subject: [PATCH] powerpc: Support -fpic too with mrelocatable


Signed-off-by: Joakim Tjernlund 
---
 sysv4.h | 3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/gcc/config/rs6000/sysv4.h b/gcc/config/rs6000/sysv4.h
index 8da8410..e4b8280 100644
--- a/gcc/config/rs6000/sysv4.h
+++ b/gcc/config/rs6000/sysv4.h
@@ -227,7 +227,8 @@ do {						\
     }							\
 							\
   else if (TARGET_RELOCATABLE)				\
-    flag_pic = 2;					\
+    if (!flag_pic)					\
+      flag_pic = 2;					\
 } while (0)
 
 #ifndef RS6000_BI_ARCH
-- 
1.7.2.2



Re: "Ada.Exceptions.Exception_Propagation" is not a predefined library unit

2010-10-14 Thread Luke A. Guest
On Thu, 2010-10-14 at 09:31 +0200, Duncan Sands wrote:
> Hi Luke,
> 
> > a-exexpr.adb:39:06: "Ada.Exceptions.Exception_Propagation" is not a
> > predefined library unit
> 
> it looks like you get this error when the compiler can't find a file that it
> thinks forms part of the Ada library (this is determined by the name, e.g. a
> package Ada.XYZ is expected to be part of the Ada library).  For example,
> if the compiler looks for the spec of Ada.Exceptions.Exception_Propagation
> (which should be called a-exexpr.ads) but can't find it then you will get
> this message.  At least, that's my understanding from a few minutes of
> rummaging around in the source code.

Hmmm, well, this spec is actually inside the body of a-except.adb (which
also specifies that the body of a-exexpr.ads is separate).  All files are
present in the rts dirs.

Thanks,
Luke.




Re: Trouble doing bootstrap

2010-10-14 Thread Jonathan Wakely
On 14 October 2010 02:07, Paul Koning wrote:
>
> Explicitly setting LD_LIBRARY_PATH seems to cure the problem.  It would be 
> good to have that called out in the procedures (or, preferably, made not to 
> be necessary).

As Ian pointed out, it's documented under --with-mpc et al, although I
only added that note recently.

I find building the support libs in-tree is easiest, followed by
installing them separately but with --disable-shared.


Re: "Ada.Exceptions.Exception_Propagation" is not a predefined library unit

2010-10-14 Thread Duncan Sands

Hi Luke,


a-exexpr.adb:39:06: "Ada.Exceptions.Exception_Propagation" is not a
predefined library unit


it looks like you get this error when the compiler can't find a file that it
thinks forms part of the Ada library (this is determined by the name, e.g. a
package Ada.XYZ is expected to be part of the Ada library).  For example,
if the compiler looks for the spec of Ada.Exceptions.Exception_Propagation
(which should be called a-exexpr.ads) but can't find it then you will get
this message.  At least, that's my understanding from a few minutes of
rummaging around in the source code.

Ciao,

Duncan.