Re: Cross-compilation and Shared Libraries

2006-06-13 Thread Ranjit Mathew

Brian Dessent wrote:
> Ranjit Mathew wrote:
> 
>>   I just noticed that even with "--disable-static --enable-static",
> 
> Do you mean --disable-static --enable-shared?

Yes, sorry for the silly typo.


>> a Linux-to-MinGW cross compiler (mainline) still created static
>> libraries for the C++ and Java runtimes. Is this by design or is it
>> a bug? From the point of view of creating executables for embedded
> 
> As far as I know, shared libstdc++ for mingw/cygwin has never worked,
> you always get static no matter what you do, regardless of
> --enable-shared or native/cross.  I don't know if this is because of the
> archaic version of libtool that's in the tree, or some other reason.  It
> sure would be nice to get a shared libstdc++ one of these days without
> having to resort to hacks like manually linking together all the .o
> files in the build tree.

I had forgotten about libtool's limitations w.r.t. shared
libraries for Windows that TJ Laurenzo had already hit while
trying to create a shared libgcj for Windows:

  http://gcc.gnu.org/ml/java/2005-09/msg9.html

Thanks,
Ranjit.

--
Ranjit Mathew  Email: rmathew AT gmail DOT com

Bangalore, INDIA.  Web: http://rmathew.com/





Re: Cross-compilation and Shared Libraries

2006-06-13 Thread Brian Dessent
Ranjit Mathew wrote:

>   I just noticed that even with "--disable-static --enable-static",

Do you mean --disable-static --enable-shared?

> a Linux-to-MinGW cross compiler (mainline) still created static
> libraries for the C++ and Java runtimes. Is this by design or is it
> a bug? From the point of view of creating executables for embedded

As far as I know, shared libstdc++ for mingw/cygwin has never worked,
you always get static no matter what you do, regardless of
--enable-shared or native/cross.  I don't know if this is because of the
archaic version of libtool that's in the tree, or some other reason.  It
sure would be nice to get a shared libstdc++ one of these days without
having to resort to hacks like manually linking together all the .o
files in the build tree.

Brian


Cross-compilation and Shared Libraries

2006-06-13 Thread Ranjit Mathew

Hi,

  I just noticed that even with "--disable-static --enable-static",
a Linux-to-MinGW cross compiler (mainline) still created static
libraries for the C++ and Java runtimes. Is this by design or is it
a bug? From the point of view of creating executables for embedded
platforms, this sort of makes sense, but for a "full-fledged"
environment like MinGW it doesn't: yes, it is a bit painful
to transfer the executable and all the libraries it depends on
to the target machine, but that alone should not rule it out.

Or is there a more profound reason for this feature?

Thanks,
Ranjit.

--
Ranjit Mathew   Email: rmathew AT gmail DOT com

Bangalore, INDIA. Web: http://rmathew.com/







Re: Patch queue and reviewing (Was Re: Generator programs can only be built with optimization enabled?)

2006-06-13 Thread Ian Lance Taylor
Daniel Berlin <[EMAIL PROTECTED]> writes:

> It can also tell you who to copy on a ping email to make sure it
> actually goes to a maintainer.
> the interface is under construction, but "okay" for casual use.
> http://www.dberlin.org/patches/patches/maintainer_list/745 would be the
> one for this patch.

I'm on that list, but I can't approve this patch.  It needs approval
from a build system maintainer, not a middle-end maintainer.

Ian


Patch queue and reviewing (Was Re: Generator programs can only be built with optimization enabled?)

2006-06-13 Thread Daniel Berlin



> 
> IMHO this PR is a striking example of the *major* problems we have been
> having in the patch reviewing department for quite some time.

I don't disagree in this case.

Not only was this patch submitted in March and not reviewed, it was even
pinged on March 29th by someone *else*.

http://gcc.gnu.org/ml/gcc-patches/2006-03/msg01693.html

It seems most undirected pings go ignored, probably because nobody
realizes they are supposed to be looking at this patch.

At least, I hope this is why.

The patch queue can now tell who can review a given patch (it guesses
by matching the maintenance area you name when adding the patch to the
queue against regexps for each maintenance area).

It knows the difference between global and non-global maintainers.

Given this, it would be trivial to make it able to generate, for
example, an RSS feed (Or it could send them emails) for a maintainer
that they can subscribe to which will have new items when they have new
patches they can review.

Does anyone believe this would help make sure patches stop dropping
through the cracks?

It can also tell you who to copy on a ping email to make sure it
actually goes to a maintainer.
The interface is under construction, but "okay" for casual use.
http://www.dberlin.org/patches/patches/maintainer_list/745 would be the
one for this patch.

I already tried generating mass-ping emails for patches that have been
outstanding > 2 weeks on the patch queue, but this didn't seem to help.

I could try generating the ping mails for single patches automatically,
and try to randomly disperse them so that you can't just ignore some
email bomb of ping emails, but this seems like it should be unnecessary.
Past the above, I have no better ideas for getting patches reviewed
other than appointing more maintainers.

--Dan


Re: Darwin cross-compiler build failures on ppc MacOSX/Darwin

2006-06-13 Thread Mike Stump
Any suggestions?  Does the -isysroot compiler flag fix this sort of  
issue?  It does not seem to be used in the gcc build.


I'd expect it might.  Run with -v and see if isysroot is given to  
ld.  If not, add -Wl,-isysroot=... to pass it down to ld.  In later  
compilers, we do this automagically, given -isysroot.


Re: Darwin cross-compiler build failures on ppc MacOSX/Darwin

2006-06-13 Thread Bill Northcott

On 14/06/2006, at 5:15 AM, Mike Stump wrote:
None of this is a problem on MacOS X Intel.  The cross-compilers  
build without problems on an Intel Mac.


Well, apparently one solution is to fatten your system.


My attempts to do that just resulted in a system that would not boot :-(
Fortunately, I tried it on a spare disk.


The other might be to try:


x86-*-darwin*,*-*-darwin*)


instead, and then use --with-sysroot=/.


I have done something like that already.  Basically breaking the  
configure test and adding --with-sysroot= to the CONFIGFLAGS  
definition in build-gcc.   This allowed the powerpc-i686 build to go  
through.


[ this list is for people that want to roll up their sleeves and  
fix their own problem.  If you want us to fix it for you, just file  
a PR. :-) ]


I am trying.  Currently the cross-hosted compiler build still breaks,  
which seems to be an issue with the SDK.


The link breaks like this:
/usr/bin/ld: warning fat file: /usr/lib/system/libmathCommon.A.dylib does not contain an architecture that matches the specified -arch flag: i386 (file ignored)

/usr/bin/ld: warning multiple definitions of symbol _strncmp
../libiberty/libiberty.a(strncmp.o) definition of _strncmp in section (__TEXT,__text)
/Developer/SDKs/MacOSX10.4u.sdk/usr/lib/libSystem.dylib(strncmp.So) definition of _strncmp

/usr/bin/ld: Undefined symbols:
_fegetround referenced from libSystem expected to be defined in /usr/lib/system/libmathCommon.A.dylib

collect2: ld returned 1 exit status
make[2]: *** [makedepend] Error 1
make[1]: *** [all-libcpp] Error 2
+ exit 1

This would seem to be because libSystem.B.dylib references the  
system's libmathCommon.


billn% otool -L /Developer/SDKs/MacOSX10.4u.sdk/usr/lib/libSystem.B.dylib

/Developer/SDKs/MacOSX10.4u.sdk/usr/lib/libSystem.B.dylib:
        /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 88.1.3)
        /usr/lib/system/libmathCommon.A.dylib (compatibility version 1.0.0, current version 210.0.0)


This cannot be changed with install_name_tool because  
libSystem.B.dylib is a stub library.


Any suggestions?  Does the -isysroot compiler flag fix this sort of  
issue?  It does not seem to be used in the gcc build.


Bill




Re: CIL back-end

2006-06-13 Thread Joe Buck
On Tue, Jun 13, 2006 at 11:39:25AM -0700, Mike Stump wrote:
> On Jun 13, 2006, at 2:02 AM, Roberto COSTA wrote:
> >In the meantime, I hope this doesn't prevent requesting a  
> >development branch.
> 
> I too think the SC should decide this issue.  They are there for  
> guidance, and on this issue, I think that is what we need.

I've raised the topic on the SC list.  This might take a while to
resolve; RMS must be convinced.


Re: Darwin cross-compiler build failures on ppc MacOSX/Darwin

2006-06-13 Thread Mike Stump

On Jun 13, 2006, at 1:21 AM, Bill Northcott wrote:
I am trying to build a universal APPLE gcc on a MacOS PPC system,  
because I want to tweak it to add a couple extra features.


The assumption is incorrect because MacOS PPC systems do not have
i386 code in their system libraries, only ppc and ppc64.


:-)  Mine does.  If you add a strategic -L to point to the SDK area,  
you might get it to build.


None of this is a problem on MacOS X Intel.  The cross-compilers  
build without problems on an Intel Mac.


Well, apparently one solution is to fatten your system.  The other  
might be to try:



x86-*-darwin*,*-*-darwin*)


instead, and then use --with-sysroot=/.

[ this list is for people that want to roll up their sleeves and fix  
their own problem.  If you want us to fix it for you, just file a  
PR. :-) ]


Re: CIL back-end

2006-06-13 Thread Mike Stump

On Jun 13, 2006, at 2:02 AM, Roberto COSTA wrote:
In the meantime, I hope this doesn't prevent requesting a  
development branch.


I too think the SC should decide this issue.  They are there for  
guidance, and on this issue, I think that is what we need.


I don't think this prevents anyone from working on this project, one  
can always use source forge or even run a svn server on an always on  
connection.


Re: GCC 4.2 emitting static template constants as global symbols?

2006-06-13 Thread Benjamin Redelings




But right now what is given in the bug report is hard to reproduce as there is no source.

Right.  I added a short snippet that reproduces the problem.

-BenRI


Re: Problem with type safety and the "sentinel" attribute

2006-06-13 Thread Stefan Westerfeld
   Hi!

On Fri, Jun 09, 2006 at 07:30:25PM +0200, Tim Janik wrote:
> On Fri, 9 Jun 2006, Kaveh R. Ghazi wrote:
> >>  void print_string_array (const char *array_name,
> >> const char *string, ...) __attribute__
> >> ((__sentinel__));
> >>
> >>   print_string_array ("empty_array", NULL); /* gcc warns, but shouldn't 
> >*/
> >>
> >> The only way out for keeping the sentinel attribute and avoiding the
> >> warning is using
> >>
> >> static void print_string_array (const char *array_name, ...)
> >> __attribute__ ((__sentinel__));
> >
> >I think you could maintain typesafety and silence the warning by
> >keeping the more specific prototype and adding an extra NULL, e.g.:
> >
> >print_string_array ("empty_array", NULL, NULL);
> >
> >Doesn't seem elegant, but it does the job.
> 
> this is an option for a limited set of callers, yes.

For the statistics: after adding the __sentinel__ attribute to the Beast
codebase, I get about 50 of those sentinel warnings that I don't need,
so double-NULL termination would make quite a bit of code messier than
it should be.

> >> By the way, there is already an existing gcc bug, which is about the
> >> same thing (NULL passed within named args), but wants to have it the
> >> way it works now:
> >>
> >>   http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21911
> >
> >Correct, the feature as I envisioned it expected the sentinel to
> >appear in the variable arguments only.  This PR reflects when I found
> >out it didn't do that and got around to fixing it.  Note the "buggy"
> >behavior wasn't exactly what you wanted either because GCC got fooled
> >by a sentinel in *any* of the named arguments, not just the last one.

Ah, I see.

> >> so if it gets changed, then gcc might need to support both
> >>  - NULL termination within the last named parameter allowed
> >>  - NULL termination only allowed within varargs parameters (like it is
> >>now)
> >
> >I'm not against this enhancement, but you need to specify a syntax
> >that allows the old behavior but also allows doing it your way.
> >
> >Hmm, perhaps we could check for attribute "nonnull" on the last named
> >argument, if it exists then that can't be the sentinel, if it's not
> >there then it does what you want.  This is not completely backwards
> >compatible because anyone wanting the existing behavior has to add the
> >attribute nonnull.  But there's precedent for this when attribute
> >printf split out whether the format specifier could be null...
> >
> >We could also create a new attribute name for the new behavior.  This
> >would preserve backwards compatibility.  I like this idea better.
> 
> i agree here. as far as the majority of the GLib and Gtk+ APIs are 
> concerned,
> we don't really need the flexibility of the sentinel attribute but rather
> a compiler check on whether the last argument used in a function call
> is NULL or 0 (regardless of whether it's the last named arg or already part
> of the varargs list).
> that's also why the actual sentinel wrapper in GLib looks like this:
> 
>   #define G_GNUC_NULL_TERMINATED __attribute__((__sentinel__))
> 
> so, if i was to make a call on this issue, i'd either introduce
> __attribute__((__null_terminated__)) with the described semantics,
> or have __attribute__((__sentinel__(-1))) work essentially like
> __attribute__((__sentinel__(0))) while also accepting 0 in the position
> of the last named argument.

I also like the backwards-compatible way better than trying to somehow
modify the attribute; changing it may break something now, or - which
would also be bad - it may make something that somebody will want to
check with the sentinel attribute in some future program impossible.

The only case that I ever needed was NULL termination, so I think
implementing one of the two proposals Tim made would be sufficient.
Personally I like __attribute__((__null_terminated__)) better, because
a -1 sentinel may be less intuitive to read in the source code. So this
would be my suggestion for a "syntax specification".

> >Next you need to recruit someone to implement this enhancement, or
> >submit a patch. :-) Although given that you can silence the warning by
> >adding an extra NULL at the call site, I'm not sure it's worth it.
> 
> i would say this is definitely worth it, because the issue also shows up in
> other code that is widely used:
>   gpointerg_object_new   (GType   object_type,
>   const gchar*first_property_name,
>   ...);
> that's for instance a function which is called in many projects. 
> putting the burden on the caller is clearly the wrong trade off here.
> 
> so please take this as a vote for the worthiness of a fix ;)

Good. Of course I would be happy if somebody with knowledge of the
compiler source could implement it. I never hacked gcc code before.
But since you suggested sending a patch, I'll at least try to implement it.

Re: GCC 4.2 emitting static template constants as global symbols?

2006-06-13 Thread Andrew Pinski
> 
> Ian Lance Taylor wrote:
> > Benjamin Redelings <[EMAIL PROTECTED]> writes:
> >
> >   
> >> substitution.o:(.data+0x0): multiple definition of
> >> `_ZN5boost7numeric5ublas21scalar_divides_assignIT_T0_E8computedE'
> >> 
> >
> > I can't make sense of that as a mangled name.  It has template
> > parameter references but no template definition.  That suggests that
> > it is purely abstract.  But we shouldn't have a symbol for an abstract
> > object.
> >
> > So this looks like a bug to me.
> >
> > Ian
> >   
> Thanks!  This is now PR28016

But right now what is given in the bug report is hard to reproduce, as there is
no source, only snippets of sources (which goes against what is mentioned on
http://gcc.gnu.org/bugs.html).

-- Andrew


Re: GCC 4.2 emitting static template constants as global symbols?

2006-06-13 Thread Benjamin Redelings

Ian Lance Taylor wrote:

Benjamin Redelings <[EMAIL PROTECTED]> writes:

  

substitution.o:(.data+0x0): multiple definition of
`_ZN5boost7numeric5ublas21scalar_divides_assignIT_T0_E8computedE'



I can't make sense of that as a mangled name.  It has template
parameter references but no template definition.  That suggests that
it is purely abstract.  But we shouldn't have a symbol for an abstract
object.

So this looks like a bug to me.

Ian
  

Thanks!  This is now PR28016

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=28016

-BenRI


Re: CIL back-end

2006-06-13 Thread Daniel Berlin
Andrew Haley wrote:
> Roberto COSTA writes:
>  > 
>  > By the way, from the previous messages, I understand that the
>  > inclusion of a CIL back-end into gcc cannot be taken as granted
>  > until the issue is discussed and an approval is obtained.
> 
> Right.  And I wouldn't hold my breath waiting.
> 
>  > In the meantime, I hope this doesn't prevent requesting a development 
>  > branch. Without that, it would be much more difficult to build a 
>  > collaborative, open and world-wide visible development environment.
> 
> This is a sensitive matter.  Whether or not this can be done even on a
> development branch is a matter for the SC.



Well, we've generally allowed development branches for anyone who has
write access to the repository and a copyright assignment (which is a
prereq for write access anyway).

Is this not our policy in this case as well, regardless of whether we
believe it will ever be accepted to mainline?

> 
> Andrew.
> 



Re: libsupc++.a(eh_globals.o): In function `__gnu_internal::get_global()': undefined reference to `___tls_get_addr'

2006-06-13 Thread Jakub Jelinek
On Tue, Jun 13, 2006 at 08:35:17AM -0700, Ian Lance Taylor wrote:
> Well, your libstdc++ was configured for a system which supports TLS
> (Thread Local Storage).  That causes it to call __tls_get_addr in some
> cases.  And you are explicitly linking against -lsupc++, which is an
> archive, not a shared library.  This means that your program has a
> direct reference to __tls_get_addr which needs to be satisfied.
> 
> Normally __tls_get_addr is defined by the dynamic linker itself.  When
> linking an executable, one normally links against the dynamic linker,
> so the symbol reference is satisfied.  When linking a shared library,
> one normally does not link against the dynamic linker, but that's OK
> because shared libraries are permitted to have undefined references.
> 
> However, you are linking with -z defs, which directs the linker to
> prohibit undefined references even though it is linking a shared
> library.

If you have sufficiently recent glibc, you have something like:
/* GNU ld script
   Use the shared library, but some functions are only in
   the static library, so try that secondarily.  */
OUTPUT_FORMAT(elf64-x86-64)
GROUP ( /lib64/libc.so.6 /usr/lib64/libc_nonshared.a  AS_NEEDED ( /lib64/ld-linux-x86-64.so.2 ) )
in /usr/lib64/libc.so (and similarly for other arches).
If you have old glibc (approx. 16 months old or older), you either
need to stop using -Wl,-z,defs in this case, or add the dynamic
linker on the command line explicitly.

Jakub


Re: Generator programs can only be built with optimization enabled?

2006-06-13 Thread Eric Botcazou
> However, the audit trail of the PR seems to say that now
> -fkeep-inline-functions is sort of implied by -O0; I can build
> insn-conditions.md with "-O0 -fkeep-inline-functions" so I'm not
> affected by the PR.

Comment #36 seems to say that we're back to the initial state.

-- 
Eric Botcazou


Re: libsupc++.a(eh_globals.o): In function `__gnu_internal::get_global()': undefined reference to `___tls_get_addr'

2006-06-13 Thread Ian Lance Taylor
"yang xiaoxin" <[EMAIL PROTECTED]> writes:

> Building project store
> =
> /usr/src/rpm/BUILD/OOB680_m5/store/source
> -
> /usr/src/rpm/BUILD/OOB680_m5/store/util
> --
> Making: ../unxlngi6.pro/lib/libstore.so.3
> g++ -Wl,-z,combreloc -Wl,-z,defs -Wl,-rpath,'$ORIGIN'
> "-Wl,-hlibstore.so.3" -shared -Wl,-O1 -Wl,--version-script
> ../unxlngi6.pro/misc/store_store.map -L../unxlngi6.pro/lib -L../lib
> -L/usr/src/rpm/BUILD/OOB680_m5/solenv/unxlngi6/lib
> -L/usr/src/rpm/BUILD/OOB680_m5/solver/680/unxlngi6.pro/lib
> -L/usr/src/rpm/BUILD/OOB680_m5/solenv/unxlngi6/lib -L/usr/lib
> -L/usr/jre/lib/i386 -L/usr/jre/lib/i386/client
> -L/usr/jre/lib/i386/native_threads -L/usr/X11R6/lib
> -L/usr/lib/firefox-1.5.0.1 ../unxlngi6.pro/slo/store_version.o
> ../unxlngi6.pro/slo/store_description.o -o
> ../unxlngi6.pro/lib/libstore.so.3 ../unxlngi6.pro/slo/object.o
> ../unxlngi6.pro/slo/memlckb.o ../unxlngi6.pro/slo/filelckb.o
> ../unxlngi6.pro/slo/storbase.o ../unxlngi6.pro/slo/storcach.o
> ../unxlngi6.pro/slo/stordata.o ../unxlngi6.pro/slo/storlckb.o
> ../unxlngi6.pro/slo/stortree.o ../unxlngi6.pro/slo/storpage.o
> ../unxlngi6.pro/slo/store.o -luno_sal -lsupc++ -lgcc_s -ldl -lpthread
> -lm
> /usr/lib/libsupc++.a(eh_globals.o): In function 
> `__gnu_internal::get_global()':
> : undefined reference to `___tls_get_addr'
> collect2: ld returned 1
> dmake:  Error code 1, while making '../unxlngi6.pro/lib/libstore.so.3'
> '---* tg_merge.mk *---'
> 
> ERROR: Error 65280 occurred while making 
> /usr/src/rpm/BUILD/OOB680_m5/store/util
> dmake:  Error code 1, while making 'build_instsetoo_native'
> 
> 
> What's the problem?

Well, your libstdc++ was configured for a system which supports TLS
(Thread Local Storage).  That causes it to call __tls_get_addr in some
cases.  And you are explicitly linking against -lsupc++, which is an
archive, not a shared library.  This means that your program has a
direct reference to __tls_get_addr which needs to be satisfied.

Normally __tls_get_addr is defined by the dynamic linker itself.  When
linking an executable, one normally links against the dynamic linker,
so the symbol reference is satisfied.  When linking a shared library,
one normally does not link against the dynamic linker, but that's OK
because shared libraries are permitted to have undefined references.

However, you are linking with -z defs, which directs the linker to
prohibit undefined references even though it is linking a shared
library.

So you get an error.

Hope this helps.

Ian


Re: CIL back-end

2006-06-13 Thread Sebastian Pop
Roberto COSTA wrote:
> Ori Bernstein wrote:
> >On Mon, 12 Jun 2006 09:50:13 +0200, Roberto COSTA <[EMAIL PROTECTED]> 
> >said:
> >
> >
> >>Hello,
> >>
> >>I'm working for an R&D organization of STMicroelectronics.  Within our 
> >>team we have decided to write a gcc back-end that produces CIL binaries 
> >>(compliant with ECMA specification, see 
> >>http://www.ecma-international.org/publications/standards/Ecma-335.htm).
> >>Our main motivation is the ability to produce highly-optimized CIL 
> >>binaries out of plain C code (and C++ in the future), and possibly to 
> >>add some optimizations to improve, if needed, the quality of the 
> >>generated code.
> >
> >
> >It seems that there's a Summer of Code student working on the exact same 
> >item:
> >http://code.google.com/soc/mono/about.html
> >
> >Perhaps you could collaborate with him, or (as I believe the Summer of Code
> >rules might require) build off his work after it gets submitted. I'd 
> >suggest
> >you contact the Mono project about it.
> 
> Thanks for the info.
> A few days ago, the student posted a help request to gcc-help mailing 
> list (http://gcc.gnu.org/ml/gcc-help/2006-06/msg00018.html), which shows 
> he's at an early stage of the work.

Yi Wang, here is an early attempt at an RTL-based CIL back-end, together
with the difficulties of dealing with such a low-level representation:

"GCC .NET---a feasibility study". Jeremy Singer. In Proceedings of the
First International Workshop on C# and .NET Technologies, pages
55--62. Feb 2003. http://wscg.zcu.cz/Rotor/NET_2003/Papers/Singer.pdf

so I think that an RTL CIL generator is really not a good idea.

> I think in my team we're at a more advanced stage, since we have ideas 
> about how to do things and we start having some prototype code.
> I hope a collaboration is possible; I will certainly contact him and the 
> mentor of the SoC project about it. If there are restrictions imposed by 
> SoC rules, it's up to them to let me know.
> 
> By the way, from the previous messages, I understand that the inclusion 
> of a CIL back-end into gcc cannot be taken as granted until the issue is 
> discussed and an approval is obtained.
> In the meantime, I hope this doesn't prevent requesting a development 
> branch. Without that, it would be much more difficult to build a 
> collaborative, open and world-wide visible development environment.
> 
> Not working on the development of the CIL back-end, or even letting it 
> stalled, is not a choice for my team and myself.
> What is a choice is to share its development and the related 
> infrastructure in the most open way... I think it's the best choice, for 
> all parties; I really hope it's a viable one!
> 
> Cheers,
> Roberto


Re: Generator programs can only be built with optimization enabled?

2006-06-13 Thread Paolo Bonzini

Eric Botcazou wrote:

I didn't understand the purpose of:

(build/gencondmd.o): Filter out -fkeep-inline-functions.


Read the comment?

It can help indeed.

However, the audit trail of the PR seems to say that now 
-fkeep-inline-functions is sort of implied by -O0; I can build 
insn-conditions.md with "-O0 -fkeep-inline-functions" so I'm not 
affected by the PR.  In this case, will your patch still fix the bug 
when using GCC with BOOT_CFLAGS=-O0?


Paolo



Re: help interpreting gcc 4.1.1 optimisation bug

2006-06-13 Thread andrew
On Tue, Jun 13, 2006 at 12:01:39PM +0100, Andrew Haley wrote:
> 
> All you've got here is an inline asm version of 
> 
> inline void longcpy(long* _dst, long* _src, unsigned _numwords)
> {
>   __builtin_memcpy (_dst, _src, _numwords * sizeof (long));
> }
> 
> which gcc will optimize if it can.  
> 
> These days, "rep movs" is not as advantageous as it once was, and you
> may get better performance by allowing gcc to choose how to do memory
> copies.
> 

Hi Andrew,

Actually, I knew this, but I was using longcpy as a bellwether
of many more complex inline-asm functions in a C++ big-integer
library.

I've just finished a trawl through the entire library, fixing a good
many things which I now know (thanks to you guys) could really confuse
the optimiser.

Many thanks,

Andrew Walrond


Re: help interpreting gcc 4.1.1 optimisation bug

2006-06-13 Thread Andrew Haley
[EMAIL PROTECTED] writes:
 > On Tue, Jun 13, 2006 at 10:37:29AM +, [EMAIL PROTECTED] wrote:
 > > On Mon, Jun 12, 2006 at 04:59:04PM -0700, Ian Lance Taylor wrote:
 > > > 
 > > > Probably better to say that these are read-write operands, using the
 > > > '+' constraint.
 > > > 
 > > > > Now everything works fine at -O3. However, I really don't understand
 > > > > the '&' early clobber constraint modifer. What use is it?
 > > > 
 > > > It is needed for assembly code which has both outputs and inputs, and
 > > > which includes more than one instruction, such that at least one of
 > > > the outputs is generated by an instruction which runs before another
 > > > instruction which requires one of the inputs.  The '&' constraint
 > > > tells gcc that some of the output operands are produced before some of
 > > > the input operands are used.  gcc will then avoid allocating the input
 > > > and output operands to the same register.
 > > > 
 > > 
 > > Ian, thanks for the reply.
 > > 
 > > So, in conclusion, a correct longcpy() would look like this:
 > > 
 > > void longcpy(long* _dst, long* _src, unsigned _numwords)
 > >  {
 > >  asm volatile (
 > >  "cld \n\t"
 > >  "rep \n\t"
 > >  "movsl   \n\t"
 > >// Outputs (read/write)
 > >  : "+S" (_src), "+D" (_dst), "+c" (_numwords)
 > >// Inputs - specify same registers as outputs
 > >  : "0"  (_src), "1"  (_dst), "2"  (_numwords)
 > >// Clobbers: direction flag, so "cc", and "memory"
 > >  : "cc", "memory"
 > >  );
 > >  }
 > > 
 > 
 > Which doesn't compile ;(
 > 
 > The correct version is I think,
 > 
 > void longcpy(long* _dst, long* _src, unsigned _numwords)
 >  {
 >  asm volatile (
 >  "cld \n\t"
 >  "rep \n\t"
 >  "movsl   \n\t"
 >  // Outputs (read/write)
 >  : "=S" (_src), "=D" (_dst), "=c" (_numwords)
 >  // Inputs - specify same registers as outputs
 >  : "0"  (_src), "1"  (_dst), "2"  (_numwords)
 >  // Clobbers: direction flag, so "cc", and "memory"
 >  : "cc", "memory"
 >  );
 >  }

All you've got here is an inline asm version of 

inline void longcpy(long* _dst, long* _src, unsigned _numwords)
{
  __builtin_memcpy (_dst, _src, _numwords * sizeof (long));
}

which gcc will optimize if it can.  

These days, "rep movs" is not as advantageous as it once was, and you
may get better performance by allowing gcc to choose how to do memory
copies.

Andrew.



Re: help interpreting gcc 4.1.1 optimisation bug

2006-06-13 Thread andrew
On Tue, Jun 13, 2006 at 10:37:29AM +, [EMAIL PROTECTED] wrote:
> On Mon, Jun 12, 2006 at 04:59:04PM -0700, Ian Lance Taylor wrote:
> > 
> > Probably better to say that these are read-write operands, using the
> > '+' constraint.
> > 
> > > Now everything works fine at -O3. However, I really don't understand
> > > the '&' early clobber constraint modifer. What use is it?
> > 
> > It is needed for assembly code which has both outputs and inputs, and
> > which includes more than one instruction, such that at least one of
> > the outputs is generated by an instruction which runs before another
> > instruction which requires one of the inputs.  The '&' constraint
> > tells gcc that some of the output operands are produced before some of
> > the input operands are used.  gcc will then avoid allocating the input
> > and output operands to the same register.
> > 
> 
> Ian, thanks for the reply.
> 
> So, in conclusion, a correct longcpy() would look like this:
> 
> void longcpy(long* _dst, long* _src, unsigned _numwords)
>  {
>  asm volatile (
>  "cld \n\t"
>  "rep \n\t"
>  "movsl   \n\t"
>   // Outputs (read/write)
>  : "+S" (_src), "+D" (_dst), "+c" (_numwords)
>   // Inputs - specify same registers as outputs
>  : "0"  (_src), "1"  (_dst), "2"  (_numwords)
>   // Clobbers: direction flag, so "cc", and "memory"
>  : "cc", "memory"
>  );
>  }
> 

Which doesn't compile ;(

The correct version is I think,

void longcpy(long* _dst, long* _src, unsigned _numwords)
 {
 asm volatile (
 "cld \n\t"
 "rep \n\t"
 "movsl   \n\t"
// Outputs (read/write)
 : "=S" (_src), "=D" (_dst), "=c" (_numwords)
// Inputs - specify same registers as outputs
 : "0"  (_src), "1"  (_dst), "2"  (_numwords)
// Clobbers: direction flag, so "cc", and "memory"
 : "cc", "memory"
 );
 }


> Thanks
> 
> Andrew Walrond


Re: help interpreting gcc 4.1.1 optimisation bug

2006-06-13 Thread andrew
On Mon, Jun 12, 2006 at 04:59:04PM -0700, Ian Lance Taylor wrote:
> 
> Probably better to say that these are read-write operands, using the
> '+' constraint.
> 
> > Now everything works fine at -O3. However, I really don't understand
> > the '&' early clobber constraint modifer. What use is it?
> 
> It is needed for assembly code which has both outputs and inputs, and
> which includes more than one instruction, such that at least one of
> the outputs is generated by an instruction which runs before another
> instruction which requires one of the inputs.  The '&' constraint
> tells gcc that some of the output operands are produced before some of
> the input operands are used.  gcc will then avoid allocating the input
> and output operands to the same register.
> 

Ian, thanks for the reply.

So, in conclusion, a correct longcpy() would look like this:

void longcpy(long* _dst, long* _src, unsigned _numwords)
 {
 asm volatile (
 "cld \n\t"
 "rep \n\t"
 "movsl   \n\t"
// Outputs (read/write)
 : "+S" (_src), "+D" (_dst), "+c" (_numwords)
// Inputs - specify same registers as outputs
 : "0"  (_src), "1"  (_dst), "2"  (_numwords)
// Clobbers: direction flag, so "cc", and "memory"
 : "cc", "memory"
 );
 }

Thanks

Andrew Walrond


Re: GCC trunk build failed on ia64: ICE in __gcov_init

2006-06-13 Thread Maxim Kuvyrkov

Grigory Zagorodnev wrote:

Hi!

Build of mainline GCC on ia64-redhat-linux failed since Thu Jun 8 
16:23:09 UTC 2006 (revision 114488). Last successfully built revision is 
114468.


I wonder if somebody sees the same.


...


- Grigory



This was fixed in revision 114604.


--
Maxim



GCC trunk build failed on ia64: ICE in __gcov_init

2006-06-13 Thread Grigory Zagorodnev

Hi!

Build of mainline GCC on ia64-redhat-linux failed since Thu Jun 8 
16:23:09 UTC 2006 (revision 114488). Last successfully built revision is 
114468.


I wonder if somebody sees the same.

make[4]: Entering directory `/home/user/gcc-42/bld/gcc'
/home/user/gcc-42/bld/./gcc/xgcc -B/home/user/gcc-42/bld/./gcc/
  -B/home/user/gcc-42/usr/ia64-unknown-linux-gnu/bin/
  -B/home/user/gcc-42/usr/ia64-unknown-linux-gnu/lib/
  -isystem /home/user/gcc-42/usr/ia64-unknown-linux-gnu/include
  -isystem /home/user/gcc-42/usr/ia64-unknown-linux-gnu/sys-include
  -O2 -O2 -g -O2 -DIN_GCC -DUSE_LIBUNWIND_EXCEPTIONS
  -W -Wall -Wwrite-strings -Wstrict-prototypes -Wmissing-prototypes
  -Wold-style-definition -isystem ./include -fPIC -DUSE_GAS_SYMVER -g
  -DHAVE_GTHR_DEFAULT -DIN_LIBGCC2 -D__GCC_FLOAT_NOT_NEEDED -I. -I.
  -I/home/user/gcc-42/src/gcc -I/home/user/gcc-42/src/gcc/.
  -I/home/user/gcc-42/src/gcc/../include
  -I/home/user/gcc-42/src/gcc/../libcpp/include
  -I/home/user/gcc-42/src/gcc/../libdecnumber -I../libdecnumber
  -DL_gcov -c /home/user/gcc-42/src/gcc/libgcov.c -o libgcc/./_gcov.o
/home/user/gcc-42/src/gcc/libgcov.c: In function '__gcov_init':
/home/user/gcc-42/src/gcc/libgcov.c:577: internal compiler error: 
Segmentation fault

Please submit a full bug report,
with preprocessed source if appropriate.
See <http://gcc.gnu.org/bugs.html> for instructions.
make[4]: *** [libgcc/./_gcov.o] Error 1
make[4]: Leaving directory `/home/user/gcc-42/bld/gcc'
make[3]: *** [libgcc.a] Error 2


Configured with:
--prefix=/home/user/gcc-42/usr --program-suffix=-42 -enable-shared 
--enable-threads=posix --enable-checking --with-system-zlib 
--enable-__cxa_atexit


- Grigory


Re: CIL back-end

2006-06-13 Thread Andrew Haley
Roberto COSTA writes:
 > 
 > By the way, from the previous messages, I understand that the
 > inclusion of a CIL back-end into gcc cannot be taken as granted
 > until the issue is discussed and an approval is obtained.

Right.  And I wouldn't hold my breath waiting.

 > In the meantime, I hope this doesn't prevent requesting a development 
 > branch. Without that, it would be much more difficult to build a 
 > collaborative, open and world-wide visible development environment.

This is a sensitive matter.  Whether or not this can be done even on a
development branch is a matter for the SC.

Andrew.


Re: CIL back-end

2006-06-13 Thread Roberto COSTA

Ori Bernstein wrote:

On Mon, 12 Jun 2006 09:50:13 +0200, Roberto COSTA <[EMAIL PROTECTED]> said:



Hello,

I'm working for an R&D organization of STMicroelectronics.  Within our 
team we have decided to write a gcc back-end that produces CIL binaries 
(compliant with ECMA specification, see 
http://www.ecma-international.org/publications/standards/Ecma-335.htm).
Our main motivation is the ability to produce highly-optimized CIL 
binaries out of plain C code (and C++ in the future), and possibly to 
add some optimizations to improve, if needed, the quality of the 
generated code.



It seems that there's a Summer of Code student working on the exact same item:
http://code.google.com/soc/mono/about.html

Perhaps you could collaborate with him, or (as I believe the Summer of Code
rules might require) build off his work after it gets submitted. I'd suggest
you contact the Mono project about it.


Thanks for the info.
A few days ago, the student posted a help request to gcc-help mailing 
list (http://gcc.gnu.org/ml/gcc-help/2006-06/msg00018.html), which shows 
he's at an early stage of the work.
I think my team is at a more advanced stage, since we have ideas about 
how to do things and we are starting to have some prototype code.
I hope a collaboration is possible; I will certainly contact him and the 
mentor of the SoC project about it. If there are restrictions imposed by 
SoC rules, it's up to them to let me know.


By the way, from the previous messages, I understand that the inclusion 
of a CIL back-end into gcc cannot be taken as granted until the issue is 
discussed and an approval is obtained.
In the meantime, I hope this doesn't prevent requesting a development 
branch. Without that, it would be much more difficult to build a 
collaborative, open and world-wide visible development environment.


Not working on the development of the CIL back-end, or even letting it 
stall, is not a choice for my team and myself.
What is a choice is to share its development and the related 
infrastructure in the most open way... I think it's the best choice, for 
all parties; I really hope it's a viable one!


Cheers,
Roberto


Re: gcc 4.1.1, template specialisation, -O3

2006-06-13 Thread Richard Guenther

On 6/13/06, Gavin Band <[EMAIL PROTECTED]> wrote:

Hello,
I hope this is the right place for this sort of question, which concerns
the behaviour of gcc (versions 3.4.4 and 4.1.1) and template
specialisations.

I have some code split into three files header.hpp, specialisation.cpp,
and main.cpp, given below.  A class having a template member function
print() is defined in "header.hpp", and a specialisation of print() for
std::strings is defined in "specialisation.cpp".  In "main.cpp" print()
is used to print an integer and then a string.

Without any optimisation turned on (i.e. compiling with the command line

"g++ -o main main.cpp specialisation.cpp")

the program output is

$ ./main
The object of type T was: 256.
The string was: hello.

If I compile with -O3, however, I get

$ ./main
The object of type T was: 256.
The object of type T was: hello.

Let me call the former behaviour "good" and the latter "bad" - here are
the optimisation flags which cause "good" and "bad" behaviour on my
machine:

gcc 3.4.4:  good: -O0, -O1 bad: -O2 or greater.
gcc 4.1.1:  good: -O0 bad: -O1 or greater

But perhaps it is not gcc but rather my code that is at fault?  This is
on a Linux box.  The code is below.



#include <iostream>
#include <string>
#include "header.hpp"

int main(int argc, char** argv)
{
  std::string s("hello");
  int i(256);

  A a;
  a.print(std::cerr, i);
  a.print(std::cerr, s);

  return 0;
}


You need to make a declaration of the specialization visible to main(),
otherwise gcc will inline the non-specialized version at -O and above.

Richard.


gcc 4.1.1, template specialisation, -O3

2006-06-13 Thread Gavin Band
Hello,
I hope this is the right place for this sort of question, which concerns
the behaviour of gcc (versions 3.4.4 and 4.1.1) and template
specialisations.

I have some code split into three files header.hpp, specialisation.cpp,
and main.cpp, given below.  A class having a template member function
print() is defined in "header.hpp", and a specialisation of print() for
std::strings is defined in "specialisation.cpp".  In "main.cpp" print()
is used to print an integer and then a string.

Without any optimisation turned on (i.e. compiling with the command line

"g++ -o main main.cpp specialisation.cpp")

the program output is

$ ./main
The object of type T was: 256.
The string was: hello.

If I compile with -O3, however, I get

$ ./main
The object of type T was: 256.
The object of type T was: hello.

Let me call the former behaviour "good" and the latter "bad" - here are
the optimisation flags which cause "good" and "bad" behaviour on my
machine:

gcc 3.4.4:  good: -O0, -O1 bad: -O2 or greater.
gcc 4.1.1:  good: -O0 bad: -O1 or greater

But perhaps it is not gcc but rather my code that is at fault?  This is
on a Linux box.  The code is below.

Thanks for your help,
Gavin Band


- file header.hpp --

#ifndef HEADER
#define HEADER

#include <iostream>

struct A
{
  template<typename T>
  void print(std::ostream& out, T const& t)
  {
 out << "The object of type T was: " << t << ".\n";
  }
};

#endif

-- file specialisation.cpp --

#include <iostream>
#include <string>
#include "header.hpp"

template<>
void A::print(std::ostream& out, std::string const& t)
{
  out << "The string was: " << t << ".\n";
}

-- file main.cpp --

#include <iostream>
#include <string>
#include "header.hpp"

int main(int argc, char** argv)
{
  std::string s("hello");
  int i(256);

  A a;
  a.print(std::cerr, i);
  a.print(std::cerr, s);

  return 0;
}

-- end of files --



Re: {Spam?} Re: Generator programs can only be built with optimization enabled?

2006-06-13 Thread Eric Botcazou
> I didn't understand the purpose of:
>
>   (build/gencondmd.o): Filter out -fkeep-inline-functions.

Read the comment?

-- 
Eric Botcazou


Darwin cross-compiler build failures on ppc MacOSX/Darwin

2006-06-13 Thread Bill Northcott
I am trying to build a universal Apple gcc on a MacOS PPC system, 
because I want to tweak it to add a couple of extra features.


The build fails as already described here:
http://www.cocoabuilder.com/archive/message/cocoa/2005/6/24/139961

The problem seems to be around line 1626 of gcc/configure.ac  in both  
Apple and FSF versions where the following appears:

if test x$host != x$target
then
CROSS="-DCROSS_COMPILE"
ALL=all.cross
SYSTEM_HEADER_DIR='$(CROSS_SYSTEM_HEADER_DIR)'
case "$host","$target" in
# Darwin crosses can use the host system's libraries and headers,
# because of the fat library support.  Of course, it must be the
# same version of Darwin on both sides.  Allow the user to
# just say --target=foo-darwin without a version number to mean
# "the version on this system".
*-*-darwin*,*-*-darwin*)
hostos=`echo $host | sed 's/.*-darwin/darwin/'`
targetos=`echo $target | sed 's/.*-darwin/darwin/'`
if test $hostos = $targetos -o $targetos = darwin ; then
CROSS=
SYSTEM_HEADER_DIR='$(NATIVE_SYSTEM_HEADER_DIR)'
with_headers=yes
fi
;;

i?86-*-*,x86_64-*-* \
| powerpc*-*-*,powerpc64*-*-*)
CROSS="$CROSS -DNATIVE_CROSS" ;;
esac
elif test "x$TARGET_SYSTEM_ROOT" != x; then


As far as I can see, the assumption of the test is currently  
incorrect and the logic of the test is flawed.


The assumption is incorrect because MacOS PPC systems do not have 
i386 code in their system libraries, only ppc and ppc64.  MacOS Intel 
systems have three-way fat libraries with i386, ppc and ppc64 code.


The logic seems to me to be flawed, first because it confuses the 
build OS with the host OS on which the compiler being built is 
expected to run.  Secondly, it ignores what comes after "darwin" in 
the host and target names.  As luck would have it, an FSF build seems 
to invoke configure in gcc with host=ppc-apple-darwin8.6.0 and 
target=i386-apple-darwin8, so the test

if test $hostos = $targetos -o $targetos = darwin ; then

fails and the build succeeds.  In the Apple build, host gets set to 
ppc-apple-darwin8, so the test succeeds and the build fails because 
of the missing i386 code in the system libraries.
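A small shell sketch of the comparison Bill describes; the triplet values come from his message, while the helper function and its labels are invented for illustration:

```shell
# Reproduces the version-suffix comparison from gcc/configure.ac.
classify() {
    hostos=`echo $1 | sed 's/.*-darwin/darwin/'`
    targetos=`echo $2 | sed 's/.*-darwin/darwin/'`
    if test $hostos = $targetos -o $targetos = darwin ; then
        echo "native"   # configure will reuse the host system libraries
    else
        echo "cross"    # configure treats it as a true cross build
    fi
}

# FSF build: host carries a full version, the strings differ, and the
# (accidentally working) cross path is taken.
classify ppc-apple-darwin8.6.0 i386-apple-darwin8   # prints "cross"

# Apple build: host is truncated to ppc-apple-darwin8, the test
# matches, and configure wrongly assumes the host libraries are usable.
classify ppc-apple-darwin8     i386-apple-darwin8   # prints "native"
```

This makes the failure mode concrete: the outcome hinges on whether the version suffix survives into $host, not on whether the host libraries actually contain i386 code.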


There is similar code in the configure script of libstdc++.

None of this is a problem on MacOS X Intel.  The cross-compilers  
build without problems on an Intel Mac.


This has also been reported by someone trying to use the SDKs to 
build a cross compiler on Linux targeting i386 Darwin; I am afraid 
I have lost the reference.


Cheers
Bill Northcott



Re: {Spam?} Re: Generator programs can only be built with optimization enabled?

2006-06-13 Thread Paolo Bonzini

Eric Botcazou wrote:

Thanks for chiming in this discussion.  You've clearly given a good deal
of thought to the problem, and if you have suggestions I'm all ears.



http://gcc.gnu.org/ml/gcc-patches/2006-03/msg00297.html

Cool.  Mark, this is very similar to my patch, but better. :-)

I didn't understand the purpose of:

(build/gencondmd.o): Filter out -fkeep-inline-functions.

Paolo


Re: Generator programs can only be built with optimization enabled?

2006-06-13 Thread Eric Botcazou
> Thanks for chiming in this discussion.  You've clearly given a good deal
> of thought to the problem, and if you have suggestions I'm all ears.

http://gcc.gnu.org/ml/gcc-patches/2006-03/msg00297.html

-- 
Eric Botcazou


Re: Generator programs can only be built with optimization enabled?

2006-06-13 Thread Paolo Bonzini

Eric Botcazou wrote:

An untested patch to do so is attached.  You can try it and, if it
fails, there is also Rainer Orth's patch in comment #14 of the PR.


Sure, but read the date of the comment. :-)
  

Yes, OTOH it is the patch that I like the most...

Thanks for chiming in this discussion.  You've clearly given a good deal 
of thought to the problem, and if you have suggestions I'm all ears.


Paolo



Re: Generator programs can only be built with optimization enabled?

2006-06-13 Thread Eric Botcazou
> An untested patch to do so is attached.  You can try it and, if it
> fails, there is also Rainer Orth's patch in comment #14 of the PR.

Sure, but read the date of the comment. :-)

I'm really wondering what the "Patch URL" field of the PR is for...

IMHO this PR is a striking example of the *major* problems we have been having 
in the patch reviewing department for quite some time.

-- 
Eric Botcazou


Re: Generator programs can only be built with optimization enabled?

2006-06-13 Thread Paolo Bonzini



I think that, after Zack's change, the generator programs that include
rtl.h should be linked with build/vec.o.  That may not be necessary when
optimizing, but it would avoid this problem.  Do you agree?
  
Well, if it fixes the bug, yes: I prefer to fix this in the makefile 
rather than with #ifdef GENERATOR_FILE's here and there.


An untested patch to do so is attached.  You can try it and, if it 
fails, there is also Rainer Orth's patch in comment #14 of the PR.


Paolo
2006-06-13  Paolo Bonzini  <[EMAIL PROTECTED]>

* Makefile.in (BUILD_RTL): Add build/vec.o.
(build/genextract, build/genautomata): Remove dependency on it.

Index: Makefile.in
===================================================================
--- Makefile.in (revision 114487)
+++ Makefile.in (working copy)
@@ -854,7 +854,7 @@ LDEXP_LIB = @LDEXP_LIB@
 # even if we are cross-building GCC.
 BUILD_LIBS = $(BUILD_LIBIBERTY)
 
-BUILD_RTL = build/rtl.o build/read-rtl.o build/ggc-none.o \
+BUILD_RTL = build/rtl.o build/read-rtl.o build/ggc-none.o build/vec.o \
build/min-insn-modes.o build/gensupport.o build/print-rtl.o
 BUILD_ERRORS = build/errors.o
 
@@ -3017,9 +3020,6 @@ genprogmd = attr attrtab automata codes 
 $(genprogmd:%=build/gen%$(build_exeext)): $(BUILD_RTL) $(BUILD_ERRORS)
 
 # These programs need files over and above what they get from the above list.
-build/genextract$(build_exeext) : build/vec.o
-
-build/genautomata$(build_exeext) : build/vec.o
 build/genautomata$(build_exeext) : BUILD_LIBS += -lm
 
 # These programs are not linked with the MD reader.


Re: Generator programs can only be built with optimization enabled?

2006-06-13 Thread Paolo Bonzini



The behavior prior to the top-level bootstrap changes that I and
others repeatedly have mentioned in email and IRC: if I type "make cc1" in
the gcc subdirectory, the build should be invoked with the appropriate
options from the current build stage.  In other words, if I have a
completely bootstrapped compiler, change a source file, enter $objdir/gcc,
and type "make cc1", I expect cc1 to be rebuilt with CFLAGS="-O2 -g".
Instead, if I type "make" or "make quickstrap", the compilation uses
CFLAGS=-g.


I see.  Thanks for the explanation.

Paolo