Re: Why doesn't libgcc define _chkstk on MinGW?

2006-11-04 Thread Ross Ridge
Ross Ridge wrote:
>There are other MSC library functions that MinGW doesn't provide, so
>libraries may not link even with a _chkstk alias.

Mark Mitchell wrote:
>Got a list?

Probably the most common missing symbols, using their assembler
names, are:

__ftol2
@[EMAIL PROTECTED]
___security_cookie

These are newer symbols in the MS CRT library and also cause problems
for Visual C++ 6.0 users.  I've worked around the missing security-cookie
symbols by providing my own stub implementations, but apparently newer
versions of the Platform SDK include a library that fully implements them.
I'm not sure how _ftol2 is supposed to differ from _ftol, but since I
use -ffast-math anyway, I've just used the following code as a
workaround:

long _ftol2(double f) { return (long) f; }

Looking at an old copy of MSVCRT.LIB (c. 1998) other missing symbols
that might be a problem include:

T __alldiv [I]
T __allmul [I]
T __alloca_probe [I][*]
T __allrem [I]
T __allshl [I][*]
T __allshr [I]
T __aulldiv [I]
T __aullrem [I]
T __aullshr [I]
A __except_list [I][*]
T __matherr [D]
T __setargv [D]
T ___setargv [X]
A __tls_array [I]
B __tls_index [I]
R __tls_used [I]
T __wsetargv [D]

[D] Documented external interface
[I] Implicitly referenced by the MSC compiler
[X] Undocumented external interface
[*] Missing symbols I've encountered

There are other linking-related problems that can make an MSC-compiled
static library incompatible, including failure to process MSC
initialization and termination sections, no support for thread-local
variables, and broken COMDAT section handling.

Ross Ridge



gcc-4.3-20061104 is now available

2006-11-04 Thread gccadmin
Snapshot gcc-4.3-20061104 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.3-20061104/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.3 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/trunk revision 118481

You'll find:

gcc-4.3-20061104.tar.bz2  Complete GCC (includes all of below)

gcc-core-4.3-20061104.tar.bz2 C front end and core compiler

gcc-ada-4.3-20061104.tar.bz2  Ada front end and runtime

gcc-fortran-4.3-20061104.tar.bz2  Fortran front end and runtime

gcc-g++-4.3-20061104.tar.bz2  C++ front end and runtime

gcc-java-4.3-20061104.tar.bz2 Java front end and runtime

gcc-objc-4.3-20061104.tar.bz2 Objective-C front end and runtime

gcc-testsuite-4.3-20061104.tar.bz2  The GCC testsuite

Diffs from 4.3-20061028 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.3
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


16 byte alignment hint for sse vectorization

2006-11-04 Thread Michael James

Hello,

I have been playing with gcc's new (to me) auto vectorization
optimizations. I have a particular loop for which I have made external
provisions to ensure that the data is 16-byte aligned. I have tried
everything I can think of to give gcc the hint that it is operating on
aligned data, but still the vectorizer warns that it is operating on
unaligned data and generates the less efficient MOVLPS/MOVUPS instead
of MOVAPS.

The code is like this:

#define SSE __attribute__((aligned (16)))

typedef float matrix_t[100][1024];

matrix_t aa SSE, bb SSE, cc SSE;

void calc(float *a, float *b, float *c) {
 int i, n = 1024;

 for (i=0; i<n; i++) ...

Bootstrap failure on trunk on linux? (libgmp.so.3 exists, but not found)

2006-11-04 Thread Brooks Moses
I've been setting up a Debian box to do builds on, and make bootstrap on 
mainline is failing somewhere in the middle of Stage 1.  The problem 
appears to be that it's not looking in the right places for libgmp.so.3 
when it calls ./gcc/xgcc at the end of the stage.


-

The box, for what it's worth, is an out-of-the-box Debian Stable, with 
the latest GMP and fully-patched MPFR built by hand and installed in 
/usr/local/lib:


~/build-trunk> ls /usr/local/lib
firmware  libgmp.a  libgmp.la  libgmp.so  libgmp.so.3  libgmp.so.3.4.1 
libmpfr.a  libmpfr.la


I used the following configure line:

~/build-trunk> ../svn-source/configure --verbose 
--prefix=/home/brooks/gcc-trunk --enable-languages=c,c++,fortran 
--with-gmp=/usr/local --with-mpfr=/usr/local


This appears to work quite well for a while; configure finds the mpfr 
and gmp libraries, and is quite happy with them.  However, a good ways 
into the build, it fails on the following error (with a few messages 
quoted before that for context):


gcc -c   -g -fkeep-inline-functions -DIN_GCC   -W -Wall -Wwrite-strings 
-Wstrict-prototypes -Wmissing-prototypes -Wmissing-format-attribute 
-fno-common   -DHAVE_CONFIG_H -I. -I. -I../../svn-source/gcc 
-I../../svn-source/gcc/. -I../../svn-source/gcc/../include 
-I../../svn-source/gcc/../libcpp/include -I/usr/local/include 
-I/usr/local/include -I../../svn-source/gcc/../libdecnumber 
-I../libdecnumber ../../svn-source/gcc/cppspec.c -o cppspec.o
gcc   -g -fkeep-inline-functions -DIN_GCC   -W -Wall -Wwrite-strings 
-Wstrict-prototypes -Wmissing-prototypes -Wmissing-format-attribute 
-fno-common   -DHAVE_CONFIG_H  -o cpp gcc.o opts-common.o gcc-options.o 
cppspec.o \
  intl.o prefix.o version.o driver-i386.o  ../libcpp/libcpp.a 
../libiberty/libiberty.a ../libdecnumber/libdecnumber.a -L/usr/local/lib 
-L/usr/local/lib -lmpfr -lgmp
/home/brooks/build-trunk/./gcc/xgcc -B/home/brooks/build-trunk/./gcc/ 
-B/home/brooks/gcc-trunk/i686-pc-linux-gnu/bin/ 
-B/home/brooks/gcc-trunk/i686-pc-linux-gnu/lib/ -isystem 
/home/brooks/gcc-trunk/i686-pc-linux-gnu/include -isystem 
/home/brooks/gcc-trunk/i686-pc-linux-gnu/sys-include -dumpspecs > tmp-specs
/home/brooks/build-trunk/./gcc/xgcc: error while loading shared 
libraries: libgmp.so.3: cannot open shared object file: No such file or 
directory

make[3]: *** [specs] Error 127
make[3]: Leaving directory `/home/brooks/build-trunk/gcc'
make[2]: *** [all-stage1-gcc] Error 2
make[2]: Leaving directory `/home/brooks/build-trunk'
make[1]: *** [stage1-bubble] Error 2
make[1]: Leaving directory `/home/brooks/build-trunk'
make: *** [bootstrap] Error 2

I'm not really sure what to make of this; libgmp.so.3 most certainly 
exists in the specified directory, configure had no problem finding the 
relevant files, and the line immediately before the one that fails has a 
-lgmp -lmpfr on it that works fine.


However, there's a workaround: if I copy libgmp.so.3 into /lib, then the 
build works.  (Or, at least, it gets to Stage 2; it's still going)


It shouldn't be doing that, yes?

- Brooks



Re: Bootstrap failure on trunk on linux? (libgmp.so.3 exists, but not found)

2006-11-04 Thread Daniel Jacobowitz
On Sat, Nov 04, 2006 at 10:57:14AM -0800, Brooks Moses wrote:
> I've been setting up a Debian box to do builds on, and make bootstrap on 
> mainline is failing somewhere in the middle of Stage 1.  The problem 
> appears to be that it's not looking in the right places for libgmp.so.3 
> when it calls ./gcc/xgcc at the end of the stage.

It's doing exactly what it ought to, though it's unintuitive.  If you tell
a compiler to use -L/usr/local/lib, you're responsible for also setting
up either an rpath or LD_LIBRARY_PATH to point at /usr/local/lib; doing
it by default causes all kinds of problems.

-- 
Daniel Jacobowitz
CodeSourcery


compiling very large functions.

2006-11-04 Thread Kenneth Zadeck
I think that it is time that we in the GCC community took some time to
address the problem of compiling very large functions in a somewhat
systematic manner.

GCC has two competing interests here:  it needs to be able to provide
state of the art optimization for modest sized functions and it needs to
be able to properly process very large machine generated functions using
reasonable resources. 

I believe that the default behavior for the compiler should be that
certain non essential passes be skipped if a very large function is
encountered. 

There are two problems here:

1) defining the set of optimizations that need to be skipped.
2) defining the set of functions that trigger the special processing.


For (1) I would propose that three measures be made of each function. 
These measures should be made before inlining occurs. The three measures
are the number of variables, the number of statements, and the number of
basic blocks. 
Many of the gcc passes are non linear in one or more of these measures
and these passes should be skipped if one or more of these measures
exceeds some threshold.

For (2) I would propose that we add three new fields to the compilation
manager.  These fields would be null or zero if the optimization is
either essential or is only linear in the measure.  Otherwise, some
indication of either a threshold or the exponent of the growth is used
as the field. 

The compilation manager could then look at the options, in particular
the -O level and perhaps some new options to indicate that this is a
small machine or in the other extreme "optimize all functions come hell
or high water!!" and skip those passes which will cause performance
problems.

I do not claim to understand how sensitive every pass is to these
measures.  However, I could possibly make a good first cut on the rtl
passes. 

However, I think that before anyone starts hacking anything, we should
come to a consensus on an overall strategy and implement something
consistent for the entire compiler, rather than attack some particular
pass in a manner that only gets us past the next PR.

Volunteer(s) to implement the compilation manager part of this would
also be appreciated.

Kenny




Re: compiling very large functions.

2006-11-04 Thread Richard Guenther

On 11/4/06, Kenneth Zadeck <[EMAIL PROTECTED]> wrote:

I think that it is time that we in the GCC community took some time to
address the problem of compiling very large functions in a somewhat
systematic manner.

GCC has two competing interests here:  it needs to be able to provide
state of the art optimization for modest sized functions and it needs to
be able to properly process very large machine generated functions using
reasonable resources.

I believe that the default behavior for the compiler should be that
certain non essential passes be skipped if a very large function is
encountered.

There are two problems here:

1) defining the set of optimizations that need to be skipped.
2) defining the set of functions that trigger the special processing.


For (1) I would propose that three measures be made of each function.
These measures should be made before inlining occurs. The three measures
are the number of variables, the number of statements, and the number of
basic blocks.


Why before inlining?  These three numbers can change quite significantly
as a function passes through the pass pipeline.  So we should try to keep
them up-to-date to have an accurate measurement.

Otherwise the proposal sounds reasonable but we should make sure the
limits we impose allow reproducible compilations for N x M cross
configurations and native compilation on different sized machines.

Richard.


Re: compiling very large functions.

2006-11-04 Thread Kenneth Zadeck
Richard Guenther wrote:
> On 11/4/06, Kenneth Zadeck <[EMAIL PROTECTED]> wrote:
>> I think that it is time that we in the GCC community took some time to
>> address the problem of compiling very large functions in a somewhat
>> systematic manner.
>>
>> GCC has two competing interests here:  it needs to be able to provide
>> state of the art optimization for modest sized functions and it needs to
>> be able to properly process very large machine generated functions using
>> reasonable resources.
>>
>> I believe that the default behavior for the compiler should be that
>> certain non essential passes be skipped if a very large function is
>> encountered.
>>
>> There are two problems here:
>>
>> 1) defining the set of optimizations that need to be skipped.
>> 2) defining the set of functions that trigger the special processing.
>>
>>
>> For (1) I would propose that three measures be made of each function.
>> These measures should be made before inlining occurs. The three measures
>> are the number of variables, the number of statements, and the number of
>> basic blocks.
>
> Why before inlining?  These three numbers can change quite significantly
> as a function passes through the pass pipeline.  So we should try to keep
> them up-to-date to have an accurate measurement.
>
I am flexible here. We may want inlining to be able to update the
numbers.  However, I think that we should drive the inlining aggression
based on these numbers.
> Otherwise the proposal sounds reasonable but we should make sure the
> limits we impose allow reproducible compilations for N x M cross
> configurations and native compilation on different sized machines.
>
I do not want to get into the game where we are looking at the size of
the machine and making this decision.  Doing that would make it hard to
reproduce bugs that come in from the field.  Thus, I think that the
limits (or functions) should be platform independent.

> Richard.



Re: compiling very large functions.

2006-11-04 Thread Richard Guenther

On 11/4/06, Kenneth Zadeck <[EMAIL PROTECTED]> wrote:

Richard Guenther wrote:
> On 11/4/06, Kenneth Zadeck <[EMAIL PROTECTED]> wrote:
>> I think that it is time that we in the GCC community took some time to
>> address the problem of compiling very large functions in a somewhat
>> systematic manner.
>>
>> GCC has two competing interests here:  it needs to be able to provide
>> state of the art optimization for modest sized functions and it needs to
>> be able to properly process very large machine generated functions using
>> reasonable resources.
>>
>> I believe that the default behavior for the compiler should be that
>> certain non essential passes be skipped if a very large function is
>> encountered.
>>
>> There are two problems here:
>>
>> 1) defining the set of optimizations that need to be skipped.
>> 2) defining the set of functions that trigger the special processing.
>>
>>
>> For (1) I would propose that three measures be made of each function.
>> These measures should be made before inlining occurs. The three measures
>> are the number of variables, the number of statements, and the number of
>> basic blocks.
>
> Why before inlining?  These three numbers can change quite significantly
> as a function passes through the pass pipeline.  So we should try to keep
> them up-to-date to have an accurate measurement.
>
I am flexible here. We may want inlining to be able to update the
numbers.  However, I think that we should drive the inlining aggression
based on these numbers.


Well, for example jump threading and tail duplication can cause these
numbers to change significantly.  Also CFG instrumentation and PRE
can increase the BB count.  So we need to deal with cases where an
optimization produces an overly large number of basic blocks or
instructions (e.g. by throttling those passes properly).

Richard.


multilib fixes for libjava

2006-11-04 Thread Jack Howarth
   Could anyone comment on the following? Geoff introduced
fixes in r117741 to allow multilib builds on 32-bit PowerPC
processors on Darwin. However, the necessary changes for the
libjava subdirectory were never introduced. I have been
attempting to fix this by modelling a patch on the changes
done for configure.ac and Makefile.in in the libobjc directory...

http://gcc.gnu.org/viewcvs/trunk/libobjc/configure.ac?r1=110182&r2=117741
http://gcc.gnu.org/viewcvs/trunk/libobjc/Makefile.in?r1=117618&r2=117741

and regenerating the configure files with...

 cd libjava
 aclocal  -I . -I .. -I ../config
 autoconf  -I . -I .. -I ../config
 automake -a
 cd classpath
 aclocal -I m4 -I ../.. -I ../../config
 autoconf -I m4 -I ../.. -I ../../config
 automake -a
 cd ../libltdl
 aclocal  -I ../.. -I ../../config
 autoconf  -I ../.. -I ../../config
 automake -a
 cd ..
 cd ..

So far the patch looks like...

--- gcc/libjava/configure.ac.org2006-11-04 08:49:05.0 -0500
+++ gcc/libjava/configure.ac2006-11-04 09:25:25.0 -0500
@@ -15,27 +15,8 @@
 # We may get other options which we don't document:
 # --with-target-subdir, --with-multisrctop, --with-multisubdir

-# When building with srcdir == objdir, links to the source files will
-# be created in directories within the target_subdir.  We have to
-# adjust toplevel_srcdir accordingly, so that configure finds
-# install-sh and other auxiliary files that live in the top-level
-# source directory.
-if test "${srcdir}" = "."; then
-  if test -z "${with_target_subdir}"; then
-toprel=".."
-  else
-if test "${with_target_subdir}" != "."; then
-  toprel="${with_multisrctop}../.."
-else
-  toprel="${with_multisrctop}.."
-fi
-  fi
-else
-  toprel=".."
-fi
-
-libgcj_basedir=$srcdir/$toprel/./libjava
-AC_SUBST(libgcj_basedir)
+# Find the rest of the source tree framework.
+AM_ENABLE_MULTILIB(, ..)

 AC_CANONICAL_SYSTEM
 _GCC_TOPLEV_NONCANONICAL_BUILD
@@ -74,16 +55,6 @@
 [version_specific_libs=no]
 )

-# Default to --enable-multilib
-AC_ARG_ENABLE(multilib,
-  AS_HELP_STRING([--enable-multilib],
- [build many library versions (default)]),
-[case "${enableval}" in
-  yes) multilib=yes ;;
-  no)  multilib=no ;;
-  *)   AC_MSG_ERROR(bad value ${enableval} for multilib option) ;;
- esac], [multilib=yes])dnl
-
 AC_ARG_ENABLE(plugin,
   AS_HELP_STRING([--enable-plugin],
  [build gcjwebplugin web browser plugin]),
@@ -905,7 +876,7 @@
 AM_CONDITIONAL(USING_GCC, test "$GCC" = yes)

 # We're in the tree with gcc, and need to include some of its headers.
-GCC_UNWIND_INCLUDE='-I$(libgcj_basedir)/../gcc'
+GCC_UNWIND_INCLUDE='-I$(multi_basedir)/./libjava/../gcc'

 if test "x${with_newlib}" = "xyes"; then
# We are being configured with a cross compiler.  AC_REPLACE_FUNCS
@@ -1518,7 +1489,7 @@
 case " $CONFIG_FILES " in
  *" Makefile "*)
LD="${ORIGINAL_LD_FOR_MULTILIBS}"
-   ac_file=Makefile . ${libgcj_basedir}/../config-ml.in
+   ac_file=Makefile . ${multi_basedir}/./libjava/../config-ml.in
;;
 esac
 for ac_multi_file in $CONFIG_FILES; do
@@ -1534,7 +1505,7 @@
 with_multisubdir=${with_multisubdir}
 ac_configure_args="${multilib_arg} ${ac_configure_args}"
 CONFIG_SHELL=${CONFIG_SHELL-/bin/sh}
-libgcj_basedir=${libgcj_basedir}
+multi_basedir=${multi_basedir}
 CC="${CC}"
 CXX="${CXX}"
 ORIGINAL_LD_FOR_MULTILIBS="${ORIGINAL_LD_FOR_MULTILIBS}"
--- gcc/libjava/Makefile.in.org 2006-11-04 09:16:49.0 -0500
+++ gcc/libjava/Makefile.in 2006-11-04 09:18:12.0 -0500
@@ -665,7 +665,7 @@
 install_sh = @install_sh@
 libdir = @libdir@
 libexecdir = @libexecdir@
-libgcj_basedir = @libgcj_basedir@
+multi_basedir = @multi_basedir@
 mandir = @mandir@

With these changes, the multilib build on a G4 dies at...

checking for dladdr in -ldl... yes
checking for /proc/self/exe... configure: error: cannot check for file 
existence when cross compiling

Do any of you see anything obviously wrong in the configure.ac and
Makefile.in changes? It wasn't straightforward how I should map Geoff's
changes to libjava, since you use libgcj_basedir instead of
toplevel_srcdir. Thanks in advance for any advice, as I am pretty much
stuck at this point.
  Jack
  Jack
ps I also patch...

--- gcc-4.2-20061031/libjava/libltdl/Makefile.am.org2006-11-03 
18:10:46.0 -0500
+++ gcc-4.2-20061031/libjava/libltdl/Makefile.am2006-11-03 
18:11:12.0 -0500
@@ -2,6 +2,8 @@

 AUTOMAKE_OPTIONS = no-dependencies foreign

+ACLOCAL_AMFLAGS = -I ../.. -I ../../config
+
 INCLUDES = $(GCINCS)

 if INSTALL_LTDL

to make sure that the ACLOCAL_AMFLAGS is properly set for finding the new
multi.m4 file Geoff added.



Re: compiling very large functions.

2006-11-04 Thread Kenneth Zadeck
Richard Guenther wrote:
> On 11/4/06, Kenneth Zadeck <[EMAIL PROTECTED]> wrote:
>> Richard Guenther wrote:
>> > On 11/4/06, Kenneth Zadeck <[EMAIL PROTECTED]> wrote:
>> >> I think that it is time that we in the GCC community took some
>> time to
>> >> address the problem of compiling very large functions in a somewhat
>> >> systematic manner.
>> >>
>> >> GCC has two competing interests here:  it needs to be able to provide
>> >> state of the art optimization for modest sized functions and it
>> needs to
>> >> be able to properly process very large machine generated functions
>> using
>> >> reasonable resources.
>> >>
>> >> I believe that the default behavior for the compiler should be that
>> >> certain non essential passes be skipped if a very large function is
>> >> encountered.
>> >>
>> >> There are two problems here:
>> >>
>> >> 1) defining the set of optimizations that need to be skipped.
>> >> 2) defining the set of functions that trigger the special processing.
>> >>
>> >>
>> >> For (1) I would propose that three measures be made of each function.
>> >> These measures should be made before inlining occurs. The three
>> measures
>> >> are the number of variables, the number of statements, and the
>> number of
>> >> basic blocks.
>> >
>> > Why before inlining?  These three numbers can change quite
>> significantly
>> > as a function passes through the pass pipeline.  So we should try
>> to keep
>> > them up-to-date to have an accurate measurement.
>> >
>> I am flexible here. We may want inlining to be able to update the
>> numbers.  However, I think that we should drive the inlining aggression
>> based on these numbers.
>
> Well, for example jump threading and tail duplication can cause these
> numbers to significantly change.  Also CFG instrumentation and PRE
> can increase the BB count.  So we need to deal with cases where an
> optimization produces overly large number of basic blocks or
> instructions.
> (like by throttling those passes properly)
>
I lean toward leaving the numbers static even if they do increase over
time.  Otherwise you get two effects: the early optimizations get to
run more, and you get weird non-linear step functions where small
changes in some upstream function affect the downstream ones.

kenny

> Richard.



Re: Bootstrap failure on trunk on linux? (libgmp.so.3 exists, but not found)

2006-11-04 Thread Vincent Lefevre
On 2006-11-04 14:21:39 -0500, Daniel Jacobowitz wrote:
> It's doing exactly what it ought to, though it's unintuitive.  If you tell
> a compiler to use -L/usr/local/lib, you're responsible for also setting
> up either an rpath or LD_LIBRARY_PATH to point at /usr/local/lib; doing
> it by default causes all kinds of problems.

But gcc does use /usr/local/lib by default.

-- 
Vincent Lefèvre <[EMAIL PROTECTED]> - Web: 
100% accessible validated (X)HTML - Blog: 
Work: CR INRIA - computer arithmetic / Arenaire project (LIP, ENS-Lyon)


Re: Bootstrap failure on trunk on linux? (libgmp.so.3 exists, but not found)

2006-11-04 Thread Brooks Moses

Daniel Jacobowitz wrote:

On Sat, Nov 04, 2006 at 10:57:14AM -0800, Brooks Moses wrote:
I've been setting up a Debian box to do builds on, and make bootstrap on 
mainline is failing somewhere in the middle of Stage 1.  The problem 
appears to be that it's not looking in the right places for libgmp.so.3 
when it calls ./gcc/xgcc at the end of the stage.


It's doing exactly what it ought to, though it's unintuitive.  If you tell
a compiler to use -L/usr/local/lib, you're responsible for also setting
up either an rpath or LD_LIBRARY_PATH to point at /usr/local/lib; doing
it by default causes all kinds of problems.


Ah, okay.  Thanks for the quick reply!

I guess I was assuming that since GMP is supposedly only a prerequisite 
for building the compiler and not for using it, that it was being linked 
in statically rather than dynamically.  But I guess that wouldn't apply 
to xgcc, since it's only used in the build (right?).


- Brooks



Re: Bootstrap failure on trunk on linux? (libgmp.so.3 exists, but not found)

2006-11-04 Thread H. J. Lu
On Sat, Nov 04, 2006 at 04:58:42PM -0800, Brooks Moses wrote:
> Daniel Jacobowitz wrote:
> >On Sat, Nov 04, 2006 at 10:57:14AM -0800, Brooks Moses wrote:
> >>I've been setting up a Debian box to do builds on, and make bootstrap on 
> >>mainline is failing somewhere in the middle of Stage 1.  The problem 
> >>appears to be that it's not looking in the right places for libgmp.so.3 
> >>when it calls ./gcc/xgcc at the end of the stage.
> >
> >It's doing exactly what it ought to, though it's unintuitive.  If you
> >tell a compiler to use -L/usr/local/lib, you're responsible for also
> >setting up either an rpath or LD_LIBRARY_PATH to point at
> >/usr/local/lib; doing it by default causes all kinds of problems.
> 
> Ah, okay.  Thanks for the quick reply!
> 
> I guess I was assuming that since GMP is supposedly only a prerequisite 
> for building the compiler and not for using it, that it was being linked 
> in statically rather than dynamically.  But I guess that wouldn't apply 
> to xgcc, since it's only used in the build (right?).
> 

I have been using this patch to make sure that GMPLIBS is linked
statically, so that I can install gcc binaries on machines without an
updated GMP library.


H.J.

--- gcc/Makefile.in.gmp 2006-05-19 06:23:09.0 -0700
+++ gcc/Makefile.in 2006-05-19 13:20:17.0 -0700
@@ -295,7 +295,7 @@ ZLIB = @zlibdir@ -lz
 ZLIBINC = @zlibinc@
 
 # How to find GMP
-GMPLIBS = @GMPLIBS@
+GMPLIBS = -Wl,-Bstatic @GMPLIBS@ -Wl,-Bdynamic
 GMPINC = @GMPINC@
 
 CPPLIB = ../libcpp/libcpp.a


Re: Bootstrap failure on trunk on linux? (libgmp.so.3 exists, but not found)

2006-11-04 Thread Daniel Jacobowitz
On Sat, Nov 04, 2006 at 04:58:42PM -0800, Brooks Moses wrote:
> I guess I was assuming that since GMP is supposedly only a prerequisite 
> for building the compiler and not for using it, that it was being linked 
> in statically rather than dynamically.  But I guess that wouldn't apply 
> to xgcc, since it's only used in the build (right?).

No, xgcc is installed as gcc.  If you have a dynamic libgmp, it will be
used.

-- 
Daniel Jacobowitz
CodeSourcery


Re: compiling very large functions.

2006-11-04 Thread Paolo Bonzini

Kenneth Zadeck wrote:

I think that it is time that we in the GCC community took some time to
address the problem of compiling very large functions in a somewhat
systematic manner.


While I agree with you, I think there are so many other things we are
already trying to address that this one can wait.  I think we've been
doing a very good job on large functions too, and I believe that authors
of very large functions are getting not only what they deserve, but
actually what they expect: large (superlinear) compile times.


I think that the most obvious O(n^2) time spots have been cleared
(dataflow is practically never O(n^2) with a good equation solver), and
we can live with the remaining O(n^2) space spots, since we have a bit,
or half a bit, as the constant in front of the n^2.  Actually, I just
mentioned that PR28701 is worth a check on the dataflow branch, but I
don't think it will be worse than what we have now on mainline (because
df is also used in liveness, and the memory that fwprop eats might be
reused), and it should be possible to fix the outstanding problems
easily (e.g. with bitmap obstacks).


Paolo