Would like to make one version of is_too_expensive in gcse.c and cprop.c

2015-11-10 Thread Bradley Lucier

The routines declared as

static bool
is_too_expensive (const char *pass)

in both cprop.c and gcse.c are identical except for two comment lines.

I'd like to modify is_too_expensive, so it seemed to me that there 
should be only one copy of the routine.


Would it be reasonable to add an extern declaration of is_too_expensive 
(with perhaps a more descriptive name) in gcse.h, remove the static 
declaration from gcse.c, and include gcse.h in cprop.c?
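Roughly what I have in mind, as a sketch only (the name
gcse_or_cprop_is_too_expensive is just a placeholder, not an existing symbol;
GCC sources get bool from system.h, the include below is only to keep the
sketch self-contained):

#include <stdbool.h>

/* gcse.h (sketch): declare the shared routine once, with external linkage.  */
extern bool gcse_or_cprop_is_too_expensive (const char *pass);

/* gcse.c (sketch): keep the single definition here, minus the `static'.  */
bool
gcse_or_cprop_is_too_expensive (const char *pass)
{
  /* ...body identical to the current static is_too_expensive...  */
  return false;  /* placeholder so the sketch is complete */
}

/* cprop.c (sketch): #include "gcse.h" and delete its local copy.  */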


(I realize this is a very simple question for regular C programmers, but 
I'm not a regular C programmer, sorry).


Thanks.

Brad


Warning for converting (possibly) negative float/double to unsigned int

2016-02-26 Thread Bradley Lucier

Perhaps this question is appropriate for the gcc mail list.

Converting a float/double to unsigned int is undefined if the result 
would be negative when converted to a signed int.


x86-64 and arm treat this condition differently---x86-64 returns a value 
whose bit pattern is the same as the bit pattern for converting to 
signed int, and arm returns zero.  So it would be nice to have a warning 
that this will (or could) happen.
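For what it's worth, here is a small self-contained example of the kind of
conversion I mean (the behaviors in the comment are the ones described above,
observed in practice, not guarantees):

#include <stdio.h>

int
main (void)
{
  double d = -2.5;
  unsigned int u = (unsigned int) d;  /* undefined: the truncated value is negative */
  /* As described above, x86-64 has been observed to give the same bit
     pattern as (unsigned int) (int) d, while arm gives zero.  */
  printf ("%u\n", u);
  return 0;
}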


I couldn't find such a warning in the GCC manual or in the GCC code 
base.  Looking through the code, it seemed it might go in this code in 
force_operand() in expr.c on mainline:


  if (UNARY_P (value))
    {
      if (!target)
        target = gen_reg_rtx (GET_MODE (value));
      op1 = force_operand (XEXP (value, 0), NULL_RTX);
      switch (code)
        {
        case ZERO_EXTEND:
        case SIGN_EXTEND:
        case TRUNCATE:
        case FLOAT_EXTEND:
        case FLOAT_TRUNCATE:
          convert_move (target, op1, code == ZERO_EXTEND);
          return target;

        case FIX:
        case UNSIGNED_FIX:
          expand_fix (target, op1, code == UNSIGNED_FIX);
          return target;

        case FLOAT:
        case UNSIGNED_FLOAT:
          expand_float (target, op1, code == UNSIGNED_FLOAT);
          return target;

        default:
          return expand_simple_unop (GET_MODE (value), code, op1, target, 0);
        }
    }

But maybe not.

Any advice on how to proceed?  I'd be willing to write and test the few 
lines of code myself if I knew where to put them.


Brad


4.2 hasn't bootstrapped on powerpc-apple-darwin G5 machine for a very long time

2006-05-03 Thread Bradley Lucier
4.2 hasn't bootstrapped on powerpc-apple-darwin G5 machine for a very  
long time.  I'm seeing the same problem as


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27121

It would be nice if this were remedied.  I do try to test gcc  
versions before release.


Brad


Re: 4.2 hasn't bootstrapped on powerpc-apple-darwin G5 machine for a very long time

2006-05-09 Thread Bradley Lucier


On May 3, 2006, at 7:50 PM, David Fang wrote:


FWIW, the 20060415 mainline (4.2) snapshot bootstrapped for me, using
odcctools-20060413 (odcctools-590.36od13).  This machine is a dual G5
(ppc970) using OS X 10.3.9, and Apple's gcc-3.3 (build 1640).

However, before building, I patched the following 2 files:
In gcc/config/darwin.h:
at: #define LINK_COMMAND_SPEC :
replace:/usr/bin/libtool
with:   /path/to/odcctools/bin/libtool


Thanks.  Is there any reason to hardwire which version of libtool to  
use?


Brad


Re: 4.2 hasn't bootstrapped on powerpc-apple-darwin G5 machine for a very long time

2006-05-18 Thread Bradley Lucier


On May 3, 2006, at 7:50 PM, David Fang wrote:




Bradley Lucier writes:
Brad> 4.2 hasn't bootstrapped on powerpc-apple-darwin G5 machine for a very
Brad> long time.  I'm seeing the same problem as
Brad> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27121
Brad> It would be nice if this were remedied.  I do try to test gcc
Brad> versions before release.

For the time-being, you can bootstrap with --disable-multilib.
The real solution requires Apple to provide an updated cctools with a
working ld64.


Hi,

FWIW, the 20060415 mainline (4.2) snapshot bootstrapped for me, using
odcctools-20060413 (odcctools-590.36od13).  This machine is a dual G5
(ppc970) using OS X 10.3.9, and Apple's gcc-3.3 (build 1640).

However, before building, I patched the following 2 files:
In gcc/config/darwin.h:
at: #define LINK_COMMAND_SPEC :
replace:/usr/bin/libtool
with:   /path/to/odcctools/bin/libtool

In libstdc++-v3/scripts/make_exports.pl (for OS X 10.3 only):
replace:nm -P
with:   /path/to/odcctools/bin/nm -P
(probably not necessary if this nm is already found first in path)

Configure command (your paths may vary):
../configure --prefix=/Users/fang/local/gcc-4.2 \
--program-suffix=-4.2 \
--disable-nls \
--with-gmp=/sw \
--with-mpfr=/sw \
--infodir='${prefix}/share/info' \
--with-included-gettext \
--host=powerpc-apple-darwin`uname -r|cut -f1 -d.` \
`if test ! -f /usr/lib/libSystemStubs.a ; then echo -n "--with-as=/Users/fang/lib/odcctools/bin/as --with-ld=/Users/fang/lib/odcctools/bin/ld" ; fi`


I just tried to follow your detailed instructions (thanks) and  
bootstrap still failed with today's mainline.  I installed  
odcctools-20060413, changed


[lindv2:mainline/gcc/config] lucier% rcsdiff darwin.h
===
RCS file: RCS/darwin.h,v
retrieving revision 1.1
diff -r1.1 darwin.h
207c207
< %{!Zdynamiclib:%(linker)}%{Zdynamiclib:/usr/bin/libtool} \
---
> %{!Zdynamiclib:%(linker)}%{Zdynamiclib:/usr/local/odcctools-20060413/bin/libtool} \


and my configure and build command was

[lindv2:gcc/mainline/objdir64] lucier% cat ../build-gcc-64
#!/bin/tcsh
/bin/rm -rf *; ../configure --with-as=/usr/local/odcctools-20060413/bin/as --with-ld=/usr/local/odcctools-20060413/bin/ld --prefix=/pkgs/gcc-mainline --with-gmp=/opt/local/ --with-mpfr=/opt/local/ ; make -j 16 bootstrap >& build.log && (make -k -j 16 check RUNTESTFLAGS="--target_board 'unix{-mcpu=970/-m64}'" >& check.log ; make mail-report-with-warnings.log)


I have

[lindv2:gcc/mainline/objdir64] lucier% which ld
/usr/local/odcctools-20060413/bin/ld
[lindv2:gcc/mainline/objdir64] lucier% which as
/usr/local/odcctools-20060413/bin/as

and I'm still getting

/Users/lucier/programs/gcc/mainline/objdir64/gcc/gcj -B/Users/lucier/programs/gcc/mainline/objdir64/powerpc-apple-darwin8.6.0/ppc64/libjava/ -B/Users/lucier/programs/gcc/mainline/objdir64/gcc/ -g -O2 -m64 -m64 -o .libs/jv-convert --main=gnu.gcj.convert.Convert -shared-libgcc -L/Users/lucier/programs/gcc/mainline/objdir64/powerpc-apple-darwin8.6.0/ppc64/libjava -L/Users/lucier/programs/gcc/mainline/objdir64/powerpc-apple-darwin8.6.0/ppc64/libjava/.libs ./.libs/libgcj.dylib -L/Users/lucier/programs/gcc/mainline/objdir64/powerpc-apple-darwin8.6.0/ppc64/libstdc++-v3/src -L/Users/lucier/programs/gcc/mainline/objdir64/powerpc-apple-darwin8.6.0/ppc64/libstdc++-v3/src/.libs -lpthread -ldl

can't resolve symbols:
  ___dso_handle, referenced from:
  _atexit in crt3.o
ld64 failed: symbol(s) not found
collect2: ld returned 1 exit status
make[5]: *** [jv-convert] Error 1
make[4]: *** [all-recursive] Error 1
make[3]: *** [multi-do] Error 1
make[2]: *** [all-multi] Error 2
make[1]: *** [all-target-libjava] Error 2
make: *** [bootstrap] Error 2

Can you see what I'm doing incorrectly here?

Brad



Suggestion for logging changes on Bugzilla

2006-05-31 Thread Bradley Lucier
I just did a search on "My bugs" and found that one of them, 22118,  
had a title that I didn't recognize.  I then remembered that the  
title had been changed from my original description; ordinarily there  
is not an entry in a bugzilla record noting that a title has in fact  
been changed.


I would like to suggest that all substantive changes to a bugzilla  
record (change of title, change of component, change of "Reported  
against", etc.) be automatically accompanied by a log entry in the PR  
that states the change and who made it.  Then we can at least have a  
record of who made changes and be able to ask them why they were 
made, etc.  I think e-mail is now sent to interested parties when  
such changes are made, but it would be good if a record is kept in  
the PR, too.


Brad

BTW, PR 22118 can probably be closed; for example, Geoff has committed

http://gcc.gnu.org/ml/gcc-patches/2006-05/msg01224.html




Re: regress and -m64

2006-08-28 Thread Bradley Lucier
When I run bootstrap and "make check", I check the -m64 option  
(only).  Check gcc-testresults.


Currently, the results don't look very good.  Maybe I'm doing  
something wrong.


Brad


Re: regress and -m64

2006-08-28 Thread Bradley Lucier


On Aug 28, 2006, at 12:10 PM, Jack Howarth wrote:


   Why don't you try a normal multi-lib build without any
extra flags.


What extra flags?  The configure command is

 ../configure --prefix=/pkgs/gcc-mainline --with-gmp=/opt/local/ --with-mpfr=/opt/local/


which is totally generic (one needs to specify gmp and mpfr libraries  
to get fortran to build).


I build with

make -j 4 bootstrap >& build.log

which is completely generic.

Or do you mean the -mcpu=970 in the test options?


Oh, when you do your make check from the top level of the
build directory use this form of the command...

make -k check RUNTESTFLAGS='--target_board="unix{,-m64}"'


my make check is

make -k -j 8 check RUNTESTFLAGS="--target_board 'unix{-mcpu=970/-m64}'"

I don't see any reason to check the 32-bit stuff that the regression  
checker checks at least once a day.


Brad


Re: regress and -m64

2006-08-30 Thread Bradley Lucier
After some discussion with Jack Howarth, I have found that the  
gfortran and libgomp executable tests on powerpc-apple-darwin8.7.0  
(at least) do not link the correct, just-built-using-"make  
bootstrap", libraries until those libraries have first been installed  
in $prefix/lib/...


I filed a bug report at

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=28913

I noted the different results there between two "make check"  
commands, one just before "make install" and one just after.


64-bit test results are now as follows.  (See also

http://gcc.gnu.org/ml/gcc-testresults/2006-08/msg01383.html

)

=== g++ Summary ===

# of expected passes            11595
# of unexpected failures        1350
# of expected failures          69
# of unresolved testcases       28
# of unsupported tests          129
/Users/lucier/programs/gcc/mainline/objdir64/gcc/testsuite/g++/../../g++  version 4.2.0 20060829 (experimental)

=== gcc Summary ===

# of expected passes            41550
# of unexpected failures        45
# of unexpected successes       1
# of expected failures          108
# of untested testcases         28
# of unsupported tests          507
/Users/lucier/programs/gcc/mainline/objdir64/gcc/xgcc  version 4.2.0 20060829 (experimental)

=== gfortran Summary ===

# of expected passes            14014
# of unexpected failures        33
# of unexpected successes       3
# of expected failures          7
# of unsupported tests          41
/Users/lucier/programs/gcc/mainline/objdir64/gcc/testsuite/gfortran/../../gfortran  version 4.2.0 20060829 (experimental)

=== objc Summary ===

# of expected passes            1707
# of unexpected failures        68
# of expected failures          7
# of unresolved testcases       1
# of unsupported tests          2
/Users/lucier/programs/gcc/mainline/objdir64/gcc/xgcc  version 4.2.0 20060829 (experimental)

=== libffi Summary ===

# of expected passes            472
# of unexpected failures        384
# of expected failures          8
# of unsupported tests          8

=== libgomp Summary ===

# of expected passes            1075
# of unexpected failures        205
# of unsupported tests          111

=== libjava Summary ===

# of expected passes            1776
# of unexpected failures        2069
# of expected failures          32
# of untested testcases         3021

=== libstdc++ Summary ===

# of expected passes            2052
# of unexpected failures        1668
# of unexpected successes       1
# of expected failures          15
# of unsupported tests          321



Re: regress and -m64

2006-08-31 Thread Bradley Lucier


On Aug 30, 2006, at 9:55 PM, Jack Howarth wrote:


Try building
some of the g++ testcases manually and see what the errors
are.


Perhaps this is a problem:

grep 'Symbol not found' g++.log | sort | uniq -c
1254 dyld: Symbol not found: ___dso_handle




Re: Darwin as primary platform

2006-09-22 Thread Bradley Lucier
Right now, it seems that one may not be able to build a 64-bit 
version of the compiler itself, on either x86-64 or ppc64; see


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=28994

I noticed this because some of my (automatically generated) C 
programs, with certain compiler options, require more memory than a 
32-bit compiler can address.


Just a comment.

Brad


Re: Darwin as primary platform

2006-10-04 Thread Bradley Lucier


On Sep 22, 2006, at 9:20 PM, Eric Christopher wrote:


Bradley Lucier wrote:
Right now, it seems that one may not be able to build a 64-bit  
version of the compiler itself


You may or may not have noticed that there are no 64-bit native  
targets for darwin.


I just looked at

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26854

again, where I use a 64-bit version of 4.1.0 on powerpc-apple-darwin8.5.0.  
So it is (was?) possible to build a 64-bit version of 4.1.0 on (at least one 
version of) darwin.


I would consider not being able to do so for 4.2 a regression.

Brad


Re: Darwin as primary platform

2006-10-04 Thread Bradley Lucier


On Oct 4, 2006, at 1:57 PM, Eric Christopher wrote:

FWIW I think a 64-bit native version might be nice as a separate  
target, but I've been told there's no real advantage there either  
on ppc.


Perhaps I'm misunderstanding your comment, but with a 64-bit gcc you  
can compile machine-generated programs that are so large that gcc's  
internal data structures take more memory than you can address in 32  
bits.


John Mashey makes this part of his fourth step in the progression from a 
32-bit to a 64-bit operating system.  Mashey's "10 step program" to  
move from 32-bit to 64-bit hardware and operating systems can be  
found on page 32 of the current issue of Queue, which means page 34  
of the pdf file


http://www.acm.org/acmqueue/digital/Queuevol4no8_October2006.pdf

Brad


Large memory requirements for 4.2 and 4.3

2006-10-25 Thread Bradley Lucier
For many years, the default gcc compile options for C code generated 
by Gambit, the Scheme->C compiler, were very simple (-O1 -fschedule-insns2 
-fno-math-errno -fno-trapping-math) and I didn't have problems 
with gcc's space requirements to compile those files.  (I often ran 
into complexity issues with the algorithms, but people here fixed 
those over the years.)


Recently, Marc Feeley used a genetic algorithm to find better compile  
options for Gambit-generated code; he found that performance  
increased by about 30% with more aggressive optimizations (but not  
gcse, yet ;-).


With these new options I can't compile necessary Gambit-generated C  
files on darwin-ppc in 2 GB of space, with or without modulo  
scheduling.  I'm sure that the same thing will happen on other  
architectures for slightly larger files.


I filed

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29374

that gives an example of gcc requiring inordinate (to my mind)  
amounts of memory to compile a program.  Perhaps someone would like  
to look at this PR and give some perspective on the space  
requirements I'm seeing.


Brad


Re: compiling very large functions

2006-11-05 Thread Bradley Lucier
The gcc developers have been very cooperative over the years in 
working to solve problems that I've had in compiling large 
machine-generated programs.  For example, when gcse was disabled for 
large flow graphs in 2000, the warn_disabled_optimization flag was 
added at my suggestion.  As Steven Bosscher hinted at, however, it 
appears that the disabled optimization warning has not been used at 
all beyond its introduction:


% grep -R disabled_optimization *
gcc/.svn/text-base/ChangeLog-2000.svn-base: * toplev.c (warn_disabled_optimization): Declare new warning flag.
gcc/.svn/text-base/ChangeLog-2000.svn-base: * flags.h (warn_disabled_optimization): Add it here.
gcc/.svn/text-base/common.opt.svn-base:Common Var(warn_disabled_optimization)
gcc/.svn/text-base/gcse.c.svn-base:  warning (OPT_Wdisabled_optimization,
gcc/.svn/text-base/gcse.c.svn-base:  warning (OPT_Wdisabled_optimization,
gcc/ChangeLog-2000: * toplev.c (warn_disabled_optimization): Declare new warning flag.
gcc/ChangeLog-2000: * flags.h (warn_disabled_optimization): Add it here.
gcc/common.opt:Common Var(warn_disabled_optimization)
gcc/gcse.c:  warning (OPT_Wdisabled_optimization,
gcc/gcse.c:  warning (OPT_Wdisabled_optimization,

A grep of 'PARAM_VALUE.*MAX' may give a more accurate idea of where  
optimization passes have been throttled for various reasons.


In reporting runtime problems, my experience has been that if there 
is one specific pass whose runtime overwhelms the runtime of all the 
other passes, then people respond quickly to deal with it.  I've 
tended to use a relatively low level of optimization in my programs 
(-O1 -fschedule-insns2 -fno-math-errno -fno-trapping-math -fwrapv 
-fno-strict-aliasing) and even as new tree-ssa passes were added to -O1, 
the runtime and space requirements were kept reasonable (perhaps 
after a bit of work).


Now I'm finding that more aggressive optimizations can significantly  
speed up these codes; I'm also finding that these newly-attempted  
optimization passes take up so much memory (especially on darwin G5  
for some reason) that they can't be applied except on very large  
machines.  (One of my codes took only six minutes, but over 10.1 GB  
of memory, to compile; smaller examples are given in PR 29374.)   
Perhaps these passes intrinsically require large amounts of memory,  
or perhaps the algorithms and data structures used have not yet been  
examined critically for very large programs.  When I find time I plan  
to discover which specific optimizations require large memory;  
perhaps with this data the passes involved can be studied more  
closely to see whether the memory requirements can be cut.


With that background, I'd like to request that optimization passes  
not be throttled back by default, especially in stage 1 of a release  
cycle.  I fear that this would unduly hide problems that might be  
solved with a reasonable amount of effort.  It's not clear to me how  
many new optimization passes and data structures have been stress- 
tested on very large programs; we may find that most problems that  
appear now can be fixed if they are examined in isolation with the  
right input data.


Brad


gfortran testsuite failures with 4.3.0 on powerpc64-apple-darwin8.8.0

2006-12-05 Thread Bradley Lucier
I'm getting several thousand gfortran testsuite errors with messages  
like:


FAIL: gfortran.dg/PR19754_2.f90  -O3 -fomit-frame-pointer -funroll-all-loops -finline-functions  (test for excess errors)

Excess errors:
/Users/gcc-test/programs/gcc/mainline/gcc/testsuite/gfortran.dg/PR19754_2.f90:0: warning: 'const' attribute directive ignored


http://www.math.purdue.edu/~lucier/gcc/test-results/4_3_0_2006-12-05.gz



Re: gfortran testsuite failures with 4.3.0 on powerpc64-apple-darwin8.8.0

2006-12-05 Thread Bradley Lucier


On Dec 6, 2006, at 1:33 AM, Steve Kargl wrote:


So when was the last good bootstrap?


I last bootstrapped and regtested this configuration here

http://www.math.purdue.edu/~lucier/gcc/test-results/4_3_0_2006-11-11.gz

The results appear roughly similar.  (This is a recent architecture  
triple.)



What did you change since that time?


I ran config/gcc_update.

Brad


Re: gfortran testsuite failures with 4.3.0 on powerpc64-apple-darwin8.8.0

2006-12-05 Thread Bradley Lucier


On Dec 6, 2006, at 2:18 AM, Steve Kargl wrote:




On Dec 6, 2006, at 1:33 AM, Steve Kargl wrote:


So when was the last good bootstrap?


I last bootstrapped and regtested this configuration here

http://www.math.purdue.edu/~lucier/gcc/test-results/4_3_0_2006-11-11.gz


The results appear roughly similar.  (This is a recent architecture
triple.)


Okay, so "when was the last good bootstrap"



I have reported no other bootstrap/regtests.  So, depending on your  
meaning, perhaps there has never been a "last good bootstrap".



What did you change since that time?


I ran config/gcc_update.


And OS or other changes?


No.



Re: gfortran testsuite failures with 4.3.0 on powerpc64-apple-darwin8.8.0

2006-12-06 Thread Bradley Lucier

After

0.  Making Jack's suggested changes to prune.exp (even though they  
didn't catch any new linker messages);


1.  Configuring and making with

/bin/rm -rf *; env CC=/pkgs/gcc-4.2.0-64/bin/gcc ../configure --build=powerpc64-apple-darwin8.8.0 --host=powerpc64-apple-darwin8.8.0 --target=powerpc64-apple-darwin8.8.0 --with-gmp=/pkgs/gmp-4.2.1-64/ --with-mpfr=/pkgs/gmp-4.2.1-64/ --prefix=/pkgs/gcc-4.3.0-64; make -j 8 bootstrap BOOT_LDFLAGS='-Wl,-search_paths_first' >& build.log


2. make install;

3.  a simple

make -k -j 8 check >& check.log ; make mail-report-with-warnings.log

I got results that appear not much different from the 
powerpc-apple-darwin8.8.0 (i.e., 32-bit) results:


http://gcc.gnu.org/ml/gcc-testresults/2006-12/msg00267.html

i.e., these results don't show a particular fortran problem.

Brad


Re: gfortran testsuite failures with 4.3.0 on powerpc64-apple-darwin8.8.0

2006-12-08 Thread Bradley Lucier


On Dec 8, 2006, at 1:43 AM, Andrew Pinski wrote:


In case anyone does not know yet, the warning is the same as PR 29779.
I don't remember if this was mentioned or not.


Thank you very much for that info.  That is indeed the problem with  
these test cases, as can be seen if I specify a 64-bit CPU that has  
altivec:


[descartes:gcc/testsuite/gfortran.dg] gcc-test% /pkgs/gcc-4.3.0-64/bin/gfortran -O3 -fomit-frame-pointer -funroll-all-loops -finline-functions -c PR19754_2.f90
[descartes:gcc/testsuite/gfortran.dg] gcc-test% /pkgs/gcc-4.3.0-64/bin/gfortran -O3 -fomit-frame-pointer -funroll-all-loops -finline-functions -c PR19754_2.f90 -mcpu=970

PR19754_2.f90:0: warning: 'const' attribute directive ignored


Brad


Compile and run time comparison of every gcc release since 2.95

2012-04-29 Thread Bradley Lucier
Marc Feeley, the author of the Gambit Scheme compiler and interpreter, has 
measured the time to "make" the current version of Gambit, and then to run an 
application in the Gambit interpreter, for every release of gcc since gcc-2.95.

For each version of gcc, Feeley built Gambit in each of two ways:  with each 
Scheme function compiled to its own C function (--enable-multiple-hosts), and 
with all the Scheme functions in a file combined into a single C function 
(--enable-single-host).  The latter version increases compile time, and 
typically doubles the speed of execution of the compiled code.  Feeley also 
compiled Gambit with each of -O1 or -O2.

Perhaps some of you may be interested in the results, which can be found here:

https://mercure.iro.umontreal.ca/pipermail/gambit-list/2012-April/005936.html

Brad Lucier


4.3.0 manual vs changes.html

2008-03-18 Thread Bradley Lucier

The web page

http://gcc.gnu.org/gcc-4.3/changes.html

states that "The -ftree-vectorize option is now on by default under - 
O3.", but on


http://gcc.gnu.org/onlinedocs/gcc-4.3.0/gcc/Optimize-Options.html

-ftree-vectorize is not listed as one of the options enabled by -O3.

Is the first statement correct?

Brad


bootstrap error in 4.1 on sparc

2005-03-03 Thread Bradley Lucier
With today's mainline I get
stage1/xgcc -Bstage1/ 
-B/export/users/lucier/local/gcc-mainline/sparc-sun-solaris2.9/bin/ -c  
 -g -O2 -DIN_GCC   -W -Wall -Wwrite-strings -Wstrict-prototypes 
-Wmissing-prototypes -pedantic -Wno-long-long -Wno-variadic-macros 
-Wold-style-definition -Werror -fno-common   -DHAVE_CONFIG_H -I. -I. 
-I../../gcc -I../../gcc/. -I../../gcc/../include -I./../intl 
-I../../gcc/../libcpp/include -I/pkgs/gmp-4.1.4/include 
-I/pkgs/gmp-4.1.4/include ../../gcc/reorg.c -o reorg.o
../../gcc/reorg.c: In function 'get_jump_flags':
../../gcc/reorg.c:901: internal compiler error: in invert_jump_1, at 
jump.c:1711
Please submit a full bug report,
with preprocessed source if appropriate.

And so it begins ...
Brad


Re: bootstrap error in 4.1 on sparc

2005-03-03 Thread Bradley Lucier
On Mar 3, 2005, at 3:13 PM, Andrew Pinski wrote:
Was this fixed with my/Roger's patch which went in this morning (EST)?
No:
stage1/xgcc -Bstage1/ 
-B/export/users/lucier/local/gcc-mainline/sparc-sun-solaris2.9/bin/ -c  
 -g -O2 -DIN_GCC   -W -Wall -Wwrite-strings -Wstrict-prototypes 
-Wmissing-prototypes -pedantic -Wno-long-long -Wno-variadic-macros 
-Wold-style-definition -Werror -fno-common   -DHAVE_CONFIG_H -I. -I. 
-I../../gcc -I../../gcc/. -I../../gcc/../include -I./../intl 
-I../../gcc/../libcpp/include -I/pkgs/gmp-4.1.4/include 
-I/pkgs/gmp-4.1.4/include ../../gcc/reorg.c -o reorg.o
../../gcc/reorg.c: In function 'get_jump_flags':
../../gcc/reorg.c:901: internal compiler error: in invert_jump_1, at 
jump.c:1714
zorn-483% cat ../LAST_UPDATED
Thu Mar  3 20:21:22 EST 2005
Fri Mar  4 01:21:22 UTC 2005



Re: Heads up: 4.0 libjava failures on powerpc-apple-darwin7.8.0

2005-03-26 Thread Bradley Lucier
On Mar 25, 2005, at 1:22 PM, Tom Tromey wrote:
"Brad" == Bradley Lucier <[EMAIL PROTECTED]> writes:
Brad> http://gcc.gnu.org/ml/gcc-testresults/2005-03/msg01559.html
I didn't see more recent results, but I suspect this problem has been
fixed.
It seems that the libjava tests have been turned off, so it depends on 
the meaning of "fixed":

http://gcc.gnu.org/ml/gcc-testresults/2005-03/msg01749.html
I'm sorry, I don't understand what's going on, but it doesn't look good.
Brad


Re: Is it possible to catch overflow in long long multiply ?

2005-06-03 Thread Bradley Lucier
This is the wrong list to ask such a question, but I'll answer it  
anyway since the answer might be of general interest.


There is a wonderful book "Hacker's Delight" by Henry S. Warren Jr.,

http://www.awprofessional.com/bookstore/product.asp?isbn=0201914654&redir=1&rl=1


In some ways it can be thought of as building on the HAKMEM memos of 
MIT; it has a Foreword by Guy Steele.  The book has a lot of fast 
bit-twiddling (unbelievably, "twiddling" just passed my Mac's 
spell-checker) algorithms for operating at the machine word level and 
gives the answer to your question in Chapter 1.


Assuming that overflow of signed integer arithmetic wraps (and what gcc  
flag do I have to set to assume this?) then here is the algorithm to  
multiply x and y with overflow detection.


if y = 0 then
  result = 0
else if y = -1 then
  if x = the_minimum_negative_value then
    result = overflow
  else
    result = -x
else
  let z = x * y;
  if z / y = x then
    result = z
  else
    result = overflow
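In C, assuming -fwrapv so that the signed multiplication itself wraps
instead of being undefined, the algorithm above might look like this
(a sketch of mine, not code from the book):

#include <limits.h>
#include <stdbool.h>

/* Multiply x and y; return true on overflow, otherwise store the product.  */
static bool
mul_overflows (long long x, long long y, long long *result)
{
  if (y == 0)
    {
      *result = 0;
      return false;
    }
  if (y == -1)
    {
      if (x == LLONG_MIN)
        return true;            /* -LLONG_MIN is not representable */
      *result = -x;
      return false;
    }
  long long z = x * y;          /* wraps under -fwrapv */
  if (z / y != x)
    return true;                /* product wrapped, so it overflowed */
  *result = z;
  return false;
}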

Brad



4.0.0->4.0.1 regression: Can't use 64-bit shared libs on powerpc-apple-darwin8.1.0

2005-06-15 Thread Bradley Lucier

Mark:

I cannot build and use (link, etc.) 64-bit shared libraries on  
powerpc-apple-darwin8.1.0 with gcc version 4.0.1 20050615  
(prerelease).  This is a regression from 4.0.0 on the same platform.


I couldn't come up with a short example, sorry, but it is easy to  
reproduce if you have the right hardware/software combo (G5 with  
Xcode 2.1 on MacOS X 10.4.1).


This was assigned

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=22082

Brad


Re: 4.0.0->4.0.1 regression: Can't use 64-bit shared libs on powerpc-apple-darwin8.1.0

2005-06-15 Thread Bradley Lucier


On Jun 15, 2005, at 1:26 PM, Andrew Pinski wrote:



On Jun 15, 2005, at 2:19 PM, Bradley Lucier wrote:



Mark:

I cannot build and use (link, etc.) 64-bit shared libraries on  
powerpc-apple-darwin8.1.0 with gcc version 4.0.1 20050615  
(prerelease).  This is a regression from 4.0.0 on the same platform.




This is not a regression, in fact in the last couple days before  
4.0.0 was released,

multilib support for the 64bit shared libraries was turned off.


???.  It works just fine with

[descartes:~/programs/gambc40b13] lucier% /pkgs/gcc-4.0.0/bin/gcc -v
Using built-in specs.
Target: powerpc-apple-darwin8.1.0
Configured with: ../configure --prefix=/pkgs/gcc-4.0.0 --with-gmp=/pkgs/gmp-4.1.3 --with-mpfr=/pkgs/gmp-4.1.3

Thread model: posix
gcc version 4.0.0

So why you say it's not a regression, I don't know.

And 4.0.0 is now the *only* version of gcc that will compile Gambit-C  
correctly;


[descartes:~/programs/gambc40b13] lucier% /pkgs/gcc-4.0.0-apple/bin/gcc -v

Using built-in specs.
Target: powerpc-apple-darwin8.1.0
Configured with: ../configure --prefix=/pkgs/gcc-4.0.0-apple --with-gmp=/pkgs/gmp-4.1.4 --with-mpfr=/pkgs/gmp-4.1.4 --enable-languages=c,c++,f95

Thread model: posix
gcc version 4.0.0 (Apple Computer, Inc. build 5018)

gives me the same error; the Xcode 2.0 gcc compiler was a POS; and with

[descartes:~/programs/gambc40b13] lucier% /usr/bin/gcc -v
Using built-in specs.
Target: powerpc-apple-darwin8
Configured with: /private/var/tmp/gcc/gcc-5026.obj~19/src/configure --disable-checking --prefix=/usr --mandir=/share/man --enable-languages=c,objc,c++,obj-c++ --program-transform-name=/^[cg][^+.-]*$/s/$/-4.0/ --with-gxx-include-dir=/include/gcc/darwin/4.0/c++ --build=powerpc-apple-darwin8 --host=powerpc-apple-darwin8 --target=powerpc-apple-darwin8

Thread model: posix
gcc version 4.0.0 (Apple Computer, Inc. build 5026)

I get

[descartes:~/programs/gambc40b13] lucier% gsi
Illegal instruction

The last two are not the FSF gcc team's problem, of course, but I 
don't know why one would go from a compiler that works on PowerPC 
darwin to one that doesn't.


Brad


Re: 4.0.0->4.0.1 regression: Can't use 64-bit shared libs on powerpc-apple-darwin8.1.0

2005-06-15 Thread Bradley Lucier


On Jun 15, 2005, at 7:12 PM, Mike Stump wrote:


On Wednesday, June 15, 2005, at 11:19  AM, Bradley Lucier wrote:
I cannot build and use (link, etc.) 64-bit shared libraries on 
powerpc-apple-darwin8.1.0 with gcc version 4.0.1 20050615 (prerelease).


If you remove the # that comment out the -m64 multilibs, does it then 
work perfectly?  If so, then, that is the solution to make it work, 
you just won't be able to do java (not that you care).


Thank you for your reply.  I plan to test mainline after removing the 
#'s.


Also, I do wonder if there was a specs files that is polluting your 
gcc-4.0.0 build, to check that, if you want, you can install in a new 
prefix directory, and then see if it remains working.


Yes, it does.  I did that this afternoon.  There is a rather long 
exchange between me and Andrew in the bug report.  I don't know why it 
works, but it does.  Perhaps you might know how one can have a shared 
library for which otool64 doesn't report a link to libgcc_s.


Anyway, while this is a regression for you, we meant for gcc-4.0.0 to 
not work for -m64, so I would not expect that it'll work for 4.0.1.   
:-(.


I understand now that -m64 was not meant to work with Darwin.  I didn't 
realize this before, tried it, and was happy when it worked.


I've found the discussion about 64-bit java not working, beginning at

http://gcc.gnu.org/ml/gcc-patches/2005-03/msg02396.html

The reasons given for disabling ppc64 multilib instead of java on 
darwin were



- it involves changing configury that affects every darwin target, not
  just darwin8
- I think that people using FSF GCC are more likely to want to use gcj
  than 64-bit, since they can use Apple's compiler for 64-bit but not
  for gcj
- java worked for 3.4, but ppc64 didn't


I think the second justification was a mistake.  In my opinion, the FSF 
shouldn't be asking people to rely on a company's compiler for certain 
features (64-bit support).  And Apple's 64-bit support in both Xcode 
2.0 and Xcode 2.1 has been broken for me.  I wasn't terribly worried 
since FSF gcc-4.0.0 seemed to work.


It's not clear to me that the third reason was persuasive, either.  
Fortran worked on 3.4, but not on 4.0.0.  It's a matter of what one 
decides one has to break.


Would it be possible to disable multilib by default on Darwin 8, but 
leave it as a configure option? Then one could


./configure --prefix=/pkgs/gcc-4.0.0
make bootstrap
make install

to build a complete 32-bit compiler suite and

./configure --enable-multilib --enable-languages=c,c++,objc,objc++,f95

to build 64-bit versions of languages except java and ada.

It would be good if 64-bit applications got tested in the FSF gcc tree 
for darwin, too.


Brad



Re: 4.0.0->4.0.1 regression: Can't use 64-bit shared libs on powerpc-apple-darwin8.1.0

2005-06-16 Thread Bradley Lucier


On Jun 16, 2005, at 1:30 AM, Mike Stump wrote:


 Please try something like:
...
and let me know if it works.


Thank you, I will try it today.

Last night I unconditionally allowed multilibs and configured with

Compiler version: 4.1.0 20050615 (experimental)
Platform: powerpc-apple-darwin8.1.0
configure flags: --prefix=/pkgs/gcc-4.0-mainline  
--with-gmp=/pkgs/gmp-4.1.3 --with-mpfr=/pkgs/gmp-4.1.3  
--enable-languages=c,c++,f95,objc,obj-c++

BOOT_CFLAGS=-g -O2 -mdynamic-no-pic

There were many -m64 failures in some of the testsuites; e.g.,

=== g++ Summary for unix/-m64 ===

# of expected passes            8438
# of unexpected failures        1370
# of expected failures          60
# of unresolved testcases       68
# of unsupported tests          114
...
=== gfortran Summary for unix/-m64 ===

# of expected passes            517
# of unexpected failures        3320
# of unexpected successes       3
# of expected failures          9
# of untested testcases         1472
# of unsupported tests          17
...
=== objc Summary for unix/-m64 ===

# of expected passes            493
# of unexpected failures        576
# of unresolved testcases       533
# of unsupported tests          1

It seems that the 64-bit libgcc_s often can't be found; a typical 
failure from the top of the c++ test suite looks like


Test Run By lucier on Wed Jun 15 21:35:15 2005
Native configuration is powerpc-apple-darwin8.1.0

=== g++ tests ===

Schedule of variations:
unix/-m64
unix

Running target unix/-m64
Using /pkgs/dejagnu/share/dejagnu/baseboards/unix.exp as board  
description file for target.
Using /pkgs/dejagnu/share/dejagnu/config/unix.exp as generic interface  
file for target.
Using  
/Users/lucier/programs/gcc/gcc-mainline/gcc/testsuite/config/ 
default.exp as tool-and-target-specific interface file.
Running  
/Users/lucier/programs/gcc/gcc-mainline/gcc/testsuite/g++.dg/bprob/ 
bprob.exp ...
set_ld_library_path_env_vars:  
ld_library_path=.:/Users/lucier/programs/gcc/gcc-mainline/objdir/ 
powerpc-apple-darwin8.1.0/ppc64/libstdc++-v3/src/.libs:/Users/lucier/ 
programs/gcc/gcc-mainline/objdir/gcc
ALWAYS_CXXFLAGS set to {additional_flags=-nostdinc++  
-I/Users/lucier/programs/gcc/gcc-mainline/objdir/powerpc-apple- 
darwin8.1.0/ppc64/libstdc++-v3/include/powerpc-apple-darwin8.1.0  
-I/Users/lucier/programs/gcc/gcc-mainline/objdir/powerpc-apple- 
darwin8.1.0/ppc64/libstdc++-v3/include  
-I/Users/lucier/programs/gcc/gcc-mainline/libstdc++-v3/libsupc++  
-I/Users/lucier/programs/gcc/gcc-mainline/libstdc++-v3/include/backward  
-I/Users/lucier/programs/gcc/gcc-mainline/libstdc++-v3/testsuite}  
{ldflags=  
-L/Users/lucier/programs/gcc/gcc-mainline/objdir/powerpc-apple- 
darwin8.1.0/ppc64/libstdc++-v3/src/.libs  
-L/Users/lucier/programs/gcc/gcc-mainline/objdir/powerpc-apple- 
darwin8.1.0/ppc64/libiberty } additional_flags=-fmessage-length=0  
{ldflags=-multiply_defined suppress}
Executing on host:  
/Users/lucier/programs/gcc/gcc-mainline/objdir/gcc/testsuite/../g++  
-B/Users/lucier/programs/gcc/gcc-mainline/objdir/gcc/testsuite/../  
/Users/lucier/programs/gcc/gcc-mainline/gcc/testsuite/g++.dg/bprob/g++- 
bprob-1.C  -nostdinc++  
-I/Users/lucier/programs/gcc/gcc-mainline/objdir/powerpc-apple- 
darwin8.1.0/ppc64/libstdc++-v3/include/powerpc-apple-darwin8.1.0  
-I/Users/lucier/programs/gcc/gcc-mainline/objdir/powerpc-apple- 
darwin8.1.0/ppc64/libstdc++-v3/include  
-I/Users/lucier/programs/gcc/gcc-mainline/libstdc++-v3/libsupc++  
-I/Users/lucier/programs/gcc/gcc-mainline/libstdc++-v3/include/backward  
-I/Users/lucier/programs/gcc/gcc-mainline/libstdc++-v3/testsuite  
-fmessage-length=0  -g   -fprofile-arcs 
-L/Users/lucier/programs/gcc/gcc-mainline/objdir/powerpc-apple- 
darwin8.1.0/ppc64/libstdc++-v3/src/.libs  
-L/Users/lucier/programs/gcc/gcc-mainline/objdir/powerpc-apple- 
darwin8.1.0/ppc64/libiberty  -multiply_defined suppress -lm   -m64 -o  
/Users/lucier/programs/gcc/gcc-mainline/objdir/gcc/testsuite/g++-bprob 
-1.x01(timeout = 300)

ld64 failed: library not found for -lgcc_s_ppc64^M
collect2: ld returned 1 exit status^M
compiler exited with status 1
output is:
ld64 failed: library not found for -lgcc_s_ppc64^M
collect2: ld returned 1 exit status^M

even though

[descartes:gcc/gcc-mainline/objdir] lucier% find . -name 'libgcc_s*'
./gcc/libgcc_s.1.0.dylib
./gcc/libgcc_s.1.0.dylib.backup
./gcc/libgcc_s.dylib
./gcc/libgcc_s_ppc64.1.0.dylib
./gcc/libgcc_s_ppc64.1.0.dylib.backup
./gcc/libgcc_s_ppc64.dylib
...

Is the testsuite failing to set DYLD_LIBRARY_PATH appropriately?


Brad



Re: 4.0.0->4.0.1 regression: Can't use 64-bit shared libs on powerpc-apple-darwin8.1.0

2005-06-16 Thread Bradley Lucier
It seems that the libtool command line may be wrong.  Here's a simple  
test.


[descartes:~/programs] lucier% cat conftest.c
int main2() { return 0;}
[descartes:~/programs] lucier% gcc -m64 -mcpu=970 -o conftest  
-dynamiclib conftest.c -v -save-temps

Using built-in specs.
Target: powerpc-apple-darwin8.1.0
Configured with: ../configure --prefix=/pkgs/gcc-4.0-mainline  
--with-gmp=/pkgs/gmp-4.1.3 --with-mpfr=/pkgs/gmp-4.1.3  
--enable-languages=c,c++,objc,obj-c++,f95

Thread model: posix
gcc version 4.1.0 20050615 (experimental)
 /pkgs/gcc-4.0-mainline/libexec/gcc/powerpc-apple-darwin8.1.0/4.1.0/cc1  
-E -quiet -v -D__DYNAMIC__ -D__APPLE_CC__=1 conftest.c -fPIC -m64  
-mcpu=970 -fpch-preprocess -o conftest.i

ignoring nonexistent directory "/usr/local/include"
ignoring nonexistent directory  
"/pkgs/gcc-4.0-mainline/lib/gcc/powerpc-apple-darwin8.1.0/4.1.0/../../ 
../../powerpc-apple-darwin8.1.0/include"

#include "..." search starts here:
#include <...> search starts here:
 /pkgs/gcc-4.0-mainline/include
 /pkgs/gcc-4.0-mainline/lib/gcc/powerpc-apple-darwin8.1.0/4.1.0/include
 /usr/include
 /System/Library/Frameworks
 /Library/Frameworks
End of search list.
 /pkgs/gcc-4.0-mainline/libexec/gcc/powerpc-apple-darwin8.1.0/4.1.0/cc1  
-fpreprocessed conftest.i -fPIC -quiet -dumpbase conftest.c -m64  
-mcpu=970 -auxbase conftest -version -o conftest.s

GNU C version 4.1.0 20050615 (experimental) (powerpc-apple-darwin8.1.0)
compiled by GNU C version 4.1.0 20050615 (experimental).
GGC heuristics: --param ggc-min-expand=30 --param ggc-min-heapsize=4096
Compiler executable checksum: 856564be1b7d2e1a3d0c80ce3c26789d
 as -arch ppc64 -o conftest.o conftest.s
 /usr/bin/libtool -dynamic -arch_only ppc64 -noall_load -weak_reference_mismatches non-weak -o conftest -L/pkgs/gcc-4.0-mainline/lib/gcc/powerpc-apple-darwin8.1.0/4.1.0/ppc64 -L/pkgs/gcc-4.0-mainline/lib/gcc/powerpc-apple-darwin8.1.0/4.1.0/../../../ppc64 conftest.o -lgcc_s_ppc64 -lgcc -lSystemStubs -lmx -lSystem

/usr/bin/libtool: can't locate file for: -lgcc_s_ppc64
/usr/bin/libtool: file: -lgcc_s_ppc64 is not an object file (not allowed in a library)


However, if I add by hand /pkgs/gcc-4.0-mainline/lib, where  
libgcc_s_ppc64.1.0.dylib is installed, to the link path to libtool, I  
get


[descartes:~/programs] lucier% /usr/bin/libtool -dynamic -arch_only ppc64 -noall_load -weak_reference_mismatches non-weak -o conftest -L/pkgs/gcc-4.0-mainline/lib/gcc/powerpc-apple-darwin8.1.0/4.1.0/ppc64 -L/pkgs/gcc-4.0-mainline/lib/gcc/powerpc-apple-darwin8.1.0/4.1.0/../../../ppc64 conftest.o -lgcc_s_ppc64 -lgcc -lSystemStubs -lmx -lSystem -L/pkgs/gcc-4.0-mainline/lib

[descartes:~/programs] lucier% file conftest
conftest: Mach-O 64-bit dynamically linked shared library ppc64
[descartes:~/programs] lucier% otool64 -L conftest
conftest:
conftest (compatibility version 0.0.0, current version 0.0.0)
/pkgs/gcc-4.0-mainline/lib/libgcc_s_ppc64.1.0.dylib (compatibility version 1.0.0, current version 1.0.0)
/usr/lib/libmx.A.dylib (compatibility version 1.0.0, current version 92.0.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 88.0.0)


Perhaps someone moved libgcc_s_ppc64.1.0.dylib but didn't change the  
script that builds the libtool command line to tell libtool where to  
find it?


Brad



Re: 4.0.0->4.0.1 regression: Can't use 64-bit shared libs on powerpc-apple-darwin8.1.0

2005-06-20 Thread Bradley Lucier


On Jun 16, 2005, at 3:06 PM, Mike Stump wrote:


Actually, by try, I meant try your application.  :-)


I can't seem to build any 64-bit shared library on powerpc-apple- 
darwin8.1.0, although I can now run the test suite more effectively; see


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=22110

and

http://gcc.gnu.org/ml/gcc-testresults/2005-06/msg01124.html

Brad


Can't turn off overflow_warning?

2005-07-15 Thread Bradley Lucier
After examining the source and documentation, it appears to me that in 
4.0.* and on mainline one cannot turn off the warning generated by 
overflow_warning, in, for example,


[descartes:~] lucier% cat test.c
#include <stdio.h>

int main() {
  if (1048256 * 1048256 < 0)
    printf("1");
  else
    printf("2");

  return 0;
}
[descartes:~] lucier% /pkgs/gcc-4.0.0/bin/gcc -v
Using built-in specs.
Target: powerpc-apple-darwin8.1.0
Configured with: ../configure --prefix=/pkgs/gcc-4.0.0 
--with-gmp=/pkgs/gmp-4.1.3 --with-mpfr=/pkgs/gmp-4.1.3

Thread model: posix
gcc version 4.0.0
[descartes:~] lucier% /pkgs/gcc-4.0.0/bin/gcc -O1 -Wall -W -Wextra 
-fwrapv test.c

test.c: In function 'main':
test.c:4: warning: integer overflow in expression
[descartes:~] lucier% ./a.out
1[descartes:~] lucier%

I'd like to be able to do this, since I'm auto-generating code that 
explicitly uses overflow tests that assume that signed ints overflow 
using twos-complement arithmetic and I don't want to be distracted by 
extraneous warnings when constant propagation in the Scheme system puts 
constants into this C code.
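The overflow tests I mean look something like this (a hypothetical example,
not actual Gambit output; it is valid only under -fwrapv wrap-around
semantics):

#include <stdbool.h>

/* True if a + b overflowed, assuming signed addition wraps (-fwrapv):
   the wrapped sum has a different sign from both operands.  */
static bool
add_overflows (int a, int b)
{
  int sum = a + b;               /* wraps under -fwrapv */
  return ((sum ^ a) & (sum ^ b)) < 0;
}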


Personally, I'd like overflow_warning to be turned off for signed ints 
if -fwrapv is passed as a flag.  I suppose an explicit 
-Wno-overflow-warning would be OK, too.


If I were to propose a patch, which way should I go?

Brad



Why no strings in error messages?

2009-08-26 Thread Bradley Lucier
I've never seen the answer to the following question:  Why do some  
versions of gcc that I build not have string substitutions in error  
messages?


I get things like this:

[luc...@lambda-head lib]$ /pkgs/gcc-mainline/bin/gcc -mcpu=970 -m64 -fschedule-insns -Wno-unused -O1 -fno-math-errno -fschedule-insns2 -fno-trapping-math -fno-strict-aliasing -fwrapv -fomit-frame-pointer -fPIC -fno-common -I"../include" -c _thread-test.i -save-temps

_thread.c: In function â:
_thread.c:10035:146: error: â undeclared (first use in this function)
_thread.c:10035:146: error: (Each undeclared identifier is reported  
only once

_thread.c:10035:146: error: for each function it appears in.)

It makes cutting down a test case for a bug report a bit of guess and  
hack.


And an unrelated question:  I've found on some tests on x86-64 that 
-fschedule-insns added to -O1 speeds up some of my codes impressively, 
but on x86-64-linux and powerpc64-linux I've had problems finding 
spill registers and I've also been getting stuff like


/pkgs/gcc-mainline/bin/gcc -mcpu=970 -m64 -fschedule-insns -save-temps -Wno-unused -O1 -fno-math-errno -fschedule-insns2 -fno-trapping-math -fno-strict-aliasing -fwrapv -fomit-frame-pointer -fPIC -fno-common -I"../include" -c -o "_thread.o" -I. -DHAVE_CONFIG_H -D___GAMBCDIR="\"/usr/local/Gambit-C\"" -D___SYS_TYPE_CPU="\"powerpc64\"" -D___SYS_TYPE_VENDOR="\"unknown\"" -D___SYS_TYPE_OS="\"linux-gnu\"" -D___CONFIGURE_COMMAND="\"./configure CC=/pkgs/gcc-mainline/bin/gcc -mcpu=970 -m64 -fschedule-insns\"" -D___OBJ_EXTENSION="\".o\"" -D___EXE_EXTENSION="\"\"" -D___PRIMAL _thread.c -D___LIBRARY

_thread.c: In function â:
_thread.c:15556:1: error: insn does not satisfy its constraints:
(insn 614 220 216 26 _thread.c:15462 (set (reg:DF 20 20)
        (mem:DF (plus:DI (reg:DI 23 23 [orig:199 D.20368 ] [199])
                (const_int 23 [0x17])) [0 S8 A64])) 357 {*movdf_hardfloat64} (nil))
_thread.c:15556:1: internal compiler error: in reload_cse_simplify_operands, at postreload.c:396


Actually, I haven't been able to get Gambit to build on any  
architecture I've tried with -fschedule-insns on the command line.


So, is -fschedule-insns an option to be avoided?

Brad


Re: Why no strings in error messages?

2009-08-26 Thread Bradley Lucier
On Wed, 2009-08-26 at 20:38 +0200, Paolo Bonzini wrote:
> 
> > When I worked at AMD, I was starting to suspect that it may be more 
> > beneficial
> > to re-enable the first schedule insns pass if you were compiling in 64-bit
> > mode, since you have more registers available, and the new registers do not
> > have hard wired uses, which in the past always meant a lot of spills (also, 
> > the
> > default floating point unit is SSE instead of the x87 stack).  I never got
> > around to testing this before AMD and I parted company.
> 
> Unfortunately, hardwired use of %ecx for shifts is still enough to kill 
> -fschedule-insns on AMD64.

The AMD64 Architecture manual I found said that various combinations of
the RSI, RDI, and RCX registers are used implicitly by ten instructions
or prefixes, and RBX is used by XLAT, XLATB.  So it appears that there
are 12 general-purpose registers available for allocation.

Are 12 registers not enough, in principle, to do scheduling before
register allocation?  I was getting a 15% speedup on some numerical
codes, as pre-scheduling spaced out the vector loads among the
floating-point computations.

Brad



Re: Why no strings in error messages?

2009-08-26 Thread Bradley Lucier
On Wed, 2009-08-26 at 17:12 -0700, Ian Lance Taylor wrote:

> If you are getting that kind of speedup (which I personally did not
> expect) then this is clearly worth pursuing.  It should be possible to
> make it work at least in 64-bit mode.  I recommend that you file a bug
> report or two for cases which fail when using -fschedule-insns.

Thanks, I've added some details of the speedup (with code differences)
to the end of PR33928 and I've reported the two different failures (for
x86-64 and ppc64) in PR41164 and PR41176.

Brad



Re: Where does the time go?

2010-05-20 Thread Bradley Lucier
On my codes, pre-RA instruction scheduling on X86-64 (a) improves run
times by roughly 10%, and (b) costs a lot of compile time.

The -fschedule-insns option didn't seem to be on in your time tests (I think
it's not on by default on that architecture at -O2).

Brad



Re: Integer overflow in operator new

2007-04-08 Thread Bradley Lucier

Robert Dewar wrote:


I have always been told that -ftrapv is nowhere near fully working or
reliable (I think Eric is the source of that advice).


Is this just a rumor, or are there data that back this up?  (That 
-ftrapv doesn't work, not that Dewar was always told that it doesn't 
work.)


Brad


Status of PR21561

2007-04-15 Thread Bradley Lucier

If you try

../gcc-4.1.2/configure; make bootstrap

on a powerpc-darwin G5 system, then the bootstrap will fail because 
the process builds 64-bit multilibs and tries to execute a program 
with "xgcc -m64".


In May 2005, PR 21561 reported this same problem on 32-bit x86 
solaris; the workaround is to specify --disable-multilib on the 
configure line.  The suggested fix is to automatically generate this 
"--disable-multilib" on machines where bootstrap would fail without it.


A comment in the PR says "Suspending until the other bugs like this is 
fixed."  I'm kind of surprised that a bootstrap failure like this was  
shipped with 4.1.2; also, I couldn't find out using bugzilla what are  
"the other bugs like this".


Would it be reasonable to reopen this report?  A bootstrap failure on  
32-bit powerpc-darwin is definitely a regression from gcc-3.


Brad


Re: Status of PR21561

2007-04-15 Thread Bradley Lucier


On Apr 15, 2007, at 6:37 PM, Andrew Pinski wrote:



On 4/15/07, Bradley Lucier <[EMAIL PROTECTED]> wrote:

If you try

../gcc-4.1.2/configure; make bootstrap

on a powerpc-darwin G4 system, then the bootstrap will fail because
the process builds 64-bit multilibs and tries to execute a program
with "xgcc -m64'.


Again this is not magic or rocket science, use --disable-multilib for
32bit targets that enable 64bit by default even on machines where you
have only run 32bit programs.


After my first two attempts today to build 4.1.2 on powerpc-darwin  
failed, I'm already trying this.


I put a note in the PR21561 saying I think it should be re-opened.

Brad


How do you get the benefit of -fstrict-aliasing?

2007-04-21 Thread Bradley Lucier
I've decided to try to contribute modifications to the C code 
that is generated by the Gambit Scheme->C compiler so that (a) it  
doesn't have any aliasing violations and (b) more aliasing  
distinctions can be made (the car and cdr of a pair don't overlap  
with the entries of a vector, etc.).  This was in response to a  
measured 20% speedup with some numerical code with -fstrict-aliasing  
instead of -fno-strict-aliasing, nearly all of which came because gcc  
then knew that stores to a vector of doubles didn't change the values  
of variables on the stack.
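A made-up illustration of that last point (this is not Gambit output, just
the pattern): with -fstrict-aliasing the compiler may hoist the load of *np
out of the loop, because a store through a double * cannot legally modify a
long; with -fno-strict-aliasing it has to assume it might and reload *np on
every iteration.

void
clear (double *v, const long *np)
{
  /* Under strict aliasing, double and long are incompatible types, so
     v[i] = 0.0 cannot change *np and the loop bound can stay in a register.  */
  for (long i = 0; i < *np; i++)
    v[i] = 0.0;
}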


Part (a) is essentially a non-issue for user-written code, since the  
only aliasing problems of which I am aware are in the bignum library,  
so as a preliminary test I added -fstrict-aliasing to the gcc command  
line and reran the benchmark suite on a 2GHz G5.  To my surprise,  
while there were some improvements, the -fstrict-aliasing option led  
to slower code overall, in some cases quite severely (7.014 seconds  
to 11.794 seconds, for example), and, perhaps not surprisingly,  
compilation times were significantly longer.  This was true both with  
Apple's 4.0.1 and FSF 4.1.2.


So I'm wondering whether certain options have to be included on the  
command line to get the benefits of -fstrict-aliasing.  The current  
command line is


gcc -mcpu=970 -m64 -no-cpp-precomp -Wall -W -Wno-unused -O1 -fno-math-errno -fschedule-insns2 -fno-trapping-math -fno-strict-aliasing -fwrapv -fexpensive-optimizations -fforce-addr -fpeephole2 -falign-jumps -falign-functions -fno-function-cse -ftree-copyrename -ftree-fre -ftree-dce -fregmove -fgcse-las -freorder-functions -fcaller-saves -fno-if-conversion2 -foptimize-sibling-calls -fcse-skip-blocks -funit-at-a-time -finline-functions -fomit-frame-pointer -fPIC -fno-common -bundle -flat_namespace -undefined suppress -fstrict-aliasing


where the optimizations between -fwrapv (which is no longer  
necessary, I should remove that) and -fstrict-aliasing were chosen by  
some experiments with genetic algorithms.


I didn't think that adding aliasing information could lead to worse  
code.  So I'm wondering how to use that aliasing information more  
effectively to get better code.


Brad


Re: Effects of newly introduced -mpcX 80387 precision flag

2007-04-29 Thread Bradley Lucier

Richard Henderson wrote:


On Tue, Apr 03, 2007 at 10:56:42AM +0200, Uros Bizjak wrote:
> ...  Note that a change of default precision control may
> affect the results returned by some of the mathematical functions.
>
> to the documentation to warn users about this fact.

Eh.  It can seriously break some libm implementations that
require longer precision.  It's one of the reasons I'm not
really in favour of global switches like this.


I just (re-)discovered these tables giving maximum known errors in  
some libm functions when extended precision is enabled:


http://people.inf.ethz.ch/gonnet/FPAccuracy/linux/summary.html

and when the precision of the mantissa is set to 53 bits (double  
precision):


http://people.inf.ethz.ch/gonnet/FPAccuracy/linux64/summary.html

This is from 2002, and indeed, some of the errors in double-precision  
results are hundreds or thousands of times bigger when the precision  
is set to 53 bits.


I think the warning in the documentation is very mild considering the  
possible effects.


Perhaps the manual should also mention that sometimes this option  
brings a 2% improvement in the speed of FP-intensive code along with  
massive increases in the error of some libm functions, and then  
people could decide if they want to use it.  (I'm not opposed to a  
switch like this, my favorite development environment sets the  
precision to 53 bits globally just as this switch does, but I think  
the documentation should be more clear about the trade-offs.)


Brad


Re: Effects of newly introduced -mpcX 80387 precision flag

2007-04-29 Thread Bradley Lucier


On Apr 29, 2007, at 1:01 PM, Tim Prince wrote:


[EMAIL PROTECTED] wrote:

I just (re-)discovered these tables giving maximum known errors in  
some libm functions when extended precision is enabled:

http://people.inf.ethz.ch/gonnet/FPAccuracy/linux/summary.html
and when the precision of the mantissa is set to 53 bits (double  
precision):

http://people.inf.ethz.ch/gonnet/FPAccuracy/linux64/summary.html
This is from 2002, and indeed, some of the errors in double- 
precision results are hundreds or thousands of times bigger when  
the precision is set to 53 bits.
This isn't very helpful.  I can't find an indication of whose libm  
is being tested, it appears to be an unspecified non-standard  
version of gcc, and a lot of digging would be needed to find out  
what the tests are.


The C code for the tests is in a subdirectory of:

http://people.inf.ethz.ch/gonnet/FPAccuracy/

There is also a file "all.tar.Z" there, and a few html files with  
commentary.


It makes no sense at all for sqrt() to break down with change in  
precision mode.


If you do an extended-precision (80-bit) sqrt and then round the  
result again to a double (64-bit) then those two roundings will  
increase the error, sometimes to > 1/2 ulp.


To give current results on a machine I have access to, I ran the  
tests there on


vendor_id   : AuthenticAMD
cpu family  : 15
model   : 33
model name  : Dual Core AMD Opteron(tm) Processor 875

using

euler-59% gcc -v
Using built-in specs.
Target: x86_64-unknown-linux-gnu
Configured with: ../configure --prefix=/pkgs/gcc-4.1.2
Thread model: posix
gcc version 4.1.2

on an up-to-date RHEL 4.0 server (so whatever libm is offered there),  
and, indeed, the only differences that it found were in 1/x, sqrt(x),  
and Pi*x because of double rounding.  In other words, the code that  
went through libm gave identical answers whether running on sse, x87  
(extended precision), or x87 (double precision).


I don't know whether there are still math libraries for which  
Gonnet's 2002 results prevail.


Brad

The only change I made to the code was

euler-63% rcsdiff header.h
===
RCS file: RCS/header.h,v
retrieving revision 1.1
diff -r1.1 header.h
66c66,68
< double scalb( double x, double n );
---
> /* double scalb( double x, double n ); */
> #include 
> #include 

and the scripts I used to run the code were

euler-60% cat generate-results
#! /bin/tcsh
set files = `ls *.c | sed 's/\.c//'`
foreach file ( $files )
  gcc -O3 -mfpmath=sse -o $file $file.c -lm
  ./$file >! $file.-mfpmath=sse
  gcc -O3 -mfpmath=387 -o $file $file.c -lm
  ./$file >! $file.-mfpmath=387
  gcc -O3 -mfpmath=387 -DFORCE_FPU_DOUBLE -o $file $file.c -lm
  ./$file >! $file.-mfpmath=387-DFORCE_FPU_DOUBLE
  rm $file
end
euler-61% cat compare
#! /bin/tcsh
set files = `ls *.c | sed 's/\.c//'`
foreach file ( $files )
echo comparing $file.-mfpmath=387 $file.-mfpmath=387-DFORCE_FPU_DOUBLE
diff $file.-mfpmath=387 $file.-mfpmath=387-DFORCE_FPU_DOUBLE
echo comparing $file.-mfpmath=387 $file.-mfpmath=sse
diff $file.-mfpmath=387 $file.-mfpmath=sse
end
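
I'm guessing at what the FORCE_FPU_DOUBLE build does; on glibc/x86 something
like the following would set the x87 precision control to 53 bits before the
tests run (a sketch of my assumption, not the actual FPAccuracy source):

#include <fpu_control.h>  /* glibc, x86/x86-64 only */

static void
force_fpu_double (void)
{
  fpu_control_t cw;
  _FPU_GETCW (cw);                           /* read the x87 control word */
  cw = (cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;  /* select a 53-bit significand */
  _FPU_SETCW (cw);                           /* write it back */
}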



Re: Effects of newly introduced -mpcX 80387 precision flag

2007-05-03 Thread Bradley Lucier


On May 3, 2007, at 11:11 AM, Uros Bizjak wrote:

Could you please post a patch with suggested wording about this  
option (I was trying to write something similar to the warning that  
icc has in its documentation about precision settings).


How about this?  It perhaps reflects my own biases, but the term  
"catastrophic loss of accuracy" is sometimes used in the technical  
sense that I mean here.  For the performance figures, I used the  
figures you gave in your e-mail but add "or more" to be on the safe  
side.


Brad

[descartes:~/Desktop] lucier% rcsdiff -u invoke.texi
===
RCS file: RCS/invoke.texi,v
retrieving revision 1.1
diff -u -r1.1 invoke.texi
--- invoke.texi 2007/05/03 15:43:01 1.1
+++ invoke.texi 2007/05/03 17:15:24
@@ -10139,12 +10139,21 @@
 @opindex mpc80
 Set 80387 floating-point precision to 32, 64 or 80 bits.  When @option{-mpc32}
-is specified, the significand of floating-point operations is rounded to 24
-bits (single precision), @option{-mpc64} rounds the significand of
-floating-point operations to 53 bits (double precision) and @option{-mpc80}
-rounds the significand of floating-point operations to 64 bits (extended
-double precision).  Note that a change of default precision control may
-affect the results returned by some of the mathematical functions.
+is specified, the significands of results of floating-point operations are
+rounded to 24 bits (single precision); @option{-mpc64} rounds the
+significands of results of floating-point operations to 53 bits (double
+precision) and @option{-mpc80} rounds the significands of results of
+floating-point operations to 64 bits (extended double precision), which is
+the default.  When this option is used, floating-point operations in higher
+precisions are not available to the programmer without setting the FPU
+control word explicitly.
+
+Setting the rounding of floating-point operations to less than the default
+80 bits can speed some programs by 2% or more.  Note that some mathematical
+libraries assume that extended precision (80 bit) floating-point operations
+are enabled by default; routines in such libraries could suffer catastrophic
+loss of accuracy when this option is used to set the precision to less than
+extended precision.
 @item -mstackrealign
 @opindex mstackrealign




Re: Effects of newly introduced -mpcX 80387 precision flag

2007-05-03 Thread Bradley Lucier


On May 3, 2007, at 2:45 PM, Uros Bizjak wrote:



Bradley Lucier wrote:


On May 3, 2007, at 11:11 AM, Uros Bizjak wrote:

Could you please post a patch with suggested wording about this  
option (I was trying to write something similar to the warning  
that icc has in its documentation about precision settings).


How about this?  It perhaps reflects my own biases, but the term  
"catastrophic loss of accuracy" is sometimes used in the technical  
sense that I mean here.  For the performance figures, I used the  
figures you gave in your e-mail but add "or more" to be on the  
safe side.
What about "significant loss of accuracy" as these options probably  
won't cause a nuclear reactor meltdown ;)


Well, I did some googling, and the technical term I was thinking of  
was "catastrophic cancellation".  So how about


Note that some mathematical routines in such libraries could suffer  
significant loss of accuracy, typically through so-called  
"catastrophic cancellation", when this option is used to set the  
precision to less than extended precision.
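
To illustrate the term with a made-up example (this is not from the
patch): in the subtraction below the leading digits of the two operands
cancel, so the rounding error already present in (1.0 + x) dominates the
result; with intermediate results rounded to 24 bits, as with -mpc32 on
the x87, the difference comes out as 0.0 and every significant digit is
lost.

#include <stdio.h>

int main(void)
{
  double x = 1.0e-8;
  double a = (1.0 + x) - 1.0;   /* should be 1.0e-8, but most of x's digits
                                   were already rounded away in (1.0 + x) */
  printf("computed: %.17g  relative error: %.2e\n", a, (a - x) / x);
  return 0;
}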


Brad


Re: Effects of newly introduced -mpcX 80387 precision flag

2007-05-03 Thread Bradley Lucier

On May 3, 2007, at 3:29 PM, Uros Bizjak wrote:



Bradley Lucier wrote:

What about "significant loss of accuracy" as these options  
probably won't cause a nuclear reactor meltdown ;)


Well, I did some googling, and the technical term I was thinking  
of was "catastrophic cancellation".  So how about


Note that some mathematical routines in such libraries could  
suffer significant loss of accuracy, typically through so-called  
"catastrophic cancellation", when this option is used to set the  
precision to less than extended precision.


I think this is OK. So if nobody objects, this patch is OK for  
mainline.


I'm sorry, but I don't have checkin privileges.

Brad





Re: How do you get the benefit of -fstrict-aliasing?

2007-05-18 Thread Bradley Lucier


On Apr 21, 2007, at 6:01 PM, Bradley Lucier wrote:

So I'm wondering whether certain options have to be included on the  
command line to get the benefits of -fstrict-aliasing.


I've thought about this question a bit more, so maybe I can make it  
less content-free.


The C code generated by Gambit-C, the Scheme->C compiler, has certain  
somewhat strange characteristics, mainly because it's targeted to a  
simulated virtual machine implemented using C macros.  Experimental  
results show that simply using gcc -O2 or -O3 results in  
significantly suboptimal code; gcse generally makes the code  
noticeably worse, for example.  So we pick and choose our gcc  
optimization options.


The generated code also has significant opportunities to exploit  
alias information---words on the stack cannot alias the car or cdr of  
a pair, the contents of a vector, etc.  The macros implementing the  
virtual machine can be tweaked to make more of that information  
explicit, and I'd like gcc to exploit that information, so I'm  
wondering which optimizations make particular use of aliasing  
information.  Perhaps I've left those off the gcc command line.
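
To make the kind of information I mean concrete, here is a small hedged
sketch (these are not Gambit's actual virtual-machine macros):

/* Under -fstrict-aliasing the store through 'stack' (a long) cannot
   modify the object read through 'flonum' (a double), so the second
   load of *flonum can be kept in a register or folded into x + x.  */
double example(long *stack, double *flonum)
{
  double x = *flonum;
  stack[0] = 42;        /* cannot alias *flonum under the C aliasing rules */
  return x + *flonum;
}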


Until now, I have used -fno-strict-aliasing in the code for  
correctness reasons (some of the virtual machine macros, related to  
the bignum library and written in a style similar to assembly, have  
aliasing violations).  When I add -fstrict-aliasing when compiling  
user code (which doesn't use the bignum macros), I sometimes get  
significantly slower code. Perhaps there is a lot more register  
pressure as Andrew suggested, but looking at the assembly output  
doesn't seem to justify this assumption (basic blocks are small and  
somewhat independent, it's running on a register-rich machine [64-bit  
PowerPC], etc.).


Brad


Re: Activate -mrecip with -ffast-math?

2007-06-18 Thread Bradley Lucier


On Jun 18, 2007, at 2:14 PM, Uros Bizjak wrote:



tbp wrote:

For example, when doing 1/x and sqrt(x) via reciprocal + NR, you  
first

get an inf from said reciprocal which then turns to a NaN in the NR
stage but if you correct it by, say, doing a comparison to 0 and a
'and'.
That's what ICC used to do in your back. That's what you'll find page
151 of the amdfam10 optimization manual. Because that's a common  
case.


As far as i can see, there's no such provision in the current patch.
At the very least provide a mean to look after those NaNs without
losing sanity, like a way to enforce argument order of
min/max[ss|ps|pd] without resorting to inline asm.


But even if sqrt is corrected for 0.0 * inf, there would still be a  
lot of problems with the combinations of NR-enhanced rsqrt and rcp.  
Consider for example:


1.0/sqrt(a/b) alias rsqrt(a/b)

Having a=0, b != 0, the result is inf.


As already stated, -ffast-math turns on -ffinite-math-only, which  
allows the compiler to assume that a result of inf cannot happen, so  
gcc is allowed to ignore this possibility.  Producing NaN instead of  
inf seems to be allowed.


This expression is mathematically equal to sqrt(b/a) and the  
compiler is free to do this optimization. In this case, b*rcp(a)  
produces NaN due to NR of rcp(a) and here we lose.


Let's correct both rsqrt and rcp NR steps for 0.0, so we have
NR-rsqrt(0.0) = inf, NR-rcp(0.0) = inf.


Again, sqrt(b/a) will create sqrt(inf) = inf * rsqrt(inf), so NR  
step for rsqrt will hit (0.0 * inf) from the other side. We lose,  
because there is no correction for the case where input operand is  
infinity.


IMO, due to the limited range of operands for the -mrecip pass (inf,
-inf), where 0.0 is excluded, it should be kept out of -ffast-math.  
There is no point to fix reciprocals only for 0.0, we need to fix  
both conversions for infinity and 0.0, even in -ffast-math.


I think that tbp wants just to ensure that sqrt(0.0)=0.0 even with  
your various reciprocal and sqrt optimizations.  (I can't test the  
new code now, but I think he claims that with the new sqrt  
optimizations sqrt(0.) => NaN; if indeed it does this then I would  
consider this a bug.)  I don't think he wants the optimizations to  
have to "do the right thing" when an argument or result of one of  
these operations is infinite or a NaN.


Of course, he can correct me if I'm wrong.
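
For concreteness, a minimal sketch of the failure mode being discussed,
assuming SSE's rsqrtss supplies the initial approximation (the exact
sequence that -mrecip generates may differ):

#include <stdio.h>
#include <xmmintrin.h>

/* sqrt(x) computed as x * rsqrt(x) with one Newton-Raphson step:
   rsqrtss(0) is +inf, and the refinement multiplies that by x == 0,
   giving NaN unless the zero case is special-cased.  */
static float sqrt_via_rsqrt(float x)
{
  float y = _mm_cvtss_f32(_mm_rsqrt_ss(_mm_set_ss(x)));  /* ~ 1/sqrt(x) */
  y = y * (1.5f - 0.5f * x * y * y);                     /* one NR step */
  return x * y;
}

int main(void)
{
  printf("%g\n", sqrt_via_rsqrt(0.0f));   /* prints nan, not 0 */
  return 0;
}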

Brad


Re: Activate -mrecip with -ffast-math?

2007-06-18 Thread Bradley Lucier


On Jun 18, 2007, at 2:27 PM, Bradley Lucier wrote:

But even if sqrt is corrected for 0.0 * inf, there would still be  
a lot of problems with the combinations of NR-enhanced rsqrt and  
rcp. Consider for example:


1.0/sqrt(a/b) alias rsqrt(a/b)

Having a=0, b != 0, the result is inf.


As already stated, -ffast-math turns on -ffinite-math-only, which  
allows the compiler to assume that a result of inf cannot happen,  
so gcc is allowed to ignore this possibility.  Producing NaN instead  
of inf seems to be allowed.


Let me restate this.

If -ffinite-math-only is specified, then producing NaN instead of inf  
should be allowed.


If -fno-finite-math-only is specified, then the generated code should  
"do the right thing" if an argument or result is inf or NaN.


In any case, I would consider it an error if the argument is finite,  
the result is supposed to be finite, and inf or NaN is produced.


Brad


PR 22082: Trouble linking with 64-bit libgcc on powerpc-darwin

2005-10-07 Thread Bradley Lucier
Geoff Keating has made several changes to the darwin configuration 
files recently; I was thinking that while people are looking at these 
things, perhaps someone can say I'm doing something wrong in


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=22082

or whether further configuration changes are needed.  This happened 
with a recent


[lindv2:~/Desktop/gcc-test] lucier% gcc -v
Using built-in specs.
Target: powerpc-apple-darwin8.2.0
Configured with: ../configure powerpc-apple-darwin8.2.0 
--enable-languages=c --prefix=/pkgs/gcc-mainline 
--with-gmp=/pkgs/gmp-4.1.4 --with-mpfr=/pkgs/gmp-4.1.4

Thread model: posix
gcc version 4.1.0 20051007 (experimental)


Brad



4.1: Many 64-bit failures on powerpc-apple-darwin8.3.0

2005-12-07 Thread Bradley Lucier
I bootstrapped and regtested 4.1 on powerpc-apple-darwin8.3.0 and  
there were so many errors with -mcpu=970 -m64 that the gcc mail  
daemon wouldn't accept the summary.  So I put it at


http://www.math.purdue.edu/~lucier/gcc/test-results/4_1-12-06-2005.gz

The most serious problems seem to be in g++, gfortran, and libjava,  
but none of the 64-bit tests are comparable to the 32-bit tests:


=== g++ Summary for unix/-mcpu=970/-m64 ===

# of expected passes            8885
# of unexpected failures        1381
# of unexpected successes       1
# of expected failures          63
# of unresolved testcases       47
# of unsupported tests          151

=== gcc Summary for unix/-mcpu=970/-m64 ===

# of expected passes            38275
# of unexpected failures        86
# of unexpected successes       1
# of expected failures          93
# of untested testcases         28
# of unsupported tests          395

=== gfortran Summary for unix/-mcpu=970/-m64 ===

# of expected passes            1322
# of unexpected failures        4592
# of expected failures          10
# of untested testcases         1616
# of unsupported tests          66

=== objc Summary for unix/-mcpu=970/-m64 ===

# of expected passes            1653
# of unexpected failures        43
# of unresolved testcases       1
# of unsupported tests          1

=== libffi Summary for unix/-mcpu=970/-m64 ===

# of expected passes            100
# of unexpected failures        81
# of unsupported tests          2

=== libjava Summary for unix/-mcpu=970/-m64 ===

# of expected passes            929
# of unexpected failures        1236
# of expected failures          31
# of untested testcases         1773

=== libstdc++ Summary for unix/-mcpu=970/-m64 ===

# of expected passes            3345
# of unexpected failures        34
# of expected failures          13
# of unsupported tests          322

Brad


Re: 4.0.0->4.0.1 regression: Can't use 64-bit shared libs on powerpc-apple-darwin8.1.0

2005-12-16 Thread Bradley Lucier


On Dec 16, 2005, at 6:23 PM, Mike Stump wrote:


On Jun 20, 2005, at 2:41 PM, Bradley Lucier wrote:
I can't seem to build any 64-bit shared library on powerpc-apple- 
darwin8.1.0, although I can now run the test suite more  
effectively; see


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=22110

and

http://gcc.gnu.org/ml/gcc-testresults/2005-06/msg01124.html


So, I thought I'd ping you and see if everything is nearer to  
normal now.


Thanks!  Unfortunately not.

Geoff thinks that not being able to build a 64-bit shared library is  
a libtool problem; the discussion seems to have ended in


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=22082

Unfortunately, even with my Apple Developer account I can't seem to  
figure out how to look up radar reports that I haven't submitted.


And there seem to be a large number of 64-bit failures in the 4.1  
test suite; the results from the 10th can be found at


http://www.math.purdue.edu/~lucier/gcc/test-results/4_1-12-10-2005.gz

About my earlier e-mail regarding the test results on the 6th:

http://gcc.gnu.org/ml/gcc/2005-12/msg00182.html

Geoff commented:

http://gcc.gnu.org/ml/java/2005-12/msg00093.html

Brad


Re: 4.0.0->4.0.1 regression: Can't use 64-bit shared libs on powerpc-apple-darwin8.1.0

2005-12-16 Thread Bradley Lucier


On Dec 16, 2005, at 8:25 PM, Eric Christopher wrote:



http://gcc.gnu.org/bugzilla/show_bug.cgi?id=22082

Unfortunately, even with my Apple Developer account I can't seem  
to figure out how to look up radar reports that I haven't submitted.


I took a look at the radar. Says, effectively, that the bug has  
been fixed in ld64 and will be in the next release.


Great!  Thanks.

Brad


svn access on RHEL 4.0

2006-01-06 Thread Bradley Lucier
I'm having all kinds of trouble running svn on my RHEL 4.0 system.  A  
typical example of what's happening is:


euler-62% svn cleanup
svn: XML parser failed in 'gcc/testsuite/gcc.dg/special'

I first got that message when I tried contrib/gcc_update after doing  
a svn checkout.  Now I just get


euler-63% contrib/gcc_update
Updating SVN tree
svn: Working copy '.' locked
svn: run 'svn cleanup' to remove locks (type 'svn help cleanup' for  
details)

Adjusting file timestamps
SVN update of full tree failed.


Does anyone have any helpful ideas of what to do?

Brad



Re: svn access on RHEL 4.0

2006-01-08 Thread Bradley Lucier


On Jan 8, 2006, at 9:04 AM, Andreas Schwab wrote:

Try removing the offending directory (gcc/testsuite/gcc.dg/special)  
and
run svn cleanup again, updating the tree afterwards.  If you didn't  
have
any local changes in that directory you should not lose anything.   
If the

problem persists then you probably have a hardware problem.


Thanks for the suggestion.

I installed subversion 1.3.0 and tried your suggestion recursively  
and seemed to get into a cycle ---after removing libjava/testsuite/ 
libjava.lang as one of the "problem" directories on the way to  
getting a clean "xvn cleanup", it showed up again as one of the  
"problem" directories in trying to do contrib/gcc_update, notes are  
below.  Actually, gcc_update appears to complain about libstdc++-v3/ 
testsuite/data immediately after installing a new version.  (I got a  
slightly more detailed error message with 1.3.0 than with the default  
1.1.4:


euler-5% svn cleanup
svn: XML parser failed in 'gcc/testsuite/gcc.dg/pch'
svn: Bogus date

but I don't know what "Bogus date" means in any detail.)

It's interesting that you mention a possible hardware problem.  The  
file system is mounted on a new SAN served from a Sparc NFSv4 server;  
do you think that perhaps we should try to mount it using NFSv3 to see  
if that fixes the problem?


Brad




euler-36% tcsh
euler-1% set path=(/export/users/lucier/local/subversion-1.3.0/bin/  
$path)

euler-2% which svn
/export/users/lucier/local/subversion-1.3.0/bin//svn
euler-3% dirs
~/programs/subversion-1.3.0
euler-4% pu ~/programs/gcc/4.2/
~/programs/gcc/4.2 ~/programs/subversion-1.3.0
euler-5% svn cleanup
svn: XML parser failed in 'gcc/testsuite/gcc.dg/pch'
svn: Bogus date
euler-6% /bin/rm -rf gcc/testsuite/gcc.dg/pch
euler-7% svn cleanup
svn: XML parser failed in 'libstdc++-v3/testsuite/data'
svn: Bogus date
euler-8% /bin/rm -rf libstdc++-v3/testsuite/data
euler-9% svn cleanup
svn: XML parser failed in 'libjava/classpath/org/omg/SendingContext'
svn: Bogus date
euler-10% /bin/rm -rf libjava/classpath/org/omg/SendingContext
euler-11% svn cleanup
svn: XML parser failed in 'libjava/testsuite/libjava.special'
svn: Bogus date
euler-12% /bin/rm -rf libjava/testsuite/libjava.special
euler-13% svn cleanup
svn: XML parser failed in 'libjava/testsuite/libjava.lang'
svn: Bogus date
euler-14% /bin/rm -rf libjava/testsuite/libjava.lang
euler-15% svn cleanup
svn: XML parser failed in 'libjava/testsuite/libjava.loader'
svn: Bogus date
euler-16% /bin/rm -rf libjava/testsuite/libjava.loader
euler-17% svn cleanup
euler-18% contrib/gcc_update
Updating SVN tree
A    gcc/testsuite/gcc.dg/special
A    gcc/testsuite/gcc.dg/special/2419-2.c
A    gcc/testsuite/gcc.dg/special/wkali-1.c
A    gcc/testsuite/gcc.dg/special/wkali-2.c
A    gcc/testsuite/gcc.dg/special/wkali-2a.c
A    gcc/testsuite/gcc.dg/special/wkali-2b.c
A    gcc/testsuite/gcc.dg/special/mips-abi.exp
A    gcc/testsuite/gcc.dg/special/mips-abi.s
A    gcc/testsuite/gcc.dg/special/gcsec-1.c
A    gcc/testsuite/gcc.dg/special/weak-1.c
A    gcc/testsuite/gcc.dg/special/weak-2.c
A    gcc/testsuite/gcc.dg/special/weak-1a.c
A    gcc/testsuite/gcc.dg/special/weak-2a.c
A    gcc/testsuite/gcc.dg/special/alias-1.c
A    gcc/testsuite/gcc.dg/special/weak-2b.c
A    gcc/testsuite/gcc.dg/special/alias-2.c
A    gcc/testsuite/gcc.dg/special/special.exp
A    gcc/testsuite/gcc.dg/pch
A    gcc/testsuite/gcc.dg/pch/valid-1b.c
A    gcc/testsuite/gcc.dg/pch/macro-2.c
A    gcc/testsuite/gcc.dg/pch/decl-5.hs
A    gcc/testsuite/gcc.dg/pch/macro-4.c
A    gcc/testsuite/gcc.dg/pch/decl-1.c
A    gcc/testsuite/gcc.dg/pch/inline-3.hs
A    gcc/testsuite/gcc.dg/pch/decl-3.c
A    gcc/testsuite/gcc.dg/pch/import-1.c
A    gcc/testsuite/gcc.dg/pch/decl-5.c
A    gcc/testsuite/gcc.dg/pch/cpp-2.hs
A    gcc/testsuite/gcc.dg/pch/save-temps-1.hs
A    gcc/testsuite/gcc.dg/pch/static-2.hs
A    gcc/testsuite/gcc.dg/pch/import-1b.h
A    gcc/testsuite/gcc.dg/pch/cpp-2.c
A    gcc/testsuite/gcc.dg/pch/system-1.c
A    gcc/testsuite/gcc.dg/pch/pch.exp
A    gcc/testsuite/gcc.dg/pch/valid-3.hs
A    gcc/testsuite/gcc.dg/pch/macro-4.hs
A    gcc/testsuite/gcc.dg/pch/warn-1.hs
A    gcc/testsuite/gcc.dg/pch/valid-2.c
A    gcc/testsuite/gcc.dg/pch/empty.c
A    gcc/testsuite/gcc.dg/pch/decl-4.hs
A    gcc/testsuite/gcc.dg/pch/valid-4.c
A    gcc/testsuite/gcc.dg/pch/include
A    gcc/testsuite/gcc.dg/pch/include/import-2a.h
A    gcc/testsuite/gcc.dg/pch/include/import-2b.h
A    gcc/testsuite/gcc.dg/pch/valid-6.c
A    gcc/testsuite/gcc.dg/pch/inline-2.hs
A    gcc/testsuite/gcc.dg/pch/warn-1.c
A    gcc/testsuite/gcc.dg/pch/cpp-1.hs
A    gcc/testsuite/gcc.dg/pch/system-1.hs
A    gcc/testsuite/gcc.dg/pch/static-1.hs
A    gcc/testsuite/gcc.dg/pch/inline-2.c
A    gcc/testsuite/gcc.dg/pch/inline-4.c
A    gcc/testsuite/gcc.dg/pch/struct-1.c
A    gcc/testsuite/gcc.dg/pch/static-1.c
A    gcc/testsuite/gcc.dg/pch/common-1.c
A    gcc/testsuite/gcc.dg/pch/empty.hs
A    gcc/testsuite/gcc.dg/pch/valid-2.hs
A    gcc/testsuite/gcc.dg/pch/valid-1b.hs
A    gcc/

Re: svn access on RHEL 4.0

2006-01-08 Thread Bradley Lucier


On Jan 8, 2006, at 9:12 AM, Daniel Berlin wrote:



Try removing the offending directory (gcc/testsuite/gcc.dg/ 
special) and
run svn cleanup again, updating the tree afterwards.  If you  
didn't have
any local changes in that directory you should not lose anything.   
If the

problem persists then you probably have a hardware problem.


Just "for the record":

gcc.gnu.org runs RHEL4, and we've never had any trouble like this.

All the snapshots are generated locally using svn, etc.


OK, here are some details.  Our server is a dual UltraSparc running  
Solaris 10 attached to the SAN.


Working client situation:  subversion 1.3.0 on Sparc Solaris 9, not  
using Berkeley DB


Non-working client situation: subversion 1.3.0 on x86-64 RHEL 4.0,  
using Berkeley DB


I think everything is running NFSv4 at this point.

So I don't know if the problem is with RHEL versus Solaris 10, or  
Berkeley DB versus non-Berkeley DB (whatever subversion uses when  
Berkeley DB is not available).  Perhaps I can do some experiments to  
see whether Solaris 9 + Berkeley DB works or not.


Brad



Re: svn access on RHEL 4.0

2006-01-14 Thread Bradley Lucier


On Jan 8, 2006, at 7:19 PM, Daniel Berlin wrote:


On Sun, 2006-01-08 at 18:05 -0600, Bradley Lucier wrote:

OK, here are some details.  Our server is a dual UltraSparc running
Solaris 10 attached to the SAN.

Working client situation:  subversion 1.3.0 on Sparc Solaris 9, not
using Berkeley DB

Non-working client situation: subversion 1.3.0 on x86-64 RHEL 4.0,
using Berkeley DB

I think everything is running NFSv4 at this point.


Unless you are running a server locally, whether you've compiled in  
BDB

or not doesn't matter.


Thank you all for your suggestions. I think we've isolated the  
problem enough to hand it off to someone else, since we have hardware  
and OS software support on all the machines involved.


File server: dual ultrasparc running Solaris 10, serves the file  
system using either NFSv3 or NFSv4.


Operation:

svn -q checkout svn://gcc.gnu.org/svn/gcc/trunk gcc; cd gcc; contrib/ 
gcc_update


Results:

File system mounted as NFSv3 or NFSv4 on dual ultrasparc running  
Solaris 9: works


File system mounted as NFSv3 on RHEL 4: works

File system mounted as NFSv4 on RHEL 4: reports seeming file system  
corruption.


This is reproducible, so perhaps someone can find a fix for it.

Thanks again.

Brad




Results for 4.1.0 20060117 (prerelease) testsuite on powerpc-apple-darwin8.4.0 (-m64 results)

2006-01-18 Thread Bradley Lucier

http://www.math.purdue.edu/~lucier/gcc/test-results/4_1-2006-01-17.gz

(Too large to be accepted here.)

So I have a question.  I've installed the latest Xcode release, or,  
at least I think I did:


[lindv2:gcc/4.1/objdir64] lucier% gcc -v
Using built-in specs.
Target: powerpc-apple-darwin8
Configured with: /private/var/tmp/gcc/gcc-5250.obj~12/src/configure
--disable-checking -enable-werror --prefix=/usr --mandir=/share/man
--enable-languages=c,objc,c++,obj-c++
--program-transform-name=/^[cg][^.-]*$/s/$/-4.0/
--with-gxx-include-dir=/include/c++/4.0.0 --build=powerpc-apple-darwin8
--host=powerpc-apple-darwin8 --target=powerpc-apple-darwin8

Thread model: posix
gcc version 4.0.1 (Apple Computer, Inc. build 5250)
[lindv2:gcc/4.1/objdir64] lucier% ld -v
Apple Computer, Inc. version cctools-590.23.2.obj~17

and I was hoping that this might clear up a significant fraction of  
the 7,000+ 64-bit testsuite failures for 4.1 on powerpc-apple- 
darwin8.4.0.  But it appears this hasn't happened yet.


Does anyone wish to try yet again to drive it into my thick skull  
what goals gcc 4.1 has on powerpc-apple-darwin8.4.0?


(And I still can't seem to use the Apple tools to build 64-bit shared  
libraries that work; but that isn't a problem for this list ...)


Brad


Re: Results for 4.1.0 20060117 (prerelease) testsuite on powerpc-apple-darwin8.4.0 (-m64 results)

2006-01-25 Thread Bradley Lucier


On Jan 23, 2006, at 8:07 PM, Shantonu Sen wrote:

I've posted a new version of odcctools (based on Apple's cctools  
and ld64 source) which should fix a few thousand of the failures.  
Instructions are at:




This is based on cctools-590.18 and ld64-26.0.81, which should be  
substantially similar to what you have, and since you can use -- 
prefix, you don't need to overwrite the Apple-provided tools.


Can you report how this changes things?


The results are much better, thanks:

http://gcc.gnu.org/ml/gcc-testresults/2006-01/msg01316.html

So what are you recommending people do?  Use the OpenDarwin version  
of cctools instead of relying on Apple's official version?


Brad


Re: Imported GNU Classpath 0.90

2006-03-12 Thread Bradley Lucier

Please let us know ([EMAIL PROTECTED]) if there are any issues
with the new import. It has been tested on x86, x86-64 and ppc-32 on
GNU/Linux and sun-sparc-solaris8 multilib and darwin-pcc 32-bit. But
more testing is helpful (it also includes an update to the fdlibm
library).


I don't know if it is of help, but I report a set of 64-bit  
regression tests on powerpc-apple-darwin8.5.0 here:


http://gcc.gnu.org/ml/gcc-testresults/2006-03/msg00806.html

Over the past couple of months the size of the file mail-report-with- 
warnings.log has increased from about 120KB to about 160KB, but I  
haven't taken the time to check whether these are new tests that are  
failing or whether these are indeed new regressions of old tests.


Brad


Results for 4.2.0 20060320 (experimental) testsuite on powerpc-apple-darwin8.5.0 (-m64 results)

2006-03-21 Thread Bradley Lucier

64-bit powerpc-darwin results be found here:

http://www.math.purdue.edu/~lucier/gcc/test-results/4_2-2006-03-20.gz

The mail-report-with-warnings.log file is again too large to be  
accepted by the gcc-testresults mail list after quite a few weeks  
when it was only about 125K long.


I'm curious about whether any of the changes recently proposed to  
clean up the x86-darwin port can be applied to the 64-bit PowerPC  
darwin compiler; I'm getting the feeling that gcc on ppc64 darwin may  
become something of an orphan.


Brad


Re: Results for 4.2.0 20060320 (experimental) testsuite on powerpc-apple-darwin8.5.0 (-m64 results)

2006-03-22 Thread Bradley Lucier


On Mar 21, 2006, at 11:39 PM, Shantonu Sen wrote:


On Mar 21, 2006, at 12:34 PM, Bradley Lucier wrote:

I'm curious about whether any of the changes recently proposed to  
clean up the x86-darwin port can be applied to the 64-bit PowerPC  
darwin compiler;


Like what? I haven't really seen many cleanups that were x86/darwin- 
specific


I was thinking of this thread

http://gcc.gnu.org/ml/gcc-patches/2006-03/msg01073.html

But perhaps I misunderstood.

Brad



Possible configure problem in mainline?

2006-04-15 Thread Bradley Lucier
I'm trying to build a 64-bit mainline compiler on powerpc-darwin; I  
want gcc to generate 32-bit binaries by default, I just want cc1,  
etc., to be 64-bit binaries so I can compile large files.


This works in 4.1, but not on mainline.  This is reported at

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26892

The problem is that the makefile is trying to link libiconv.dylib,  
which is only 32 bit, into a 64-bit makedepend.


This doesn't happen on 4.1, where configure finds iconv.h, but  
realizes that it can't use libiconv:


[lindv2:gcc/gcc-4.1.0/objdir] lucier% grep iconv build.log
checking for iconv... no, consider installing GNU libiconv
checking iconv.h usability... yes
checking iconv.h presence... yes
checking for iconv.h... yes
checking for iconv... no, consider installing GNU libiconv
checking for iconv.h... yes
checking for iconv... no, consider installing GNU libiconv

For some reason, the tests for iconv in mainline don't rule out using  
libiconv; some details are given in the preceding bugzilla report.


Does anyone have any suggestions?

Brad


Re: Can gcc 4.3.1 handle big function definitions?

2008-09-08 Thread Bradley Lucier

Klaus:

Perhaps your problem is related to PR 26854:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26854

See in particular comment 70, which has some statistics.

If you're building your own gcc, configure gcc with
--enable-gather-detailed-mem-stats and compile your program with
-ftime-report -fmem-report and you'll get more detailed statistics that
might give more insight.


Brad


Mainline bootstrap failure on powerpc64-darwin, but looks generic

2008-09-24 Thread Bradley Lucier

I'm just not having any luck bootstrapping this thing ...

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37639


Should 27_io/ios_base/storage/2.cc be XFAILed on powerpc64-apple-darwin9?

2008-11-19 Thread Bradley Lucier

I'm getting the following failure on powerpc64-apple-darwin9.5.0:

Setting LD_LIBRARY_PATH to :/Users/lucier/programs/gcc/objdirs/ 
mainline/gcc:/Users/lucier/programs/gcc/objdirs/mainline/powerpc64- 
apple-darwin9.5.0/./libstdc++-v3/src/.libs::/Users/lucier/programs/gcc/ 
objdirs/mainline/gcc:/Users/lucier/programs/gcc/objdirs/mainline/ 
powerpc64-apple-darwin9.5.0/./libstdc++-v3/src/.libs

WARNING: program timed out.
FAIL: 27_io/ios_base/storage/2.cc execution test
extra_tool_flags are:
  -include bits/stdc++.h

This test is XFAILed on darwin8.[0-4] with the comment:

// This fails on some versions of Darwin 8 because malloc doesn't return
// NULL even if an allocation fails (filed as Radar 3884894).
// { dg-do run { xfail *-*-darwin8.[0-4].* } }

On my G5 with 8GB of memory 2.exe has an RSIZE of 6.80 GB and a VSIZE  
of 34.57 GB before it times out, which doesn't seem right to me.  And  
it makes the machine pretty sluggish for a while ;-).


Should this be XFAILed on powerpc64-apple-darwin9?

Brad


Re: Should 27_io/ios_base/storage/2.cc be XFAILed on powerpc64-apple-darwin9?

2008-11-20 Thread Bradley Lucier


On Thu, Nov 20, 2008 at 11:17:52AM +0100, Paolo Carlini wrote:

Hi,

Should this be XFAILed on powerpc64-apple-darwin9?
A patch doing that is essentially preapproved if you can confirm  
that in

the meanwhile the malloc bug (Radar 3884894) has been fixed for
i386-apple-darwin and not for powerpc64-apple-darwin. Can you do that?


I don't quite understand: does XFAILing the test mean that the test  
isn't attempted, or just that if it fails it doesn't show up on the  
report (except incrementing the number of expected failures)?  I'd  
like it not to be attempted if it will want 34GB + of virtual memory.


Brad


Re: GCC 4.4.0 Status Report (2008-11-17)

2008-11-20 Thread Bradley Lucier
There has been some discussion here of GCC's reputation and of how to  
classify bugs.


This bug

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26854

has gradually morphed from a compile-time issue to a space issue; if  
it's not fixed for 4.4 (and it appears that it will not be fixed in  
that time frame) then there will have been two consecutive major gcc  
releases where I cannot use the compiler to build one of my  
applications on the machines in my office and at home because they  
each have "only" 8GB of RAM.  The servers at work have 16 and 32GB of  
ram, so I can still build and test it there.


Over the years I have tried to test prerelease versions of gcc with C  
code that is generated by the Gambit Scheme compiler; that code seems  
to be different enough from other tests that it has revealed some  
bugs or inefficiencies in GCC, and people have been very helpful in  
fixing those bugs and eliminating those inefficiencies.  And GCC has  
several features (computed gotos, __builtin_expect, ...) that the  
Gambit Scheme compiler now exploits to generate faster code, so  
certainly the Gambit Scheme community appreciates GCC.
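
(For readers who haven't seen these extensions, a tiny sketch of how a
Scheme->C compiler can use them; this is not Gambit's actual output.)

/* Labels as values ("computed goto") for opcode dispatch, plus
   __builtin_expect to mark an unlikely branch.  */
long interp(const int *ops, long acc)
{
  static void *dispatch[] = { &&op_add1, &&op_halt };
  const int *pc = ops;

  goto *dispatch[*pc];

op_add1:
  acc += 1;
  pc += 1;
  goto *dispatch[*pc];

op_halt:
  if (__builtin_expect(acc < 0, 0))   /* tell GCC this branch is unlikely */
    acc = 0;
  return acc;
}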


But now that 4.3.* and (soon) 4.4.* are pushed out to the general  
public, I have to admit to the other people using Gambit that yes,  
recent versions of GCC require markedly more resources to compile  
code than older versions did.  And, for some applications, the code  
runs more slowly: because of


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=33928

basically all nontrivial operations on large (> 20,000 bits) bignums  
are 10% slower when compiled with 4.3.* and 4.4 than with previous  
compilers.


I really appreciate how helpful the GCC community has been over the  
years and the product that community produces, but I can't see how  
recent versions of GCC can avoid having a worsened reputation among  
at least this small group of users.


Brad


Measuring FSF gcc from 4.1.2 to today on various benchmarks.

2009-05-29 Thread Bradley Lucier
I've put at

http://www.math.purdue.edu/~lucier/bugzilla/9/

some compile-time and run-time statistics related to PR 39157 and PR
33928 and compile times and run times for the programs in the Gambit
Scheme benchmark suite.  The statistics are for 4.1.2 release, 4.2.4
release, 4.3.3 release, 4.4.1 20090522, 4.5.0 20090521 (revision
147758), and 4.5.0 20090521 (revision 147758) with
-fno-forward-propagate; they use (mainly) the set of options

-O1 -fno-math-errno -fschedule-insns2 -fno-trapping-math -fno-strict-aliasing 
-fwrapv -fomit-frame-pointer -fPIC -fno-common -mieee-fp

on a Core 2 quad processor (running basically nothing else at the time).

I would conclude from the statistics that, right now, the cost of
including -fforward-propagate in -O1 overrides any performance benefit
that may result.

Brad



Bootstrap failure configuring in-tree gmp in mainline

2009-07-15 Thread Bradley Lucier
After configuring

Target: x86_64-unknown-linux-gnu
gcc version 4.5.0 20090715 (experimental) [trunk revision 149654] (GCC) 

with

../../mainline/configure --enable-checking=release 
--prefix=/pkgs/gcc-mainline-mem-stats --enable-languages=c 
--enable-gather-detailed-mem-stats

I get the bootstrap error:

Configuring stage 2 in ./gmp
<stuff omitted>
checking how to run the C++ preprocessor... /lib/cpp
configure: error: C++ preprocessor "/lib/cpp" fails sanity check
See `config.log' for more details.
make[2]: *** [configure-stage2-gmp] Error 1
make[2]: Leaving directory `/home/lucier/programs/gcc/objdirs/mainline'
make[1]: *** [stage2-bubble] Error 2
make[1]: Leaving directory `/home/lucier/programs/gcc/objdirs/mainline'
make: *** [bootstrap] Error 2

This is using an in-tree gmp 4.3.0, gmp/config.log reports:

configure:11030: checking how to run the C++ preprocessor
configure:11061:  /home/lucier/programs/gcc/objdirs/mainline/./prev-gcc/g++ 
-B/home/lucier/programs/gcc/objdirs/mainline/./prev-gcc/ 
-B/pkgs/gcc-mainline-mem-stats/x86_64-unknown-linux-gnu/bin/ -nostdinc++ 
-I/home/lucier/programs/gcc/objdirs/mainline/prev-x86_64-unknown-linux-gnu/libstdc++-v3/include/x86_64-unknown-linux-gnu
 
-I/home/lucier/programs/gcc/objdirs/mainline/prev-x86_64-unknown-linux-gnu/libstdc++-v3/include
 
-I/home/lucier/programs/gcc/objdirs/mainline/../../mainline/libstdc++-v3/libsupc++
 
-L/home/lucier/programs/gcc/objdirs/mainline/prev-x86_64-unknown-linux-gnu/libstdc++-v3/src/.libs
 -E -DNO_ASM conftest.cc
/home/lucier/programs/gcc/mainline/gmp/configure: line 11062: 
/home/lucier/programs/gcc/objdirs/mainline/./prev-gcc/g++: No such file or 
directory

configure:11061: /lib/cpp -DNO_ASM conftest.cc
cpp: error trying to exec 'cc1plus': execvp: No such file or directory

Am I missing something?

Brad



Re: Bootstrap failure configuring in-tree gmp in mainline

2009-07-25 Thread Bradley Lucier

Thanks for your reply.

On Jul 25, 2009, at 7:18 AM, Ralf Wildenhues wrote:


Does /home/lucier/programs/gcc/objdirs/mainline/./prev-gcc/g++ exist,


No.


and if yes, is it a functioning executable?

If it doesn't exist, that looks like the toplevel logic for which
languages to build still has a loop hole for --enable-languages=c,
either not properly enabling the C++ compiler for stage 1, or wrongly
overriding CXX, CXX_FOR_BUILD in toplevel Makefile.tpl to point to
nonexistent previous-stage C++ compiler.  I don't know which is the
desired one.

BTW, what's the last , and why does your /lib/cpp  
try to

spawn cc1plus?


I put gmp/config.log at

http://www.math.purdue.edu/~lucier/bugzilla/10/gmp-config.log

and the build log (which is a bit confusing, I did a "make -j 6  
bootstrap > build.log" and then "make bootstrap >>& build.log" (in  
tcsh) to get a clearer message of the error at the end of the log) at


http://www.math.purdue.edu/~lucier/bugzilla/10/build.log

I tried building gcc like this on another machine, and it gets more  
confusing, at least to me.  When I put gmp-4.2.4 or gmp-4.3.0 in-tree  
on RHEL 5 with


leibniz-4% uname -a
Linux leibniz.math.purdue.edu 2.6.18-128.1.16.el5xen #1 SMP Fri Jun  
26 11:10:46 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux


it works; when I put either in-tree on Ubuntu 9.04 with

heine:~/programs/gcc/objdirs/mainline> uname -a
Linux heine.math.purdue.edu 2.6.28-13-generic #45-Ubuntu SMP Tue Jun  
30 22:12:12 UTC 2009 x86_64 GNU/Linux


it fails.

Brad


Re: Bootstrap failure configuring in-tree gmp in mainline

2009-07-25 Thread Bradley Lucier


On Jul 25, 2009, at 12:54 PM, Paolo Bonzini wrote:


Am i missing something?


No, it is a bug due to the build-with-C++ patches.  Please file a  
PR and, in the meanwhile, try --enable-stage1-languages=c,c++


That seemed to work, thanks, bootstrap has gotten past my old problem.


or --enable-build-with-cxx.


This fails with


Configuring stage 1 in ./libcpp

checking dependency style of g++... none
configure: error: no usable dependency style found
make[2]: *** [configure-stage1-libcpp] Error 1
make[2]: Leaving directory `/home/lucier/programs/gcc/objdirs/ 
mainline'

make[1]: *** [stage1-bubble] Error 2
make[1]: Leaving directory `/home/lucier/programs/gcc/objdirs/ 
mainline'

make: *** [bootstrap] Error 2



Brad


Re: Bootstrap failure configuring in-tree gmp in mainline

2009-07-25 Thread Bradley Lucier


On Jul 25, 2009, at 2:16 PM, Bradley Lucier wrote:




On Jul 25, 2009, at 12:54 PM, Paolo Bonzini wrote:


Am i missing something?


No, it is a bug due to the build-with-C++ patches.  Please file a  
PR and, in the meanwhile, try --enable-stage1-languages=c,c++


That seemed to work, thanks, bootstrap has gotten past my old problem.


Ah, I was too quick, it failed again at the next bootstrap stage.


Re: Bootstrap failure configuring in-tree gmp in mainline

2009-08-05 Thread Bradley Lucier


On Jul 25, 2009, at 12:54 PM, Paolo Bonzini wrote:





Am i missing something?


No, it is a bug due to the build-with-C++ patches.  Please file a  
PR and, in the meanwhile, try --enable-stage1-languages=c,c++ or
--enable-build-with-cxx.


I filed PR40950 for this.

I also filed PR40968 for an ICE on bootstrap when configured with
--enable-gather-detailed-mem-stats.


Brad


Shipping gmp and mpfr with gcc-4.0?

2005-02-15 Thread Bradley Lucier
Sorry if this has been answered before.
I installed gmp and mpfr successfully on my Mac G5 and so I can test 
gfortran on my Mac.

I tried this evening to install gmp-4.1.4 and mpfr-2.1.0 on my Solaris 
machines and I failed on the first try.  (I think the default install 
for gmp on my machines is a 64-bit version, but the default for mpfr 
and gcc is 32-bit, so I'm going to have to figure out how to configure 
everything to match.)

Now, I'm not a great systems administrator, but I've been maintaining 
my own unix/linux/MacOS X machines since 1987, and installing gmp and 
mpfr as they come from the default distributions in such a way that gcc 
can use them seems to be nontrivial.

So, my question.  Will gcc-4.0 ship with a properly configured gmp/mpfr 
distribution so gfortran will be built by ./configure; make; make 
install?

Brad


Re: Shipping gmp and mpfr with gcc-4.0?

2005-02-19 Thread Bradley Lucier
On Feb 16, 2005, at 2:13 AM, Eric Botcazou wrote:
I tried this evening to install gmp-4.1.4 and mpfr-2.1.0 on my Solaris
machines and I failed on the first try.  (I think the default install
for gmp on my machines is a 64-bit version, but the default for mpfr
and gcc is 32-bit, so I'm going to have to figure out how to configure
everything to match.)
./configure sparc-sun-solaris2.9 --prefix=xxx --enable-mpfr
After explicitly specifying --build=sparc-sun-solaris2.9 with  
gmp-4.1.4, downloading a more recent mpfr and building it with  
--build=sparc-sun-solaris2.9, specifying

../configure --host=sparc-sun-solaris2.9 --build=sparc-sun-solaris2.9  
--target=sparc-sun-solaris2.9  
--prefix=/export/users/lucier/local/gcc-mainline  
--with-gmp=/pkgs/gmp-4.1.4 --with-mpfr=/pkgs/gmp-4.1.4 ; make -j 1  
bootstrap >& build.log

the build failed the first time gfortran tried to compile something  
with the error

/homes/lucier/programs/gcc/mainline/objdir/gcc/gfortran  
-B/homes/lucier/programs/gcc/mainline/objdir/gcc/  
-B/export/users/lucier/local/gcc-mainline/sparc-sun-solaris2.9/bin/  
-B/export/users/lucier/local/gcc-mainline/sparc-sun-solaris2.9/lib/  
-isystem  
/export/users/lucier/local/gcc-mainline/sparc-sun-solaris2.9/include  
-isystem  
/export/users/lucier/local/gcc-mainline/sparc-sun-solaris2.9/sys- 
include -Wall -fno-repack-arrays -fno-underscoring -c  
../../../libgfortran/intrinsics/selected_int_kind.f90  -fPIC -DPIC -o  
.libs/selected_int_kind.o
ld.so.1: /homes/lucier/programs/gcc/mainline/objdir/gcc/f951: fatal:  
libgmp.so.3: open failed: No such file or directory
gfortran: Internal error: Killed (program f951)
Please submit a full bug report.
See http://gcc.gnu.org/bugs.html> for instructions.
make[3]: *** [selected_int_kind.lo] Error 1
make[3]: Leaving directory  
`/export/users/lucier/programs/gcc/mainline/objdir/sparc-sun- 
solaris2.9/libgfortran'
make[2]: *** [all] Error 2
make[2]: Leaving directory  
`/export/users/lucier/programs/gcc/mainline/objdir/sparc-sun- 
solaris2.9/libgfortran'
make[1]: *** [all-target-libgfortran] Error 2
make[1]: Leaving directory  
`/export/users/lucier/programs/gcc/mainline/objdir'
make: *** [bootstrap] Error 2

So now what?  Not build shared libraries for gmp?  Add /pkgs/gmp-4.1.4  
to my LD_LIBRARY_PATH?  Find another configure option for GCC that I  
overlooked?

This is supposed to be straightforward?
Brad


Will people install gfortran in 4.0? [was Re: Shipping gmp and mpfr with gcc-4.0?]

2005-02-19 Thread Bradley Lucier
On Feb 19, 2005, at 11:18 AM, Eric Botcazou wrote:
So now what?  Not build shared libraries for gmp?  Add /pkgs/gmp-4.1.4
to my LD_LIBRARY_PATH?
The latter.
Well, I can't really require people using the compiler to have 
/pkgs/gcc-4.0/lib, /pkgs/gcc-4.0/lib/sparcv9, *and* /pkgs/gmp-4.1.4 in 
their LD_LIBRARY_PATH, and I think my systems people would balk at 
adding /pkgs/gmp-4.1.4 to the crle path, so perhaps I'll just find out 
how to link the gmp libraries in statically.

But I think that in many installations people simply won't dance 
through these hoops and gfortran will not be installed in 4.0.

Brad


Where to place warning about non-optimized tail and sibling calls

2023-08-01 Thread Bradley Lucier via Gcc
The Gambit Scheme->C compiler has an option to generate more efficient 
code if it knows that all tail and sibling calls in the generated C code 
will be optimized.  If gcc does not, however, optimize a tail or sibling 
call, the generated C code may be incorrect (depending on circumstances).


So I would like to add a warning enabled by -Wdisabled-optimization so 
that if -foptimize-sibling-calls is given and a tail or sibling call is 
not optimized, then a warning is triggered.
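
For concreteness, the kind of call I mean (a hypothetical example, not
actual Gambit output):

/* The recursive call is in tail position; with -foptimize-sibling-calls
   it becomes a jump and the function runs in constant stack space,
   otherwise each call pushes a new frame.  */
long count_down(long n, long acc)
{
  if (n == 0)
    return acc;
  return count_down(n - 1, acc + 1);
}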


I don't quite know where to place the warning.  It would be good if 
there were one piece of code to identify all tail and sibling calls, and 
then another piece that decides whether the optimization can be performed.


I see code in gcc/tree-tailcall.cc

suitable_for_tail_opt_p
suitable_for_tail_call_opt_p

which are called by

tree_optimize_tail_calls_1

which takes an argument

opt_tailcalls

and it's called in one place with opt_tailcalls true and in another 
place with opt_tailcalls false.


So I'm losing the plot here.

There is other code dealing with tail calls in gcc/calls.cc that I don't 
seem to understand at all.


Any advice?

Brad


Re: Where to place warning about non-optimized tail and sibling calls

2023-08-01 Thread Bradley Lucier via Gcc

On 8/1/23 12:51 PM, Paul Koning wrote:

How is it possible to write valid C that is correct only if some optimization 
is done?


Perhaps "incorrect" was the wrong word.  If sibling-call optimization is 
not done, then perhaps the program will blow out the stack, which would 
not happen if the optimization is done.


Also, transforming sibling calls is an optimization for C, but for 
Scheme it's a part of the language.  Translating Scheme to C has to take 
that into account: if there is no sibling-call optimization, then the 
Scheme code is translated to C code that uses a so-called trampoline; if 
there is sibling-call optimization that the Scheme compiler can rely on, 
then taking advantage of the optimization leads to faster Scheme code.
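
A rough sketch of the trampoline style (this is not Gambit's actual
generated code):

/* Instead of making a sibling call to its continuation, each step returns
   it; the driver loop below performs the "call", so the C stack stays at
   constant depth even when GCC does not optimize sibling calls.  */
struct step { struct step (*fn)(long *); };

static struct step finished(long *acc)
{
  struct step s = { 0 };              /* a null fn pointer stops the loop */
  (void) acc;
  return s;
}

static struct step add_one(long *acc)
{
  struct step s = { finished };       /* "tail call" finished by returning it */
  *acc += 1;
  return s;
}

static void trampoline(struct step s, long *acc)
{
  while (s.fn)
    s = s.fn(acc);                    /* one bounce per Scheme tail call */
}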


Brad


Re: Where to place warning about non-optimized tail and sibling calls

2023-08-01 Thread Bradley Lucier via Gcc

On 8/1/23 6:08 PM, David Malcolm wrote:

Or from libgccjit.  FWIW I added it to support Scheme from libgccjit;
see this patch kit:
   https://gcc.gnu.org/ml/gcc-patches/2016-05/msg01287.html

Perhaps there's a case for a frontend attribute for this.
Dave


Thanks.  I thought a front-end warning might be enough, as one can add 
-Werror to the command line to fail if it can't optimize a sibling call.


Is there a reasonable place to put a warning if 
flag_optimize_sibling_calls is true and a sibling call is *not* optimized?


Brad


Re: Where to place warning about non-optimized tail and sibling calls

2023-08-02 Thread Bradley Lucier via Gcc

On 8/1/23 6:08 PM, David Malcolm wrote:

FWIW I added it to support Scheme from libgccjit;


Do you know of any Scheme using libgccjit?

BTW, I tried to build mainline with --enable-coverage to see which code 
is executed with -foptimize-sibling-calls, but bootstrap fails with


/home/lucier/programs/gcc/objdirs/gcc-mainline/./prev-gcc/xg++ 
-B/home/lucier/programs/gcc/objdirs/gcc-mainline/./prev-gcc/ 
-B/pkgs/gcc-mainline/x86_64-pc-linux-gnu/bin/ -nostdinc++ 
-B/home/lucier/programs/gcc/objdirs/gcc-mainline/prev-x86_64-pc-linux-gnu/libstdc++-v3/src/.libs 
-B/home/lucier/programs/gcc/objdirs/gcc-mainline/prev-x86_64-pc-linux-gnu/libstdc++-v3/libsupc++/.libs 

-I/home/lucier/programs/gcc/objdirs/gcc-mainline/prev-x86_64-pc-linux-gnu/libstdc++-v3/include/x86_64-pc-linux-gnu 

-I/home/lucier/programs/gcc/objdirs/gcc-mainline/prev-x86_64-pc-linux-gnu/libstdc++-v3/include 
 -I/home/lucier/programs/gcc/gcc-mainline/libstdc++-v3/libsupc++ 
-L/home/lucier/programs/gcc/objdirs/gcc-mainline/prev-x86_64-pc-linux-gnu/libstdc++-v3/src/.libs 
-L/home/lucier/programs/gcc/objdirs/gcc-mainline/prev-x86_64-pc-linux-gnu/libstdc++-v3/libsupc++/.libs 
 -fno-PIE -c   -g -O2 -fno-checking -gtoggle -DIN_GCC  -fprofile-arcs 
-ftest-coverage -frandom-seed=opts.o -O0 -fkeep-static-functions 
-fno-exceptions -fno-rtti -fasynchronous-unwind-tables -W -Wall 
-Wno-narrowing -Wwrite-strings -Wcast-qual -Wmissing-format-attribute 
-Wconditionally-supported -Woverloaded-virtual -pedantic -Wno-long-long 
-Wno-variadic-macros -Wno-overlength-strings -Werror -fno-common 
-DHAVE_CONFIG_H -fno-PIE -I. -I. -I../../../gcc-mainline/gcc 
-I../../../gcc-mainline/gcc/. -I../../../gcc-mainline/gcc/../include 
-I../../../gcc-mainline/gcc/../libcpp/include 
-I../../../gcc-mainline/gcc/../libcody 
-I../../../gcc-mainline/gcc/../libdecnumber 
-I../../../gcc-mainline/gcc/../libdecnumber/bid -I../libdecnumber 
-I../../../gcc-mainline/gcc/../libbacktrace   -o opts.o -MT opts.o -MMD 
-MP -MF ./.deps/opts.TPo ../../../gcc-mainline/gcc/opts.cc
../../../gcc-mainline/gcc/opts.cc: In function 'void 
print_filtered_help(unsigned int, unsigned int, unsigned int, unsigned 
int, gcc_options*, unsigned int)':
../../../gcc-mainline/gcc/opts.cc:1687:26: error: '  ' directive output 
may be truncated writing 2 bytes into a region of size between 1 and 256 
[-Werror=format-truncation=]

 1687 |   "%s  %s", help, _(use_diagnosed_msg));
  |  ^~
../../../gcc-mainline/gcc/opts.cc:1686:22: note: 'snprintf' output 3 or 
more bytes (assuming 258) into a destination of size 256

 1686 | snprintf (new_help, sizeof new_help,
  | ~^~~
 1687 |   "%s  %s", help, _(use_diagnosed_msg));
  |   ~
cc1plus: all warnings being treated as errors