Re: GCC optimizes integer overflow: bug or feature?

2006-12-29 Thread Paul Eggert
Roberto Bagnara [EMAIL PROTECTED] writes:

 My reading, instead, is that C99 requires unsigned long long int
 to have exactly the same number of bits as long long int.

Yes, that's correct.  Sorry, I got confused between C89
(which is what that Tandem NSK version supports) and C99.


Re: GCC optimizes integer overflow: bug or feature?

2006-12-29 Thread Paul Eggert
Paolo Bonzini [EMAIL PROTECTED] writes:

 Or you can do, since elsewhere in the code you compute time_t_max:
   for (j = 1; j <= time_t_max / 2 + 1; j *= 2)

 No, this does not work.  It would work to have:

   for (j = 1;;)
     {
       if (j > time_t_max / 2)
         break;
       j *= 2;
     }

 Oops.

Oops is my thought as well.  Even the second version of your code is
incorrect in general, since it assumes that 'int' and 'time_t' are the
same width.

It's surprisingly tricky to get right.  This underscores why -O2's
changes in this area are so worrisome.

What I eventually ended up doing -- and this was before reading the
above-quoted email -- was this:

  for (j = 1; ; j <<= 1)
    if (! bigtime_test (j))
      return 1;
    else if (INT_MAX / 2 < j)
      break;

This is portable and should work on all real platforms.

But that was the easy code!  Here's the harder stuff, the
code that computes time_t_max:

  static time_t time_t_max;
  static time_t time_t_min;
  for (time_t_max = 1; 0 < time_t_max; time_t_max *= 2)
    continue;
  time_t_max--;
  if ((time_t) -1 < 0)
    for (time_t_min = -1; (time_t) (time_t_min * 2) < 0; time_t_min *= 2)
      continue;

Obviously this code is buggy, at least in theory, due to the signed
integer overflows.  But rewriting it is not so easy, since we have no
INT_MAX to rescue us as we did in the bigtime_test loop.  Here's what
I eventually came up with:

  for (;;)
    {
      time_t t = (time_t_max << 1) + 1;
      if (t <= time_t_max)
        break;
      time_t_max = t;
    }
  time_t_min = - ((time_t) ~ (time_t) 0 == (time_t) -1) - time_t_max;

This isn't guaranteed to be portable by C99 either, of course; among
other things, left-shift has undefined behavior on signed integer
overflow.  I am relying on your heuristic advice to use left shift
rather than multiplication by 2, so that GCC won't mess up here.  But
it's a weak heuristic and I'm afraid it doesn't inspire a whole lot of
confidence.


Re: does zlib really need to build multilib?

2006-12-29 Thread Paolo Bonzini

Jack Howarth wrote:

   I noticed that in gcc trunk and gcc 4.2 branch
that multilib builds of zlib occur. Does gcc
actually use the multilib zlib?


Not for the host zlib.


For instance on
x86_64 linux does the 32-bit zlib get used or on
Darwin does the 64-bit zlib get used? We are
considering using the --with-system-zlib option
when building gcc 4.2 but MacOS X 10.4 and earlier
lacks a 64-bit zlib support library. Thanks in
advance for any clarifications.


If you want to prepare a patch to pass --disable-multilib in the host 
configure arguments, and test it adequately, I'll be happy to review it.


Paolo



Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Ian Lance Taylor
Paul Eggert [EMAIL PROTECTED] writes:

   * NEWS: AC_PROG_CC, AC_PROG_CXX, and AC_PROG_OBJC now take an
   optional second argument specifying the default optimization
   options for GCC.  These optimizations now default to -O2 -fwrapv
   instead of to -O2.  This partly attacks the problem reported by
   Ralf Wildenhues in
   http://lists.gnu.org/archive/html/bug-gnulib/2006-12/msg00084.html
   and in http://gcc.gnu.org/ml/gcc/2006-12/msg00459.html.

I fully appreciate that there is a real problem here which needs to be
addressed, but this does not seem like the best solution to me.  A
great number of C programs are built using autoconf.  If we make this
change, then they will all be built with -fwrapv.  That will disable
useful loop optimizations, optimizations which are enabled by default
by gcc's competitors.  The result will be to make gcc look worse than
it is.

You will recall that the problem with the original code was not in the
loop optimizers; it was in VRP.  I think we would be better served by
changing VRP to not rely on undefined signed overflow.  Or, at least,
to not rely on it without some additional option.

If we make that change on the 4.2 branch and on mainline, then no
autoconf change is necessary, and the default gcc behaviour will be
less confusing.

Does anybody think that Paul's proposed patch to autoconf would be
better than changing VRP?

Ian


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Andrew Pinski
 
 Paul Eggert [EMAIL PROTECTED] writes:
 
  * NEWS: AC_PROG_CC, AC_PROG_CXX, and AC_PROG_OBJC now take an
  optional second argument specifying the default optimization
  options for GCC.  These optimizations now default to -O2 -fwrapv
  instead of to -O2.  This partly attacks the problem reported by
  Ralf Wildenhues in
  http://lists.gnu.org/archive/html/bug-gnulib/2006-12/msg00084.html
  and in http://gcc.gnu.org/ml/gcc/2006-12/msg00459.html.
 
 Does anybody think that Paul's proposed patch to autoconf would be
 better than changing VRP?

I think both ways are the incorrect way forward.
What about coding the loops like:

if (sizeof(time_t) == sizeof(unsigned int))
{
  // do loop using unsigned int
  // convert to time_t and then see if an overflow happened
}
//etc. for the other type


This way you don't depend on either the implementation-defined behavior
of converting between integers with different sizes or the undefined behavior
of signed type overflow.


Thanks,
Andrew Pinski



[heads-up] disabling ../configure --disable-bootstrap && make bootstrap

2006-12-29 Thread Paolo Bonzini
As per the subject.  The upcoming merge of toplevel libgcc will only 
work either for disabled bootstrap, or with the toplevel bootstrap 
mechanism.  For this reason, we are now disabling ../configure 
--disable-bootstrap && make bootstrap.  The correct way to bootstrap is 
to just use ./configure (followed by make, make bootstrap or similar).


Next week, after the merge, the bootstrap rules in the gcc directory 
will go away.


Ciao,

Paolo


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Gabriel Dos Reis
Andrew Pinski [EMAIL PROTECTED] writes:

|  
|  Paul Eggert [EMAIL PROTECTED] writes:
|  
| * NEWS: AC_PROG_CC, AC_PROG_CXX, and AC_PROG_OBJC now take an
| optional second argument specifying the default optimization
| options for GCC.  These optimizations now default to -O2 -fwrapv
| instead of to -O2.  This partly attacks the problem reported by
| Ralf Wildenhues in
| http://lists.gnu.org/archive/html/bug-gnulib/2006-12/msg00084.html
| and in http://gcc.gnu.org/ml/gcc/2006-12/msg00459.html.
|  
|  Does anybody think that Paul's proposed patch to autoconf would be
|  better than changing VRP?
| 
| I think both ways are incorrect way forward.
| What about coding the loops like:
| 
| if (sizeof(time_t) == sizeof(unsigned int))
| {
|   // do loop using unsigned int
|   // convert to time_t and then see if an overflow happened
| }
| //etc. for the other type

Yuck.


If the above is the only way forward without an Autoconf change, I would highly
recommend the Autoconf change if GCC optimizers highly value benchmarks
over running real world code.

-- Gaby


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Gabriel Dos Reis
Ian Lance Taylor [EMAIL PROTECTED] writes:

| Paul Eggert [EMAIL PROTECTED] writes:
| 
|  * NEWS: AC_PROG_CC, AC_PROG_CXX, and AC_PROG_OBJC now take an
|  optional second argument specifying the default optimization
|  options for GCC.  These optimizations now default to -O2 -fwrapv
|  instead of to -O2.  This partly attacks the problem reported by
|  Ralf Wildenhues in
|  http://lists.gnu.org/archive/html/bug-gnulib/2006-12/msg00084.html
|  and in http://gcc.gnu.org/ml/gcc/2006-12/msg00459.html.
| 
| I fully appreciate that there is a real problem here which needs to be
| addressed, but this does not seem like the best solution to me.  A
| great number of C programs are built using autoconf.  If we make this
| change, then they will all be built with -fwrapv.  That will disable
| useful loop optimizations, optimizations which are enabled by default
| by gcc's competitors.  The result will be to make gcc look worse than
| it is.

well, given current situation, gcc looks worse :-)

| 
| You will recall that the problem with the original code was not in the
| loop optimizers; it was in VRP.  I think we would be better served by
| changing VRP to not rely on undefined signed overflow.  Or, at least,
| to not rely on it without some additional option.
| 
| If we make that change on the 4.2 branch and on mainline, then no
| autoconf change is necessary, and the default gcc behaviour will be
| less confusing.
| 
| Does anybody think that Paul's proposed patch to autoconf would be
| better than changing VRP?

If GCC optimizers dare take existing practice outside benchmarks into
account, then my first choice would be to make the optimizers
understand that not all undefined behaviours are equal, therefore
honor some existing practice (your proposal).  

If that is infeasible for GCC, then the way forward is to tell GCC
users to effectively turn on -fwrapv (Paul's proposal) so as to
compile non-benchmark programs and expect them to run as before.

-- Gaby


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Andrew Pinski
 
 Andrew Pinski [EMAIL PROTECTED] writes:
 
 |  
 |  Paul Eggert [EMAIL PROTECTED] writes:
 |  
 |   * NEWS: AC_PROG_CC, AC_PROG_CXX, and AC_PROG_OBJC now take an
 |   optional second argument specifying the default optimization
 |   options for GCC.  These optimizations now default to -O2 -fwrapv
 |   instead of to -O2.  This partly attacks the problem reported by
 |   Ralf Wildenhues in
 |   http://lists.gnu.org/archive/html/bug-gnulib/2006-12/msg00084.html
 |   and in http://gcc.gnu.org/ml/gcc/2006-12/msg00459.html.
 |  
 |  Does anybody think that Paul's proposed patch to autoconf would be
 |  better than changing VRP?
 | 
 | I think both ways are incorrect way forward.
 | What about coding the loops like:
 | 
 | if (sizeof(time_t) == sizeof(unsigned int))
 | {
 |   // do loop using unsigned int
 |   // convert to time_t and then see if an overflow happened
 | }
 | //etc. for the other type
 
 Yuck.

It might be Yuck but that is the only real portable way to do this loop.
Now if other compilers actually treat signed overflow as undefined, you
will still run into this, even if GCC gets changed.

 If the above is the only without Autoconf change, I would highly
 recommend Autoconf change if GCC optimizers highly value benchmarks
 over running real world code.

Which one, mine or Paul's?  Because Paul's change just makes everything
compile with -fwrapv anyway, which is not going to work for other
compilers which could actually treat signed overflow as undefined in any
form.

Since autoconf is not only for compiling with GCC but any compiler, you would
run into it with a different compiler anyway.


-- Pinski


Re: g++ doesn't unroll a loop it should unroll

2006-12-29 Thread Geert Bosch


On Dec 13, 2006, at 17:09, Denis Vlasenko wrote:


# g++ -c -O3 toto.cpp -o toto.o
# g++ -DUNROLL -O3 toto.cpp -o toto_unroll.o -c
# size toto.o toto_unroll.o
   text    data     bss     dec     hex filename
    525       8       1     534     216 toto.o
    359       8       1     368     170 toto_unroll.o

How can the C++ compiler know that you are willing to trade
so much text size for performance?


Huh? The unrolled version is 30% smaller, isn't it?

  -Geert



Re: configuration options policy (at toplevel or only inside gcc/)?

2006-12-29 Thread Gerald Pfeifer
On Thu, 14 Dec 2006, Basile STARYNKEVITCH wrote:
 I really think that such information should go into GCC internal
 documentation, where I was not able to find it out. Do you believe
 that some of the descriptions in this thread and in the Wiki page just
 cited should go into the documentation? Is the documentation expected
 to help new GCC contributors, or is it only for users?

Both.  As far as contributors go, most documentation on processes etc.
is on our web pages and the Wiki, whereas documentation of internals
is in the texinfo documentation (and source code, of course ;-).

 In particular, IMHO the commands to re-generate the configure scripts 
 should be documented if the documentation also targets potential GCC 
 contributors.

I agree, that sounds useful.  DJ, Alexandre, Paolo, what's your take
on this?  Any recommendations?

 I could write (by copying phrases from the wiki page) a few sentences
 into the documentation (gcc/doc/sourcebuild.texi)? Is it worthwhile;
 in other words for whom is this documentation written: for users of
 GCC (including the few people compiling GCC to use it) or for
 potential contributors (GCC hackers)?

Both. ;-)  Let's see what our configury maintainers think about your
proposal.

Thanks,
Gerald

Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Gabriel Dos Reis
Andrew Pinski [EMAIL PROTECTED] writes:

|  
|  Andrew Pinski [EMAIL PROTECTED] writes:
|  
|  |  
|  |  Paul Eggert [EMAIL PROTECTED] writes:
|  |  
|  | * NEWS: AC_PROG_CC, AC_PROG_CXX, and AC_PROG_OBJC now take an
|  | optional second argument specifying the default optimization
|  | options for GCC.  These optimizations now default to -O2 -fwrapv
|  | instead of to -O2.  This partly attacks the problem reported by
|  | Ralf Wildenhues in
|  | http://lists.gnu.org/archive/html/bug-gnulib/2006-12/msg00084.html
|  | and in http://gcc.gnu.org/ml/gcc/2006-12/msg00459.html.
|  |  
|  |  Does anybody think that Paul's proposed patch to autoconf would be
|  |  better than changing VRP?
|  | 
|  | I think both ways are incorrect way forward.
|  | What about coding the loops like:
|  | 
|  | if (sizeof(time_t) == sizeof(unsigned int))
|  | {
|  |   // do loop using unsigned int
|  |   // convert to time_t and then see if an overflow happened
|  | }
|  | //etc. for the other type
|  
|  Yuck.
| 
| It might be Yuck but that is the only real portable way to do this loop.

You have not shown all the real code that needs to go in the blanks, but
I have some doubt as to whether what would come out would be portable,
and even if it would be portable, whether it would be readable, and
even if it would be readable, whether it would be maintainable.

| Now if other compilers actually treat signed overflow as undefined, you
| will still run into this, even if GCC gets changed.

That indicates a fundamental problem with the stance GCC optimizers
have taken recently.

For this _specific_ instance of the general problem, C++ users could
use numeric_limits<time_t>::max() and get away with it, but I don't
believe such a solution (or the one you propose or similar I've seen)
to this specific instance generalizes to portable, readable and
maintainable solution to the general problem.   

|  If the above is the only without Autoconf change, I would highly
|  recommend Autoconf change if GCC optimizers highly value benchmarks
|  over running real world code.
| 
| Which one, mine or Paul's?

If what you propose is the only way out, and there is no way to make
GCC optimizers reasonable, then I believe Paul's proposal is the next
option. 

-- Gaby


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Andrew Pinski
 
 |  If the above is the only without Autoconf change, I would highly
 |  recommend Autoconf change if GCC optimizers highly value benchmarks
 |  over running real world code.
 | 
 | Which one, mine or Paul's?
 
 If what you propose is the only way out, and there is no way to make
 GCC optimizers reasonable, then I believe Paul's proposal is the next
 option. 

But that still does not address the issue that this is not just about
GCC any more, since autoconf can be used with many different compilers and
is right now.  So if you change autoconf to default to -fwrapv and someone
comes along and tries to use it with, say, ACC (a made-up compiler), the
loop goes into an infinite loop because they treat (like GCC did) signed
type overflow as undefined; autoconf still becomes an issue.

If you want to make the code more readable and maintainable, you can use macros 
like:

MAX_USING_TYPE(type, othertype, max) \
{ \
 ... \
 max = (othertype) _real_max; \
}


MAX_TYPE(type, max) \
{ \
  if (sizeof(type)==sizeof(unsigned int)) \
    MAX_USING_TYPE(unsigned int, type, max); \
  else if (... \
  else \
    { \
      printf("Need another integral type sized %d\n", sizeof(type)); \
      abort (); \
    } \
}


Yes, people think macros can be less readable, but this one case actually makes
it readable.

Thanks,
Andrew Pinski


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Gabriel Dos Reis
Andrew Pinski [EMAIL PROTECTED] writes:

|  
|  |  If the above is the only without Autoconf change, I would highly
|  |  recommend Autoconf change if GCC optimizers highly value benchmarks
|  |  over running real world code.
|  | 
|  | Which one, mine or Paul's?
|  
|  If what you propose is the only way out, and there is no way to make
|  GCC optimizers reasonable, then I believe Paul's proposal is the next
|  option. 
| 
| But that still does not address the issue is that this is not just about
| GCC any more since autoconf can be used many different compilers and is right
| now.

The Autoconf change, as I see it, is to activate the switches when a
recent enough version of GCC is detected, as I understand it is the primary
compiler at issue.

|  So if you change autoconf to default to -fwrapv and someone comes alongs
| and tries to use it with say ACC (some made up compiler right now).  The loop
| goes into an infinite loop because they treat (like GCC did) signed type
| overflow as undefined, autoconf still becomes an issue.
| 
| If you want to make the code more readable and maintainable, you can use
| macros like:

CPP hackery seldom repairs a fundamentally problematic construct.

| 
| MAX_USING_TYPE(type, othertype, max) \
| { \
|  ... \
|  max = (othertype) _real_max; \
| }
| 
| 
| MAX_TYPE(type, max) \
| { \
|   if (sizeof(type)==sizeof(unsigned int))\
| MAX_USING_TYPE(unsigned int, type, max);
|   else if (. \
|   else \
| {
|   printf("Need another integral type sized %d\n", sizeof(type)); \
|   abort (); \
| } \
| }
| 
| 
| Yes people think macros can be less readable, but this one case actually makes
| it readable.

I dispute it is readable.  More fundamentally, I don't see how that
magically becomes more portable, and maintainable.  The set of
built-in integer types is NOT given by a closed list.  I posit that
any solution based on sizeof hackery is fundamentally non-portable and
broken. 

-- Gaby


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Daniel Berlin

On 29 Dec 2006 07:55:59 -0800, Ian Lance Taylor [EMAIL PROTECTED] wrote:

Paul Eggert [EMAIL PROTECTED] writes:

   * NEWS: AC_PROG_CC, AC_PROG_CXX, and AC_PROG_OBJC now take an
   optional second argument specifying the default optimization
   options for GCC.  These optimizations now default to -O2 -fwrapv
   instead of to -O2.  This partly attacks the problem reported by
   Ralf Wildenhues in
   http://lists.gnu.org/archive/html/bug-gnulib/2006-12/msg00084.html
   and in http://gcc.gnu.org/ml/gcc/2006-12/msg00459.html.

I fully appreciate that there is a real problem here which needs to be
addressed, but this does not seem like the best solution to me.  A
great number of C programs are built using autoconf.  If we make this
change, then they will all be built with -fwrapv.  That will disable
useful loop optimizations, optimizations which are enabled by default
by gcc's competitors.  The result will be to make gcc look worse than
it is.

You will recall that the problem with the original code was not in the
loop optimizers; it was in VRP.  I think we would be better served by
changing VRP to not rely on undefined signed overflow.  Or, at least,
to not rely on it without some additional option.



Actually, I seriously disagree with both patches.

Nobody has yet showed that any significant number of programs actually
rely on this undefined behavior.  All they have shown is that we have
one program that does, and that some people can come up with loops
that break if you make signed overflow undefined.

OTOH, people who rely on signed overflow being wraparound generally
*know* they are relying on it.
Given this seems to be some  small number of people and some small
amount of code (since nobody has produced any examples showing this
problem is rampant, in which case i'm happy to be proven wrong), why
don't they just compile *their* code with -fwrapv?

I posted numbers the last time this discussion came up, from both GCC
and XLC, that showed that making signed overflow wraparound can cause
up to a 50% performance regression in *real world* mathematical
fortran and C codes  due to not being able to perform loop
optimizations.
Note that these were not just *my* numbers, this is what the XLC guys
found as well.

In fact, what they told me was that since they made their change  in
1991, they have had *1* person who  reported a program that didn't
work.
This is just the way the world goes.  It completely ruins dependence
analysis, interchange, fusion, distribution, and just about everything
else.  Hell, you can't even do a good job of unrolling because you
can't estimate loop bounds anymore.

I'll also point out that *none* of these codes that rely on signed
overflow wrapping will work on any *other* compiler as well, as they
all optimize it.

Most even optimize *unsigned* overflow to be undefined in loops at
high opt levels (XLC does it at -O3+), and warn about it being done,
because this gives them an additional 20-30% performance benefit (in
particular on 32 bit fortran codes that are now run on 64 bit
computers, as the  induction variables are usually still 32 bit, but
they have to cast to 64 bit to index into arrays.  Without them
assuming unsigned integer overflow is undefined for ivs, they can't do
*any* iv related optimization here because the wraparound point would
change).  Since XLC made this change in 1993, they have had 2 bug
reports out of hundreds of thousands that were attributable to doing
this.

I believe what we have here is a very vocal minority.  I will continue
to believe so until someone provides real world counter evidence that
people do, and *need to*, rely on signed overflow being wraparound to
a degree that we should disable the optimization.

--Dan


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Gabriel Dos Reis
Daniel Berlin [EMAIL PROTECTED] writes:

[...]

| In fact, what they told me was that since they made their change  in
| 1991, they have had *1* person who  reported a program that didn't
| work.

And GCC made the change recently and got yy reports.  That might say
something about both compilers user base.  Or not.

Please, feel free to ignore those that don't find the transformations
appropriate, they are just free software written by vocal minority.

-- Gaby


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Daniel Berlin

On 29 Dec 2006 19:33:29 +0100, Gabriel Dos Reis
[EMAIL PROTECTED] wrote:

Daniel Berlin [EMAIL PROTECTED] writes:

[...]

| In fact, what they told me was that since they made their change  in
| 1991, they have had *1* person who  reported a program that didn't
| work.

And GCC made the change recently and got yy reports.  That might say
something about both compilers user base.  Or not.


Right, because the way we should figure out what the majority of our
users want is to listen to 3 people on a developer list instead of
looking through the means we give users to give feedback, which is
through bug reports.
We've gotten a total of about 10 reports at last count, in the many
years we've been optimizing this.


Please, feel free to ignore those that don't find the transformations
appropriate, they are just free software written by vocal minority.


Wow Gaby, this sure is useful evidence, thanks for providing it.

I'm sure no matter what argument i come up with, you'll just explain it away.
The reality is the majority of our users seem to care more about
whether they have to write typename in front of certain declarations
than they do about signed integer overflow.


Re: Question on BOOT_CFLAGS vs. CFLAGS

2006-12-29 Thread Gerald Pfeifer
On Fri, 15 Dec 2006, Paolo Bonzini wrote:
 http://gcc.gnu.org/install/build.html
 The counter quote is obviously wrong, thanks for the report.

If I see this correctly, Mike's quote came from our installation 
documentation in gcc/doc/install.texi.  Are you going to have a
stab at that, based on Mike's report?

Gerald


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Robert Dewar

Daniel Berlin wrote:


I'm sure no matter what argument i come up with, you'll just explain it away.
The reality is the majority of our users seem to care more about
whether they have to write typename in front of certain declarations
than they do about signed integer overflow.


I have no idea how you know this, to me ten reports seems a lot for
something like this.



Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Richard Guenther

On 12/29/06, Robert Dewar [EMAIL PROTECTED] wrote:

Daniel Berlin wrote:

 I'm sure no matter what argument i come up with, you'll just explain it away.
 The reality is the majority of our users seem to care more about
 whether they have to write typename in front of certain declarations
 than they do about signed integer overflow.

I have no idea how you know this, to me ten reports seems a lot for
something like this.


Not compared to the number of type-based aliasing bugs reported.

Richard.


Re: Compiler loop optimizations

2006-12-29 Thread Christian Sturn
  1) For the function foo10:
  The if-block following if( i == 15 ) will be never executed since
  'i' will never become 15 here. So, this entire block could be
  removed without changing the semantics. This would improve the
  program execution since the if-condition does not need to be
  evaluated in each loop iteration. Can this code transformation be
  automatically performed by a compiler?
 Yes
 If so, which techniques/analyses and optimizations must be applied?
 There are a number of ways to do it, but the easiest is probably value
 range propagation.
 
 Would gcc simplify this loop?
 
 yes

Thank you for your answer. Is there any chance to have gcc dump out
the optimized code in the form of the source-level language, e.g. can I run
gcc with some optimizations and see how the compiler modified my C
source code? 
Up to now, I used the -S flag to produce the optimized code in
assembly. But at that level it's hard to recognize what optimizations gcc
performed.

Regards,
Christian


Re: Compiler loop optimizations

2006-12-29 Thread Ian Lance Taylor
Christian Sturn [EMAIL PROTECTED] writes:

 Thank you for your answer. Is there any chance to have gcc dump out
 an optimized code in the form the source level language, e.g. can I run
 gcc with some optimizations and see how the compiler modified my C
 source code? 

You can get an approximation using -fdump-tree-*.  See the
documentation.  That is usually enough to see how the optimizations
affected your code.

There is no support for dumping actual valid source code, though, and
it is unlikely that there ever will be.

Ian


Re: Compiler loop optimizations

2006-12-29 Thread Robert Dewar

Ian Lance Taylor wrote:

Christian Sturn [EMAIL PROTECTED] writes:


Thank you for your answer. Is there any chance to have gcc dump out
an optimized code in the form the source level language, e.g. can I run
gcc with some optimizations and see how the compiler modified my C
source code? 


You can get an approximation using -fdump-tree-*.  See the
documentation.  That is usually enough to see how the optimizations
affected your code.

There is no support for dumping actual valid source code, though, and
it is unlikely that there ever will be.


And indeed it is not in general possible; there are many optimizations
that cannot be expressed in valid C.


Ian




Re: Compiler loop optimizations

2006-12-29 Thread Christian Sturn
On Fri, 29 Dec 2006 15:03:51 -0500
Robert Dewar [EMAIL PROTECTED] wrote:

  There is no support for dumping actual valid source code, though,
  and it is unlikely that there ever will be.
 
 And indeed it is not in general possible, there are many optimizations
 that cannot be expressed in valid C.

Why not? Could you provide an example.

Cannot every assembly program (each optimized C program can
be dumped as assembly code with -S) be transformed into an equivalent
C program?

Christian


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Gabriel Dos Reis
Daniel Berlin [EMAIL PROTECTED] writes:

| On 29 Dec 2006 19:33:29 +0100, Gabriel Dos Reis
| [EMAIL PROTECTED] wrote:
|  Daniel Berlin [EMAIL PROTECTED] writes:
| 
|  [...]
| 
|  | In fact, what they told me was that since they made their change  in
|  | 1991, they have had *1* person who  reported a program that didn't
|  | work.
| 
|  And GCC made the change recently and got yy reports.  That might say
|  something about both compilers user base.  Or not.
| 
| Right, because the way we should figure out what the majority our
| users want is to listen to 3 people on a developer list instead of
| looking through the means we give users to give feedback, which is
| through bug reports.

And surely, this specific issue did not come from users through a bug
report.

| We've gotten a total of about 10 reports at last count, in the many
| years we've been optimizing this.
| 
|  Please, feel free to ignore those that don't find the transformations
|  appropriate, they are just free software written by vocal minority.
| 
| Wow Gaby, this sure is useful evidence, thanks for providing it.
| 
| I'm sure no matter what argument i come up with, you'll just explain it away.

Not really.  I've come to *agree with you* that we should just ignore
those that don't find the transformation useful for real code: they
are vocal minority.  You have strong data that show that since that
transformation has been done in another compiler since 1991, only 1
person reported a program that didn't work.  The count of 10 reports
is most certainly to be accounted for uncertainty inherent to
measurement tools, and concur with the number 1 reported for the other
compiler since 1991. 

Do we have evidence that real world code has been broken?  Barely.
People just invent these things out of the air.  They showed some
codes; has anybody certified the authenticity of the code?  I've seen
nothing to that effect. 

Consequently, if the vocal minority insists, we can point them to the
paragraphs of the C standard that declare the operations
undefined. And if they really do complain that the transformations
break real world code, we can decree that they are part of the 3
people on the developer list, therefore not part of the users that should
be listened to.

| The reality is the majority of our users seem to care more about
| whether they have to write typename in front of certain declarations
| than they do about signed integer overflow.

yes, let's care about syntax, semantics is unimportant.

-- Gaby


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Daniel Berlin

On 29 Dec 2006 20:15:01 +0100, Gabriel Dos Reis
[EMAIL PROTECTED] wrote:

Daniel Berlin [EMAIL PROTECTED] writes:

| On 29 Dec 2006 19:33:29 +0100, Gabriel Dos Reis
| [EMAIL PROTECTED] wrote:
|  Daniel Berlin [EMAIL PROTECTED] writes:
| 
|  [...]
| 
|  | In fact, what they told me was that since they made their change  in
|  | 1991, they have had *1* person who  reported a program that didn't
|  | work.
| 
|  And GCC made the change recently and got yy reports.  That might say
|  something about both compilers user base.  Or not.
| 
| Right, because the way we should figure out what the majority our
| users want is to listen to 3 people on a developer list instead of
| looking through the means we give users to give feedback, which is
| through bug reports.

And surely, this specific issue did not come from users through a bug
report.

| We've gotten a total of about 10 reports at last count, in the many
| years we've been optimizing this.
|
|  Please, feel free to ignore those that don't find the transformations
|  appropriate, they are just free software written by vocal minority.
|
| Wow Gaby, this sure is useful evidence, thanks for providing it.
|
| I'm sure no matter what argument i come up with, you'll just explain it away.

Not really.  I've come to *agree with you* that we should just ignore
those that don't find the transformation useful for real code: they
are vocal minority.


You can have all the sarcasm you want, but maybe instead of sarcasm,
you should produce real data to contradict our bug reports, and the
experiences of other compilers in the field (note that Seongbae Park
tells me Sun had the same level of complaint, i.e., one report, about
their compiler doing this as well).

Basically, your argument boils down to all supporting data is wrong,
the three people on the mailing list are right, and there are millions
more behind them that just couldn't make it and have never complained
in the past.

Don't buy it.
Put up or shut up for once.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Gabriel Dos Reis
Daniel Berlin [EMAIL PROTECTED] writes:

[...]

| Basically, your argument boils down to all supporting data is wrong,

Really?

Or were you just 

 # You can have all the sarcasm you want, but maybe instead of sarcasm,


Otherwise, you have a serious problem hearing anything contrary to
your firm belief.

-- Gaby


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Thomas Neumann
 For this _specific_ instance of the general problem, C++ users could
 use numeric_limits<time_t>::max() and get away with it, but I don't
 believe such a solution (or the one you propose or similar I've seen)
 to this specific instance generalizes to portable, readable and
 maintainable solution to the general problem.   
While a solution as generic as numeric_limits is hard to do in C, a
simple calculation of the largest positive value can be done in C:

#define MAXSIGNEDVALUE(x) (~((~((x)0))<<(8*sizeof(x)-1)))

Admittedly this simple definition still overflows during constant folding
(see below for a more complex one that avoids this), but you can
calculate the maximum value with a not-too-complex macro.

This should be enough for most use cases; if you need defined overflows,
use unsigned data types. And in the rare cases that this might not be
possible, use autoconf magic to find out the types/constants you have to
use. Your code will not be portable to other compilers anyway.
Restricting the optimizer by default just for these obscure corner cases
does not seem justified IMHO, especially if other compilers behave the same.

Thomas

Longer definition that avoids undefined overflows:

#define MAXSIGNEDVALUE(x) ((x)(~((~((unsigned long \
long)0))<<(8*sizeof(x)-1))))



Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Daniel Berlin

On 29 Dec 2006 21:04:08 +0100, Gabriel Dos Reis
[EMAIL PROTECTED] wrote:

Daniel Berlin [EMAIL PROTECTED] writes:

[...]

| Basically, your argument boils down to all supporting data is wrong,

Really?

Or were you just

 # You can have all the sarcasm you want, but maybe instead of sarcasm,


Otherwise, you have a serious problem hearing anything contrary to
your firm belief.


This is so funny coming from you it's ridiculous.
Anyway, i'm out of this thread until you decide to put up.
I am confident everyone else will just ignore you too.


Re: Compiler loop optimizations

2006-12-29 Thread Ian Lance Taylor
Christian Sturn [EMAIL PROTECTED] writes:

 On Fri, 29 Dec 2006 15:03:51 -0500
 Robert Dewar [EMAIL PROTECTED] wrote:
 
   There is no support for dumping actual valid source code, though,
   and it is unlikely that there ever will be.
  
  And indeed it is not in general possible, there are many optimizations
  that cannot be expressed in valid C.
 
 Why not? Could you provide an example.

For example, using the vector instructions available with SSE on i386.

 Cannot every assembly program (each optimized C program can
 be dumped as assembly code with -S) be transformed into an equivalent
 C program?

Only if you permit calling arbitrary functions to implement the
machine instructions which can not be represented in simple C code.

In the very general case, sure, you could unroll those machine
instructions back to arbitrarily complex C code, but that would be a
tedious and ultimately pointless effort.

Ian


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Scott Robert Ladd

Ian Lance Taylor wrote:

I fully appreciate that there is a real problem here which needs to be
addressed, but this does not seem like the best solution to me.  A
great number of C programs are built using autoconf.  If we make this
change, then they will all be built with -fwrapv.  That will disable
useful loop optimizations, optimizations which are enabled by default
by gcc's competitors.  The result will be to make gcc look worse than
it is.


The inclusion of -fwrapv is a good idea from the standpoint of producing 
reliable code; it is a bad idea from the standpoint of GCC PR.


Which raises the question: should GCC care about its PR?

GCC suffers from many misconceptions due to its complexity. When it 
comes to code optimization, GCC offers more options than any other 
compiler I know. Literally hundreds of options combine in sometimes 
surprising ways, allowing a knowledgeable GCC user to fine-tune their 
code generation.


Back in the old days, GCC was only used by expert UNIX hackers who 
were educated about their tools. Today, GCC is being used by a more 
general audience to develop consumer code. As such, GCC needs to err on 
the side of reliability and backward compatibility, benchmarks be damned.


So if adding -fwrapv to autoconf keeps the current GCC from breaking 
existing code at the cost of some speed, that's a Good Thing. Vendors 
and Gentoo users who really care about performance can manually set 
flags to boost their performance.


I don't want to see GCC dumbed down -- experts need a compiler with 
this sort of fine-tuned power.


..Scott


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Paul Eggert
Ian Lance Taylor [EMAIL PROTECTED] writes:

 I fully appreciate that there is a real problem here which
 needs to be addressed, but this does not seem like the
 best solution to me.

I agree.  It's meant to be a stopgap.  I support coming up
with a better solution than the stopgap.

 The result will be to make gcc look worse than it is.

No, the proposed stopgap patch to Autoconf will actually make
GCC look _better_.

First, the patch will continue to give GCC an advantage over
random non-GCC compilers.  The default GCC compiler options
will be -g -O2 -fwrapv (assuming recent-enough GCC).  With
typical non-GCC compilers, the default options will be plain
-g.  That's a big advantage for GCC, albeit not quite as big
as -g -O2 would be.

Second, if you look at the fine print of the patch, you'll
find that it actually _disfavors_ icc, big-time.  Currently,
the default options for icc are -g -O2, because icc
currently mimics gcc well enough to pacify 'configure'.  But
with the proposed patch, icc will default to -g only.  This
is because icc does not support -fwrapv, and 'configure'
will discover this and disable all optimization with icc
(because that's the only way I know to get wrapv semantics
with icc).

 You will recall that the problem with the original code
 was not in the loop optimizers; it was in VRP.  I think we
 would be better served by changing VRP to not rely on
 undefined signed overflow.  Or, at least, to not rely on
 it without some additional option.

A change in behavior, or an additional option in that area,
would be a help, yes.  Another short-term possibility is for
configure to default to '-O2 -fno-tree-vrp' rather than to
'-O2 -fwrapv'.  I don't know whether that would be a win for
typical apps, though.  And it would be harder to explain,
which is why I chose -fwrapv.

The longer-term goal here is to come up with a way for
ordinary application programmers to easily find code that
has signed overflow problems, and replace it with simple,
portable code that can either avoid the overflow, or detect
it.  Part of the strategy might include If you use GCC, be
careful not to use options such-and-such, because they're
not safe.  Or it might include calls to gcc-specific
builtins.  But whatever rules we use, they should be simple
and easy to apply.  Also, they shouldn't hurt performance
much, compared to the current (nonportable) code.

Currently we don't have anything even close to what would be
necessary here, and it'll be nontrivial to add it.  So the
shorter-term goal is merely to put up a temporary railing.


 Does anybody think that Paul's proposed patch to autoconf would be
 better than changing VRP?

I don't.

I haven't noticed anyone else addressing this question,
which I think is a good one.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Richard Guenther

On 12/29/06, Paul Eggert [EMAIL PROTECTED] wrote:

Ian Lance Taylor [EMAIL PROTECTED] writes:
 Does anybody think that Paul's proposed patch to autoconf would be
 better than changing VRP?

I don't.

I haven't noticed anyone else addressing this question,
which I think is a good one.


I don't think doing either is a good idea.  Authors of the affected
programs should adjust their makefiles instead - after all, the much more
often reported problems are with -fstrict-aliasing, and that one doesn't
get any special treatment by autoconf either.  Even though
-fno-strict-aliasing -fwrapv would be a valid, more forgiving default.
Also, as ever, -O2 is what gets the most testing, so you are more likely
to run into compiler bugs with -fwrapv.

Richard.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Florian Weimer
* Andrew Pinski:

 If what you propose is the only way out, and there is no way to make
 GCC optimizers reasonable, then I believe Paul's proposal is the next
 option. 

 But that still does not address the issue that this is not just about
 GCC any more, since autoconf can be used with many different compilers,
 and is right now.  So if you change autoconf to default to -fwrapv and
 someone comes along and tries to use it with, say, ACC (a made-up
 compiler), and the loop goes into an infinite loop because they treat
 (like GCC did) signed overflow as undefined, autoconf still becomes
 an issue.

Does autoconf enable higher optimization levels for other compilers by
default?

(BTW, I would be somewhat disappointed if this had to be pampered over
on the autoconf side.  If the GNU project needs -fwrapv for its own
software by default, this should be reflected in the compiler's
defaults.  I wish more C programs could be moved towards better
conformance, but this could be unrealistic, especially in the short
term.)


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Florian Weimer
* Daniel Berlin:

 OTOH, people who rely on signed overflow being wraparound generally
 *know* they are relying on it.
 Given this seems to be some  small number of people and some small
 amount of code (since nobody has produced any examples showing this
 problem is rampant, in which case i'm happy to be proven wrong), why
 don't they just compile *their* code with -fwrapv?

A lot of security patches to address integer overflow issues use
post-overflow checks, unfortunately.  Even if GCC optimizes them away,
it's unlikely that it'll break applications in an obvious way.
(Security-related test cases are typically not publicly available.)
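The pattern Florian describes can be sketched as follows. This is an illustrative example, not code from any patch discussed in the thread, and it assumes nonnegative inputs:

```c
/* Illustrative sketch of a post-overflow security check versus a
   portable pre-check.  Not from any patch discussed in the thread;
   assumes a and b are nonnegative. */
#include <limits.h>

/* Fragile: the test runs only after the signed addition has already
   overflowed, which is undefined behavior, so an optimizer may assume
   '*sum >= a' always holds and delete the check. */
int checked_add_fragile(int a, int b, int *sum)
{
  *sum = a + b;
  if (*sum < a)           /* post-overflow check: may be optimized away */
    return -1;
  return 0;
}

/* Portable: decide before adding, using only defined arithmetic. */
int checked_add_safe(int a, int b, int *sum)
{
  if (a > INT_MAX - b)    /* pre-overflow check */
    return -1;
  *sum = a + b;
  return 0;
}
```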


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Daniel Jacobowitz
On Fri, Dec 29, 2006 at 10:44:02PM +0100, Florian Weimer wrote:
 (BTW, I would be somewhat disappointed if this had to be pampered over
 on the autoconf side.  If the GNU project needs -fwrapv for its own
 software by default, this should be reflected in the compiler's
 defaults.

I absolutely agree.  My impression is that the current situation is a
disagreement between (some of, at least) the GCC developers, and
someone who can commit to autoconf; but I think it would be a very
bad choice for the GNU project to work around itself.  If we can't
come to an agreement on the list, please ask the Steering Committee.
This is a textbook example of what they're for.

-- 
Daniel Jacobowitz
CodeSourcery


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Richard B. Kreckel
On Fri, 29 Dec 2006, Daniel Berlin wrote:
[...]
 OTOH, people who rely on signed overflow being wraparound generally
 *know* they are relying on it.

Wrong. Many people have relied on that feature because they thought it
was legal and haven't had the time to check every piece of code they
wrote for conformance with the holy standard. And they don't have the
time now to walk through the work of their lifetime to see where they
went wrong, at least unless they get some guidance (read: a warning
message).

 Given this seems to be some  small number of people and some small
 amount of code (since nobody has produced any examples showing this
 problem is rampant, in which case i'm happy to be proven wrong), why
 don't they just compile *their* code with -fwrapv?

That's not so easy, since nobody can tell for sure where that code
is. If the compiler told us where it might break code, we could
just scan inside Debian or similar.

Until then, waiting for bugs to surface over time is just, pardon me, a
very stupid idea.

[...]
 I believe what we have here is a very vocal minority.  I will continue
 to believe so until someone provides real world counter evidence that
 people do, and *need to*, rely on signed overflow being wraparound to
 a degree that we should disable the optimization.

This is a red herring: I don't think anybody expects people need wrapping
signed overflow. What they do need is correctness. Actually, all the users
of free software need and deserve that correctness.

Bottom line: without such a warning, -fwrapv should be the default and
should not be turned off by any -O option.

Regards
  -richy.
-- 
Richard B. Kreckel
http://www.ginac.de/~kreckel/



Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Richard Guenther

On 12/29/06, Daniel Jacobowitz [EMAIL PROTECTED] wrote:

On Fri, Dec 29, 2006 at 10:44:02PM +0100, Florian Weimer wrote:
 (BTW, I would be somewhat disappointed if this had to be pampered over
 on the autoconf side.  If the GNU project needs -fwrapv for its own
 software by default, this should be reflected in the compiler's
 defaults.

I absolutely agree.  My impression is that the current situation is a
disagreement between (some of, at least) the GCC developers, and
someone who can commit to autoconf; but I think it would be a very
bad choice for the GNU project to work around itself.  If we can't
come to an agreement on the list, please ask the Steering Committee.
This is a textbook example of what they're for.


But first produce some data please.  I only remember one case where
the program was at fault and not gcc in the transition from 4.0 to 4.1,
compiling all of SUSE Linux 10.1 (barring those we didn't recognize,
of course).

Richard.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Paul Eggert
Daniel Berlin [EMAIL PROTECTED] writes:

 Nobody has yet showed that any significant number of
 programs actually rely on this undefined behavior.

GCC itself relies on wrapv semantics.  As does glibc.  And
coreutils.  And GNU tar.  And Python.  I'm sure there are
many other significant programs.  I don't have time to do a
comprehensive survey right now.

 people who rely on signed overflow being wraparound generally
 *know* they are relying on it.

That is often true while they're writing the code, but
typically they wrote the code many years ago, and nobody now
remembers where all these assumptions are.

Also, even to this day there is no simple, portable way to
check for signed integer overflow of random system types,
unless you can assume wrapv semantics.  So people continue
to write new code that assumes wrapv semantics.

In practice, I expect that most C programmers probably
assume wrapv semantics, if only unconsciously.  The minimal
C Standard may not entitle them to that assumption, but they
assume it anyway.  Part of this is the Java influence no
doubt.  Sorry, but that is just the way the world goes.

 why don't [the people who need fwrapv] just compile
 *their* code with -fwrapv?

Because they typically don't know they might need fwrapv.
And even if they knew, there's no easy way to reliably
identify which subset of one's code is safe without -fwrapv.
And even if they knew where that subset was, there's
currently no convenient way to tell the Autotools about it.

The proposed Autoconf patch will address a part of this
problem, because it will raise general consciousness about
the issues with optimization and integer overflow.  And that
is a good thing, even if it's only a small part of the
problem.

 I posted numbers the last time this discussion came up,
 from both GCC and XLC, that showed that making signed
 overflow wraparound can cause up to a 50% performance
 regression in *real world* mathematical fortran and C
 codes

Obviously this is a compelling case, and if you know that
the code is safe without -fwrapv, it should be compiled
without -fwrapv.  The proposed patch will let you do that on
a package-by-package basis, as it gives a package's
maintainer a way to tell Autoconf to not assume -fwrapv by
default.  It would be better to have finer-grained control
on a module-by-module basis, but that will take more work
and in the mean time this should be good enough for most
important cases.

 I'll also point out that *none* of these codes that rely on signed
 overflow wrapping will work on any *other* compiler as well

No, they will work on other compilers, since 'configure'
won't use -O2 with those other compilers.

Unless you know of some real-world C compiler that breaks
wrapv semantics even compiling without optimization?  If so,
I'd like to hear the details.

 Most even optimize *unsigned* overflow to be undefined in loops at
 high opt levels (XLC does it at -O3+)

That shouldn't be a problem either.  'configure' has never
defaulted to such high optimization levels.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Gabriel Dos Reis
Daniel Berlin [EMAIL PROTECTED] writes:

[...]

| This is so funny coming from you it's ridiculous.

You have decided to get personal; that will certainly elevate the
debate, I suppose.

I don't see what is so funny about you declaring a minority any
voice or data that go contrary to yours.  This issue surfaced again
because some actual free software got caught in the GCC optimizers' web,
yet you insist that there is no evidence.

If it weren't for actual software, I would declare your position
pathetic and move on.

-- Gaby


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Paul Schlie
 Richard Guenther wrote:
 Robert Dewar wrote:
 Daniel Berlin wrote:
 I'm sure no matter what argument i come up with, you'll just explain it
 away.  The reality is the majority of our users seem to care more about
 whether they have to write typename in front of certain declarations
 than they do about signed integer overflow.

 I have no idea how you know this, to me ten reports seems a lot for
 something like this.

 Not compared to the number of type-based aliasing bugs reported.

- As aliasing optimizations are typically more subtle, it's understandable
  that these continue to be reported.

- Overflow optimizations, however, are more obviously identified as
  stemming from GCC's arguably notoriously overzealous leveraging of this
  form of undefined behavior at higher optimization levels, regardless of
  the factual behavior of target machines.  It's understandable that
  after a while folks simply stop reporting these as bugs, especially as
  this optimization has historically been so vocally defended by the few
  as proper, regardless of the arguably reasonable expectation that it
  not be included by default in any generically specified optimization
  level, so as to minimize surprise.




Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Robert Dewar

Paul Eggert wrote:


In practice, I expect that most C programmers probably
assume wrapv semantics, if only unconsciously.  The minimal
C Standard may not entitle them to that assumption, but they
assume it anyway.  Part of this is the Java influence no
doubt.  Sorry, but that is just the way the world goes.


That's an interesting point; I suspect it is indeed the
case that many C programmers will assume that C matches Java
semantics (indeed, in a sense, Java derived its semantics
from C practice).


gcc-4.1-20061229 is now available

2006-12-29 Thread gccadmin
Snapshot gcc-4.1-20061229 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.1-20061229/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.1 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/branches/gcc-4_1-branch 
revision 120274

You'll find:

gcc-4.1-20061229.tar.bz2  Complete GCC (includes all of below)

gcc-core-4.1-20061229.tar.bz2 C front end and core compiler

gcc-ada-4.1-20061229.tar.bz2  Ada front end and runtime

gcc-fortran-4.1-20061229.tar.bz2  Fortran front end and runtime

gcc-g++-4.1-20061229.tar.bz2  C++ front end and runtime

gcc-java-4.1-20061229.tar.bz2 Java front end and runtime

gcc-objc-4.1-20061229.tar.bz2 Objective-C front end and runtime

gcc-testsuite-4.1-20061229.tar.bz2The GCC testsuite

Diffs from 4.1-20061222 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.1
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


Re: [heads-up] disabling ../configure --disable-bootstrap make bootstrap

2006-12-29 Thread Richard Kenner
 Next week, after the merge, the bootstrap rules in the gcc directory 
 will go away.

So what will be the way at that point to bootstrap JUST gcc?


Re: [heads-up] disabling ../configure --disable-bootstrap make bootstrap

2006-12-29 Thread Daniel Jacobowitz
On Fri, Dec 29, 2006 at 05:53:43PM -0500, Richard Kenner wrote:
  Next week, after the merge, the bootstrap rules in the gcc directory 
  will go away.
 
 So what will be the way at that point to bootstrap JUST gcc?

You won't be able to do that any more.  We've been saying that since
the first top level bootstrap rules went in, every time the subject
came up - this really shouldn't be a surprise.

Libgcc will no longer be configured by the gcc subdirectory's makefile.
Therefore there will be no startfiles or libgcc for the new compiler to
use.

-- 
Daniel Jacobowitz
CodeSourcery


Re: [heads-up] disabling ../configure --disable-bootstrap make bootstrap

2006-12-29 Thread Richard Kenner
 You won't be able to do that any more.  We've been saying that since
 the first top level bootstrap rules went in, every time the subject
 came up - this really shouldn't be a surprise.

No, what's been said is that there will be a MODE in which that can't
be done, but it was always claimed that no capability would be lost with
these changes.  Now we're seeing a loss, and it's a very significant one.

 Libgcc will no longer be configured by the gcc subdirectory's makefile.
 Therefore there will be no startfiles or libgcc for the new compiler to
 use.

Sure there is: the one the last iteration used!  Obviously, if you're only
bootstrapping the compiler, you're going to leave everything else unchanged.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Marc Espie
In article [EMAIL PROTECTED] you write:
I don't think doing either is a good idea.  Authors of the affected
programs should adjust their makefiles instead - after all, the much more
often reported problems are with -fstrict-aliasing, and that one doesn't
get any special treatment by autoconf either.  Even though
-fno-strict-aliasing -fwrapv would be a valid, more forgiving default.
Also, as ever, -O2 is what gets the most testing, so you are more likely
to run into compiler bugs with -fwrapv.

As a measure point, in the OpenBSD project, we have disabled
-fstrict-aliasing by default. The documentation to gcc local to our
systems duly notes this departure from the canonical release. We expect
to keep it that way about forever.

If/when we update to a version where -fwrapv becomes an issue, we'll
probably do the same with it.

Specifically, because we value reliability over speed and strict
standard conformance...


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Richard Kenner
 Specifically, because we value reliability over speed and strict
 standard conformance...

Seems to me that programs that strictly meet the standard of the language
they are written in would be more reliable than programs that are written
in some ill-defined language.


Link tests not allowed

2006-12-29 Thread Douglas B Rupp

I've been beating my head against the wall over this for hours.
Does anybody know how to fix this error?  I've googled it to death; it
turns up a lot, but I can't find a fix that works.

I build cross compilers all the time with 3.4 and have never run into 
this. I recently switched to 4.x.


I'm trying to build a cross compiler with gcc-4.1.2 (20061130) 
x86_64-linux to ppc-aix


During the target libiberty configure I get:

checking for library containing strerror... configure: error: Link tests
are not allowed after GCC_NO_EXECUTABLES.
make[1]: *** [configure-target-libiberty] Error 1
make[1]: Leaving directory `/home/rupp/ngnat/buildxppcaix'
make: *** [all] Error 2

--Douglas B Rupp
AdaCore



Re: Link tests not allowed

2006-12-29 Thread DJ Delorie

Is your target a newlib target?  If so, are you including --with-newlib?


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Richard Kenner
 Wrong. Many people have relied on that feature because they thought it
 was leagal and haven't had the time to check every piece of code they
 wrote for conformance with the holy standard. And they don't have the time
 now to walk trough the work of their lifetime to see where they did wrong,
 at least unless they get some guidance (read: warning message).

Many people have lots of bugs in their code that don't show up until some
later time when it's used in an unexpected way (e.g., Ariane 5).  It's not
possible to seriously argue that we need to make every compiler in the future
bug-for-bug compatible with some older compiler.

Writing code that makes an assumption that the language they are writing in
says can't be made is a bug, no different than an uninitialized variable
which has been set to zero in every previous compiler.

The problem with warnings in this case is that it's very hard (and may
not be possible) to give a warning without lots of false positives.  Since
many people are in an environment where every warning must be resolved,
those warnings themselves can cause more problems than the very
occasional bug.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Paul Eggert
Richard Guenther [EMAIL PROTECTED] writes:

 Authors of the affected programs should adjust their makefiles

That is what the proposed patch is for.  It gives a way for
developers to adjust their makefiles.

A developer of portable software cannot simply put something
like this into a makefile:

  CFLAGS = -g -O2 -fwrapv

as -fwrapv won't work with most other compilers.  So the
developer needs some Autoconfish help anyway, to address the
problem.  The proposed help is only rudimentary, but it does
adjust makefiles to address the issue, and it's better than
nothing.  Further improvements would of course be welcome.

 the much more often reported problems are with
 -fstrict-aliasing, and this one also doesn't get any
 special treatment by autoconf.

That's a good point, and it somewhat counterbalances the
opposing point that -O2 does not currently imply
'-ffast-math'ish optimizations even though the C standard
would allow it to.

I don't feel a strong need for 'configure' to default to
-fstrict-aliasing with GCC.  Application code that violates
strict aliasing assumptions is often unportable in practice
for other reasons, and needs to be rewritten anyway, even if
optimization is disabled.  So -fstrict-aliasing wouldn't
help that much.

In contrast, the wrapv assumption is relatively safe and
does not need to be rewritten to be widely portable in
practice, under the assumptions documented in the proposed
patch.

Also, my admittedly anecdotal impression is that the wrapv
assumption is more pervasive.  I don't know of any strict
aliasing assumptions in coreutils, for example, but I know
of several wrapv assumptions.  I suspect the same thing is
true for many other GNU applications.

All that being said, I have no real objection to having
Autoconf default to -fstrict-aliasing too.  However, I'd
rather not propose that change right now; one battle at a time.

 -O2 is what gets the most testing,

Another good point, but if this change goes through Autoconf
-O2 -fwrapv will get a lot of testing in real-world C
applications, and that will help mitigate any problems in
this area.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Richard Kenner
 I'm not sure what data you're asking for.  

Here's the data *I'd* like to see:

(1) What is the maximum performance loss that can be shown using a real
program (e.g,. one in SPEC) and some compiler (not necessarily GCC) when
one assumes wrapping semantics?

(2) In the current SPEC, how many programs benefit from undefined overflow
semantics and how much does each benefit?

(3) How many programs are known to rely on wrap semantics?  For each:
  (a) How hard was it to determine there was a problem with that assumption?
  (b) How much work was it to modify the program to not rely on such semantics?


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Marc Espie
On Fri, Dec 29, 2006 at 06:46:09PM -0500, Richard Kenner wrote:
  Specifically, because we value reliability over speed and strict
  standard conformance...

 Seems to me that programs that strictly meet the standard of the language
 they are written in would be more reliable than programs that are written
 in some ill-defined language.

C has been a portable assembler for years before it got normalized and
optimizing compilers took over. There are still some major parts of the
network stack where you don't want to look, and that defy
-fstrict-aliasing

A lot of C programmers don't really understand aliasing rules. If this
weren't deemed to be a problem, no one would have even thought of adding
code to gcc so that it can warn about some aliasing violations. ;-)

If you feel like fixing this code, be my guest.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Paul Eggert
Seongbae Park [EMAIL PROTECTED] writes:

 On 12/29/06, Paul Eggert [EMAIL PROTECTED] wrote:
 -O2 does not currently imply '-ffast-math'ish optimizations even
 though the C standard would allow it to.

 Can you point me to the relevant section/paragraph in C99 standard
 where it allows the implementation to do -ffast-math style optimization ?
 C99 Annex F.8 quite clearly says the implementation can't,
 as long as it claims any conformity to IEC 60559.

This is more of a pedantic standards question than a
real-world programming question, but I'll answer anyway.


C99 does not require implementations to conform to IEC 60559
as specified in C99 Annex F.  It's optional.

Similarly, C99 does not require implementations to conform
to LIA-1 wrapping semantics as specified in C99 Annex H.
That's optional, too.

The cases are not entirely equivalent, as Annex F is
normative but Annex H is informative.  But as far as the
standard is concerned, it's clear that gcc could enable many
non-IEEE optimizations (including some of those enabled by
-ffast-math); that would conform to the minimal standard.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Andrew Pinski
 
 C has been a portable assembler for years before it got normalized and
 optimizing compilers took over.

18 years.  And now it has been 17 years since C was standardized, so
you can say C has been standardized for half its life.  18 years is a
long time when it comes to computers.  I know one problem is that most
people who learn C don't learn about the undefined behaviors until they
hit them.  Sequence points are another case in point for the problems of
how people learn C (or C++ for that matter).  We actually have 57
reports of the sequence point issue in bugzilla, 110 aliasing bug
reports, and only 12 overflow issues.

Of those 12 overflow issues, 9 are reported against 4.1 and above,
introduced by the VRP pass.
2 are reported because of the folding of (a*C)/C into a (where C is a
constant).
1 is about causing a floating-point exception (or divide-by-zero
exception) when dividing INT_MIN by -1 on x86.

So maybe we should also be talking more about the sequence point issue,
where the behavior is unspecified (though in some cases it turns into
being undefined, because of two stores to the same variable).  That gets
more bug reports than even signed overflow, and is sometimes just as
hard to spot as signed overflow (I can give examples where we get very
different code output if we did not have sequence points).

  There are still some major parts of the
 network stack where you don't want to look, and that defy
 -fstrict-aliasing

Actually, IIRC, FreeBSD and NetBSD have already corrected all of the
aliasing issues in the BSD network stack.  In fact I remember helping
one place that was exposed in the last 4 years.

Thanks,
Andrew Pinski


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Andrew Pinski
 
 Daniel Berlin [EMAIL PROTECTED] writes:
 
  Nobody has yet showed that any significant number of
  programs actually rely on this undefined behavior.
 
 GCC itself relies on wrapv semantics.  As does glibc.  And
 coreutils.  And GNU tar.  And Python.  I'm sure there are
 many other significant programs.  I don't have time to do a
 comprehensive survey right now.

Where does GCC rely on that?  I don't see it anywhere?
If you can point out specific spots, please file a bug and
I will go and fix them.

Thanks,
Andrew Pinski



Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Paul Eggert
[EMAIL PROTECTED] (Richard Kenner) writes:

 (1) What is the maximum performance loss that can be shown using a real
 program (e.g,. one in SPEC) and some compiler (not necessarily GCC) when
 one assumes wrapping semantics?

 (2) In the current SPEC, how many programs benefit from undefined overflow
 semantics and how much does each benefit?

Those questions are more for the opponents of -fwrapv, so
I'll let them answer them.  But why are they relevant?
Having -fwrapv on by default shouldn't affect your SPEC
score, since you can always compile with -fno-wrapv if the
application doesn't assume wraparound.

The SPEC CPU2006 run and reporting rules for peak and base builds
http://www.spec.org/cpu2006/Docs/runrules.html#rule_1.5
are directly on point here:

   Because the SPEC CPU benchmarks are drawn from the
   compute intensive portion of real applications, some of
   them use popular practices that compilers must commonly
   cater for, even if those practices are nonstandard. In
   particular, some of the programs (and, therefore, all of
   base) may have to be compiled with settings that do not
   exploit all optimization possibilities that would be
   possible for programs with perfect standards compliance.

If the only goal is to get a high SPEC score, it's pretty
clear: disable -fwrapv, and if the SPEC program passes its
benchmark tests, you're happy.  But people who are building
reliable software (as opposed to running benchmarks) cannot
be so blase about latent bugs due to wrapv assumptions being
violated.


 (3) How many programs are known to rely on wrap semantics?  For each:
   (a) How hard was it to determine there was a problem with that assumption?

I'm not sure what you're asking for here.

For example, GCC assumes wrapv internally.  No doubt there
is a problem with this assumption if you go out of your way
to make it a problem (bootstrap GCC with -ftrapv, for
example).  Is this a problem in the sense that you
describe?

Or do you mean, how hard is it to determine whether some
real-world compiler, used by a real-world builder of GCC,
won't build GCC correctly due to this problem?  If that's
the question, then all I can say is that I think it's verrry
hard.  There are a lot of compilers out there, and they're
used in a lot of ways.

   (b) How much work was it to modify the program to not
   rely on such semantics?

Nobody knows the answer to that question either.  I could
throw out an estimate for GCC (three person-months, say?
six?) these are just wild guesses.  And that's just one
program.

Part of the problem is that there's no easy way to determine
whether a program relies on these semantics.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Seongbae Park

On 12/29/06, Paul Eggert [EMAIL PROTECTED] wrote:

Seongbae Park [EMAIL PROTECTED] writes:

 On 12/29/06, Paul Eggert [EMAIL PROTECTED] wrote:
 -O2 does not currently imply '-ffast-math'ish optimizations even
 though the C standard would allow it to.

 Can you point me to the relevant section/paragraph in C99 standard
 where it allows the implementation to do -ffast-math style optimization ?
 C99 Annex F.8 quite clearly says the implementation can't,
 as long as it claims any conformity to IEC 60559.

This is more of a pedantic standards question than a
real-world programming question, but I'll answer anyway

C99 does not require implementations to conform to IEC 60559
as specified in C99 Annex F.  It's optional.




Similarly, C99 does not require implementations to conform
to LIA-1 wrapping semantics as specified in C99 Annex H.
That's optional, too.

The cases are not entirely equivalent, as Annex F is
normative but Annex H is informative.
But as far as the
standard is concerned, it's clear that gcc could enable many
non-IEEE optimizations (including some of those enabled by
-ffast-math); that would conform to the minimal standard.


Not when __STDC_IEC_559__ is defined,
which is the case for most modern glibc+gcc combos.
--
#pragma ident "Seongbae Park, compiler, http://seongbae.blogspot.com"


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Gabriel Dos Reis
Andrew Pinski [EMAIL PROTECTED] writes:

|  
|  C has been a portable assembler for years before it got normalized and
|  optimizing compilers took over.
| 
| 18 years.  And now it has been 17 years since C has been standardized so
| you can say C has been standardized now for half its life.  18 years is a
| long time when it comes to computers.  I know one problem is that most
| people who learn C don't learn about the undefined behaviors until they hit
| it.

But did you learn the history of how and why those undefined
behaviour came into existence in the first place?

-- Gaby


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Andrew Pinski
 
 Andrew Pinski [EMAIL PROTECTED] writes:
 
 |  
 |  C has been a portable assembler for years before it got normalized and
 |  optimizing compilers took over.
 | 
 | 18 years.  And now it has been 17 years since C has been standardized so
 | you can say C has been standardized now for half its life.  18 years is a
 | long time when it comes to computers.  I know one problem is that most
 | people who learn C don't learn about the undefined behaviors until they hit
 | it.
 
 But did you learn the history of how and why those undefined
 behaviour came into existence in the first place?

Tell us then.  Aliasing, signed type overflow, and sequence points are
already the most interesting parts of C anyway, so tell us why they
became undefined/unspecified.  It might give us a better idea of why we
should keep GCC the way it is currently.

Over those 17 years, C has not changed that much.  Even Fortran defined
integer overflow (Fortran does not have unsigned types) as undefined
back in at least 1977, so I don't see why we should punish the Fortran
people based on what we decide for C.  So if we decide we should only
treat signed type overflow as undefined inside loops, then we need an
option so Fortran does not get punished for what is decided for the C
family of languages.

Fortran also has stricter aliasing than even C, which is why we have
-fargument-noalias-global.
We also added -fwrapv in fact to support Java, where the language is
very specific that integer overflow is defined as wrapping.  It was
added in 2003 (2 years before VRP was added) to disable the
already-present optimizations; why did nobody raise the issue then?
Instead we are now talking, almost 4 years later, about the same issue
we should have discussed back when the option to turn on overflow as
wrapping was added.

Thanks,
Andrew Pinski


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Paul Eggert
 GCC itself relies on wrapv semantics.  As does glibc.  And
 coreutils.  And GNU tar.  And Python.  I'm sure there are
 many other significant programs.  I don't have time to do a
 comprehensive survey right now.

 Where does GCC rely on that?  I don't see it anywhere?

It's not like the code has big waving robot arms next to a comment
that says "Danger, Will Robinson!".  It's usually more subtle.

But since you asked, I just now did a quick scan of
gcc-4.3-20061223 (nothing fancy -- just 'grep -r') and the first
example I found was this line of code in gcc/stor-layout.c's
excess_unit_span function:

  /* Note that the calculation of OFFSET might overflow; we calculate it so
 that we still get the right result as long as ALIGN is a power of two.  */
  unsigned HOST_WIDE_INT offset = byte_offset * BITS_PER_UNIT + bit_offset;

Here, the programmer has helpfully written a "Danger, Will
Robinson!" comment, which is how I found this example so quickly.
Typically, though, we won't be so lucky.

I'd guess there are dozens, maybe hundreds, of cases like this in
the GCC sources.  In many cases the problem won't be obvious.  Or
if we see something suspicious, it won't be immediately obvious
whether the potential problem is real.  For example, I haven't
checked whether the above code is a real problem, though that
comment certainly suggests so.  (I suppose the comment might be
incorrect, but if so then that's a different bug.)

 If you can point out specific spots, please file a bug and I will go
 and fix them.

Thanks, but it's a lot of work to find bugs like this.
Particularly if one is interested in finding them all, without a
lot of false alarms.  I don't have time to wade through all of GCC
with a fine-toothed comb right now, and I suspect nobody else does
either.  Nor would I relish the prospect of keeping wrapv
assumptions out of GCC as other developers make further
contributions, as the wrapv assumption is so natural and
pervasive.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Robert Dewar

Marc Espie wrote:


Specifically, because we value reliability over speed and strict
standard conformance...


Still as a long term goal, it would be good to try to have your
C programs written in C, and not some ill-defined dialect thereof!



Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Robert Dewar

Richard Kenner wrote:

Specifically, because we value reliability over speed and strict
standard conformance...


Seems to me that programs that strictly meet the standard of the language
they are written in would be more reliable than programs that are written
in some ill-defined language.


In the long run, sure, but during the transition, which can be
very long, this is only an academic observation. After all in
the Ada world, we advise many of our big customers to use
-fno-strict-aliasing as a matter of course to avoid similar
aliasing problems in Ada.



Re: Link tests not allowed

2006-12-29 Thread Douglas B Rupp

DJ Delorie wrote:

Is your target a newlib target?  If so, are you including --with-newlib?



Thanks, that was the problem.
Why isn't --with-newlib the default for newlib targets?




Re: Link tests not allowed

2006-12-29 Thread DJ Delorie

 Why isn't --with-newlib the default for newlib targets?

--with-newlib *tells* us that it's a newlib target.


Re: Link tests not allowed

2006-12-29 Thread Douglas B Rupp

DJ Delorie wrote:

Why isn't --with-newlib the default for newlib targets?


--with-newlib *tells* us that it's a newlib target.



Well, not knowing what a newlib target was when you asked, I looked in
configure.in.

It seems that if it's not a newlib target then target-newlib is missing
from noconfigdirs.  So if --with-newlib is needed, what purpose does
noconfigdirs serve?




Re: [heads-up] disabling ../configure --disable-bootstrap make bootstrap

2006-12-29 Thread Daniel Jacobowitz
On Fri, Dec 29, 2006 at 06:38:27PM -0500, Richard Kenner wrote:
  You won't be able to do that any more.  We've been saying that since
  the first top level bootstrap rules went in, every time the subject
  came up - this really shouldn't be a surprise.
 
 No, what's been said is that there will be a MODE in which that can't
 be done, but it was always claimed that no capability would be lost with
 these changes.  Now we're seeing a loss, and it's a very significant one.

I'm sorry, but I decline to argue with you about this.  I've done it
several times already.  I apologize that the entire process has drawn
out over such a long period that we've had to rehash this argument
again.  I don't believe anyone else considers this important.

The last decision I see in my archives on this subject was in February
2006.

  Libgcc will no longer be configured by the gcc subdirectory's makefile.
  Therefore there will be no startfiles or libgcc for the new compiler to
  use.
 
 Sure there is: the one the last iteration used!  Obviously, if you're only
 bootstrapping the compiler, you're going to leave everything else unchanged.

Maybe I don't understand what you're asking.  The process is:

  - Build supporting libraries for the build system tools
  - Build supporting libraries for the host system tools
  - Build gcc
  - [NEW] Build libgcc
  - If stage < final stage, go back to building some of the host
libraries
  - Build other target libraries

Note that when we get into and then out of the gcc subdirectory, the
startfiles and libgcc haven't been built.  If we don't go on to build
libgcc, the just-built compiler is not functional.  It can't use libgcc
or crtbegin from the system; they might not even exist, depending on
your bootstrap compiler.

Do you mean something different by bootstrapping just the compiler?

-- 
Daniel Jacobowitz
CodeSourcery


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Seongbae Park

On 30 Dec 2006 03:20:11 +0100, Gabriel Dos Reis
[EMAIL PROTECTED] wrote:
...

The C standard, in effect, has an appendix (Annex H) that was not
there in the C89 edition, and that talks about the very specific issue
at hand:

   H.2.2  Integer types

   [#1] The signed C integer types int, long int, long long
   int, and the corresponding unsigned types are compatible
   with LIA-1.  If an implementation adds support for the LIA-1
   exceptional values ``integer_overflow'' and ``undefined'',
   then those types are LIA-1 conformant types.  C's unsigned
   integer types are ``modulo'' in the LIA-1 sense in that
   overflows or out-of-bounds results silently wrap.  An
   implementation that defines signed integer types as also
   being modulo need not detect integer overflow, in which
   case, only integer divide-by-zero need be detected.


which clearly says LIA-1 isn't a requirement - notice the "if" in the
second sentence.
H.1 makes it clear that the entire Annex H doesn't add any extra rule
to the language but merely describes what C is in regard to LIA-1.
H.2 doubly makes it clear that C as it is defined isn't LIA-1
- again, notice the "if" in H.2p1.
The second sentence of H.3p1 confirms this again:

  C's operations are compatible with LIA−1 in that C
  allows an implementation to cause a notification to occur
  when any arithmetic operation
  returns an exceptional value as defined in LIA−1 clause 5.

i.e. "compatible" means C's definition doesn't prevent
a LIA-1 conformant implementation.
In other words, every LIA-1 conformant compiler is conformant to C99
in terms of arithmetic and types.
However, not every C99 conformant compiler is LIA-1 conformant.
C isn't conformant to LIA-1 but merely compatible,
exactly because of the undefined aspect.

That's enough playing language lawyer for me in a day.
--
#pragma ident "Seongbae Park, compiler, http://seongbae.blogspot.com"


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Richard Kenner
   Specifically, because we value reliability over speed and strict
   standard conformance...
 
  Seems to me that programs that strictly meet the standard of the language
  they are written in would be more reliable than programs that are written
  in some ill-defined language.
 
 A lot of C programmers don't really understand aliasing rules. If this
 wasn't deemed to be a problem, no-one would have even thought of adding
 code to gcc so that i can warn about some aliasing violations. ;-)

Sure there's badly-written code out there and sure there are reasons for not
wanting to clean it up, but I find it odd that you'd invoke RELIABILITY as
one of them.  The aliases rules exist not just so that the COMPILER can know
what can and can't alias, but so that the READER of the code can too.  Code
that's hard to read is hard to maintain and hence less reliable.

If the goal really were, as you say, reliability, then the right approach
would seem to me to be to work towards rewriting the code to avoid aliasing
issues.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Daniel Berlin

On 12/29/06, Richard Kenner [EMAIL PROTECTED] wrote:

 I'm not sure what data you're asking for.

Here's the data *I'd* like to see:

(1) What is the maximum performance loss that can be shown using a real
program (e.g,. one in SPEC) and some compiler (not necessarily GCC) when
one assumes wrapping semantics?


The XLC numbers I was given about a year ago (I assume it was version 8):

SpecINT with undefined signed overflow at -O5 on a P5 2100MHz running
Linux: 1634
SpecFP with undefined signed overflow at -O5 on a P5 2100MHz running
Linux: 3010

SpecINT with wrapping signed overflow at -O5 on a P5 2100MHz running
Linux: 1319
SpecFP with wrapping signed overflow at -O5 on a P5 2100MHz running
Linux: 1624



(2) In the current SPEC, how many programs benefit from undefined overflow
semantics and how much does each benefit?


All of the Fortran programs (i.e. SpecFP) benefit from undefined
*unsigned* overflow semantics due to 32-bit IV vs 64-bit array index
issues.
The same is true of the SpecFP C programs.

All of the Fortran and C programs benefit from undefined *signed*
overflow semantics because it makes dependency and loop counting
analysis possible.

Nobody has analyzed it further than that, AFAIK, mainly because they
don't have discussions about whether it makes sense to lose 50% of
their FP performance to do something none of *their* users ask them
for (note our users may, of course, be different).  So they generally
won't waste their cycles trying to figure out why something they
aren't going to do would hurt them.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Daniel Berlin

On 12/29/06, Daniel Berlin [EMAIL PROTECTED] wrote:

On 12/29/06, Richard Kenner [EMAIL PROTECTED] wrote:
  I'm not sure what data you're asking for.

 Here's the data *I'd* like to see:

 (1) What is the maximum performance loss that can be shown using a real
 program (e.g,. one in SPEC) and some compiler (not necessarily GCC) when
 one assumes wrapping semantics?

The XLC numbers i was given about a year ago (i assume it was version 8)

SpecINT with undefined signed overflow at -O5 on a P5 2100mhz running
linux: 1634
SpecFP with undefined signed overflow at -O5 on a P5 2100mhz running linux: 3010

SpecINT with wrapping signed overflow at -O5 on a P5 2100mhz running
linux: 1319
SpecFP with wrapping signed overflow at -O5 on a P5 2100mhz running linux: 1624


 (2) In the current SPEC, how many programs benefit from undefined overflow
 semantics and how much does each benefit?

All of the fortran programs (IE SpecFP) benefit from undefined
*unsigned* overflow semantics due to 32 bit iv vs 64 bit array index
issues.
The same is true of the SpecFP C programs.




Just to be clear, the above behavior is not standards conformant, and
they do give a warning that they are doing it.
It is, however, the default at -O3 for XLC and, AFAIK, at all opt
levels for icc.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Daniel Berlin

Just to address the other compiler issue


No, they will work on other compilers, since 'configure'
won't use -O2 with those other compilers.


icc defaults to -O2 without any options, so unless you are passing
-O0, it will enable this.



Unless you know of some real-world C compiler that breaks
wrapv semantics even compiling without optimization?  If so,
I'd like to hear the details.


Sure: all of them, AFAIK, because they make the assumptions during
constant folding, and they all still constant fold at -O0.
It just so happens that it tends to affect a much smaller number of
programs, because it essentially ends up being a very, very local
optimization.
But it's still going to break some programs, even at -O0.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Richard Kenner
 Those questions are more for the opponents of -fwrapv, so
 I'll let them answer them.  But why are they relevant?
 Having -fwrapv on by default shouldn't affect your SPEC
 score, since you can always compile with -fno-wrapv if the
 application doesn't assume wraparound.

(1) If -fwrapv isn't the default, it won't be as well tested as it
would be if it were the default.

(2) People don't always read every detail of the manual to find out
the set of options to use.  People often do apples-to-apples
comparisons between compilers or else just use the highest -O level.
And they'll come back and say that GCC isn't as good as some other
compiler because of it.

 Or do you mean, how hard is it to determine whether some
 real-world compiler, used by a real-world builder of GCC,
 won't build GCC correctly due to this problem?

Right.

 If that's the question, then all I can say is that I think it's verrry
 hard.  There are a lot of compilers out there, and they're
 used in a lot of ways.

Unfortunately, what you, I, or somebody else might think isn't DATA!
What we need to have is some example of real-world program and a real-world
compiler that caused the program to malfunction because the program assumed
wrap semantics and the compiler didn't.  We need to know how long it took
to debug this and find out what was the cause.

(b) How much work was it to modify the program to not
rely on such semantics?
 
 Nobody knows the answer to that question either.  I could throw out
 an estimate for GCC (three person-months, say?  six?) these are just
 wild guesses.  And that's just one program.  Part of the problem is
 that there's no easy way to determine whether a program relies on
 these semantics.

Again, data is needed, not guesses.  If I had to guess, I'd say there were
AT MOST a handful of places in GCC that needed changing and the total
time to fix them would be measured in MINUTES, not months.  But real
data would be nice here.  (Note that here I mean the time to FIX the
program: the previous question was the amount of time to LOCATE the error.)


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Richard Kenner
 But since you asked, I just now did a quick scan of
 gcc-4.3-20061223 (nothing fancy -- just 'grep -r') and the first
 example I found was this line of code in gcc/stor-layout.c's
 excess_unit_span function:
 
    /* Note that the calculation of OFFSET might overflow; we calculate it so
   that we still get the right result as long as ALIGN is a power of two.  */
   unsigned HOST_WIDE_INT offset = byte_offset * BITS_PER_UNIT + bit_offset;
 
 Here, the programmer has helpfully written a Danger, Will
 Robinson! comment, which is how I found this example so quickly.
 Typically, though, we won't be so lucky.

That code's pretty strange, for a number of reasons, and has a number
of errors.  First of all, align is unsigned int, not the signed
HOST_WIDE_INT.  Secondly, precisely because of overflow, we're careful
everywhere else in that part of the compiler to do all arithmetic with trees,
so the precision is twice HOST_WIDE_INT, not HOST_WIDE_INT.  The code,
as written, only does the adjustment if OFFSET fits in signed HOST_WIDE_INT
and that's wrong too.

This is badly broken code, so if not wrapping breaks it, that would be a GOOD
thing!

 I'd guess there are dozens, maybe hundreds, of cases like this in
 the GCC sources.

I wouldn't.  As I said, that's really broken code.  For any code in a similar
form (and I agree there's some, but not much), I can demonstrate a failure
mode that DOESN'T involve wrapping semantics.  A lot of this code (luckily
not that piece!) is code that I wrote and I was EXTREMELY careful to avoid
doing computations of objects that might overflow as integers instead of
trees.  I'm not saying there are no bugs, just that if there ARE, I think
it's good that the issue of wrapping semantics would make them more visible
because they each represent poor quality code that needs to be fixed.

 Nor would I relish the prospect of keeping wrapv assumptions out of
 GCC as other developers make further contributions, as the wrapv
 assumption is so natural and pervasive.

It's neither natural nor pervasive to me!  I would never write code that
way, on the grounds that depending on overflow makes the code harder to
read, because it's no longer as intuitive as code that doesn't depend on
overflow behavior.


Re: [heads-up] disabling ../configure --disable-bootstrap make bootstrap

2006-12-29 Thread Richard Kenner
  I don't believe anyone else considers this important.

The history on this sort of thing is that people don't pay attention
until it happens and then everybody starts yelling about bootstrap
time increasing ...

   - Build supporting libraries for the build system tools
   - Build supporting libraries for the host system tools
   - Build gcc
   - [NEW] Build libgcc
   - If stage < final stage, go back to building some of the host
 libraries
   - Build other target libraries
 
 Do you mean something different by bootstrapping just the compiler?

The problem is that last step: it takes a LONG time to build libjava,
for example.  If I make a change that I need to give a sanity check to,
I want to compile GCC with it, but not all the other additional code:
that's for a later stage in the development/testing cycle.  Since
building a stage of GCC is about three times faster than the other
target libraries, if there's no way to suppress them, the time to do
this test goes up by a factor of four.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Richard Kenner
 which clearly says LIA-1 isn't a requirement - notice if in the
 second setence.
 H.1 makes it clear that the entire Annex H doesn't add any extra rule
 to the language but merely describes what C is in regard to LIA-1.
 H.2 doubly makes it clear that C as it defined isn't LIA-1
 - again, notice if in H.2p1.

That's the way I read it too.  Indeed, perhaps -fwrapv really ought to
be called -flia-1?


Re: [heads-up] disabling ../configure --disable-bootstrap make bootstrap

2006-12-29 Thread Ian Lance Taylor
[EMAIL PROTECTED] (Richard Kenner) writes:

   I don't believe anyone else considers this important.
 
 The history on this sort of thing is that people don't pay attention
 until it happens and then everybody starts yelling about bootstrap
 time increasing ...
 
- Build supporting libraries for the build system tools
- Build supporting libraries for the host system tools
- Build gcc
- [NEW] Build libgcc
 - If stage < final stage, go back to building some of the host
  libraries
- Build other target libraries
  
  Do you mean something different by bootstrapping just the compiler?
 
 The problem is that last step: it takes a LONG time to build libjava,
 for example.  If I make a change that I need to give a sanity check to,
 I want to compile GCC with it, but not all the other additional code: that's
 for a later state in the development/testing cycle.  Since building a stage
 of GCC is about three times faster than other target libraries, if there's
 no way to suppress that, the time to do this test goes up by a factor of four.

Would you feel OK if there were a make target to do a bootstrap
without building the other target libraries?  The change from today's
bootstrap with --disable-bootstrap would be that it would build
libiberty, libcpp, and friends at each stage, rather than only once.

Ian


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Robert Dewar

Richard Kenner wrote:

which clearly says LIA-1 isn't a requirement - notice if in the
second setence.
H.1 makes it clear that the entire Annex H doesn't add any extra rule
to the language but merely describes what C is in regard to LIA-1.
H.2 doubly makes it clear that C as it defined isn't LIA-1
- again, notice if in H.2p1.


That's the way I read it too.  Indeed, perhaps -fwrapv really ought to
be called -flia-1?


Is a smiley missing here?

If not, this is seriously confused: LIA-1 is MUCH more than just
wrapping integer types.



Re: [heads-up] disabling ../configure --disable-bootstrap make bootstrap

2006-12-29 Thread Richard Kenner
 Would you feel OK if there were a make target to do a bootstrap
 without building the other target libraries?  

Yes.  That would only be a small increase in time (libiberty).


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Richard Kenner
 If not, this is seriously confused, lia-1 is MUCH more than just
 wrapping integer types.

Then indeed there is confusion here.  It sounded to me like GCC
already had support for LIA-1 but just needed to define signed
overflows.  If GCC does not and will not support LIA-1, why are we
talking about it?


Re: [heads-up] disabling ../configure --disable-bootstrap make bootstrap

2006-12-29 Thread Daniel Jacobowitz
On Sat, Dec 30, 2006 at 12:30:06AM -0500, Richard Kenner wrote:
- Build supporting libraries for the build system tools
- Build supporting libraries for the host system tools
- Build gcc
- [NEW] Build libgcc
- If stage < final stage, go back to building some of the host
  libraries
- Build other target libraries
  
  Do you mean something different by bootstrapping just the compiler?
 
 The problem is that last step: it takes a LONG time to build libjava,
 for example.  If I make a change that I need to give a sanity check to,
 I want to compile GCC with it, but not all the other additional code: that's
 for a later state in the development/testing cycle.  Since building a stage
 of GCC is about three times faster than other target libraries, if there's
 no way to suppress that, the time to do this test goes up by a factor of four.

Oh!  If that's all you mean, I misunderstood; it would not be at all
difficult to add this.  It would probably be just a matter of
documentation; I think there's already an appropriate target.  And if
I'm reading correctly, it's spelled stage3-bubble.  That will build the
build-system libraries, host libraries, gcc, and libgcc.  No other
target libraries are bootstrapped.

Does that help?

On Fri, Dec 29, 2006 at 09:35:38PM -0800, Ian Lance Taylor wrote:
 Would you feel OK if there were a make target to do a bootstrap
 without building the other target libraries?  The change from today's
 bootstrap with --disable-bootstrap would be that it would build
 libiberty, libcpp, and friends at each stage, rather than only once.

I think it would even be possible to not bootstrap those host libraries
- but unwise for the reasons we wanted them bootstrapped originally,
and they're very quick to build.

In a combined tree we bootstrap binutils too.  That's less obviously
useful.  But in a GCC-only tree we bootstrap intl, gcc, libcpp,
libdecnumber, libiberty, and zlib: all things linked directly into
the compiler.

-- 
Daniel Jacobowitz
CodeSourcery


Do we want non-bootstrapping make back?

2006-12-29 Thread Daniel Jacobowitz
Once upon a time, the --disable-bootstrap configure option wasn't
necessary.  make built gcc, and make bootstrap bootstrapped it.

Is this behavior useful?  Should we have it back again?

The trivial implementation is to build separate makefiles using the
existing sed magic and have the non-bootstrap one as default, with a
bootstrap: target that forwards to make -f Makefile.boot.  Obviously
better implementations are possible, and if you mix and match targets
then strange things may happen, but that was true beforehand.  Anyway,
that would let us eliminate the configure-time decision - if there's
a convincing reason to do so.

-- 
Daniel Jacobowitz
CodeSourcery


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Paul Eggert
Daniel Berlin [EMAIL PROTECTED] writes:

 Just to address the other compiler issue

 No, they will work on other compilers, since 'configure'
 won't use -O2 with those other compilers.

 icc defaults to -O2 without any options, so unless you are passing
 -O0, it will enable this.

Thanks, I didn't know that (I don't use icc).

If -O0 is the only way to convince icc to give us wraparound
arithmetic, then I guess we'll have to default to -O0 for icc.

On the other hand, we haven't had any reports of problems with icc in
this area.  So perhaps its optimizations are cleverer, and avoid
the gotchas that beta GCC seems to have blundered into.  In that
case we needn't bother with -O0.

 they make the assumptions during constant folding, and they all
 still constant fold at -O0.

Will this actually cause problems in practice?  I don't see how.


Re: Do we want non-bootstrapping make back?

2006-12-29 Thread Andrew Pinski
 Once upon a time, the --disable-bootstrap configure option wasn't
 necessary.  make built gcc, and make bootstrap bootstrapped it.
 
 Is this behavior useful?  Should we have it back again?

Doesn't make all-gcc already do that?  Or do you mean including the target
libraries?

Thanks,
Andrew Pinski


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2006-12-29 Thread Paul Eggert
 Nor would I relish the prospect of keeping wrapv assumptions out of
 GCC as other developers make further contributions, as the wrapv
 assumption is so natural and pervasive.

 It's neither natural nor pervasive to me!  I would never write code
 that way

That's great, but GCC has had many other hands stirring the pot.
I daresay a careful scan would come up with many other examples of
undefined behavior due to signed integer overflow.  (No doubt
you'll be appalled by them as well, but there they are.)

 I think it's good that the issue of wrapping semantics would make
 them more visible because they each represent poor quality code that
 needs to be fixed.

Of course it's OK if GCC developers make a careful manual sweep of
their large code base.  However, I don't think this is a realistic
default strategy for most application developers: the process is
too time-consuming and error-prone.

 People often do apples to apples comparisons between compilers
 or else just use the highest -O level.  And they'll come back
 and say that GCC isn't as good as some other compiler because of it.

It's obviously OK to enable these optimizations at the highest
optimization level.  The thing I'm worried about is enabling them
at -O2, which is the default level for Autoconf-generated
'configure' scripts.

 What we need to have is some example of real-world program and a real-world
 compiler that caused the program to malfunction because the program assumed
 wrap semantics and the compiler didn't.  We need to know how long it took
 to debug this and find out what was the cause.

We have one recent example of this.  The problem was reported here:
http://lists.gnu.org/archive/html/bug-gnulib/2006-12/msg00084.html
I don't know how long it took Ralf to debug it, but he's
subscribed to the bug-gnulib list (which this is being CC'ed to)
so perhaps he can fill us in.

I have better figures for the fixing effort, since I did most of
that work.  The bug was fixed 3 days later, here:
http://lists.gnu.org/archive/html/bug-gnulib/2006-12/msg00212.html
The fix is small, but I would not call it trivial; it does not
match the fix Ralf originally proposed, which missed some
problems.  I didn't keep track, but I'd guess my overall effort
took two to four person-hours total, including integration,
testing, etc.

In that particular gnulib module (mktime) I'd guess there are five
more problems like it.  I haven't had the time to look at them or
fix them, though.

Gnulib has about 400 modules.  Most of them are far simpler than
mktime, but I'm sure some of these other modules have wrapv
related problems too.  And Gnulib is just a small part of a large
number of GNU applications.  It would take a long time to wade
through all that code looking for all wrapv issues.

If you still want lots more concrete examples, I'm afraid we'll
have to spend more resources, resources that I don't have right
now.  But the initial data do not look promising to me.  I think
it will take a lot of time to find these problems in real-world
programs, and that fixing them will not always be trivial.


[Bug preprocessor/29612] [4.0/4.1/4.2/4.3 Regression] gcc --save-temps does not give multi-character character constant error

2006-12-29 Thread jakub at gcc dot gnu dot org


--- Comment #2 from jakub at gcc dot gnu dot org  2006-12-29 08:15 ---
Subject: Bug 29612

Author: jakub
Date: Fri Dec 29 08:15:08 2006
New Revision: 120257

URL: http://gcc.gnu.org/viewcvs?root=gcc&view=rev&rev=120257
Log:
PR preprocessor/29612
* directives.c (do_linemarker): Set pfile->buffer->sysp always, not
only when new_sysp is non-zero.

* gcc.dg/cpp/pr29612-1.c: New test.
* gcc.dg/cpp/pr29612-2.c: New test.

Added:
trunk/gcc/testsuite/gcc.dg/cpp/pr29612-1.c
trunk/gcc/testsuite/gcc.dg/cpp/pr29612-2.c
Modified:
trunk/gcc/testsuite/ChangeLog
trunk/libcpp/ChangeLog
trunk/libcpp/directives.c


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29612



[Bug preprocessor/29612] [4.0/4.1/4.2/4.3 Regression] gcc --save-temps does not give multi-character character constant error

2006-12-29 Thread jakub at gcc dot gnu dot org


--- Comment #3 from jakub at gcc dot gnu dot org  2006-12-29 08:16 ---
Subject: Bug 29612

Author: jakub
Date: Fri Dec 29 08:16:32 2006
New Revision: 120258

URL: http://gcc.gnu.org/viewcvs?root=gcc&view=rev&rev=120258
Log:
PR preprocessor/29612
* directives.c (do_linemarker): Set pfile->buffer->sysp always, not
only when new_sysp is non-zero.

* gcc.dg/cpp/pr29612-1.c: New test.
* gcc.dg/cpp/pr29612-2.c: New test.

Added:
branches/gcc-4_2-branch/gcc/testsuite/gcc.dg/cpp/pr29612-1.c
branches/gcc-4_2-branch/gcc/testsuite/gcc.dg/cpp/pr29612-2.c
Modified:
branches/gcc-4_2-branch/gcc/testsuite/ChangeLog
branches/gcc-4_2-branch/libcpp/ChangeLog
branches/gcc-4_2-branch/libcpp/directives.c


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29612



[Bug preprocessor/29612] [4.0/4.1/4.2/4.3 Regression] gcc --save-temps does not give multi-character character constant error

2006-12-29 Thread jakub at gcc dot gnu dot org


--- Comment #4 from jakub at gcc dot gnu dot org  2006-12-29 08:17 ---
Subject: Bug 29612

Author: jakub
Date: Fri Dec 29 08:17:43 2006
New Revision: 120259

URL: http://gcc.gnu.org/viewcvs?root=gcc&view=rev&rev=120259
Log:
PR preprocessor/29612
* directives.c (do_linemarker): Set pfile->buffer->sysp always, not
only when new_sysp is non-zero.

* gcc.dg/cpp/pr29612-1.c: New test.
* gcc.dg/cpp/pr29612-2.c: New test.

Added:
branches/gcc-4_1-branch/gcc/testsuite/gcc.dg/cpp/pr29612-1.c
branches/gcc-4_1-branch/gcc/testsuite/gcc.dg/cpp/pr29612-2.c
Modified:
branches/gcc-4_1-branch/gcc/testsuite/ChangeLog
branches/gcc-4_1-branch/libcpp/ChangeLog
branches/gcc-4_1-branch/libcpp/directives.c


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29612



[Bug preprocessor/29612] [4.0 Regression] gcc --save-temps does not give multi-character character constant error

2006-12-29 Thread jakub at gcc dot gnu dot org


--- Comment #5 from jakub at gcc dot gnu dot org  2006-12-29 08:25 ---
Fixed in 4.3/4.2/4.1.


-- 

jakub at gcc dot gnu dot org changed:

   What|Removed |Added

  Known to fail|4.0.4 4.1.2 4.2.0 4.3.0 |4.0.4 4.1.1
  Known to work|3.4.0   |3.4.0 4.1.2 4.2.0 4.3.0
Summary|[4.0/4.1/4.2/4.3 Regression]|[4.0 Regression] gcc --save-
   |gcc --save-temps does not   |temps does not give multi-
   |give multi-character   |character character
   |character constant error   |constant error


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29612



[Bug middle-end/30143] [4.2 only] OpenMP can produce invalid gimple

2006-12-29 Thread jakub at gcc dot gnu dot org


--- Comment #10 from jakub at gcc dot gnu dot org  2006-12-29 08:43 ---
4.2 version of the patch at
http://gcc.gnu.org/ml/gcc-patches/2006-12/msg01504.html


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30143



[Bug fortran/30320] program crash for SUM applied to zero-size array

2006-12-29 Thread toon at moene dot indiv dot nluug dot nl


--- Comment #1 from toon at moene dot indiv dot nluug dot nl  2006-12-29 
09:03 ---


*** This bug has been marked as a duplicate of 30321 ***


-- 

toon at moene dot indiv dot nluug dot nl changed:

   What|Removed |Added

 Status|UNCONFIRMED |RESOLVED
 Resolution||DUPLICATE


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30320



[Bug fortran/30321] program crash for SUM applied to zero-size array

2006-12-29 Thread toon at moene dot indiv dot nluug dot nl


--- Comment #2 from toon at moene dot indiv dot nluug dot nl  2006-12-29 
09:03 ---
*** Bug 30320 has been marked as a duplicate of this bug. ***


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30321



[Bug fortran/30321] program crash for SUM applied to zero-size array

2006-12-29 Thread tkoenig at gcc dot gnu dot org


--- Comment #3 from tkoenig at gcc dot gnu dot org  2006-12-29 09:50 ---
I'll do this.


-- 

tkoenig at gcc dot gnu dot org changed:

   What|Removed |Added

 AssignedTo|unassigned at gcc dot gnu   |tkoenig at gcc dot gnu dot
   |dot org |org
 Status|NEW |ASSIGNED
   Last reconfirmed|2006-12-28 21:20:56 |2006-12-29 09:50:13
   date||


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30321



[Bug libfortran/30162] I/O with named pipes does not work

2006-12-29 Thread tkoenig at gcc dot gnu dot org


--- Comment #6 from tkoenig at gcc dot gnu dot org  2006-12-29 09:51 ---
(In reply to comment #5)
 I will work at it.

Thanks, I'll be happy to assist with discussions and review.

(Those who can, fix; those who can't, review :-)


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30162



[Bug fortran/24325] ICE in gfc_get_function_type

2006-12-29 Thread patchapp at dberlin dot org


--- Comment #3 from patchapp at dberlin dot org  2006-12-29 12:00 ---
Subject: Bug number PR24325

A patch for this bug has been added to the patch tracker.
The mailing list url for the patch is
http://gcc.gnu.org/ml/gcc-patches/2006-12/msg01813.html


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=24325



[Bug fortran/24325] ICE in gfc_get_function_type

2006-12-29 Thread pault at gcc dot gnu dot org


--- Comment #4 from pault at gcc dot gnu dot org  2006-12-29 12:16 ---
I just submitted the patch.

Paul


-- 

pault at gcc dot gnu dot org changed:

   What|Removed |Added

 AssignedTo|unassigned at gcc dot gnu   |pault at gcc dot gnu dot org
   |dot org |
 Status|NEW |ASSIGNED
   Last reconfirmed|2006-07-19 09:07:02 |2006-12-29 12:16:21
   date||


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=24325



[Bug libstdc++/30226] FAIL: abi_check

2006-12-29 Thread paolo at gcc dot gnu dot org


--- Comment #2 from paolo at gcc dot gnu dot org  2006-12-29 12:52 ---
Subject: Bug 30226

Author: paolo
Date: Fri Dec 29 12:52:14 2006
New Revision: 120261

URL: http://gcc.gnu.org/viewcvs?root=gcc&view=rev&rev=120261
Log:
2006-12-29  Paolo Carlini  [EMAIL PROTECTED]

PR libstdc++/30226
* config/abi/pre/gnu.ver: Do not export ctype<char>::widen.

Modified:
trunk/libstdc++-v3/ChangeLog
trunk/libstdc++-v3/config/abi/pre/gnu.ver


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30226



[Bug libstdc++/30226] FAIL: abi_check

2006-12-29 Thread pcarlini at suse dot de


--- Comment #3 from pcarlini at suse dot de  2006-12-29 12:53 ---
Fixed.


-- 

pcarlini at suse dot de changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution||FIXED


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30226



[Bug libstdc++/30226] FAIL: abi_check

2006-12-29 Thread pcarlini at suse dot de


-- 

pcarlini at suse dot de changed:

   What|Removed |Added

   Target Milestone|--- |4.3.0


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30226


