Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Gabriel Dos Reis
Ian Lance Taylor [EMAIL PROTECTED] writes:

[...]

|   I believe the best option is
| going to be to take a case-by-case approach to selecting which
| optimizations should be enabled by default, and which optimizations
| should not be done except via a special extra option (by which I mean
| not a -O option, but a -f option).
| 
| I appreciate your need to move this discussion along, but I'm not
| entirely happy with what I take to be stampeding it by introducing
| what I believe would be a completely inappropriate patch to autoconf,
| rather than, say, opening a gcc bugzilla problem report for the cases
| you feel gcc should handle differently.

Ian --

  I see the Autoconf patch proposal as another option on the table; I don't
believe it is the only one.  As I said in a previous message, *if it
is infeasible to convince GCC that not all undefined behaviours are
equal*, then I support Paul's proposal as the way to move forward.

I do hope your and Richard G's constructive search for middle ground
will find echoes within the middle-end maintainers.

Although I cannot speak for Paul, I have the feeling that a recognition
of existing practice by GCC, and an undertaking to accept and generate
reasonable code for "unreasonable" code that assumes wrapping
semantics, would satisfy him.  Of course, it is not easy to define
"unreasonable" precisely; but if people, for a moment, insist less on
labeling the discussion with individuals, we may make good progress.

[...]

| I already took the time to go through all the cases for which gcc
| relies on signed overflow being undefined.  I also sent a very
| preliminary patch providing warnings for those cases.  I believe that
| we will get the best results along those lines, not by introducing an
| autoconf patch.

I do appreciate your preliminary patch -- and I'm sure Paul finds it
useful too, as a tool to advance this discussion.  I suspect that
what is not clear is whether the other side (I hate that expression)
is amenable to agreeing on that course, or whether the seemingly
prevalent attitude "but it is undefined; but it is not C" is the
opinion of the majority of middle-end maintainers.

-- Gaby


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Paul Eggert
Ian Lance Taylor [EMAIL PROTECTED] writes:

 I'm not entirely happy with what I take to be stampeding it by
 introducing what I believe would be a completely inappropriate patch
 to autoconf, rather than, say, opening a gcc bugzilla problem report
 for the cases you feel gcc should handle differently.

Ralf Wildenhues suggested bugzilla originally, but Andrew Pinski
responded http://gcc.gnu.org/ml/gcc/2006-12/msg00460.html that the
problem has been observed "many, many times" and talked about a lot
on this list, and he implied strongly that the issue was settled and
was not going to change.  And bugzilla entries complaining about the
issue (e.g., 18700, 26358, 26566, 27257, 28777) have been closed with
resolution INVALID and the workaround "use -fwrapv".  So it seemed to me
like it would have been a waste of everybody's time to open another
bugzilla entry; the recommended solution, apparently, was to use
-fwrapv.  Hence the Subject: line of this thread.

 Historically we've turned on -fstrict-aliasing at -O2.  I think it
 would take a very strong argument to handle signed overflow
 differently from strict aliasing.

I take your point that it might be cleaner to establish a new GCC
option rather than overload -O2.  That would be OK with me.  So, for
example, we might add an option to GCC, -failsafe say, to disable
unsafe optimizations that may well cause trouble with
traditional/mainstream applications.  We can then change Autoconf to
default to -O2 -failsafe.

However, in thinking about it more, I suspect most application
developers would prefer the safer optimizations to be the default, and
would prefer enabling the riskier ones only with extra -f options.
Thus, perhaps it would be better to add an option -frisky to enable
these sorts of optimizations.

Whichever way it's done, the idea is to give a convenient way to
enable/disable the more-controversial optimization strategies that
cause problems with many real-world programs that don't conform
strictly to the C standard.

 You are asserting that most programmers assume -fwrapv, but, except
 for your initial example, you are presenting examples which gcc
 already does not change.

That's true, but so far the only position that the GCC developers have
taken that users can rely on is that all these examples rely on
undefined behavior, and that unless you specify -fwrapv GCC is
entitled to break them in the future if it doesn't already break them.
For many C applications, that is not a tenable situation: right or
wrong, there are simply too many places where the code assumes wrapv
semantics.
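
A typical instance -- a representative sketch, not code quoted from any
particular package -- is an after-the-fact overflow check:

  /* Detect overflow in a + b after the fact.  This works only if
     signed addition wraps (-fwrapv); with overflow undefined, the
     compiler may fold the test to false and delete it entirely.  */
  int sum_or_fail (int a, int b)
  {
    int sum = a + b;
    if ((b > 0 && sum < a) || (b < 0 && sum > a))
      return -1;                /* hypothetical error value */
    return sum;
  }

Such checks are exactly what an optimizer is entitled to remove when
signed overflow is undefined.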

In the long run, there has to be a better way.  In the short run, all
we have now is -fwrapv, so we can use that.

 I already took the time to go through all the cases for which gcc
 relies on signed overflow being undefined.  I also sent a very
 preliminary patch providing warnings for those cases.  I believe that
 we will get the best results along those lines, not by introducing an
 autoconf patch.

I think in the long run the best results will come from a series of
changes, some to GCC, some to Autoconf, some to Gnulib, and some no
doubt elsewhere.  I welcome adding warnings to GCC so that programmers
are made aware of the problems.  If the warnings are reliable and do
not have too many false alarms, they will go a long way towards fixing
the problem.  However, I doubt whether they will solve the problem all
by themselves.

I have not installed the Autoconf patch (much less published a new
version of Autoconf with the patch) because I too would prefer a
better solution.  But the bottom line is that many, many C
applications need a solution that errs on the side of reliability, not
one that errs on the side of speed.  As far as I can tell the Autoconf
patch is so far the only proposal on the table with this essential
property.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Richard Guenther

On 12/31/06, Richard Kenner [EMAIL PROTECTED] wrote:

 Are you volunteering to audit the present cases and argue whether they
 fall in the traditional cases?

I'm certainly willing to *help*, but I'm sure there will be some cases
that will require discussion to get a consensus.

 Note that -fwrapv also _enables_ some transformations on signed
 integers that are disabled otherwise.  We for example constant fold
 -CST for -fwrapv while we do not if signed overflow is undefined.
 Would you change those?

I don't understand the rationale for not wrapping constant folding when
signed overflow is undefined: what's the harm in defining it as wrapping
for that purpose?  If it's undefined, then why does it matter what we
fold it to?  So we might as well fold it to what traditional code expects.


The reason is PR27116 (and others; see the difficulties in fixing PR27132).
We cannot both assume wrapping and assume undefined behavior during
folding at the same time - this leads to wrong code.  Citing from a
message of mine:

Other than that I'm a bit nervous if we both
introduce signed overflow because it is undefined and at the same time
pretend it doesn't happen because it is undefined.  Like given

  a - INT_MIN < a
-> a + INT_MIN < a
-> INT_MIN < 0

which is true, even for a == INT_MIN for which the original expression
didn't contain an overflow.  I.e. the following aborts

#include <limits.h>

extern void abort(void);

int foo(int a)
{
 return a - INT_MIN < a;
}

int main()
{
 if (foo(INT_MIN))
   abort ();
 return 0;
}

because we fold the comparison to 1.

This was while trying to implement folding of a - -1 to a + 1.  The
problematic folding that existed at that point was that negate_expr_p
said it would happily negate INT_MIN (to INT_MIN with the overflow
flag set), which is wrong in this context.

Richard.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Richard Guenther

On 1/1/07, Richard Kenner [EMAIL PROTECTED] wrote:

 the seemingly prevalent attitude "but it is undefined; but it is not
 C" is the opinion of the majority of middle-end maintainers.

Does anybody DISAGREE with that attitude?  It isn't valid C to assume that
signed overflow wraps.  I've heard nobody argue that it is.  The question
is how far we go in supporting existing code that's broken in this way.


I don't disagree with that attitude; I even strongly agree.  We support
broken code by options like -fno-strict-aliasing and -fwrapv.  I see this
discussion as a way to prioritize work we need to do anyway: annotate
operations with their overflow behavior (like creating new tree codes
such as WRAPPING_PLUS_EXPR), clean up existing code to make it more
obvious where we rely on which semantics, add more testcases for
corner cases, and document existing (standard-conformant) behavior
more explicitly.

Note that we had/have a similar discussion (what's a valid/useful
optimization from the user's perspective) on the IEEE math front - see the
huge thread about -ffast-math, -funsafe-math-optimizations and the
proposal to split it into -fassociative-math and -freciprocal-math.  We
also have infrastructure work to do there, like laying the groundwork to
implement proper contraction support.

Richard.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Robert Dewar

Richard Kenner wrote:

the seemingly prevalent attitude "but it is undefined; but it is not
C" is the opinion of the majority of middle-end maintainers.


Does anybody DISAGREE with that attitude?  It isn't valid C to assume that
signed overflow wraps.  I've heard nobody argue that it is.  The question
is how far we go in supporting existing code that's broken in this way.


Well, we all understand that the official language definition of C,
starting with the K&R first edition, and continuing into the successive
C standards, says that overflow is undefined or machine dependent.

That's not in doubt.

However, I am not willing to agree with the notion that code that
does not follow this rule is "broken".  It is that attitude which
can lead to inappropriate decisions; instead we should simply say,
in a less judgmental mode, that the code is not strictly standard.
We should also recognize that many (most? nearly all?) large C
programs are in practice in this non-standard category.

In practice, ANY compiler has to go beyond what the standard says
to produce a usable product. The objective of a C compiler is not
simply to conform to the standard, but rather to be a useful tool
for use by C programmers.

With C (and in fact with any language, even a language perceived
to be fiercely standardized like Ada), it is the case that these
two things are not quite the same. For instance in Ada, the set
of representation clauses that must be accepted according to the
standard is fairly small. Tight packing of 3-bit element arrays
for example is not required. But in practice many Ada programs
require support beyond this minimal subset (and indeed we have
users who depend on 3-bit tight packing). Sure we can lecture
them on standard requirements, but GNAT is more useful to them
if we support this optional feature, even if it results in far
less efficient code for some cases (the standard would allow
such components to use 4 bits, resulting in much more efficient
code).

When it comes to erroneous and bounded error cases, a compiler
could be maximally unfriendly. For example, in the GNAT binder
there is an option -p (for pessimistic), that says choose the
worst possible elaboration order, consistent with the standard
requirements. It is a useful option for checking portability
of your code. It would be 100% standards conforming to make
the -p switch the default and only setting, but it would make
GNAT much less useful, since many large programs end up making
assumptions about elaboration order (basically they set things
up to work with some particular compiler).

Again in the Ada context, there are cases where efficiency and
usability conflict. For example, suppose you use an address
clause to position a variable at a specified address, and this
address is misaligned. That's erroneous (undefined in C-speak).
We are allowed to do anything, e.g. delete the system disk,
and the user cannot officially complain. Unofficially that
user is going to be hopping mad. In practice we find that only
a very small number of our users are maximum efficiency oriented
(some of them are happy to consider moving to Java even if it
costs them a factor of 10 in efficiency as an extreme case).
So in the address clause case, we by default generate checking
code that raises Program_Error if the address is misaligned.
That's inefficient, and unnecessary. It can be turned off, but
by default it's on even at the highest optimization level.

By using the word "broken", you are implicitly criticizing
the code, and contributing to an environment where compiler
writers are, in my view, a little too ready to write off the
programmers of such "broken" code as incompetent, and not
worth worrying about.  That can lead to the wrong balance
of decisions in tricky trade-off cases.


Re: RFC: SMS problem with emit_copy_of_insn_after copying REG_NOTEs

2007-01-01 Thread Jan Hubicka
 Hi,
 Sorry for possibly causing confusion. I had tested the patch on my ICE
 testcase and bootstrapped with --enable-languages=c, but didn't run the
 full bootstrap. I'm bootstrapping Andrew's latest patch on ppc-linux
 and testing it on SPU.

Vladimir,
I bootstrapped/regtested the patch myself on i686 before committing it,
so the rule was met here.  Unfortunately i686 doesn't seem to show the
regression.  I've bootstrapped/regtested x86_64 and i686 with Andrew's
patch and it all works fine.

Honza


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Geert Bosch


On Dec 31, 2006, at 19:13, Daniel Berlin wrote:

Note the distinct drop in performance across almost all the benchmarks
on Dec 30, including popular programs like bzip2 and gzip.

Not so.

To my eyes, the SPECint 2000 mean went UP by about 1% for the
base -O3 compilation.  The peak enabled more unrolling, which
is helped by the additional range information provided by the absence
of -fwrapv.

So, I'd say this run would suggest enabling -fwrapv for
at least -O1 and -O2.  Also, note that we have never
focused on performance with -fwrapv, and it is quite
likely there is quite some improvement possible.

I'd really like to see -fwrapv used by default for -O, -O[s12].
The benefit of many programs moving from undefined semantics
to implementation-defined semantics (overflow wraps, as in
old compilers) far outweighs even an average performance loss
of 2% as seen in SPECfp.

As undefined execution can result in arbitrary badness,
this is really at odds with the increasing need for many
programs to be secure. Since it is almost impossible to
prove that programs do not have signed integer overflow,
it makes far more sense to define behavior in such cases.
Note that we're talking defaults: for not-so-sophisticated
programmers, we should focus on being safe.  People smart
enough to prove their program can't cause signed integer
overflow can certainly figure out compiler options to
disable -fwrapv.

  -Grt



Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Ian Lance Taylor
Paul Eggert [EMAIL PROTECTED] writes:

 Ralf Wildenhues suggested bugzilla originally, but Andrew Pinski
 responded http://gcc.gnu.org/ml/gcc/2006-12/msg00460.html that the
 problem has been observed "many, many times" and talked about a lot
 on this list, and he implied strongly that the issue was settled and
 was not going to change.  And bugzilla entries complaining about the
 issue (e.g., 18700, 26358, 26566, 27257, 28777) have been closed with
 resolution INVALID and the workaround "use -fwrapv".  So it seemed to me
 like it would have been a waste of everybody's time to open another
 bugzilla entry; the recommended solution, apparently, was to use
 -fwrapv.  Hence the Subject: line of this thread.

Well, Andrew does not speak for the gcc community as a whole (and
neither do I).  Looking through your list of bugs:

18700: I believe this is correct default behaviour.
26358: I think this is questionable default behaviour.
26566: I think this is questionable default behaviour.
27257: I think this is correct default behaviour.
28777: I think this is questionable default behaviour.

The common theme of these five cases is that I think that gcc should
not by default use the fact that signed overflow is undefined to
completely remove a loop termination test.  At least, not without a
warning.


  Historically we've turned on -fstrict-aliasing at -O2.  I think it
  would take a very strong argument to handle signed overflow
  differently from strict aliasing.
 
 I take your point that it might be cleaner to establish a new GCC
 option rather than overload -O2.  That would be OK with me.  So, for
 example, we might add an option to GCC, -failsafe say, to disable
 unsafe optimizations that may well cause trouble with
 traditional/mainstream applications.  We can then change Autoconf to
 default to -O2 -failsafe.
 
 However, in thinking about it more, I suspect most application
 developers would prefer the safer optimizations to be the default, and
 would prefer enabling the riskier ones only with extra -f options.
 Thus, perhaps it would be better to add an option -frisky to enable
 these sorts of optimizations.

I don't agree with this point.  There is a substantial number of
application developers who would prefer -failsafe.  There is a
substantial number who would prefer -frisky.  We don't know which set
is larger.  We get a lot of bug reports about missed optimizations.

Also, it does not make sense to me to lump together all potentially
troublesome optimizations under a single name.  They are not all the
same.


 I think in the long run the best results will come from a series of
 changes, some to GCC, some to Autoconf, some to Gnulib, and some no
 doubt elsewhere.  I welcome adding warnings to GCC so that programmers
 are made aware of the problems.  If the warnings are reliable and do
 not have too many false alarms, they will go a long way towards fixing
 the problem.  However, I doubt whether they will solve the problem all
 by themselves.
 
 I have not installed the Autoconf patch (much less published a new
 version of Autoconf with the patch) because I too would prefer a
 better solution.  But the bottom line is that many, many C
 applications need a solution that errs on the side of reliability, not
 one that errs on the side of speed.  As far as I can tell the Autoconf
 patch is so far the only proposal on the table with this essential
 property.

I don't really see how you move from the needs of many, many C
applications to the autoconf patch.  Many, many C applications do not
use autoconf at all.

I think I've already put another proposal on the table, but maybe I
haven't described it properly:

1) Add an option like -Warnv to issue warnings about cases where gcc
   implements an optimization which relies on the fact that signed
   overflow is undefined.

2) Add an option like -fstrict-signed-overflow which controls those
   cases which appear to be risky.  Turn on that option at -O2.

It's important to realize that -Warnv will only issue a warning for an
optimization which actually transforms the code.  Every case where
-Warnv will issue a warning is a case where -fwrapv will inhibit an
optimization.  Whether this will issue too many false positives is
difficult to tell at this point.  A false positive will take the form
"this optimization is OK because I know that the values in question
cannot overflow".
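
As a hypothetical illustration: gcc may rely on undefined overflow to
conclude that the test below is always true, and -Warnv would then warn:

  /* gcc may fold i + 1 > i to 1, since i + 1 cannot overflow in a
     conforming program.  If the caller guarantees i < INT_MAX, the
     warning is a false positive of exactly the form above.  */
  int has_successor (int i)
  {
    return i + 1 > i;
  }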

Ian


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Joseph S. Myers
On Mon, 1 Jan 2007, Geert Bosch wrote:

 As undefined execution can result in arbitrary badness,
 this is really at odds with the increasing need for many
 programs to be secure. Since it is almost impossible to
 prove that programs do not have signed integer overflow,
 it makes far more sense to define behavior in such cases.

For a program to be secure in the face of overflow, it will generally need 
explicit checks for overflow, and so -fwrapv will only help if such checks 
have been written under the presumption of -fwrapv semantics.  Unchecked 
overflow will generally be a security hole whether or not -fwrapv is used.  
-ftrapv *may* be safer, by converting overflows into denial-of-service, 
but that may not always be safe either (and is probably very slow), and 
explicit checks would still be needed for unsigned overflow: computing an 
allocation size as n*sizeof(T) will likely use unsigned arithmetic, and 
undetected overflow is a problem there as well.
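
A sketch of such an explicit check on the allocation size (a
hypothetical helper, not from any particular code base):

  #include <stdint.h>
  #include <stdlib.h>

  /* Allocate n objects of the given size, detecting wraparound in
     n * size.  Unsigned arithmetic wraps by definition, so neither
     -fwrapv nor its absence changes this; the test must be explicit.  */
  void *checked_alloc (size_t n, size_t size)
  {
    if (size != 0 && n > SIZE_MAX / size)
      return NULL;              /* n * size would wrap */
    return malloc (n * size);
  }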

Examples of overflow checks that require -fwrapv semantics have been noted 
in this thread, but users expecting security through such checks need to 
audit the rest of their code for unchecked overflow both signed and 
unsigned.  If they do such an audit and go for checks requiring -fwrapv 
rather than checks that work without -fwrapv, then they can add -fwrapv 
(or autoconf could provide AC_WRAPV for such programs to use, that would 
add -fwrapv to both the default CFLAGS and any provided by the user).

Note that if you want wrapping semantics with GCC you can cast to 
unsigned, do unsigned arithmetic and cast back, since GCC defines the 
results of converting out-of-range unsigned values to signed types.  
(Likewise, GCC provides two different ways of allowing aliasing locally 
rather than using -fno-strict-aliasing globally: you can use unions or the 
may_alias attribute.)
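
Concretely, the unsigned detour looks like this (a minimal sketch):

  /* Wrapping signed addition without -fwrapv: unsigned arithmetic
     wraps by definition, and GCC defines the conversion of
     out-of-range unsigned values back to signed types as modulo
     reduction.  */
  int wrapping_add (int a, int b)
  {
    return (int) ((unsigned int) a + (unsigned int) b);
  }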

-- 
Joseph S. Myers
[EMAIL PROTECTED]


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Robert Dewar

Geert Bosch wrote:


As undefined execution can result in arbitrary badness,
this is really at odds with the increasing need for many
programs to be secure. Since it is almost impossible to
prove that programs do not have signed integer overflow,


That seems a bit pessimistic, given the work Praxis
has done in the area of proving SPARK programs exception
free. Potentially these same techniques could work with
programs written in a suitable subset of C (and for
highly secure programs, you would want to use such a
subset in any case).

Still, in practical terms, it is true that overflow
being undefined is unpleasant.  In Ada terms, it would
have seemed better in the C standard to rein in the
effect of overflow, for instance, merely saying that
the result is an implementation-defined value of the
type, or the program is terminated.  Any other outcome
seems unreasonable, and in practice unlikely.

The important thing is to stop the optimizer from
reasoning arbitrarily deeply from lack of overflow.

For example, if we have

    if (Password == Expected_Password)
       delete_system_disk;
    else
       xxx

and the optimizer figures out that xxx will unconditionally
cause signed integer overflow (presumably due to some bug),
we don't want the optimizer saying

"hmm, if that check is false, the result is undefined, so
I can do anything I like; for 'anything' I will choose to
call delete_system_disk, so I can legitimately remove
the check on the password being correct."

This kind of back propagation, while strictly allowed
by the standard, seems totally unacceptable to me.  The
trouble is (we have been through this in some detail in
the Ada standardization world) that it is hard to define
exactly and formally what you mean by this kind of
back propagation.



Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Paul Schlie
 Ian Lance Taylor wrote:
 ...
 I don't personally see that as the question.  This code is
 undefined, and, therefore, is in some sense not C.  If we take
 any other attitude, then we will be defining and supporting
 a different language.  I think that relatively few people want
 the language C plus signed integers wrap, which is the language
 we support with the -fwrapv option.
 ...

No, all such code is perfectly legal C, specified to have undefined
semantics in the instance of signed overflow; which, as seems clear from
the excerpts noted by Gabriel http://gcc.gnu.org/ml/gcc/2006-12/msg00763.html,
may be given whatever behavior the implementation desires.

Thereby full liberty is given to the implementers of the compiler to
apply whatever semantics are deemed most desirable, regardless of their
practical utility or historical compatibility.  Ultimately the issue is
to what degree optimizations should preserve semantics otherwise
expressed and/or historically expected in their absence; and how, when
deemed desirable, they should be invoked (i.e. by named exception or by
default at -Ox).




Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Richard Kenner
 Still, in practical terms, it is true that overflow
 being undefined is unpleasant. In Ada terms, it would
 have seemed better in the C standard to rein in the
 effect of overflow, for instance, merely saying that
 the result is an implementation defined value of the
 type, or the program is terminated. Any other outcome
 seems unreasonable, and in practice unlikely.

My feeling is that GCC, even in its most aggressive mode, should treat
overflow as implementation-dependent.  I don't think that there's any
optimization that depends on it being undefined in the full sense.


Re: RFC: SMS problem with emit_copy_of_insn_after copying REG_NOTEs

2007-01-01 Thread Vladimir Yanovsky

I've bootstrapped C/C++/Fortran OK on PPC; make check-gcc is running now.

Thanks,
Vladimir

On 1/1/07, Jan Hubicka [EMAIL PROTECTED] wrote:

 Hi,
 Sorry for possibly causing confusion. I had tested the patch on my ICE
 testcase and bootstrapped with --enable-languages=c, but didn't run the
 full bootstrap. I'm bootstrapping Andrew's latest patch on ppc-linux
 and testing it on SPU.

Vladimir,
I bootstrapped/regtested the patch myself on i686 before committing it,
so the rule was met here.  Unfortunately i686 doesn't seem to show the
regression.  I've bootstrapped/regtested x86_64 and i686 with Andrew's
patch and it all works fine.

Honza



Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Paul Eggert
Ian Lance Taylor [EMAIL PROTECTED] writes:

 Also, it does not make sense to me to lump together all potentially
 troublesome optimizations under a single name.

As a compiler developer, you see the trees.  But most users just see a
forest and want things to be simple.  Even adding a single binary
switch (-fno-risky/-frisky) will be an extra level of complexity that
most users don't particularly want to know about.  Requiring users to
worry about lots of little switches (at least -fwrapv/-fundefinedv/-ftrapv,
-fstrict-signed-overflow/-fno-strict-signed-overflow, and
-fstrict-aliasing/-fno-strict-aliasing, and probably more) makes GCC
harder to use conveniently, and will make things more likely to go
wrong in practical use.

That being said, I guess I wouldn't mind the extra complexity if
-fno-risky is the default at -O2.  The default would be simple, which
is good enough.

 I don't really see how you move from the needs of many, many C
 applications to the autoconf patch.  Many, many C applications do not
 use autoconf at all.

Sure, and from the GCC point of view it would be better to address
this problem at the GCC level, since we want to encourage GCC's use.

However, we will probably want to address this at the Autoconf level
too, in some form, since we also want to encourage GNU software to be
portable to other compilers with this problem, which include icc and
xlc.

I think we will also want to address the problem at the Gnulib level
too, since we want it to be easier to write software that is portable
even to compilers or compiler options that have undefined behavior on
signed overflow -- this will help answer the question "What do I do
when -Warnv complains?".
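
One likely answer is a check that avoids the overflow instead of
detecting it afterwards -- a sketch of the sort of idiom Gnulib could
document:

  #include <limits.h>

  /* True if a + b would overflow.  No signed overflow is performed,
     so this is safe with or without -fwrapv, and under compilers
     like icc and xlc as well.  */
  int addition_overflows (int a, int b)
  {
    return (b > 0 && a > INT_MAX - b)
        || (b < 0 && a < INT_MIN - b);
  }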

 1) Add an option like -Warnv to issue warnings about cases where gcc
implements an optimization which relies on the fact that signed
overflow is undefined.

 2) Add an option like -fstrict-signed-overflow which controls those
cases which appear to be risky.  Turn on that option at -O2.

This sounds like a good approach overall, but the devil is in the
details.  As you mentioned, (1) might have too many false positives.

More important, we don't yet have an easy way to characterize the
cases where (2) would apply.  For (2), we need a simple, documented
rule that programmers can easily understand, so that they can easily
verify that C code is safe: for most applications this is more
important than squeezing out the last ounce of performance.  Having
the rule merely be "does your version of GCC warn about it on your
platform?" doesn't really cut it.

So far, the only simple rule that has been proposed is -fwrapv, which
many have said is too conservative as it inhibits too many useful
optimizations for some numeric applications.  I'm not yet convinced
that -fwrapv harms performance that much for most real-world
applications, but if we can come up with a less-conservative (but
still simple) rule, that would be fine.  My worry, though, is that the
less-conservative rule will be too complicated.

 There is a substantial number of application developers who would
 prefer -failsafe.  There is a substantial number who would prefer
 -frisky.  We don't know which set is larger.

It's a controversial point, true.  To help resolve this difference of
opinion, we could post a call for comments on info-gnu.  The call
should be carefully worded, to avoid unduly biasing the results.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Gabriel Dos Reis
Ian Lance Taylor [EMAIL PROTECTED] writes:

| Gabriel Dos Reis [EMAIL PROTECTED] writes:
| 
|  I do hope your and Richard G's constructive search for middle ground
|  will find echoes within the middle-end maintainers.
| 
| This seems likely, since Richard and I are two of the three middle-end
| maintainers, and I don't recall hearing from the other one in this
| discussion.

I'm glad to hear that, and I do hope that will lay the groundwork for
moving forward.

|  I do appreciate your preliminary patch -- and I'm sure Paul finds it
|  useful too, as a tool to advance this discussion.  I suspect that
|  what is not clear is whether the other side (I hate that expression)
|  is amenable to agreeing on that course, or whether the seemingly
|  prevalent attitude "but it is undefined; but it is not C" is the
|  opinion of the majority of middle-end maintainers.
| 
| I don't personally see that as the question.  This code is undefined,
| and, therefore, is in some sense not C. 

This is probably an area where opinions diverge.  Nobody is arguing
about what the ISO C standard says: signed integer arithmetic overflow
is undefined behaviour.  Where opinions diverge, I suspect, is in how
undefined behaviour should be interpreted.
As I've produced evidence before, undefined behaviour is a "black
hole" term that covers both erroneous programs whose diagnosis
would require solving the halting problem, and a hook for the
implementation to provide useful conformant extensions.  Now, we are
faced with either taking advantage of that limbo to do some code
transformations, or providing another useful extension (e.g. LIA-1).

Consequently, saying "it is not C" is neither accurate nor helpful in
making decisions that move us forward in a useful way -- what the ISO C
standard defines (and ISO C++ for that matter) is a family of
languages called C.

However, as I said earlier, I very much support your effort and do
hope it finds echo with the middle-end maintainers.

| If we take any other
| attitude, then we will be defining and supporting a different
| language.  I think that relatively few people want the language C
| plus signed integers wrap, which is the language we support with the
| -fwrapv option.
| 
| What I think we need to do is introduce a warning option to optionally
| warn about optimizations which are unusually risky for existing code.
| And I think we need to provide more fine-grained control over which
| optimizations we implement.

Yes, I agree.  I filed a PR to request -Wundefined.  I did not raise
it to blocker for GCC 4.3.0, but I would not mind if a middle-end
maintainer does :-)

-- Gaby


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Andrew Pinski
On Mon, 2007-01-01 at 10:00 -0800, Paul Eggert wrote:
 Ian Lance Taylor [EMAIL PROTECTED] writes:
 
  Also, it does not make sense to me to lump together all potentially
  troublesome optimizations under a single name.
 
 As a compiler developer, you see the trees.  But most users just see a
 forest and want things to be simple.  Even adding a single binary
 switch (-fno-risky/-frisky) will be an extra level of complexity that
 most users don't particularly want to know about.  Requiring users to
 worry about lots of little switches (at least -fwrapv/-fundefinedv/-ftrapv,
 -fstrict-signed-overflow/-fno-strict-signed-overflow, and
 -fstrict-aliasing/-fno-strict-aliasing, and probably more) makes GCC
 harder to use conveniently, and will make things more likely to go
 wrong in practical use.

Then the question is why do C developers act differently from Fortran
developers when it comes to undefinedness?

Look at Fortran argument aliasing: we get almost no bugs about that
undefinedness.  We have an option to change the way argument aliasing
works, in the same way we have an option for signed overflow.  I don't
see why overflow should be any different from argument aliasing.

Maybe the problem is that C (and C++) developers don't know the language
they are writing in and expect anything they wrote to work.  I think this
is a problem with the teaching of C (and C++) today, and it is hard for
us to fix that issue.  We can try to educate people by stating that we
treat signed overflow as undefined in a more obvious place in the
documentation, but this is not going to help in general, as developers
don't read the docs.

The problem with the current signed-overflow-undefined optimizations is
that they conflict: in some, overflow wraps; in others, it saturates
(most of VRP, and a-'0' < 9); and in yet others, it extends (a*10/5).
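
For instance, folding a*10/5 to a*2 behaves as if the intermediate
a*10 were computed in a wider type -- a sketch of the divergence:

  /* With 32-bit int and a = 500000000: a*10 wraps to 705032704, so
     evaluating a*10/5 with wrapping gives 141006540, while the
     folded a*2 gives 1000000000.  The fold is valid only because
     the overflow is undefined.  */
  int f (int a) { return a * 10 / 5; }   /* may be folded to a * 2 */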

Thanks,
Andrew Pinski



Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Robert Dewar

Andrew Pinski wrote:


Look at Fortran argument aliasing: we get almost no bugs about that
undefinedness.  We have an option to change the way argument aliasing
works, in the same way we have an option for signed overflow.  I don't
see why overflow should be any different from argument aliasing.


Well, you should see :-)

The point is that in practice many C compilers have implemented wrapping
in the past, and large bodies of C code and the C knowledge base have
developed assuming (correctly in practice) that wrapping could be relied
on. I don't think you can begin to make the same statement for Fortran
argument aliasing (and I speak as someone who developed hundreds of
thousands of lines of Fortran in the early days -- late 60's and early
70's).


Maybe the problem is that C (and C++) developers don't know the language
they are writing in and expect anything they wrote to work.


That's not right, and I worry that this attitude stands in the way of
getting the right compromise approach.  In fact a lot of competent C
programmers know the language pretty well, but they know the language
that is used and works (quite portably, even) in practice, not the
language of the standard -- which in this case, if you take undefined
at its worst literally, is a difficult language to write in when
it comes to checking for overflow, as has been clear from the discussion
in this thread.



I think this is
a problem with the teaching of C (and C++) today, and it is hard for us
to fix that issue.  We can try to educate people by stating that we treat
signed overflow as undefined in a more obvious place in the
documentation, but this is not going to help in general, as developers
don't read the docs.


Education won't help deal with large bodies of legacy C that are written
with wrapping assumed (including, as we have seen, gcc itself and other
components of the GNU system).  Over time such code can be fixed, but
it is more likely that people will just resort to -fwrapv in practice,
which seems non-optimal to me (I am pretty sure that a moderate
intermediate position between overflow-always-undefined and
overflow-always-wraps can be devised which will in practice
work out fine on the efficiency and portability sides).

In terms of reading the documents, warning messages (and error messages
where possible) are probably more help in teaching the fine details of
the language, and I think everyone agrees that where possible, it would
be a good thing if gcc could warn that it was taking advantage of
knowing that signed overflow cannot occur (i.e. should not occur) in
a given situation.


The problem with the current signed-overflow-undefined optimizations is
that they conflict: in some, overflow wraps; in others, it saturates
(most of VRP, and a-'0' < 9); and in yet others, it extends (a*10/5).


Yes, and you can't paint them all with a broad brush.  If you look at the
last one for instance (a*10/5), it's a perfectly reasonable optimization
to optimize this to a*2, and indeed Ada introduces a special rule for
intermediate results which allows them to be correct instead of
overflowing, precisely to allow this optimization (and to allow the more
general use of a double-precision intermediate result for computing
a*b/c).  Interestingly, this special Ada rule does introduce some
degree of non-portability, but there is often a trade-off between
portability and efficiency.






Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Richard Kenner
 More important, we don't yet have an easy way to characterize the
 cases where (2) would apply.  For (2), we need a simple, documented
 rule that programmers can easily understand, so that they can easily
 verify that C code is safe

I'm not sure what you mean: there's the C standard.  That says exactly
what you are allowed to rely on and what you can't.  That's the rule that
programmers can easily understand.  The fact that we choose to avoid
breaking as much code as possible that DOESN'T follow that rule doesn't
mean we want to encourage people to write more code like that in the
future: in fact quite the contrary.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Joe Buck
On Mon, Jan 01, 2007 at 10:24:36AM -0800, Andrew Pinski wrote:
 Then the question is why does C developers act differently than Fortran
 developers when it comes to undefinedness?

In the case of int overflow wrapping, I think it's because the Bell Labs
folks appeared to assume wrapping semantics from Day One in all their
code.  This meant that no one could write a C compiler that trapped int
overflows, or Unix wouldn't work.

There's a concept in Anglo-American law called an "easement", which
basically means that if a property owner allows a use of his/her property
for many years, to the point where the surrounding community counts on it,
the owner cannot suddenly forbid the use (other countries have different
mechanisms to support pass-through rights, and the US has gotten more
propertarian in recent years, but the point remains).  So, it appears
that there is a nearly 20-year-old easement in C, much as I dislike
saying so.

That said, we don't want to cripple loop optimization.  So this raises
the question of whether there is some rule or heuristic that can be used
that will allow loop optimization to assume that there is no overflow,
while not breaking the uses that we see in programs like gcc.

An alternative path is to try to get rid of the easement, by helping
everyone fix their code.  We'd have to see just how bad the problem is.
One way is to build distros with trapping integer arithmetic and see what
breaks, though that won't find all errors.

 



Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Joe Buck
On Mon, Jan 01, 2007 at 07:33:06AM -0500, Richard Kenner wrote:
  the seemingly prevalent attitude "but it is undefined; but it is not
  C" is the opinion of the majority of middle-end maintainers.
 
 Does anybody DISAGREE with that attitude?  It isn't valid C to assume that
 signed overflow wraps.  I've heard nobody argue that it is.  The question
 is how far we go in supporting existing code that's broken in this way.

The problem is that often-unconscious assumptions that int overflow wraps
are very widespread.  If the compiler won't build GNU/Linux distros, then
we have a serious problem no matter what the standard says.

For one thing, we are hypocrites if we tell people that gcc breaking
their code that assumes -fwrapv semantics is not our problem, while gcc
itself assumes -fwrapv in several places!

We could say that the uses in fold-const are ok while others aren't, but
that would require coming up with a rule that distinguishes the cases.

 


Re: [PATCH] Relocated compiler should not look in $prefix.

2007-01-01 Thread Gerald Pfeifer
On Tue, 12 Dec 2006, Mark Mitchell wrote:
 If you want to make a patch, and Gerald approves it, it's fine by me.
 But, fwprop is described as a new feature (faster compiler, better
 code), and the build system affects people building the compiler.  The
 change we're talking about seems to affect only people debugging the
 compiler.

I think these indeed are relevant differences.  At the same time, I am
sure there are GCC hackers/power users beyond those regularly
reading all of gcc@ and gcc-patches@, and it may be desirable to announce
such changes to them as well.

So, how about adding a section "Developer-relevant Changes" at the end of
gcc-4.2/changes.html?

Andrew, if you want to hack up a patch towards that end, I'll be glad to
review/approve it (unless some others strongly disagree, in which case
we'll need to consider this in more detail).

Gerald


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Bruce Korb
Daniel Berlin wrote:
 Sorry, but it's rather impossible to argue against someone who seems
 to believe users should not be responsible and held responsible for
 knowing the rules of the language they are programming in.  Down this
 path, madness lies.
 strict aliasing is really just what the standard says about
 accessing memory and objects.
 It's not strict, or arcane argumentation. It's just what a C
 compiler is supposed to enforce.
 
 If ya'll want to invent and program in some relaxed variant of C with
 fewer rules, fine. But don't call it C and don't pretend it is the C
 language.

The point is:  Which C language?  The one I teethed on (circa 1974)?
The classic, with proper structure extensions?  1989?  1999?
The current draft proposal?

Changing syntax and semantics should not be impossible (it's being done),
but it must be glacially slow, deliberate, with compelling need, and
with years of warning around it.  And not just warnings seen and heard
only by folks who participate in standards committees and compiler
development, but by real warnings in compilers that wake people to the
fact that the semantics of their coding style are in the process of being
altered, so watch out.  Instead, the attitude seems to be that if you
do not have a full, nuanced grasp of the meaning of the standard that
your compiler was written to, well, then, you just should not consider
yourself a professional programmer.  OTOH, if professional programmers
should have such a grasp, then why is it that all these language lawyers
are spending so much time arguing over what everyone ought to be able to
understand from the documents themselves?

My main point is simply this:  change the language when there is compelling
need.  When there is such a need, warn about it for a few years.
Not everybody (actually, few people) reads all the literature.
Nearly everybody likely does read compiler messages, however.

WRT strict aliasing, I've never seen any data that indicated that the
language change was compelling.  Consequently, as best I can tell it
was a marginal optimization improvement.  So, I doubt its value.
Still, it should have had compiler warnings in advance.


Re: GCC optimizes integer overflow: bug or feature?

2007-01-01 Thread Gerald Pfeifer
On Tue, 19 Dec 2006, Ian Lance Taylor wrote:
 Here is a quick list of optimizations that mainline gcc performs which
 rely on the idea that signed overflow is undefined.  All the types
 are, of course, signed.  I may have made some mistakes.  I think this
 gives a good feel for the sorts of optimizations we can make with this
 assumption.

Thanks for compiling this exhaustive list, Ian!

Currently our documentation on -fwrapv is rather short and does not
provide examples or anything to provide such a feel:

  This option instructs the compiler to assume that signed arithmetic 
  overflow of addition, subtraction and multiplication wraps around
  using twos-complement representation.  This flag enables some 
  optimizations and disables others.  This option is enabled by default 
  for the Java front-end, as required by the Java language specification.

This flag enables some optimizations and disables others is all we
have.  I wonder whether you could perhaps add (part of) your list to
this documentation?  Or would that be too specific?

Gerald


Re: RFC: SMS problem with emit_copy_of_insn_after copying REG_NOTEs

2007-01-01 Thread Jan Hubicka
Hi,
I've committed the following patch that fixes the obvious problem of
calling copy_insn_1 for an INSN_LIST argument.  It seems to solve the
problems I can reproduce, and it bootstraps on x86_64-linux/i686-linux and
Darwin (thanks to andreast).  The patch was preapproved by Ian.  This is
meant as a fast fix to avoid bootstrap breakage.  Andrew's optimization
still makes sense as a microoptimization, and the nested libcall issue
probably ought to be resolved, but that can be dealt with incrementally.

My apologies for the problems.
Honza

Index: ChangeLog
===
--- ChangeLog   (revision 120315)
+++ ChangeLog   (working copy)
@@ -1,3 +1,8 @@
+2007-01-01  Jan Hubicka  [EMAIL PROTECTED]
+
+   * emit-rtl.c (emit_copy_of_insn_after): Do not call copy_insn_1 for
+   INSN_LIST.
+
 2007-01-01  Mike Stump  [EMAIL PROTECTED]
 
* configure.ac (HAVE_GAS_LITERAL16): Add autoconf check for
Index: emit-rtl.c
===
--- emit-rtl.c  (revision 120313)
+++ emit-rtl.c  (working copy)
@@ -5302,7 +5302,7 @@ emit_copy_of_insn_after (rtx insn, rtx a
else
  REG_NOTES (new)
   = gen_rtx_INSN_LIST (REG_NOTE_KIND (link),
-copy_insn_1 (XEXP (link, 0)),  REG_NOTES (new));
+XEXP (link, 0),  REG_NOTES (new));
   }
 
   /* Fix the libcall sequences.  */


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Bruce Korb

 I don't agree with this point.  There is a substantial number of
 application developers who would prefer -failsafe.  There is a
 substantial number who would prefer -frisky.  We don't know which set
 is larger.  We get a lot of bug reports about missed optimizations.

Six vs. half a dozen; picking one is an excellent idea.
Personally, -frisky makes a bit more sense, because performance-critical
application writers tend to be more tuned in to tuning.

 Also, it does not make sense to me to lump together all potentially
 troublesome optimizations under a single name.  They are not all the
 same.

No, but anything that makes things easier for application writers
is going to be useful.  At some point, it may be useful to look
carefully at all the -Wmumble/-fgrumble options and provide
convenient ways to clump them without requiring surgery.
Another day.

 I don't really see how you move from the needs of many, many C
 applications to the autoconf patch.  Many, many C applications do not
 use autoconf at all.

Autoconf handles many, many C applications, even if there are very
few when compared to the universe of all C applications. :)

 I think I've already put another proposal on the table, but maybe I
 haven't described it properly:
 
 1) Add an option like -Warnv to issue warnings about cases where gcc
implements an optimization which relies on the fact that signed
overflow is undefined.

This is completely necessary without regard to any other decisions.

 2) Add an option like -fstrict-signed-overflow which controls those
cases which appear to be risky.  Turn on that option at -O2.

Not a good plan.  -O2 should be constrained to disrupting very few
applications (e.g. loop unrolling seems unlikely to cause problems).
Defer the "appear to be risky" stuff until several years after the
warning is out.  Please.

 It's important to realize that -Warnv will only issue a warning for an
 optimization which actually transforms the code.  Every case where
 -Warnv will issue a warning is a case where -fwrapv will inhibit an
 optimization.  Whether this will issue too many false positives is
 difficult to tell at this point.  A false positive will take the form
 "this optimization is OK because I know that the values in question
 cannot overflow".

Rethinking wrapping is going to take a lot of effort and will need
a lot of time.

Richard Kenner wrote:
 I'm not sure what you mean: there's the C standard.
We have many standards, starting with K&R v1 through the current draft.
Which do you call "the C standard"?


Re: GCC optimizes integer overflow: bug or feature?

2007-01-01 Thread Richard Kenner
 Currently our documentation on -fwrapv is rather short and does not
 provide examples or anything to provide such a feel:
 
   This option instructs the compiler to assume that signed arithmetic 
   overflow of addition, subtraction and multiplication wraps around
   using twos-complement representation.  This flag enables some 
   optimizations and disables others.  This option is enabled by default 
   for the Java front-end, as required by the Java language specification.
 
 This flag enables some optimizations and disables others is all we
 have.  I wonder whether you could perhaps add (part of) your list to
 this documentation?  Or would that be too specific?

Might it be better to describe this option in terms of the effect on
the language, namely to say that it *defines* signed overflow in terms
of wrapping, rather than making what would necessarily be vague comments
about optimizations?


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Richard Kenner
 Changing syntax and semantics should not be impossible (it's being done),

What change?  There has never been a version of the C language, all the
way from K&R v1 to the present, that defined signed overflow.

The problem is that previous compilers never took advantage of the
permission to make it undefined, and so people wrote programs that were
technically incorrect but worked with existing compilers.

 but by real warnings in compilers that wake people to the
 fact that the semantics of their coding style are in the process of being
 altered, so watch out. 

The problem, as has been discussed here at length, is that there's no WAY
for the compiler to determine in any reliable manner whether a particular
construct is depending on wrapping semantics or not.

Suppose you have the straightforward:

int a = b + c;

Might b+c overflow in that program?  Might the author be depending on that
overflow wrapping?  There's simply no way to know.

You certainly don't want to produce a warning on every addition operation
where you can't prove that it won't overflow because that would produce so
many warnings as to be useless.

That means it's impossible to do what you are suggesting, which is to
warn about every usage whose definition would change, as you consider it.

The BEST you can do is to look at each time that the compiler would want
to make a transformation (optimization) based on assuming that overflow
is undefined and warn about THAT if it can't prove that the overflow
can't occur.  However, there are still two problems with this:

(1) It will only detect cases where the present optimization technology
would make a change; later advances in the compiler might affect more of them.
(2) The false-positive rate of such a warning might well ALSO be too high
to be useful if, in real programs, most of these cases DO NOT in practice
overflow, but the compiler isn't able to prove it.

 why is it all these language lawyers are spending so much time
 arguing over what everyone ought to be able to understand from the
 documents themselves?

The discussions in this thread aren't about what is valid standard C:
everybody agrees on that.  The issue is what we should do about the
cases in legacy programs that are invalid.

 My main point is simply this:  change the language when there is compelling
 need.  

There is no proposal to change the language, nor would this list be the proper
place for such a proposal anyway.  

 WRT strict aliasing, I've never seen any data that indicated that the
 language change was compelling.  Consequently, as best I can tell it
 was a marginal optimization improvement.  So, I doubt its value.

The problem there is that strict aliasing is in itself not an optimization,
but merely provides data for use by nearly all the optimizers.  So the best
we can do is to measure the effect having that data has on optimizers right
now.  As optimizers get more sophisticated, there are two competing effects:
first, they might be able to do a better job at getting the data without
relying on type-based aliasing.  But also, they might make better use of
the data.

The other thing about aliasing rules is that code that violates them is
also harder to read and understand because knowing what can and can't
alias is also important to the human reader. So there are multiple reasons
for wanting code to conform to these rules.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Richard Kenner
 Richard Kenner wrote:
  I'm not sure what you mean: there's the C standard.
 We have many standards, starting with KRv1 through the current draft.
 Which do you call, the C standard?

The current one.  All others are previous C standards. However, it
doesn't matter in this case since ALL of them have signed overflow
being undefined.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Paul Eggert
Mark Mitchell [EMAIL PROTECTED] writes:

 * Dan Berlin says that xlc assumes signed overflow never occurs, gets
 much better performance as a result, and that nobody has complained.

Most likely xlc and icc have been used to compile the gnulib
mktime-checking code many times without incident (though I can't
prove this, as I don't use xlc or icc myself).  If so, icc and
xlc do not optimize away the overflow-checking test in question
even though C99 entitles them to do so; this might help explain
why they get fewer complaints about this sort of thing.

 I haven't yet seen that anyone has actually tried the obvious: run SPEC
 with and without -fwrapv.

Richard Guenther added -fwrapv to the December 30 run of SPEC at
http://www.suse.de/~gcctest/SPEC/CFP/sb-vangelis-head-64/recent.html
and
http://www.suse.de/~gcctest/SPEC/CINT/sb-vangelis-head-64/recent.html.
Daniel Berlin and Geert Bosch disagreed about how to interpret
these results; see http://gcc.gnu.org/ml/gcc/2007-01/msg00034.html.
Also, the benchmarks results use -O3 and so aren't directly
applicable to the original proposal, which was to enable -fwrapv
for -O2 and less.

 Also, of the free software that's assuming signed overflow wraps, can we
 qualify how/where it's doing that?  Is it in explicit overflow tests?
 In loop bounds?

We don't have an exhaustive survey, but of the few samples I've
sent in, most of the code is in explicit overflow tests.  However, this
could be an artifact of the way I searched for wrapv-dependence
(basically, I grep for "overflow" in the source code).  The
remaining code depended on -INT_MIN evaluating to INT_MIN.  The
troublesome case that started this thread was an explicit overflow
test that also acted as a loop bound (which is partly what caused
the problem).
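
As a hypothetical illustration of the -INT_MIN case, a common idiom for
detecting the one value whose negation overflows:

  #include <limits.h>

  /* Returns 1 exactly for INT_MIN *if* -INT_MIN wraps to INT_MIN.
     With overflow undefined, a compiler may reason that -a == a
     implies a == 0 and fold the whole function to 0.  */
  int negation_overflows (int a)
  {
    return a != 0 && -a == a;
  }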


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Paul Eggert
[EMAIL PROTECTED] (Richard Kenner) writes:

 More important, we don't yet have an easy way to characterize the
 cases where (2) would apply.  For (2), we need a simple, documented
 rule that programmers can easily understand, so that they can easily
 verify that C code is safe

 I'm not sure what you mean: there's the C standard.

(2) was Ian Lance Taylor's proposal to add an option which acts like
-fwrapv except not so all-encompassingly.  That is, as I understood
it, -fstrict-signed-overflow would act like -fwrapv except it would
guarantee wrapv semantics only in some cases, not in all cases.  This
is intended to be a compromise between -O2 and -O2 -fwrapv, a
compromise that gets almost all the performance of the former and almost
all the safety of the latter.  (Or maybe I got it backwards and that
is what -fno-strict-signed-overflow would mean, but the spelling of
the option isn't the crucial point here, the semantics are.)

So the question is, what the "some cases" would be.  That is, how
would we write the documentation for -fstrict-signed-overflow?  This
is not a question that the C standard can answer.  Nor do I think it
an easy question to answer -- at least, we don't have an answer now.


gfortran year end status report

2007-01-01 Thread Steve Kargl
Gfortran has achieved many milestones this year and hopefully the
contributors can continue to move forward with bug fixes, conformance
to Fortran 95 standard, and the implementation of Fortran 2003 features.
A few highlights from the past year are:

  1) Jakub Jelinek committed the front end support for OpenMP 2.5.
  2) Erik Edelmann and Paul Thomas have implemented allocatable
 components (also known as TR 15581).
  3) Paul Thomas has implemented the array-valued TRANSFER intrinsic
 and an inline version of DOT_PRODUCT.
  4) Roger Sayle has improved the processing of WHERE statements and
 blocks.  This included improved dependency analysis of array
 indices and the use of single bits to construct masks.
  5) Jerry DeLisle added support for the Fortran 2003 streaming IO
 extension.
  6) Francois-Xavier Coudert implemented the use of an optimized BLAS
 library (-fexternal-blas) for matrix operations such as matmul
 rather than the built-in algorithm.
  7) Several Fortran 2003 features have been added.  These include
     the VOLATILE, VALUE, and PROTECTED statements/attributes, IMPORT, and
     the ISO_FORTRAN_ENV intrinsic module.
  8) Thomas Koenig implemented the Intel 4-byte record marker scheme,
     which is now the default for unformatted files, so that gfortran
 is compatible with g77 and most other compilers.

There are, of course, many other improvements to gfortran.  The
gfortran developers (at least the one writing this text) wish to
thank all those who have tested and submitted feedback (i.e., bug
reports).  As always, the gfortran developers encourage fresh blood
to take a stab at fixing a bug or implementing missing functionality.

As of this writing, I am aware of only two bugs in gfortran that
prevent full (unbuggy) support of Fortran 77.  These involve
an implied do-loop in a data statement with more than 65K elements,
and nested implied do-loops in a data statement.  See
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19925
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=23232
The gfortran wiki has quick links to important bugs if someone
has an inclination to attack a bug.  See the wiki for

  # Bug Bashing (status 24th November 2006)
    * ICE-ON-VALID-CODE, REJECTS-VALID & WRONG-CODE 28 bugs (13 assigned)
    * ICE-ON-INVALID-CODE & ACCEPTS-INVALID 30 bugs (2 assigned)
* DIAGNOSTIC 41 bugs (6 assigned) 

There were 420 commits to the gfortran front end in 2006.  A ChangeLog
for a multi-author patch is credited to the first listed name. The
number of commits by committer is

100 Paul Thomas
 52 Francois-Xavier Coudert
 33 Steven G. Kargl
 27 Brooks Moses
 25 Roger Sayle
 24 Tobias Burnus
 18 Erik Edelmann, Tobias Schlueter
 17 Jakub Jelinek, Thomas Koenig
 13 Jerry DeLisle
  9 Kazu Hirata
  8 Andrew Pinski
  7 Bernhard Fischer, H.J. Lu
  5 Richard Guenther, Asher Langton
  4 Richard Henderson
  3 Rafael Avila de Espindola, Daniel Franke, Feng Wang
  2 Steven Bosscher, Toon Moene, Volker Reichelt
  1 Karl Berry, Janne Blomqvist, Per Bothner, Bud Davis, Steve Ellcey,
Wolfgang Gellerich, Kaveh R. Ghazi, Jan Hubicka, Geoffrey Keating,
Kaz Kojima, Joseph S. Myers, Diego Novillo, Carlos O'Donell, 
Gerald Pfeifer, Richard Sandiford, Danny Smith, Mike Stump

There were 133 commits to libgfortran in 2006.  The number of
commits by committer is as follows:

46 Jerry DeLisle
34 Francois-Xavier Coudert 
14 Thomas Koenig
 8 Janne Blomqvist, Steven G. Kargl
 4 Tobias Burnus, Paul Thomas
 3 Paolo Bonzini, Jakub Jelinek 
 2 Roger Sayle, Danny Smith 
 1 John David Anglin, Rainer Emrich, Richard Guenther,
   Carlos O'Donell, Dale Ranta

Within these commits, over 450 problem reports as listed in Bugzilla
were fixed.  The PRs listed in ChangeLogs were fixed by the following
individuals.

John David Anglin 27254
Paolo Bonzini 25259, 26188
Tobias Burnus 29452, 29625
Janne Blomqvist   25828, 25949, 27919
Steven Bosscher   27378, 28439, 29101
Tobias Burnus 23994, 27546, 27546, 27546, 27588, 28224, 28585, 29452,
  29601, 29657, 29711, 29806, 29962, 39238
FX Coudert16580, 18791, 19777, 20460, 20892, 21435, 23862, 24285,
  24518, 24549, 24685, 24903, 25425, 25681, 26025, 26540,
  26551, 26712, 26769, 26801, 27320, 27478, 27524, 27552,
  27553, 27588, 27874, 27895, 27958, 27965, 28081, 28094,
  28129, 28163, 29067, 29210, 29288, 29391, 29489, 29565,
  29711, 29713, 29810, 29892, 
Bud Davis 21130, 28974
Jerry DeLisle 17741, 19101, 19260, 19261, 19262, 19310, 19904, 20257,
  22423, 24268, 24459, 25289, 25545, 25598, 25631, 25697,
  25828, 25835, 26136, 26423, 26464, 26499, 26509, 26554,
  26661, 26766, 26880, 26890, 26985, 27138, 27304, 27360,
  27575, 27634, 27704, 27757, 27954, 28335, 28339, 28354,
  29053, 29099, 29277, 29563, 29752, 30005, 30014, 30145,
  30200
Erik 

Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Ian Lance Taylor
Bruce Korb [EMAIL PROTECTED] writes:

 WRT strict aliasing, I've never seen any data that indicated that the
 language change was compelling.  Consequently, as best I can tell it
 was a marginal optimization improvement.  So, I doubt its value.
 Still, it should have had compiler warnings in advance.

I've seen programs that doubled in speed when strict aliasing was
turned on.

You won't see this effect on an x86, since all current x86
implementations are out-of-order processors.  You can think of an
out-of-order processor as doing on-the-fly perfectly accurate alias
analysis.

For in-order processors with complex scheduling requirements, strict
aliasing can make a dramatic difference in execution time, because it
gives the compiler considerable freedom to reorder loads and stores.
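
For example, a minimal sketch (illustrative only; the function name is
made up) of the freedom type-based aliasing gives the compiler:

int f (int *ip, float *fp)
{
  int v = *ip;    /* load *ip once */
  *fp = 1.0f;     /* with -fstrict-aliasing, a store through float *
                     is assumed not to modify any int object */
  return v + *ip; /* so the compiler may reuse v here, or move the
                     load past the store, instead of reloading */
}

Without strict aliasing, the compiler must assume ip and fp might point
at the same memory and keep the load and store in order.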

Ian


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Ian Lance Taylor
Bruce Korb [EMAIL PROTECTED] writes:

  2) Add an option like -fstrict-signed-overflow which controls those
 cases which appear to be risky.  Turn on that option at -O2.
 
 Not a good plan.  -O2 should be constrained to disrupting very few
 applications.  (e.g. loop unrolling seems unlikely to cause problems)
 Defer the appear to be risky stuff to several years after the warning
 is out.  Please.

Why should we handle signed overflow any differently from strict
aliasing?  We currently enable strict aliasing at -O2.

Ian


Re: Link tests not allowed

2007-01-01 Thread Jim Wilson

Douglas B Rupp wrote:
I'm happy to try writing a patch, but my version of gcc/configure 
doesn't look like what you described.


I tried a build with the gcc-4.1.x branch, and gcc/nm is computed 
correctly, so the problem I described on mainline does not exist here. 
Unfortunately, I wasn't able to produce a problem as I don't have a copy 
of ppc-aix to use for the sysroot, and I don't see any configure error 
before the build fails because the sysroot is missing.  Also, I don't 
have your complete configure command, or your config.log file, so I can 
not do anything further here without more info.

--
Jim Wilson, GNU Tools Support, http://www.specifix.com


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Daniel Berlin

On 1/1/07, Paul Eggert [EMAIL PROTECTED] wrote:

Mark Mitchell [EMAIL PROTECTED] writes:

 * Dan Berlin says that xlc assumes signed overflow never occurs, gets
 much better performance as a result, and that nobody has complained.

Most likely xlc and icc have been used to compile the gnulib
mktime-checking code many times without incident (though I can't
prove this, as I don't use xlc and icc myself).  If so, icc and
xlc do not optimize away the overflow-checking test in question
even though C99 entitles them to do so; this might help explain
why they get fewer complaints about this sort of thing.

 I haven't yet seen that anyone has actually tried the obvious: run SPEC
 with and without -fwrapv.

Richard Guenther added -fwrapv to the December 30 run of SPEC at
http://www.suse.de/~gcctest/SPEC/CFP/sb-vangelis-head-64/recent.html
and
http://www.suse.de/~gcctest/SPEC/CINT/sb-vangelis-head-64/recent.html.
Daniel Berlin and Geert Bosch disagreed about how to interpret
these results; see http://gcc.gnu.org/ml/gcc/2007-01/msg00034.html.
Also, the benchmark results use -O3 and so aren't directly
applicable to the original proposal, which was to enable -fwrapv
for -O2 and less.


No offense, but all enabling wrapv at O2 or less would do is cause
more bug reports about
1. Getting different program behavior between O2 and O3
2. Missed optimizations at O2
It also doesn't fit with what we have chosen to differentiate
optimization levels based on.

IMHO, it's just not the right solution to this problem.



 Also, of the free software that's assuming signed overflow wraps, can we
 qualify how/where it's doing that?  Is it in explicit overflow tests?
 In loop bounds?

We don't have an exhaustive survey, but of the few samples I've
sent in most of the code is in explicit overflow tests.  However, this
could be an artifact of the way I searched for wrapv-dependence
(basically, I grep for "overflow" in the source code).  The
remaining code depended on -INT_MIN evaluating to INT_MIN.  The
troublesome case that started this thread was an explicit overflow
test that also acted as a loop bound (which is partly what caused
the problem).


If your real goal is to be able to just write explicit bounds
checking, and you don't want wrapping semantics for signed integers in
general (which I don't think most people do, but as with every single
person on this discussion, it seems we all believe we are in the 99%
of programmers who want something), then we should just disable this
newly added ability for VRP to optimize signed overflow and call it a
day.
VRP's optimizations are not generally useful in determining loop
bounds (we have other code that does all the bound determination) or
doing data dependence, so you would essentially lose no performance
except in very weird cases.

Of course, you will still be able to come up with cases where signed
overflow fails to wrap.  But IMHO, we have to draw the line somewhere,
and I'm fine with "if you want to test overflow, do it like this and
we will guarantee it will work."

We do the same thing with type punning through unions (guarantee that
reading a different member than you write will work) , even though the
standard says we don't have to.
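
A minimal sketch of that union guarantee (my example, assuming int and
float have the same size):

union pun { int i; float f; };

float
int_bits_as_float (int x)
{
  union pun u;
  u.i = x;      /* write one member */
  return u.f;   /* read a different member: GCC documents this as
                   working, though the standard does not require it */
}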

All the arguments about what most people are going to want are
generally flawed on all sides.  Where there are reasonable positions
on both sides, nobody ever accurately predicts what the majority of a
hugely diverse population of language users is going to want, and
almost everyone believes they are in that majority.

--Dan


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Mark Mitchell
Daniel Berlin wrote:

 Richard Guenther added -fwrapv to the December 30 run of SPEC at
 http://www.suse.de/~gcctest/SPEC/CFP/sb-vangelis-head-64/recent.html
 and
 http://www.suse.de/~gcctest/SPEC/CINT/sb-vangelis-head-64/recent.html.
 Daniel Berlin and Geert Bosch disagreed about how to interpret
 these results; see http://gcc.gnu.org/ml/gcc/2007-01/msg00034.html.

Thank you for pointing that out.  I apologize for having missed it
previously.

As others have noted, one disturbing aspect of that data is that it
shows that there is sometimes an inverse correlation between the base
and peak flags.  On the FP benchmarks, the results are mostly negative
for both base and peak (with 168.wupwise the notable exception); on the
integer benchmarks it's more mixed.  It would be nice to have data for
some other architectures: anyone have data for ARM/Itanium/MIPS/PowerPC?

So, my feeling is similar to what Daniel expresses below, and what I
think Ian has also said: let's disable the assumption about signed
overflow not wrapping for VRP, but leave it in place for loop analysis.

Especially given:

 We don't have an exhaustive survey, but of the few samples I've
 sent in most of the code is in explicit overflow tests.  However, this
 could be an artifact of the way I searched for wrapv-dependence
 (basically, I grep for "overflow" in the source code).  The
 remaining code depended on -INT_MIN evaluating to INT_MIN.  The
 troublesome case that started this thread was an explicit overflow
 test that also acted as a loop bound (which is partly what caused
 the problem).

it sounds like that would eliminate most of the problem.  Certainly,
making -INT_MIN evaluate to INT_MIN, when expressed like that, is an
easy thing to do; that's just a guarantee about constant folding.
There's no reason for us not to document that signed arithmetic wraps
when folding constants, since we're going to fold the constant to
*something*, and we may as well pick that answer.

I don't even necessarily think we need to change our user documentation.
 We can just choose to make the compiler not make this assumption for
VRP, and to implement folding as two's-complement arithmetic, and go on
with life.  In practice, we probably won't miscompile many
non-conforming programs, and we probably won't miss too many useful
optimization opportunities.

Perhaps Richard G. would be so kind as to turn this off in VRP, and
rerun SPEC with that change?

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Richard Kenner
 No offense, but all enabling wrapv at O2 or less would do is cause
 more bug reports about
 1. Getting different program behavior between O2 and O3
 2. Missed optimizations at O2
 It also doesn't fit with what we have chosen to differentiate
 optimization levels based on.
 
 IMHO, it's just not the right solution to this problem.

I agree. I think -O1 or less would make more sense (though I wouldn't be in
favor of that either), but I agree that making that sort of a distinction
between -O2 and -O3 is a bad idea.

 VRP's optimizations are not generally useful in determining loop
 bounds (we have other code that does all the bound determination) or
 doing data dependence, so you would essentially lose no performance
 except in very weird cases.

The question that I'd like to understand the answer to is what kinds of
optimizations DO we get by having VRP optimize signed overflow.  Is it just
the elimination of tests on overflow?  If so, then it strikes me as
definitely wrong since those tests are probably there precisely to test for
overflow.

It's somewhat analogous to the Ada issue with VRP a while ago: if I say that
a type T has a range of 10 to 20, I don't want VRP to delete a validity test
to see if a variable of that type has a valid value.  (It's a little more
complex in the Ada case since an out-of-range value is a bounded error in
Ada, not undefined, but the issue is very similar.)

 But IMHO, we have to draw the line somewhere, and i'm fine with if
 you want to test overflow, do it like this and we will guarantee it
 will work.

I think that's an interesting issue because there's no simple way of doing an
overflow test that doesn't assume wrapping semantics.
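
For instance, the usual idiom looks something like this (a sketch; the
function name is made up):

int
add_overflows (int a, int b)
{
  /* Detects overflow of a + b only if signed addition wraps.  With
     overflow undefined, a compiler may fold the comparison to false
     and remove the test entirely.  */
  return b > 0 ? a + b < a : a + b > a;
}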

 Where there are reasonable positions on both sides, nobody ever
 accurately predicts what the majority of a hugely diverse population
 of language users is going to want, and almost everyone believes
 they are in that majority.

I agree.  That's why I support a middle-of-the-road position where we make
very few guarantees, but do the best we can anyway to avoid gratuitously
(meaning without being sure we're gaining a lot of optimization) breaking
legacy code.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Mark Mitchell
Richard Kenner wrote:

 Where there are reasonable positions on both sides, nobody ever
 accurately predicts what the majority of a hugely diverse population
 of language users is going to want, and almost everyone believes
 they are in that majority.
 
 I agree.  That's why I support a middle-of-the-road position where we make
 very few guarantees, but do the best we can anyway to avoid gratuitously
 (meaning without being sure we're gaining a lot of optimization) breaking
 legacy code.

Yes, I think that you, Danny, Ian, and I are all agreed on that point,
and, I think, that disabling the assumption about signed overflow not
occurring during VRP (perhaps leaving that available under control of a
command-line option, for those users who think it will help their code),
 is the right thing to try.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Link tests not allowed

2007-01-01 Thread Douglas B Rupp

Jim Wilson wrote:

Douglas B Rupp wrote:
I'm happy to try writing a patch, but my version of gcc/configure 
doesn't look like what you described.


I tried a build with the gcc-4.1.x branch, and gcc/nm is computed 
correctly, so the problem I described on mainline does not exist here. 
Unfortunately, I wasn't able to produce a problem as I don't have a copy 
of ppc-aix to use for the sysroot, and I don't see any configure error 
before the build fails because the sysroot is missing.  Also, I don't 
have your complete configure command, or your config.log file, so I can 
not do anything further here without more info.


Would you like the complete config.log by private email?

../gcc-41/configure   --target=powerpc-ibm-aix5.2.0.0 
--prefix=/home/rupp/gnat --with-gnu-as 
--with-local-prefix=/home/rupp/gnat/local --enable-threads=posix 
--disable-nls --disable-multilib --enable-checking=release 
--enable-languages=c,ada



config.log fragment
.
configure:4787: checking for pid_t
configure:4811: /home/rupp/ngnat/buildxppcaix/./gcc/xgcc 
-B/home/rupp/ngnat/buildxppcaix/./gcc/ 
-B/home/rupp/gnat/powerpc-ibm-aix5.2.0.0/bin/ 
-B/home/rupp/gnat/powerpc-ibm-aix5.2.0.0/lib/ -isystem 
/home/rupp/gnat/powerpc-ibm-aix5.2.0.0/include -isystem 
/home/rupp/gnat/powerpc-ibm-aix5.2.0.0/sys-include -c -O2 -g 
conftest.c >&5

configure:4817: $? = 0
configure:4821: test -z
 || test ! -s conftest.err
configure:4824: $? = 0
configure:4827: test -s conftest.o
configure:4830: $? = 0
configure:4841: result: yes
configure:5861: checking for library containing strerror
configure:5869: error: Link tests are not allowed after GCC_NO_EXECUTABLES.




Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Paul Eggert
Mark Mitchell [EMAIL PROTECTED] writes:

 let's disable the assumption about signed overflow not wrapping for
 VRP, but leave it in place for loop analysis.

As far as I know this will work for all the wrapv-assuming code that
we've found, so it should be an improvement.  Thanks to all for
helping to think it through.

 it sounds like that would eliminate most of the problem.  Certainly,
 making -INT_MIN evaluate to INT_MIN, when expressed like that, is an
 easy thing to do; that's just a guarantee about constant folding.

Well, no, just to clarify: the GCC code in question actually computed
- x, and relied on the fact that the result was INT_MIN if x (an
unknown integer) happened to be INT_MIN.  Also, now that I'm thinking
about it, the Unix v7 atoi() implementation relied on x + 8
evaluating to INT_MIN when x happened to be (INT_MAX - 7).  These are
the usual kind of assumptions in this area.
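
Spelled out as code, those two idioms are roughly (a sketch, assuming
32-bit int):

int negate (int x) { return -x; }     /* relies on -INT_MIN
                                         evaluating to INT_MIN */
int add8 (int x)   { return x + 8; }  /* relies on (INT_MAX - 7) + 8
                                         wrapping to INT_MIN */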

 I don't even necessarily think we need to change our user documentation.

Here I'd like to demur, since I think it's useful to document
something that users can rely on.

I'm not asking that we document every possible wrapv-assuming code
that happens to work.  I'm only asking for enough so that users can
easily write code that tests for signed integer overflow, or to
compute sums, products, etc. that would apply if -fwrapv were in
effect.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Ian Lance Taylor
[EMAIL PROTECTED] (Richard Kenner) writes:

 The question that I'd like to understand the answer to is what kinds of
 optimizations DO we get by having VRP optimize signed overflow.  Is it just
 the elimination of tests on overflow?  If so, then it strikes me as
 definitely wrong since those tests are probably there precisely to test for
 overflow.

VRP as currently written adjusts limits out to infinity of an
appropriate sign for variables which are changed in loops.  It then
assumes that the (signed) variable will not wrap past that point,
since that would constitute undefined signed overflow.

For example:

extern void bar (void);
void
foo (int m)
{
  int i;
  for (i = 1; i < m; ++i)
    {
      if (i > 0)
        bar ();
    }
}

Here the limit for i without -fwrapv becomes (1, INF].  This enables
VRP to eliminate the test i > 0.  With -fwrapv, this test of course
can not be eliminated.  VRP is the only optimization pass which is
able to eliminate that test.

Ian


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Richard Kenner
 Here I'd like to demur, since I think it's useful to document
 something that users can rely on.

 I'm not asking that we document every possible wrapv-assuming code
 that happens to work.  I'm only asking for enough so that users can
 easily write code that tests for signed integer overflow, or to
 compute sums, products, etc. that would apply if -fwrapv were in effect.

I'd rather encourage users to write code that conforms to the standard
and where overflow is undefined.  Supporting old code is one thing,
but new code ought to follow the standard.


Re: Link tests not allowed

2007-01-01 Thread Daniel Jacobowitz
On Mon, Jan 01, 2007 at 06:19:08PM -0800, Douglas B Rupp wrote:
 configure:4811: /home/rupp/ngnat/buildxppcaix/./gcc/xgcc 
 -B/home/rupp/ngnat/buildxppcaix/./gcc/ 
 -B/home/rupp/gnat/powerpc-ibm-aix5.2.0.0/bin/ 
 -B/home/rupp/gnat/powerpc-ibm-aix5.2.0.0/lib/ -isystem 
 /home/rupp/gnat/powerpc-ibm-aix5.2.0.0/include -isystem 
 /home/rupp/gnat/powerpc-ibm-aix5.2.0.0/sys-include -c -O2 -g 
 conftest.c >&5

I would recommend trying to link a program using exactly that command
line (without the -c conftest.c of course) and seeing what it tells you.
The problem is usually obvious.

-- 
Daniel Jacobowitz
CodeSourcery


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Ian Lance Taylor
Paul Eggert [EMAIL PROTECTED] writes:

 Ian Lance Taylor [EMAIL PROTECTED] writes:
 
  Also, it does not make sense to me to lump together all potentially
  troublesome optimizations under a single name.
 
 As a compiler developer, you see the trees.  But most users just see a
 forest and want things to be simple.  Even adding a single binary
 switch (-fno-risky/-frisky) will be an extra level of complexity that
 most users don't particularly want to know about.  Requiring users to
 worry about lots of little switches (at least -fwrapv/-fundefinedv/-ftrapv,
 -fstrict-signed-overflow/-fno-strict-signed-overflow, and
 -fstrict-aliasing/-fno-strict-aliasing, and probably more) makes GCC
 harder to use conveniently, and will make things more likely to go
 wrong in practical use.

You're right.  I should have said: it does not make sense to me to
lump together all potentially troublesome optimizations under a single
option.  But it would be reasonable to introduce a single option which
groups a collection of other options.

I don't think -frisky is a good name for that option.  A better name
would be -fstrict.


  1) Add an option like -Warnv to issue warnings about cases where gcc
 implements an optimization which relies on the fact that signed
 overflow is undefined.
 
  2) Add an option like -fstrict-signed-overflow which controls those
 cases which appear to be risky.  Turn on that option at -O2.
 
 This sounds like a good approach overall, but the devil is in the
 details.  As you mentioned, (1) might have too many false positives.
 
 More important, we don't yet have an easy way to characterize the
 cases where (2) would apply.  For (2), we need a simple, documented
 rule that programmers can easily understand, so that they can easily
 verify that C code is safe: for most applications this is more
 important than squeezing out the last ounce of performance.  Having
 the rule merely be does your version of GCC warn about it on your
 platform? doesn't really cut it.
 
 So far, the only simple rule that has been proposed is -fwrapv, which
 many have said is too conservative as it inhibits too many useful
 optimizations for some numeric applications.  I'm not yet convinced
 that -fwrapv harms performance that much for most real-world
 applications, but if we can come up with a less-conservative (but
 still simple) rule, that would be fine.  My worry, though, is that the
 less-conservative rule will be too complicated.

I hope I'm not beating a dead horse, but I think it's important to
understand that there is a big difference between -fwrapv and
full-powered optimizations based on signed overflow being undefined.
-fwrapv specifies precisely how signed overflow should behave, and
thus requires the compiler to handle numbers in a fully specified
manner.  But the vast majority of C/C++ code never involves signed
overflow, and thus does not require any special handling.  For
example, take my earlier case of optimizing ((X * 10) / 5) to (X /
2).  This transformation is invalid under -fwrapv because it will
mishandle certain values of X.  But it is valid to make this
transformation if the actual values of X are such that overflow never
actually occurs.
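
A sketch of that example, with a 32-bit int assumed:

int f (int x) { return (x * 10) / 5; }

/* Folding f to x / 2 is correct whenever x * 10 does not overflow.
   Under -fwrapv it is wrong: x = 300000000 makes x * 10 wrap to
   -1294967296, so (x * 10) / 5 is -258993459 while x / 2 is
   150000000.  */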

Therefore, we have these more-or-less easily characterized aspects of
signed overflow handling:

1) Signed numbers always wrap on overflow using standard
   twos-complement arithmetic (-fwrapv, Java).

2) We assume that in loops using signed indexes, the index value never
   wraps around.  That is, if the variable i is always incremented
   (decremented) in a loop, we assume that i <= INT_MAX (>= INT_MIN)
   is always true.

   2a) We use that assumption only when considering the number of
   times the loop will be executed, or when considering whether
   the loop terminates, or when determining the final value the
   variable will hold.  In particular we do not eliminate the loop
   termination test unless we can determine precisely how many
   times the loop will be executed.

   2b) We apply that assumption to all uses of the variable i.

3) We perform algebraic simplifications based on the assumption that
   signed arithmetic in the program never overflows.  We can safely
   translate ((X * 10) / 5) to (X / 2) because we assume that X * 10
   will not overflow.  If this arithmetic does overflow, we guarantee
   that you will always get some valid number, though we don't try to
   specify which number it will be.

4) We permit an exception to occur if there is a signed overflow.  If
   we can prove that some expression causes signed overflow, we are
   permitted to assume that that case will never arise.

5) We require an exception to occur if there is a signed overflow
   (-ftrapv).

Case 1 may not be used with any of the other cases.  The same is true
of case 5.  The other cases may be used in any combination.
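
To connect this to the earlier foo/bar example (my reading of the
distinction):

extern void bar (void);
void
foo (int m)
{
  int i;
  for (i = 1; i < m; ++i)
    if (i > 0)   /* folding this test to true needs case 2b; case 2a
                    only covers trip count, termination, and the
                    final value of i */
      bar ();
}

Case 2a alone would still let the compiler reason about how many times
the loop runs, but not remove the test.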

So this suggests some options:

-fwrapv-loop-iterations (controls 2a)
-fwrapv-loop-indexes (controls 2b)
-fwrapv-loop 

Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Robert Dewar

Ian Lance Taylor wrote:


I don't think -frisky is a good name for that option.  A better name
would be -fstrict.


or perhaps -fstandard

which says "my program is 100% compliant ISO C; please, mr. compiler,
make any assumptions you like based on knowing this is the case.  If
my claim that I am 100% compliant is wrong, you may punish me by
doing arbitrary horrible things to my code."

P.S. I won't mind if you warn me about these horrible things, but
I won't insist you do so, or blame you if you cannot.

Then all those who know ISO C, and of course would never dream of
writing anything that is non-conforming, can use this switch and
know that the compiler will not be fettered by worrying about other
ignorant programmers' junk code.

Furthermore, once this switch is on, the compiler writers will know
that the compiler need not worry, and can really go to town with all
sorts of amazing optimizations based on this assumption.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Richard Kenner
 VRP as currently written adjust limits out to infinity of an
 appropriate sign for variables which are changed in loops.  It then
 assumes that the (signed) variable will not wrap past that point,
 since that would constitute undefined signed overflow.

But isn't that fine since OTHER code is going to assume that loop invariants
don't overflow?  Or is it that we'd have to refine VRP's test to only do it
in that case?

   for (i = 1; i < m; ++i)
     {
       if (i > 0)
         bar ();
     }

Of course, this is an example where either the programmer is doing something
very silly or else is expecting overflow and depending on wrap semantics, so
it seems to me marginal to remove that "if".  My suggestion would be to issue
a warning saying that the test will never be false, but leaving it in.


Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Andrew Pinski
On Mon, 2007-01-01 at 22:21 -0500, Richard Kenner wrote:
 
 Of course, this is an example where either the programmer is doing
 something
 very silly or else is expecting overflow and depending on wrap
 semantics.

or it comes from inlining of something like get() which has bounds
checking.

-- Pinski



Re: changing configure to default to gcc -g -O2 -fwrapv ...

2007-01-01 Thread Gabriel Dos Reis
[EMAIL PROTECTED] (Richard Kenner) writes:


[...]

|    for (i = 1; i < m; ++i)
|      {
|        if (i > 0)
|          bar ();
|      }
| 
| Of course, this is an example where either the programmer is doing something
| very silly or else is expecting overflow and depending on wrap semantics, so
| it seems to me marginal to remove that if.  My suggestion would be to issue
| a warning saying that the test will never be false, but leaving it in.

That makes sense to me.

-- Gaby


[Bug c++/30340] pure virtual function called on const & declared with previous declaration without a definition, const & assigned by temporary

2007-01-01 Thread gdr at integrable-solutions dot net


--- Comment #5 from gdr at integrable-solutions dot net  2007-01-01 09:47 
---
Subject: Re:  pure virtual function called on const & declared with previous
declaration without a definition, const & assigned by temporary

pinskia at gcc dot gnu dot org [EMAIL PROTECTED] writes:

| For the last question on this code:
| C c(1, B());
| 
| What is the life time of the temp that holds B()?

That temporary is destroyed at the end of the declaration of c --
e.g. right before the semicolon, if I'm allowed that expression.

-- Gaby


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30340



[Bug bootstrap/30341] New: Makefile using mv instead of ln not working on WinXP Cygwin Bash

2007-01-01 Thread rob1weld at aol dot com
I am compiling on WinXP using Cygwin's Bash - I compiled 4.1.1 OK but not 4.2.0
(CVS). The makefile works away for a long time and finally stops - unfinished.

What I did to fix it:

After you run the ./configure script there will be a makefile in your build
directory. Open it in wordpad and also open the makefile from your 4.1.1 build
in another wordpad window.

Go to the wordpad with the 4.1.1 Makefile and search for the text
"stage1-start::".  Do the same with the 4.2.0 Makefile.  Rename the 4.2.0
stage?-start:: sections to Origonal-stage?-start:: and the 4.2.0
stage?-end:: sections to Origonal-stage?-end::.  Now copy the 4.1.1
stage?-start:: and stage?-end:: sections into the 4.2.0 Makefile at each
appropriate place.  Hope that is clear.

You _could_ also fixup stageprofile-start:: while you are there, I did in my
Makefile but have not yet typed make stageprofile to test if profiled
building is working in 4.2.0. 

Don't forget that 4.2.0 has an extra directory that is not in 4.1.1
(libdecnumber) so in each of the stage?-start/end sections you will need to
add a couple of lines for the libdecnumber directory.

I found that the 4.1.1 Makefile worked perfectly (for 4.1.1) and adding those
sections into the 4.2.0 Makefile caused it to work correctly. Prior to making
this fix I was unable to get make to build properly.  It kept trying to copy
the directories into each other instead of renaming them and shuffling them (if
that is how it could best be described).

The 4.2.0 Makefile says "We use mv on platforms where symlinks to directories
do not work or are not reliable."  I've not got to running down why the
configure script chose the mv method but it breaks on Windows XP (for me).
Other people claim to have done a cygwin build (with far fewer things enabled)
but they might not have built it on Windows XP (where ln works fine and mv
copies into the directory instead of overwriting the directory - it is not
dos copy).

If the maintainers were to use the 4.1.1 release version style of operation
for the stages of the 4.2.0 (CVS) make then it would work properly (for me and
I imagine for others too - not a lot of WinXP builds reported).

Here is a tiny portion of one stage of the 4.2.0 makefile to further
demonstrate:

# * We build only C (and possibly Ada).


.PHONY: stage1-start stage1-end

Origonal-stage1-start::
@: $(MAKE); $(stage); \
echo stage1 > stage_current ; \
echo stage1 > stage_last; \
$(SHELL) $(srcdir)/mkinstalldirs $(HOST_SUBDIR)
@cd $(HOST_SUBDIR); [ -d stage1-gcc ] || \
  mkdir stage1-gcc; \
mv stage1-gcc gcc 
@cd $(HOST_SUBDIR); [ -d stage1-intl ] || \
  mkdir stage1-intl; \
mv stage1-intl intl 
@cd $(HOST_SUBDIR); [ -d stage1-libcpp ] || \
  mkdir stage1-libcpp; \
mv stage1-libcpp libcpp 
@cd $(HOST_SUBDIR); [ -d stage1-libdecnumber ] || \
  mkdir stage1-libdecnumber; \
mv stage1-libdecnumber libdecnumber 
@cd $(HOST_SUBDIR); [ -d stage1-libiberty ] || \
  mkdir stage1-libiberty; \
mv stage1-libiberty libiberty 
@cd $(HOST_SUBDIR); [ -d stage1-zlib ] || \
  mkdir stage1-zlib; \
mv stage1-zlib zlib 
@[ -d stage1-$(TARGET_SUBDIR) ] || \
  mkdir stage1-$(TARGET_SUBDIR); \
mv stage1-$(TARGET_SUBDIR) $(TARGET_SUBDIR) 

Origonal-stage1-end:: 
@if test -d $(HOST_SUBDIR)/gcc ; then \
  cd $(HOST_SUBDIR); mv gcc stage1-gcc  ; \
fi
@if test -d $(HOST_SUBDIR)/intl ; then \
  cd $(HOST_SUBDIR); mv intl stage1-intl  ; \
fi
@if test -d $(HOST_SUBDIR)/libcpp ; then \
  cd $(HOST_SUBDIR); mv libcpp stage1-libcpp  ; \
fi
@if test -d $(HOST_SUBDIR)/libdecnumber ; then \
  cd $(HOST_SUBDIR); mv libdecnumber stage1-libdecnumber  ; \
fi
@if test -d $(HOST_SUBDIR)/libiberty ; then \
  cd $(HOST_SUBDIR); mv libiberty stage1-libiberty  ; \
fi
@if test -d $(HOST_SUBDIR)/zlib ; then \
  cd $(HOST_SUBDIR); mv zlib stage1-zlib  ; \
fi
@if test -d $(TARGET_SUBDIR) ; then \
  mv $(TARGET_SUBDIR) stage1-$(TARGET_SUBDIR)  ; \
fi
rm -f stage_current

stage1-start::
@: $(MAKE); $(stage); \
echo stage1 > stage_current ; \
echo stage1 > stage_last; \
$(SHELL) $(srcdir)/mkinstalldirs $(HOST_SUBDIR) $(TARGET_SUBDIR)
@cd $(HOST_SUBDIR); [ -d stage1-gcc ] || \
  mkdir stage1-gcc; \
set stage1-gcc gcc ; \
ln -s $$1 $$2 
@cd $(HOST_SUBDIR); [ -d stage1-intl ] || \
  mkdir stage1-intl; \
set stage1-intl intl ; \
ln -s $$1 $$2 
@cd $(HOST_SUBDIR); [ -d stage1-libcpp ] || \
  mkdir stage1-libcpp; \
set stage1-libcpp libcpp ; \
ln -s $$1 $$2 
@cd $(HOST_SUBDIR); [ -d stage1-libdecnumber ] || \
  mkdir stage1-libdecnumber; \

[Bug bootstrap/30342] New: Tough time building 4.2.0 (CVS) on WinXP with Cygwin

2007-01-01 Thread rob1weld at aol dot com
I had a lot of trouble getting __everything__ to work. I've tried rebuilding a
few times this last month and have managed to get everything (really) working
except I can not compile ada (I will try some more).

Here is the output of gcc -v:

Using built-in specs.
Target: athlon_xp-pc-cygwin
Configured with: /cygdrive/C/makecygwin/gcc-4_2-branch/configure
--disable-werror --verbose --target=athlon_xp-pc-cygwin
--enable-languages=c,ada,c++,fortran,java,objc,obj-c++ --prefix=/usr
--exec-prefix=/usr --sysconfdir=/etc --libdir=/usr/lib --libexecdir=/usr/lib
--mandir=/usr/share/man --infodir=/usr/share/info --enable-shared
--enable-static --enable-nls --enable-multilib --without-included-gettext
--enable-version-specific-runtime-libs
--with-gxx-include-dir=/include/c++/4.2.0 --enable-libstdcxx-debug
--enable-libgcj --enable-libgcj-debug --enable-java-awt=gtk,xlib
--enable-java-gc=boehm --enable-objc-gc --with-system-zlib
--enable-threads=posix --without-tls --enable-sjlj-exceptions
--enable-hash-synchronization --enable-libada --enable-libssp
--enable-libmudflap --enable-win32-registry --with-x
--x-includes=/usr/X11R6/include --x-libraries=/usr/X11R6/lib
--with-cpu=athlon-xp --with-arch=athlon-xp --with-tune=athlon-xp
athlon_xp-pc-cygwin
Thread model: posix
gcc version 4.2.0 20061225 (prerelease)

I have mudflaps and gomp working on Windows XP. I also used --with-x .

Here are some notes for any having trouble enabling every possible flag on
WinXP
(good for testing but it takes two days to compile). This may be verbose. These
notes assume 80 columns - hope this input window does too. I try to fix them
up.


This is some info to encourage people to attempt to build gcc for Windows XP
using the Cygwin environment (get setup program from: http://www.cygwin.com/) .

The end result is a new compiler tool chain with c and fortran that pass
almost every test; the ada will not build, and there are quite a few errors in
the other packages.  I am enabling ssp, gomp, mudflap, awt - these are not
working too badly, but need a maintainer to do some fixing.

I have read http://gcc.gnu.org/install/specific.html#windows . It claims GCC
will build under Cygwin without modification. I did not find I was able to
build either 4.1.1 release or 4.2.0 prerelease as is. Hopefully the info that
follows will point out some bugs / shortcomings and encourage others to try.
The page http://gcc.gnu.org/gcc-4.2/buildstat.html is EMPTY!

I will limit much of the following to 4.2.0 ONLY - but to build gcc with Cygwin
you can only start from an old version of gcc. The Cygwin Setup program uses
gcc 3.4.4-3 as the newest version. To go from gcc 3.4.4-3 (release) to 4.2.0
(prerelease) it is advisable to build 4.1.1 (release) along the way. The gcc
3.4.4-3 version is so old that it will reject many of the gcc options that the 
makefiles pass to it, you don't want to remove to many features. When jumping a
major version number it is best to use a release version of the same version
(4.1.1) to build an experimental version 4.2.0 (prerelease). I know that
version 4.1.2 fixes many of the troubles with 4.1.1 but that version was not a
'release' version at the time of this writing and you are going to build
4.2.0 anyway so I do not suggest 4.1.2 but the choice is yours.

Make sure you build gcc with --enable-threads=posix and NOT
--enable-threads=win32 on Cygwin / Windows XP or you'll find that many Linux
/ Unix programs will not compile properly.

I am enabling (almost) every possible ./configure option possible in my build.
If you want to use the same options as I did (to duplicate my test and fix
whats broken - HINT to maintainers try compiling gcc for Windows XP) you will
need to get the following (I hope this is a complete list - see the
installation page http://gcc.gnu.org/install/index.html for more info):

1) Base   - select to install everything
2) Devel  - autoconf, automake, binutils, bison, byacc, cvs, cvsutils,
dejagnu, doxygen, flex, gcc-* (everything starting with the
words gcc-), gettext, gettext-devel, guile-1.6.7-4 (click
the S box to get guile source code - don't use a newer
version), make, pkg-config, readline, (hope I didn't miss
anything).
3) Gnome  - atk, glib, gtk, pango
3) Publishing - tetex-*
4) Utils  - cygutils, file, patch, time, upx
5) X11- select to install everything
6) Other  - goto http://www.gimp.org/~tml/gimp/win32/downloads.html and get
 the newest gimp. atk-1.12.3.zip, cairo-1.2.6.zip, glib-2.12.6.zip,
   gtk+-2.10.6.zip, pango-1.14.8.zip, libiconv-1.9.1.bin.woe32.zip,
and gettext-0.14.5.zip. You need both the 'cygwin setup'
versions and the 'gimp website' version. You'll need to write
.pc files for them and use cygcheck -c to make sure they are
found.

In addition you may want to get Mortens Cygwin X-Launcher (to help Windows 

[Bug libfortran/30162] I/O with named pipes does not work

2007-01-01 Thread tkoenig at gcc dot gnu dot org


--- Comment #8 from tkoenig at gcc dot gnu dot org  2007-01-01 15:17 ---
(In reply to comment #7)
 I have formatted named pipe I/O working, at least for the equivalent test 
 cases
 given here.

Great!

If you want me to, I'll be willing to test your patch.

Thomas


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30162



[Bug c++/30340] pure virtual function called on const & declared with previous declaration without a definition, const & assigned by temporary

2007-01-01 Thread fang at csl dot cornell dot edu


--- Comment #6 from fang at csl dot cornell dot edu  2007-01-01 16:42 
---
You can confirm the lifetime of B() by printing something during its
destruction, and during the constructor of C.  You'll be left with a dangling
reference to a temporary whose vptr has been invalidated, hence the error
message.  


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30340



[Bug c++/30340] pure virtual function called on const & declared with previous declaration without a definition, const & assigned by temporary

2007-01-01 Thread pinskia at gcc dot gnu dot org


--- Comment #7 from pinskia at gcc dot gnu dot org  2007-01-01 16:57 ---
Invalid, as the lifetime of B() ends when the assignment statement ends, so
the code is undefined after that point.


-- 

pinskia at gcc dot gnu dot org changed:

   What|Removed |Added

 Status|UNCONFIRMED |RESOLVED
 Resolution||INVALID


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30340



[Bug libfortran/30162] I/O with named pipes does not work

2007-01-01 Thread jvdelisle at gcc dot gnu dot org


--- Comment #9 from jvdelisle at gcc dot gnu dot org  2007-01-01 17:51 
---
Preliminary patch for formatted only.

Index: io/unix.c
===================================================================
*** io/unix.c   (revision 120301)
--- io/unix.c   (working copy)
*************** fd_flush (unix_stream * s)
*** 349,355 ****
    size_t writelen;
  
    if (s->ndirty == 0)
!     return SUCCESS;;
  
    if (s->physical_offset != s->dirty_offset &&
        lseek (s->fd, s->dirty_offset, SEEK_SET) < 0)
--- 349,358 ----
    size_t writelen;
  
    if (s->ndirty == 0)
!     return SUCCESS;
!   
!   if (s->file_length == -1)
!     return SUCCESS;
  
    if (s->physical_offset != s->dirty_offset &&
        lseek (s->fd, s->dirty_offset, SEEK_SET) < 0)
*************** fd_sfree (unix_stream * s)
*** 562,567 ****
--- 565,574 ----
  static try
  fd_seek (unix_stream * s, gfc_offset offset)
  {
+ 
+   if (s->file_length == -1)
+     return SUCCESS;
+ 
    if (s->physical_offset == offset) /* Are we lucky and avoid syscall?  */
      {
        s->logical_offset = offset;
*************** static try
*** 583,589 ****
  fd_truncate (unix_stream * s)
  {
    if (lseek (s->fd, s->logical_offset, SEEK_SET) == -1)
!     return FAILURE;
  
    /* non-seekable files, like terminals and fifo's fail the lseek.
       Using ftruncate on a seekable special file (like /dev/null)
--- 590,596 ----
  fd_truncate (unix_stream * s)
  {
    if (lseek (s->fd, s->logical_offset, SEEK_SET) == -1)
!     return SUCCESS;
  
    /* non-seekable files, like terminals and fifo's fail the lseek.
       Using ftruncate on a seekable special file (like /dev/null)
*************** fd_to_stream (int fd, int prot)
*** 1009,1015 ****
    /* Get the current length of the file. */
  
    fstat (fd, &statbuf);
!   s->file_length = S_ISREG (statbuf.st_mode) ? statbuf.st_size : -1;
    s->special_file = !S_ISREG (statbuf.st_mode);
  
    fd_open (s);
--- 1016,1027 ----
    /* Get the current length of the file. */
  
    fstat (fd, &statbuf);
! 
!   if (lseek (fd, 0, SEEK_CUR) == (off_t) -1)
!     s->file_length = -1;
!   else
!     s->file_length = S_ISREG (statbuf.st_mode) ? statbuf.st_size : -1;
! 
    s->special_file = !S_ISREG (statbuf.st_mode);
  
    fd_open (s);


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30162



[Bug target/29281] natPlainDatagramSocketImpl.cc:148: internal compiler error

2007-01-01 Thread pinskia at gcc dot gnu dot org


--- Comment #8 from pinskia at gcc dot gnu dot org  2007-01-01 18:27 ---
No feedback in 3 months.


-- 

pinskia at gcc dot gnu dot org changed:

   What|Removed |Added

 Status|WAITING |RESOLVED
 Resolution||INVALID


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29281



[Bug c/30343] New: False positive: allocating zero-element array

2007-01-01 Thread gjasny at web dot de
Hi,

The following code produces a false-positive warning, "allocating zero-element
array".

template <class T, int size = 0> class Array
{
public:
  Array() {
if (size) {
  new T[size];
}
  }
};

void foo() {
  Array<int> bar;
}

The new command is guarded by an if (size), so allocating a zero-size array
is impossible.  It would be really nice if gcc could check for this
condition, too.

Thanks,
Gregor


-- 
   Summary: False positive: allocating zero-element array
   Product: gcc
   Version: 4.2.0
Status: UNCONFIRMED
  Severity: enhancement
  Priority: P3
 Component: c
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: gjasny at web dot de
 GCC build triplet: gcc version 4.2.0 20061217 (prerelease) (Debian 4.2-
20061217-1)
  GCC host triplet: i486-linux-gnu
GCC target triplet: i486-linux-gnu


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30343



[Bug c++/30344] New: template type argument cannot be initialized by a default parameter

2007-01-01 Thread Bernd dot Donner at gmx dot net
The following code sample is compiled by several other compilers.  Gcc
compiles the following example when the function f is put into the global
scope.  The example can also be compiled when v has only a single template
parameter.



template <class T1, class T2>
class v { };

class c2 {
void f(const v<int, int>& t = v<int, int>()) { }
};

int main() { }



The code cannot be compiled by gcc 3.4, 4.1.2, or 4.2.
In the example it is of course essential that t is a _constant_ reference.

Bernd Donner


-- 
   Summary: template type argument cannot be initialized by a
default parameter
   Product: gcc
   Version: 4.2.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c++
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: Bernd dot Donner at gmx dot net


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30344



[Bug c++/30344] template type argument cannot be initialized by a default parameter

2007-01-01 Thread pinskia at gcc dot gnu dot org


--- Comment #1 from pinskia at gcc dot gnu dot org  2007-01-01 19:02 ---


*** This bug has been marked as a duplicate of 57 ***


-- 

pinskia at gcc dot gnu dot org changed:

   What|Removed |Added

 Status|UNCONFIRMED |RESOLVED
 Resolution||DUPLICATE


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30344



[Bug c++/57] [DR 325] GCC can't parse a non-parenthesized comma in a template-id within a default argument

2007-01-01 Thread pinskia at gcc dot gnu dot org


--- Comment #33 from pinskia at gcc dot gnu dot org  2007-01-01 19:02 
---
*** Bug 30344 has been marked as a duplicate of this bug. ***


-- 

pinskia at gcc dot gnu dot org changed:

   What|Removed |Added

 CC||Bernd dot Donner at gmx dot
   ||net


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=57



[Bug c/30343] False positive: allocating zero-element array

2007-01-01 Thread pinskia at gcc dot gnu dot org


--- Comment #1 from pinskia at gcc dot gnu dot org  2007-01-01 19:03 ---


*** This bug has been marked as a duplicate of 4210 ***


-- 

pinskia at gcc dot gnu dot org changed:

   What|Removed |Added

 Status|UNCONFIRMED |RESOLVED
 Resolution||DUPLICATE


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30343



[Bug middle-end/4210] should not warning with dead code

2007-01-01 Thread pinskia at gcc dot gnu dot org


--- Comment #19 from pinskia at gcc dot gnu dot org  2007-01-01 19:03 
---
*** Bug 30343 has been marked as a duplicate of this bug. ***


-- 

pinskia at gcc dot gnu dot org changed:

   What|Removed |Added

 CC||gjasny at web dot de


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=4210



[Bug middle-end/30253] [4.3 Regression] ICE with statement expression inside a conditional

2007-01-01 Thread pinskia at gcc dot gnu dot org


--- Comment #13 from pinskia at gcc dot gnu dot org  2007-01-01 19:21 
---
I am going to test this and then apply it as obvious once the testing is
finished.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30253



[Bug c++/30340] pure virtual function called on const & declared with previous declaration without a definition, const & assigned by temporary

2007-01-01 Thread mjtruog at fastmail dot ca


--- Comment #8 from mjtruog at fastmail dot ca  2007-01-01 20:34 ---
Thank you for looking at this.  My mistake.

I didn't realize that when you assign a temporary to a const &, the object is
still destroyed after the assignment (and should then not be used in such a
way, since the contents are undefined).


-- 

mjtruog at fastmail dot ca changed:

   What|Removed |Added

   Severity|normal  |critical


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30340



[Bug middle-end/30322] (((-i-1) + i) + 1) is turned into ~i + (i+1) and never into 0 on the tree level

2007-01-01 Thread roger at eyesopen dot com


--- Comment #3 from roger at eyesopen dot com  2007-01-01 20:46 ---
Hi Richard (Happy New Year),

I was wondering whether you could confirm whether the patch I committed fixes
the loop termination conditions in tramp3d?  It resolves the example code given
in the description, which now reduces to "return 0;", but I'm curious if this is
sufficient to catch the underlying problem or whether we really need a tree
combiner and/or reassociation in order to optimize these loops.

Thanks in advance,

Roger


-- 

roger at eyesopen dot com changed:

   What|Removed |Added

 Status|UNCONFIRMED |NEW
 Ever Confirmed|0   |1
   Last reconfirmed|-00-00 00:00:00 |2007-01-01 20:46:00
   date||


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30322



[Bug fortran/25135] [4.2 and 4.1 only] Interface name does not conflict with subroutine name

2007-01-01 Thread pault at gcc dot gnu dot org


--- Comment #3 from pault at gcc dot gnu dot org  2007-01-01 20:47 ---
I seem not to have taken this one on.

Paul


-- 

pault at gcc dot gnu dot org changed:

   What|Removed |Added

 AssignedTo|unassigned at gcc dot gnu   |pault at gcc dot gnu dot org
   |dot org |
 Status|NEW |ASSIGNED
   Last reconfirmed|2006-02-26 20:07:48 |2007-01-01 20:47:44
   date||


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=25135



[Bug target/30230] Incorrect ia64 EH info when an EH region ends in the middle of a bundle

2007-01-01 Thread wilson at gcc dot gnu dot org


--- Comment #4 from wilson at gcc dot gnu dot org  2007-01-01 21:00 ---
Fixed by Jakub's patch for 4.1, 4.2, and mainline.


-- 

wilson at gcc dot gnu dot org changed:

   What|Removed |Added

 Status|ASSIGNED|RESOLVED
 Resolution||FIXED


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30230



[Bug preprocessor/29966] crash in cc1 with backtrace from free()

2007-01-01 Thread patchapp at dberlin dot org


--- Comment #9 from patchapp at dberlin dot org  2007-01-01 21:53 ---
Subject: Bug number PR preprocessor/29966

A patch for this bug has been added to the patch tracker.
The mailing list url for the patch is
http://gcc.gnu.org/ml/gcc-patches/2006-12/msg01848.html


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29966



[Bug preprocessor/28165] _Pragma GCC system_header broken

2007-01-01 Thread patchapp at dberlin dot org


--- Comment #3 from patchapp at dberlin dot org  2007-01-01 21:56 ---
Subject: Bug number PR preprocessor/28165

A patch for this bug has been added to the patch tracker.
The mailing list url for the patch is
http://gcc.gnu.org/ml/gcc-patches/2006-12/msg01850.html


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=28165



[Bug preprocessor/19753] different LANG settings and ccache don't work together

2007-01-01 Thread patchapp at dberlin dot org


--- Comment #2 from patchapp at dberlin dot org  2007-01-01 21:57 ---
Subject: Bug number PR preprocessor/19753

A patch for this bug has been added to the patch tracker.
The mailing list url for the patch is
http://gcc.gnu.org/ml/gcc-patches/2006-12/msg01851.html


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19753



[Bug preprocessor/28709] [4.0/4.1 regression] Bad diagnostic pasting tokens with ##

2007-01-01 Thread patchapp at dberlin dot org


--- Comment #5 from patchapp at dberlin dot org  2007-01-01 21:57 ---
Subject: Bug number PR preprocessor/28709

A patch for this bug has been added to the patch tracker.
The mailing list url for the patch is
http://gcc.gnu.org/ml/gcc-patches/2006-12/msg01852.html


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=28709



[Bug preprocessor/22168] #if #A == #B should have a diagnostic in ISO C mode

2007-01-01 Thread patchapp at dberlin dot org


--- Comment #14 from patchapp at dberlin dot org  2007-01-01 21:57 ---
Subject: Bug number PR preprocessor/22168

A patch for this bug has been added to the patch tracker.
The mailing list url for the patch is
http://gcc.gnu.org/ml/gcc-patches/2006-12/msg01853.html


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=22168



[Bug fortran/25080] ICE/missing error on different ranks for dummy and actual arguments

2007-01-01 Thread pault at gcc dot gnu dot org


--- Comment #3 from pault at gcc dot gnu dot org  2007-01-01 21:58 ---
(In reply to comment #2)
 We now reject the reporter's code as we should. We could still reject the code
 in comment #1, but none of the other compilers I tried reject it. Marking this
 as low priority (I think it will be fixed by Paul Thomas' patch for in-file
 checking).

12.4.1.1 Actual arguments associated with dummy data objects

If the dummy argument is an assumed-shape array, the rank of the dummy
argument shall agree with the rank of the actual argument.

12.4.1.4 Sequence association

The rank and shape of the actual argument need not agree with the rank and
shape of the dummy argument, but the number of elements in the dummy argument
shall not exceed the number of elements in the element sequence of the actual
argument. If the dummy argument is assumed-size, the number of elements in the
dummy argument is exactly the number of elements in the element sequence.

Apart from the requirement on the number of elements in the dummy (PR25071),
gfortran complies with the standard and, as noted already, fixes the original
bug.

I am therefore marking this bug as fixed.

Paul


-- 

pault at gcc dot gnu dot org changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution||FIXED


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=25080



[Bug target/29166] broken unwind information for many life variables resulting in register corruption

2007-01-01 Thread schwab at gcc dot gnu dot org


--- Comment #6 from schwab at gcc dot gnu dot org  2007-01-01 22:03 ---
Subject: Bug 29166

Author: schwab
Date: Mon Jan  1 22:03:23 2007
New Revision: 120319

URL: http://gcc.gnu.org/viewcvs?root=gcc&view=rev&rev=120319
Log:
PR target/29166
* config/ia64/ia64.c (ia64_compute_frame_size): Account space for
save of BR0 in extra_spill_size instead of spill_size.
(ia64_expand_prologue): Save BR0 outside of the gr/br/fr spill
area.
(ia64_expand_epilogue): Restore BR0 from its new location.

testsuite/:
* g++.dg/eh/pr29166.C: New test.

Added:
trunk/gcc/testsuite/g++.dg/eh/pr29166.C
Modified:
trunk/gcc/ChangeLog
trunk/gcc/config/ia64/ia64.c
trunk/gcc/testsuite/ChangeLog


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29166



[Bug target/29166] broken unwind information for many life variables resulting in register corruption

2007-01-01 Thread schwab at gcc dot gnu dot org


--- Comment #7 from schwab at gcc dot gnu dot org  2007-01-01 22:07 ---
Subject: Bug 29166

Author: schwab
Date: Mon Jan  1 22:07:30 2007
New Revision: 120320

URL: http://gcc.gnu.org/viewcvs?root=gcc&view=rev&rev=120320
Log:
PR target/29166
* config/ia64/ia64.c (ia64_compute_frame_size): Account space for
save of BR0 in extra_spill_size instead of spill_size.
(ia64_expand_prologue): Save BR0 outside of the gr/br/fr spill
area.
(ia64_expand_epilogue): Restore BR0 from its new location.

testsuite/:
* g++.dg/eh/pr29166.C: New test.

Added:
branches/gcc-4_2-branch/gcc/testsuite/g++.dg/eh/pr29166.C
Modified:
branches/gcc-4_2-branch/gcc/ChangeLog
branches/gcc-4_2-branch/gcc/config/ia64/ia64.c
branches/gcc-4_2-branch/gcc/testsuite/ChangeLog


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29166



[Bug c++/30348] New: '#define false FALSE' undefines '#define FALSE false'

2007-01-01 Thread h8_spam at sonic dot net
I ran into an issue where doing '#define FALSE false' followed by '#define false
FALSE' undefined the first FALSE, which is not what I would expect.

Perhaps this behavior is required by the standard, but in case it is not, I'm reporting it.

---

#define FALSE false
#define TRUE true

#ifndef true
#define true TRUE
#endif

#ifndef false
#define false FALSE
#endif

int main() {
  bool test1 = FALSE;
  bool test2 = TRUE;
}



[envy viewer] g++ test.cc
test.cc: In function `int main()':
test.cc:13: error: `FALSE' undeclared (first use this function)
test.cc:13: error: (Each undeclared identifier is reported only once for each 
   function it appears in.)
test.cc:14: error: `TRUE' undeclared (first use this function)


-- 
   Summary: '#define false FALSE' undefines '#define FALSE false'
   Product: gcc
   Version: 3.3.6
Status: UNCONFIRMED
  Severity: minor
  Priority: P3
 Component: c++
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: h8_spam at sonic dot net


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30348



[Bug target/29166] broken unwind information for many live variables resulting in register corruption

2007-01-01 Thread schwab at suse dot de


--- Comment #8 from schwab at suse dot de  2007-01-01 22:11 ---
Fixed for 4.2+.


-- 

schwab at suse dot de changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution||FIXED
   Target Milestone|--- |4.2.0


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29166



[Bug middle-end/30253] [4.3 Regression] ICE with statement expression inside a conditional

2007-01-01 Thread pinskia at gcc dot gnu dot org


--- Comment #14 from pinskia at gcc dot gnu dot org  2007-01-01 22:20 ---
Subject: Bug 30253

Author: pinskia
Date: Mon Jan  1 22:19:58 2007
New Revision: 120321

URL: http://gcc.gnu.org/viewcvs?root=gcc&view=rev&rev=120321
Log:
2007-01-01  Andrew Pinski  [EMAIL PROTECTED]

PR middle-end/30253
* gimplify (voidify_wrapper_expr): Update for
GIMPLIFY_MODIFY_STMT.

2007-01-01  Andrew Pinski  [EMAIL PROTECTED]

PR middle-end/30253
* gcc.c-torture/compile/statement-expression-1.c: New test.



Added:
trunk/gcc/testsuite/gcc.c-torture/compile/statement-expression-1.c
Modified:
trunk/gcc/ChangeLog
trunk/gcc/gimplify.c
trunk/gcc/testsuite/ChangeLog
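
As a minimal sketch of the construct named in the PR title (not the
committed testcase, whose exact contents are in
gcc.c-torture/compile/statement-expression-1.c), a GNU statement
expression used inside a conditional expression looks like this:

/* Sketch only: the GNU statement-expression extension used inside ?: */
int main(void)
{
  int x = 1 ? ({ int y = 2; y; }) : 0;  /* statement expression in a conditional */
  return x == 2 ? 0 : 1;                /* exits 0 when the expression yielded 2 */
}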


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30253



[Bug middle-end/30253] [4.3 Regression] ICE with statement expression inside a conditional

2007-01-01 Thread pinskia at gcc dot gnu dot org


--- Comment #15 from pinskia at gcc dot gnu dot org  2007-01-01 22:21 ---
Fixed.  Thanks both for the report.


-- 

pinskia at gcc dot gnu dot org changed:

   What|Removed |Added

 Status|ASSIGNED|RESOLVED
 Resolution||FIXED


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30253



[Bug c++/30348] '#define false FALSE' undefines '#define FALSE false'

2007-01-01 Thread pinskia at gcc dot gnu dot org


--- Comment #1 from pinskia at gcc dot gnu dot org  2007-01-01 22:34 ---
So what is happening here is the following:

#define FALSE false
#define false FALSE

bool a = FALSE;

So, since the preprocessor never re-expands a macro from within its own
expansion, we end up with the same line again:
bool a = FALSE;


This is the same problem as:
int b;

#define a b
#define b a

int main() {
  int test2 = a;
}

Also, true and false are not defined as macros in C++; rather, they are keywords.
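
To make the rescanning rule concrete, here is an editorial sketch (not
from the report; note that a program using the standard library is
formally forbidden from defining macros named after keywords, though
g++ accepts it in practice):

#include <cstdio>

/* The preprocessor never re-expands a macro name that reappears while
   that same macro is being expanded (C99 6.10.3.4, C++ [cpp.rescan]). */
#define FALSE false
#define false FALSE

int main()
{
  /* 'false' expands to FALSE, FALSE expands back to 'false', and there
     expansion stops, leaving the keyword: this prints 0. */
  std::printf("%d\n", false);
  /* A use of FALSE goes the other way, FALSE -> false -> FALSE, and
     leaves the undeclared identifier FALSE -- the error reported above. */
  return 0;
}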


-- 

pinskia at gcc dot gnu dot org changed:

   What|Removed |Added

 Status|UNCONFIRMED |RESOLVED
 Resolution||INVALID


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30348



[Bug c++/30348] '#define false FALSE' undefines '#define FALSE false'

2007-01-01 Thread h8_spam at sonic dot net


--- Comment #2 from h8_spam at sonic dot net  2007-01-01 22:43 ---
Right, but since true and false are keywords, I would expect
'#define true TRUE' and '#define false FALSE' to be no-ops.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30348



[Bug c++/30348] '#define false FALSE' undefines '#define FALSE false'

2007-01-01 Thread h8_spam at sonic dot net


--- Comment #3 from h8_spam at sonic dot net  2007-01-01 22:44 ---
So I would expect it NOT to be the same as the '#define a b' / '#define b a' example you give.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30348



[Bug debug/8354] Incorrect DWARF-2/3 emitted for const + array

2007-01-01 Thread gary at intrepid dot com


--- Comment #12 from gary at intrepid dot com  2007-01-01 22:47 ---
Jim Wilson posted this follow-up to the GDB list:
http://sourceware.org/ml/gdb/2007-01/msg7.html

 From: Jim Wilson wilson at specifix dot com 
 Date: Mon, 01 Jan 2007 14:15:47 -0800 
 Subject: RE: how to support C type qualifiers applied to arrays? 

On Thu, 2006-12-14 at 12:22 -0800, Gary Funck wrote:
 The main difficulty is that GCC doesn't create new qualified
 types for declarations.  Rather, it sets TREE_READONLY()
 and TREE_THIS_VOLATILE() in the DECL node for declarations
 such as:
volatile int A[10];

If you look at the types created by the C front end, they are OK.
c_build_qualified_type knows how to handle an array correctly.

The problem arises in the DWARF2 output code.  gen_type_die calls
type_main_variant for all types other than vector types, which strips
off the const and volatile type modifiers.  Then it clumsily tries to
put them back later in gen_variable_die, except that for array types, it
puts them back in the wrong place.

This seems to answer the question I asked long ago.  Why are we trying
to put back qualifiers from the decl?  Because gen_type_die stripped
them off.  This seems wrong.

If we fix gen_type_die to stop calling type_main_variant, and if we fix
gen_variable_die to stop adding back the type qualifiers, then I get the
right result.  So I think I was on the right track before, we just need
another little change to gen_type_die in addition to what I already
described.

I haven't investigated this in detail yet.  There may be other parts of
the code that expect to see a type main variant here, so we might need
other cascading fixes.  This still seems fixable to me though.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=8354



[Bug preprocessor/21521] -finput-charset -save-temps converts characters twice

2007-01-01 Thread patchapp at dberlin dot org


--- Comment #4 from patchapp at dberlin dot org  2007-01-01 22:55 ---
Subject: Bug number PR preprocessor/21521

A patch for this bug has been added to the patch tracker.
The mailing list url for the patch is
http://gcc.gnu.org/ml/gcc-patches/2007-01/msg00027.html


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21521



Re: [Bug c++/30348] '#define false FALSE' undefines '#define FALSE false'

2007-01-01 Thread Andrew Pinski
On Mon, 2007-01-01 at 22:43 +, h8_spam at sonic dot net wrote:
 Right, but since true and false are keywords, I would expect
 '#define true TRUE' and '#define false FALSE' to be no-ops.


How?  Preprocessing happens before tokenization.

-- Pinski
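
A small editorial sketch of that phase ordering (again, defining a
macro named after a keyword is formally off-limits in a program that
uses the standard library, but g++ accepts it in practice):

#include <cstdio>

/* At preprocessing time 'false' is just an identifier token; keywords
   only come into being when the preprocessed stream reaches the
   compiler proper. */
#define false 1

int main()
{
  std::printf("%d\n", false);  /* prints 1: the macro already replaced the token */
  return 0;
}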



[Bug c++/30348] '#define false FALSE' undefines '#define FALSE false'

2007-01-01 Thread pinskia at gmail dot com


--- Comment #4 from pinskia at gmail dot com  2007-01-01 23:37 ---
Subject: Re:  '#define false FALSE' undefines '#define FALSE
false'

On Mon, 2007-01-01 at 22:43 +, h8_spam at sonic dot net wrote:
 Right, but since true and false are keywords, I would expect
 '#define true TRUE' and '#define false FALSE' to be no-ops.


How?  Preprocessing happens before tokenization.

-- Pinski


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30348



[Bug pch/13675] #including a precompiled header more than once in the same unit fails

2007-01-01 Thread tim at klingt dot org


--- Comment #14 from tim at klingt dot org  2007-01-01 23:53 ---
This is still a problem in the 4.2 branch.


-- 

tim at klingt dot org changed:

   What|Removed |Added

 CC||tim at klingt dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=13675



[Bug middle-end/30311] [4.3 regression] revision 120211 failed to compile perlbench

2007-01-01 Thread jsm28 at gcc dot gnu dot org


--- Comment #5 from jsm28 at gcc dot gnu dot org  2007-01-02 00:38 ---
Subject: Bug 30311

Author: jsm28
Date: Tue Jan  2 00:38:21 2007
New Revision: 120329

URL: http://gcc.gnu.org/viewcvs?root=gcc&view=rev&rev=120329
Log:
gcc:
PR middle-end/30311
* caller-save.c (add_stored_regs): Only handle SUBREGs if inner
REG is a hard register.  Do not modify REG before calling
subreg_nregs.
* rtlanal.c (subreg_get_info): Don't assert size of XMODE is a
multiple of the size of YMODE for certain lowpart cases.

gcc/testsuite:
* gcc.c-torture/compile/pr30311.c: New test.

Added:
trunk/gcc/testsuite/gcc.c-torture/compile/pr30311.c
Modified:
trunk/gcc/ChangeLog
trunk/gcc/caller-save.c
trunk/gcc/rtlanal.c
trunk/gcc/testsuite/ChangeLog


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30311



[Bug middle-end/30349] New: gcc/libssp/ssp.c:177: ICE: in cgraph_expand_all_functions, at cgraphunit.c:1220

2007-01-01 Thread danglin at gcc dot gnu dot org
/test/gnu/gcc/objdir/./gcc/xgcc -B/test/gnu/gcc/objdir/./gcc/
-B/opt/gnu/gcc/gcc-4.3.0/hppa2.0w-hp-hpux11.11/bin/
-B/opt/gnu/gcc/gcc-4.3.0/hppa2.0w-hp-hpux11.11/lib/
-isystem /opt/gnu/gcc/gcc-4.3.0/hppa2.0w-hp-hpux11.11/include
-isystem /opt/gnu/gcc/gcc-4.3.0/hppa2.0w-hp-hpux11.11/sys-include
-c -g -O2 -fPIC -mdisable-indexing -W -Wall -gnatpg a-nlcoty.ads -o a-nlcoty.o
../../../gcc/libssp/ssp.c: In function '__stack_chk_fail_local':
../../../gcc/libssp/ssp.c:177: warning: visibility attribute not supported in
this configuration; ignored
../../../gcc/libssp/ssp.c: At top level:
../../../gcc/libssp/ssp.c:177: internal compiler error: in
cgraph_expand_all_functions, at cgraphunit.c:1220


-- 
   Summary: gcc/libssp/ssp.c:177: ICE: in cgraph_expand_all_functions, at cgraphunit.c:1220
   Product: gcc
   Version: 4.3.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: middle-end
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: danglin at gcc dot gnu dot org
 GCC build triplet: hppa2.0w-hp-hpux11.11
  GCC host triplet: hppa2.0w-hp-hpux11.11
GCC target triplet: hppa2.0w-hp-hpux11.11


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30349



Re: [Bug middle-end/30349] New: gcc/libssp/ssp.c:177: ICE: in cgraph_expand_all_functions, at cgraphunit.c:1220

2007-01-01 Thread Andrew Pinski
This should have been fixed by:

2007-01-01  Jan Hubicka  [EMAIL PROTECTED]
   Andrew Pinski  [EMAIL PROTECTED]

   * cgraphunit.c (cgraph_optimize): Call cgraph_add_new_functions
   before starting IPA passes.

-- Pinski



[Bug middle-end/30349] gcc/libssp/ssp.c:177: ICE: in cgraph_expand_all_functions, at cgraphunit.c:1220

2007-01-01 Thread pinskia at gmail dot com


--- Comment #1 from pinskia at gmail dot com  2007-01-02 01:12 ---
Subject: Re:   New: gcc/libssp/ssp.c:177: ICE: in
cgraph_expand_all_functions, at cgraphunit.c:1220

This should have been fixed by:

2007-01-01  Jan Hubicka  [EMAIL PROTECTED]
   Andrew Pinski  [EMAIL PROTECTED]

   * cgraphunit.c (cgraph_optimize): Call cgraph_add_new_functions
   before starting IPA passes.

-- Pinski


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30349



[Bug target/29867] [4.3 Regression] building libgfortran fails because of multiple definitions gcc-4.3-20061111

2007-01-01 Thread eesrjhc at bath dot ac dot uk


--- Comment #18 from eesrjhc at bath dot ac dot uk  2007-01-02 01:14 ---
(In reply to comment #17)
 Created an attachment (id=12839)
 -- (http://gcc.gnu.org/bugzilla/attachment.cgi?id=12839&action=view) [edit]
 fixincludes: find headers in distro-specific paths

Sorry about the delay in replying.

I can confirm that with this patch I can now bootstrap successfully:

=== gfortran Summary ===

# of expected passes            15820
# of expected failures  7
# of unsupported tests  17
/home/roger/src/gcc-svn/build_amber/gcc/testsuite/gfortran/../../gfortran 
version 4.3.0 20070101 (experimental)

Thanks very much, I really appreciate your hard work.

Roger.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29867



[Bug c++/30303] [4.2/4.3 regression] ICE with invalid constructor definition

2007-01-01 Thread pinskia at gcc dot gnu dot org


--- Comment #4 from pinskia at gcc dot gnu dot org  2007-01-02 02:10 ---
I am testing a slightly different patch which is closer to what the rest of
the function does, in that it returns NULL_TREE instead of error_mark_node.
This still fixes the problem and also removes the "throws different
exceptions" error, which seems like a good idea.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30303



[Bug middle-end/30349] gcc/libssp/ssp.c:177: ICE: in cgraph_expand_all_functions, at cgraphunit.c:1220

2007-01-01 Thread dave at hiauly1 dot hia dot nrc dot ca


--- Comment #2 from dave at hiauly1 dot hia dot nrc dot ca  2007-01-02 02:28 ---
Subject: Re:  gcc/libssp/ssp.c:177: ICE: in cgraph_expand_all_functions, at
cgraphunit.c:1220

 This should have been fixed by:

Will check.

Dave


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30349



Re: [Bug libstdc++/29286] [4.0/4.1/4.2/4.3 Regression] placement new does not change the dynamic type as it should

2007-01-01 Thread Daniel Berlin

On 1 Jan 2007 00:41:44 -, mark at codesourcery dot com
[EMAIL PROTECTED] wrote:

 --- Comment #26 from mark at codesourcery dot com  2007-01-01 00:41 ---
 Subject: Re:  [4.0/4.1/4.2/4.3 Regression] placement
  new does not change the dynamic type as it should

 dberlin at gcc dot gnu dot org wrote:

  If we add a placement_new_expr, and not try to revisit our interpretation of
  the standard, we can just DTRT and fix placement new. This would be best for
  optimizations, and IMHO, for users.

 I agree that treating placement new specially makes sense.  The first
 argument to a placement new operator could be considered to have an
 unspecified dynamic type on entrance to the operator, while the return
 value has the dynamic type specified by the operator.  (So that the
 pointer returned by new (x) int has type int *.)

Right.

 I'm not sure that placement_new_expr is the best way to accomplish this,
 but, maybe it is.  Another possibility would be to define an attribute
 or attributes to specify the dynamic type of arguments and return types,
 and then have the C++ front end annotate all placement new operators
 with those attributes.

It would be nice if we could transform those attributes on
gimplification to something like an alias preserving cast (or something
of that nature) that states that the cast is type unioning for alias
purposes (IE that the possible types of the result for TBAA/etc
purposes is the union of the type of the cast and the type of the
cast's operand).
Not a fully fleshed out idea, just something that popped into my head.


[Bug libstdc++/29286] [4.0/4.1/4.2/4.3 Regression] placement new does not change the dynamic type as it should

2007-01-01 Thread dberlin at dberlin dot org


--- Comment #27 from dberlin at gcc dot gnu dot org  2007-01-02 03:01 ---
Subject: Re:  [4.0/4.1/4.2/4.3 Regression] placement new does not change the
dynamic type as it should

On 1 Jan 2007 00:41:44 -, mark at codesourcery dot com
[EMAIL PROTECTED] wrote:


 --- Comment #26 from mark at codesourcery dot com  2007-01-01 00:41 ---
 Subject: Re:  [4.0/4.1/4.2/4.3 Regression] placement
  new does not change the dynamic type as it should

 dberlin at gcc dot gnu dot org wrote:

  If we add a placement_new_expr, and not try to revisit our interpretation of
  the standard, we can just DTRT and fix placement new. This would be best for
  optimizations, and IMHO, for users.

 I agree that treating placement new specially makes sense.  The first
 argument to a placement new operator could be considered to have an
 unspecified dynamic type on entrance to the operator, while the return
 value has the dynamic type specified by the operator.  (So that the
 pointer returned by new (x) int has type int *.)

Right.


 I'm not sure that placement_new_expr is the best way to accomplish this,
 but, maybe it is.  Another possibility would be to define an attribute
 or attributes to specify the dynamic type of arguments and return types,
 and then have the C++ front end annotate all placement new operators
 with those attributes.
It would be nice if we could transform those attributes on
gimplification to something like an alias preserving cast (or something
of that nature) that states that the cast is type unioning for alias
purposes (IE that the possible types of the result for TBAA/etc
purposes is the union of the type of the cast and the type of the
cast's operand).
Not a fully fleshed out idea, just something that popped into my head.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29286



[Bug libstdc++/29286] [4.0/4.1/4.2/4.3 Regression] placement new does not change the dynamic type as it should

2007-01-01 Thread mark at codesourcery dot com


--- Comment #28 from mark at codesourcery dot com  2007-01-02 03:24 ---
Subject: Re:  [4.0/4.1/4.2/4.3 Regression] placement
 new does not change the dynamic type as it should

dberlin at dberlin dot org wrote:

 It would be nice if we could transform those attributes on
 gimplification to something like an alias preserving cast (or something
 of that nature) that states that the cast is type unioning for alias
 purposes (IE that the possible types of the result for TBAA/etc
 purposes is the union of the type of the cast and the type of the
 cast's operand).

That sounds reasonable to me.
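
To make the semantics under discussion concrete, a minimal editorial
sketch (not the PR's testcase) of placement new changing the dynamic
type of storage -- the case type-based alias analysis must respect:

#include <new>
#include <cstdio>

int main()
{
  float f = 1.0f;
  float *x = &f;

  /* Placement new ends the lifetime of the float and begins the
     lifetime of an int in the same storage (suitably sized and
     aligned here); the pointer returned by 'new (x) int' has type
     'int *'. */
  int *ip = new (x) int(42);

  /* If TBAA assumed a store through 'int *' could not touch float
     storage, it might reorder or drop that store; the proposals above
     are about teaching the middle end that placement new legitimately
     changes the dynamic type. */
  std::printf("%d\n", *ip);  /* must print 42 */
  return 0;
}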


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29286



[Bug libfortran/24459] [4.1 Only] gfortran namelist problem

2007-01-01 Thread jvdelisle at gcc dot gnu dot org


--- Comment #22 from jvdelisle at gcc dot gnu dot org  2007-01-02 04:35 ---
*** Bug 30193 has been marked as a duplicate of this bug. ***


-- 

jvdelisle at gcc dot gnu dot org changed:

   What|Removed |Added

 CC||mjw99 at ic dot ac dot uk


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=24459



[Bug libfortran/30193] Namelist issues when reading in asterisk-preceded arrays

2007-01-01 Thread jvdelisle at gcc dot gnu dot org


--- Comment #8 from jvdelisle at gcc dot gnu dot org  2007-01-02 04:35 ---
Already fixed

*** This bug has been marked as a duplicate of 24459 ***


-- 

jvdelisle at gcc dot gnu dot org changed:

   What|Removed |Added

 Status|UNCONFIRMED |RESOLVED
 Resolution||DUPLICATE


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30193



[Bug fortran/30170] The STATUS specified in OPEN statement at (1) cannot have the value SCRATCH if a FILE specifier is present

2007-01-01 Thread jvdelisle at gcc dot gnu dot org


--- Comment #2 from jvdelisle at gcc dot gnu dot org  2007-01-02 04:39 ---
I see no need to provide this non-standard behavior.  A simple edit of the
source code of the user program will resolve this.


-- 

jvdelisle at gcc dot gnu dot org changed:

   What|Removed |Added

 Status|UNCONFIRMED |RESOLVED
 Resolution||WONTFIX


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30170


